Innovating With AI
A familiar approach to innovation in organizations is one where a new technology is developed or explored and, if deemed useful, organization-wide solutions and infrastructures are implemented.
These efforts often involve heavy investments in automation and routine-building. Think of knowledge management systems, client-relationship management systems, or access to internet databases. The idea is to leverage scale to improve a substantial number of tasks and practices at once by providing a “corporate” solution. This approach can, and often does, make sense as it ensures proper testing, reliability, and accountability.
It may be early, but organizations seem to assume this same approach will work with generative AI technologies. This assumption appears especially prevalent when assessments and forecasts are made by units historically associated with AI—such as departments built around machine learning, data science, and corporate AI model development. However, generative AI may be better understood as a general-purpose technology, whose deployment areas are not limited to a few well-defined cases that can be carefully specialized through resource-intensive development. Instead, generative AI can be used for a plethora of mostly undiscovered use cases, ranging from small, highly idiosyncratic applications to more scalable, organization-wide solutions.
Importantly, the very nature of generative AI is breaking down the accessibility barriers typically associated with cutting-edge technologies. Generative AI can be directed and used through natural interactions—that is, text, voice, and video. It levels the playing field regarding knowledge gaps. Most significantly, the cost of access is extremely low: no special equipment is required, and much of the technology is free, open-source, or available at a minimal fee. As a consequence, individuals and teams may be faster, better equipped, and ultimately the greatest beneficiaries of innovating on the job using their task-specific expertise.
For innovation to take place, organizations need to provide the right policy and resource context for managers and teams to take advantage of this technology, and individuals need to explore how generative AI may augment their goals and workflows. While we are at an early stage of generative AI, and things are constantly evolving, certain skills, concepts, and insights are emerging that can be taught—many of which are general “lifelong learning” principles, such as logical reasoning and higher-order critical thinking. What’s new is that skills traditionally associated with leadership—grasping the bigger picture, defining and breaking down complex problems, delegating tasks, and evaluating delegated work output—are now essentially required of every employee who uses generative AI. In other words, we all became managers overnight and now need to figure out how to act as such with AI.
Innovative Use Cases of LLMs
If you are thinking “okay, a chatbot…”: not quite. Recent LLM advancements have become popular and common knowledge through the chatbot implementation, that is, an iterative, “apparent” conversation in which the AI generates a response to each user input. But thinking of LLMs as simply an eloquent chatbot is misleading and may hold back true innovation.
Large Language Models as Assistants and Mentors
With some deliberate instructions and guidance up front, users can turn LLMs into highly capable and useful assistants that take on roles ranging from dissecting a problem and playing devil’s advocate to bringing users up to speed on various topics and tasks. This has tremendous potential for leveling up existing leadership and management tasks, and it can offer a new way of teamwork, where the AI is your “fifth team member.”
In my MBA class, I developed a class assistant that lets students not only ask any class- and content-related questions but also receive mentoring on preparing for class, assignments, and lectures, strictly as a tutor with the student’s growth in mind, not a shortcut to skip preparing. Students reported that they felt better prepared for class, and I witnessed their preparedness firsthand.
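For illustration, here is a minimal sketch of what such up-front role instructions can look like, assuming an OpenAI-style chat API; the client, model name, and prompt wording are illustrative assumptions, not the actual setup of my class assistant:

```python
from openai import OpenAI  # any chat-completion client would do; this one is an assumption

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Role instructions that keep the assistant a tutor, not an answer machine.
TUTOR_PROMPT = (
    "You are a class assistant for an MBA course. Answer questions about the "
    "syllabus, readings, and assignments. Act strictly as a tutor with the "
    "student's growth in mind: guide with questions and hints, point to the "
    "relevant material, and never hand over finished assignment answers."
)

def ask_tutor(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is illustrative
        messages=[{"role": "system", "content": TUTOR_PROMPT},
                  {"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(ask_tutor("How should I prepare for next week's case discussion?"))
```

The essential ingredient is the system prompt: the same model becomes a mentor, a devil’s advocate, or a briefing assistant purely through these up-front instructions.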
Such tools are by no means limited to the classroom. As I wrote in a news blog at Drexel University, they hold enormous potential for leaders to keep their teams organized and knowledge flowing, as needed and in a conversational, accessible manner, rather than buried in memos, emails, and lengthy documentation. These assistants can also be instructed to nudge users, for example, to remind them of upcoming deadlines or to inform them about issues and misunderstandings that others may have experienced recently. All of this without managers needing to micromanage this information.
Large Language Models as Multi-Agent Systems
Agentic AI systems are currently making big waves. In essence, the AI is designed to solve a specific problem (quasi-)autonomously and can often interact with its “environment” in multimodal ways (e.g., vision, text, voice). Notable examples include programming agents that develop, debug, and troubleshoot computer code.
Consider a “simple” example of a multi-agent system in which the actual agents dynamically emerge and change depending on the user’s problem. I developed a simple proof-of-concept program that I showcase in this 2-minute demo video; the architecture of this MAS is shown in the figure below. The user provides an input, such as a problem, question, or statement, for which they seek feedback and expertise from the MAS. For example, the user may face the issue of how to organize their team and allocate resources for ongoing projects. The MAS can help as an “outsider” perspective, offering a collection of well-developed and vetted suggestions that the user may not easily be able to solicit, or may simply be too time-constrained to develop independently.

The user input is then distilled and rephrased for clarity by an “interpreter” agent, which also defines three experts, each with a specific and distinct focus, who write independent responses (proposals) to the user’s problem. These proposals are then vetted and subjected to feedback from other agents (a devil’s advocate poking holes in one round, and a client relationship manager ensuring the proposals align with the user’s implicit needs) and revised by the experts accordingly. In the end, the interpreter summarizes, integrates, and delivers a final document, all within roughly 60 seconds.
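To make the architecture concrete, here is a compressed sketch of the orchestration logic, again assuming an OpenAI-style chat API; the prompts, model name, and JSON format are illustrative assumptions rather than the actual implementation behind the demo:

```python
import json

from openai import OpenAI  # any chat-completion client works; this choice is an assumption

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def llm(system: str, user: str) -> str:
    """One LLM call with a role-defining system prompt (model name is illustrative)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return response.choices[0].message.content

def run_mas(problem: str) -> str:
    # 1. Interpreter: distill the user's input and define three distinct experts.
    #    (A robust version would validate or repair the returned JSON.)
    spec = json.loads(llm(
        "You are an interpreter. Rephrase the user's problem for clarity and "
        "define three experts with distinct foci. Reply with JSON only, shaped "
        'as {"problem": "...", "experts": ["...", "...", "..."]}.',
        problem,
    ))

    # 2. Each dynamically defined expert drafts an independent proposal.
    proposals = [llm(f"You are {expert}. Write a concise proposal.", spec["problem"])
                 for expert in spec["experts"]]

    # 3. Two critique rounds: a devil's advocate, then a client relationship manager.
    critics = [
        "You are a devil's advocate. Poke holes in the proposal you are given.",
        "You are a client relationship manager. Check whether the proposal is "
        "aligned with the client's implicit needs and flag any mismatch.",
    ]
    for critic in critics:
        for i, expert in enumerate(spec["experts"]):
            feedback = llm(critic, proposals[i])
            proposals[i] = llm(
                f"You are {expert}. Revise your proposal in light of the feedback.",
                f"Problem: {spec['problem']}\n\nYour proposal:\n{proposals[i]}"
                f"\n\nFeedback:\n{feedback}",
            )

    # 4. Interpreter summarizes and integrates the revised proposals.
    return llm(
        "You are the interpreter. Summarize and integrate the following "
        "proposals into one final document for the user.",
        "\n\n---\n\n".join(proposals),
    )

print(run_mas("How should I organize my team and allocate resources across our projects?"))
```

The point of the sketch is the division of labor: the interpreter defines the experts at runtime, so the composition of the “team” emerges from the user’s problem rather than being hard-coded.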
Innovating via Simulated “Consumers”
A remarkable finding of academic studies is that LLMs mimic a wide range of human behaviors and reasoning patterns. For example, market research scholars find that simulated consumers based on LLMs show product feature preferences for existing and fictional products similar to those of humans. LLM agents also show similarities to humans with respect to cognitive biases and limitations. In my own research, together with my collaborator, I compare how humans and LLM agents behave in a strategic decision-making experiment under uncertainty. We find that LLM agents behave remarkably similarly to humans in this experiment. The figure below shows how “distant” a human (or LLM agent) searches for a better solution to a combinatorial problem within 25 trials. The kicker: we can analyze the LLM’s “thoughts” much more easily, by reading out its responses and analyzing when and why it makes certain choices. These insights can then be followed up with humans in a more focused manner. Imagine being able to learn why a simulated consumer might reject or adopt your product before you even develop it.
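As a final illustration, here is a minimal sketch of a simulated consumer along these lines, once more assuming an OpenAI-style chat API; the persona and product are invented for illustration and are not from our study:

```python
from openai import OpenAI  # illustrative client choice; assumes OPENAI_API_KEY is set

client = OpenAI()

# A hypothetical consumer persona; in practice, personas can be sampled from
# survey data or market segments to cover a range of simulated respondents.
PERSONA = (
    "You are a simulated consumer: a 34-year-old urban commuter who values "
    "convenience, is price-sensitive, and distrusts subscription models. "
    "Answer in character and explain your reasoning step by step."
)

PRODUCT = ("A foldable e-bike offered only via a $79/month subscription, "
           "including insurance and maintenance.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is illustrative
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user",
         "content": f"Would you use this product? Why or why not?\n{PRODUCT}"},
    ],
)

# The free-text answer doubles as the agent's readable "thoughts": we can
# inspect when and why the simulated consumer accepts or rejects the product.
print(response.choices[0].message.content)
```

Because every simulated response arrives as readable text, the “why” behind an acceptance or rejection is available immediately, which is exactly what makes follow-up studies with human participants more focused.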