Innovation, knowledge workers and collective intelligence
Fiverr's CEO, Micha Kaufman, recently told his employees that no job is safe from AI. OpenAI CEO Sam Altman predicted that AI would soon allow a one-person team to build a company worth $1 billion. These statements paint a picture of the future of business in which super-intelligent machines drive innovation. But John Kay's new book The Corporation in the Twenty-First Century draws inspiration from complexity science to suggest that this vision deeply misunderstands how innovation works.
Kay argues that workers may have resembled cogs in the 20th century, but today's giants like Apple and Google rely far more on workers' collective intelligence throughout the product development cycle. While employees on the factory floor were often given specific tasks to complete, today's knowledge workers are typically given complex problems to solve.[1] Innovation arises from the interactions among these individual workers.
Much as a flock of starlings produces patterns you could never predict by analyzing a single bird, or consciousness somehow arises from a network of firing neurons, the modern company arises from this complex interaction among workers. Like the flock of starlings, the corporation is qualitatively different from the sum of its parts. It is not reducible to any single employee, super-genius or otherwise. In the language of complexity science, the modern corporation is emergent. Kay argues that innovation is likewise emergent and cannot happen without collective intelligence. Nobody within Airbus, for example, knows every aspect of the double-decker A380, nor could anyone single-handedly direct a team to produce one. Similarly, the teams behind UX, design, engineering, supply chain management and more were all vital to the iPhone's success.
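To make the idea of emergence concrete, here is a minimal, illustrative Python sketch (not from Kay's book) in the spirit of a boids-style flocking model: each simulated bird follows only three local rules (alignment, cohesion, separation), yet the flock as a whole develops coordinated motion that no individual bird or rule specifies. The parameters and the final order-parameter diagnostic are arbitrary choices for illustration.

```python
# Minimal boids-style flocking sketch (illustrative only).
# Each agent uses purely local rules; flock-level order is emergent.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # number of birds
pos = rng.uniform(0, 100, size=(N, 2))   # positions in a 100x100 box
vel = rng.uniform(-1, 1, size=(N, 2))    # velocities

def step(pos, vel, radius=15.0, max_speed=2.0):
    """Advance the flock one tick using only local neighbor information."""
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = (dists < radius) & (dists > 0)
        if neighbors.any():
            # Rule 1: align with the average heading of nearby birds.
            alignment = vel[neighbors].mean(axis=0) - vel[i]
            # Rule 2: steer toward the local center of mass (cohesion).
            cohesion = offsets[neighbors].mean(axis=0)
            # Rule 3: avoid crowding immediate neighbors (separation).
            close = neighbors & (dists < radius / 3)
            separation = -offsets[close].sum(axis=0) if close.any() else 0.0
            new_vel[i] += 0.05 * alignment + 0.01 * cohesion + 0.05 * separation
        # Cap speed so the simulation stays stable.
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:
            new_vel[i] *= max_speed / speed
    return (pos + new_vel) % 100, new_vel  # wrap around the box edges

for _ in range(200):
    pos, vel = step(pos, vel)

# Rough proxy for flock-level order: how aligned the velocities have become
# (near 0 = random headings, near 1 = the whole flock moving together).
order = np.linalg.norm(vel.mean(axis=0)) / np.linalg.norm(vel, axis=1).mean()
print(f"alignment order parameter after 200 steps: {order:.2f}")
```

The point of the sketch is only the analogy: the interesting behavior lives at the level of the group, not in any single agent's rule set.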
But perhaps one person with AI could build the next-generation jumbo jet? Although current AI systems have massive knowledge bases to draw upon, Meta's Chief AI Scientist Yann LeCun argues that they cannot innovate because they do not truly understand the problems they are trying to solve. The best applications of AI systems so far appear to be coding and homework. Notice that these applications have either exact or closely analogous known solutions that the system can quickly look up and that humans in the loop can verify. Such applications are useful, but they solve tasks, whereas true innovation stems from solving problems.
Kay never attempts to define a critical number of interacting humans necessary for innovation, as this would surely differ across firms and industries. But perhaps the task-problem dichotomy provides a useful heuristic. History has shown that tasks, which are easily verifiable and have known solutions, are often outsourced and automated. Kay argues, however, that innovation requires interaction among many problem-solving humans, so any minimum headcount is likely tied to the number of problems to be solved. Corporate leaders who assume that their own knowledge plus AI can do everything, and who downsize too aggressively, are likely to end up on Kay's long list of companies now dead because they failed to innovate.
Footnotes
Jeremy Utley and Perry Klebahn's Ideaflow defines a task as something verifiable with a known solution, whereas a problem is something nobody yet knows how to solve. ↩︎