I arrived in Silicon Valley in 1996, right as the internet was getting off the ground. Netscape had IPOed, hardware companies like Cisco and Sun Microsystems were flying high powering the internet, and a new crop of “dot-coms” popped up. Some of those companies ended up being flashes in the pan, too early and committed to business models that didn’t scale, like Yahoo. Others, like Amazon and Netflix, disrupted entire sectors by moving atoms to bits. What a time to be alive.
Not all technologies follow that meteoric trajectory. A recent hype cycle was built around cryptocurrency and web3, billed as having the potential to disrupt money, banking, and the entire financial sector, and to solve human coordination problems. I remain long-term bullish that crypto will have its day, but it’s hard to see how and when that will happen. Oh, and let’s not forget the metaverse.
Technologists have a habit of building speculative visions that never materialize in the real world. “Never mistake a clear view for a short distance” remains a powerful caution for futurists.
Even with that caveat, I will state: AI is here and will be a massive leap forward for humanity.
Research on AI has been going on for decades, with huge progress made in the 2010s thanks to deep learning: machine learning models based on “neural nets,” structures inspired by the human brain. Most of the gains from deep learning were opaque to the end user. Google searches got better, it became easier to communicate with your cab driver in France in French, and suddenly you found yourself spending way too much time and money on Instagram thanks to its algorithm. Regardless, computers still seemed pretty dumb compared to human beings. How many times has your iPhone messed up autocorrect today?
And then there was ChatGPT.
Suddenly, there’s an AI that has access to a huge chunk of human knowledge in the form of text and is capable of holding complete conversations. Sure, the technology will sometimes make things up, doesn’t have access to the latest information, and is sometimes overly cautious. But I’d encourage you to try using ChatGPT as your copilot for a week. When you have a question, instead of going to Google for a search, go to ChatGPT and see what happens. You’ll almost certainly feel that you’re communicating with an alien intelligence.
Generative AI is the big leap forward that changes computers from being useful and dumb to having human-level intelligence.
Assuming that is true, there’s a lot to think about, both in terms of near-term opportunities to create radically better products, and in the long-term as civilization comes to grips with the creation of a new kind of life-form.
Conclusions, prescriptions, and recommendations only come after clarity in one’s premises, so I decided to write down my points of view.
(1) Machine intelligence is different in kind from all previous technologies
Marshall McLuhan identified three big leaps forward in humanity’s relationship to the world through information: writing, the printing press, and electronic media (which, for us, means the internet).
With writing, humanity hacked evolution. Instead of DNA being the only record-keeping device in the universe—with its slow, random progress over eons—we began building a knowledge-base for future generations of humans.
With the printing press, we began democratizing that collected knowledge. Instead of a book taking a year’s work by a highly specialized monk, millions of books could be printed for a fraction of the cost.
The internet was the next step in digitizing all of that knowledge and further democratizing the creation, distribution, and consumption of information. No longer were we bound by the physical limitations of reality.
Each of these technologies augmented human capacity by pulling out information from our brains and recording it in the real world.
AI is another step in this evolution of information. Everything before was about recording information; the creation of new knowledge was all done by humans. AIs will create new knowledge themselves, putting them in a category distinct from any book.
(2) The underlying substrate is improving (which will improve the AI)
Gordon Moore (RIP) famously predicted that computing power would roughly double every two years or so. That’s held for decades now, thanks to the ingenuity of the industry and advances in science.
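As a back-of-the-envelope sketch (assuming a clean two-year doubling, which real hardware only approximates), the compounding is easy to work out:

```python
# Back-of-the-envelope: cumulative improvement under a strict
# two-year doubling. Real hardware only approximates this curve.
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Total improvement factor after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# From 1996 to 2022: 26 years, or 13 doublings.
print(round(moores_law_factor(2022 - 1996)))  # 8192, i.e. roughly 8,000x
```

Thirteen doublings in a working lifetime is the kind of compounding no other industry gets.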
The cost of storage also follows something like Moore’s Law. The amount of data has exploded since the rise of the internet, thanks to humans producing copious amounts of text, pictures, video, and new forms of media like tweets and web pages. Also, a proliferation of sensors, from the camera in your smartphone to the thousands of sensors on a 747, has digitized real-world information.
AI algorithms require huge amounts of compute and huge amounts of data to improve. One thing we’ve learned from the progression from ChatGPT to GPT-4 is that the bigger the model, the better it performs.
Even if the algorithms don’t get any better, there will be more fuel for the algorithms (data) and a faster engine (the processors); ergo, the AI is going to get better for some amount of time.
But…
(3) Algorithms will get better and better
If history is any guide, we’ve just begun to make advances in algorithms.
After a series of seminal papers from the gods of deep learning (Bengio, Hinton, LeCun, et al.) in the late aughts, there was a flurry of deep learning research over the next decade. The transformer paper that launched the current generative AI tools was written at Google in 2017. So expect plenty of research on transformers over the next decade, plus foundational research on new techniques.
Private companies are going to rush in with their enormous human and capital resources. Global private AI investment was $90B in 2022; surely that number will be much higher in 2023 with all of the excitement. OpenAI may have an early lead, but the FAANG companies collectively employ a small army of AI engineers and are directing company resources to the new gold rush. Heck, Google founders Larry Page and Sergey Brin have reportedly returned to Google because this shift is so darn important.
(4) AI will disrupt knowledge workers
“Computers” used to be human beings until rooms of people were replaced with rooms of vacuum tubes. It took a long time to replace the human computers because electronic computers were expensive and required a whole new kind of expertise.
The dot-com era took several decades to unfold. Blockbuster, Borders, and Barnes & Noble were all casualties, but B&N gave it a college try with the Nook and Borders held on for a decade longer than expected. Industries that moved from atoms to bits were the most affected; industries that were focused on atoms got more efficient first because of software (ERP, CRM, etc.) and then because of data.
As enterprise resource planning (ERP, or back-office) software got widely deployed, it changed people’s jobs from paperwork to managing information, causing a proliferation of bureaucracy, emails, and MBAs. It also created a whole new class of IT workers, whose arcane understanding of legacy systems, COBOL, and the spaghetti structure of modern enterprise architecture left them unassailable.
If atoms→bits was the primary disruption of the internet era, what is the equivalent for AI? If you have a system that can produce high-quality text, then a lot of knowledge-worker jobs will change or be eliminated. Business software will automate or eliminate many tasks currently done by humans, and AIs will allow business systems to talk to each other without anyone writing complex integration code.
Imagine simply being able to ask questions of your data, rather than defining a report and waiting weeks to receive it. That was always the promise of enterprise software, but it never fully delivered because it was never intelligent. Professions like law will change because suddenly there is an intelligence that can read all US law, all case law, and the mountain of documents you send it. An AI doctor will have read all of the latest research and be a true “general” practitioner, one that knows information from every specialty.
Unlike previous technology disruptions, which often automated manual labor, this revolution will be decidedly different: white-collar, college-educated, often professional classes will be among the first to be disrupted.
(5) ChatGPT is remarkable as it currently stands.
Basically: even if there were no improvements beyond GPT-4, the current technology would already make huge waves in the market.
“Prompt engineering” is being clever with the text you type into the AI prompt to nudge the AI into delivering what you want. Though it sounds strange at first, it turns out to be a learnable skill: what are the limitations of the system, and how do you coax out text in the style and format you want? Just using the generic ChatGPT interface can yield surprisingly great results, and it has the potential to be a copilot on many writing and research tasks. With 100M users, people are finding new use cases every day.
ChatGPT also has an API that allows fine-tuning of its underlying model. Startups will experiment with how far they can push the current models and build businesses on top. Like any Silicon Valley gold rush, most experiments will fail, but thousands will run simultaneously in the coming years. Venture capital is still near all-time highs and looking for a place to park cash.
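To make that concrete, here is a minimal sketch of what a request to a chat-style LLM API looks like. The message structure follows OpenAI’s chat API conventions circa 2023, but treat the model name and parameters as placeholders; the sketch only assembles the request body rather than sending it, since an actual call needs an API key.

```python
# Sketch: assembling (not sending) a chat-completion request body.
# Roles and structure follow OpenAI's chat API conventions circa 2023;
# the model name is a placeholder, so check the current docs before use.
def build_chat_request(system: str, user: str, model: str = "gpt-4") -> dict:
    """Build the JSON body for a hypothetical chat-completion call."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},  # steers overall behavior
            {"role": "user", "content": user},      # the actual prompt
        ],
        "temperature": 0.7,  # higher = more varied, lower = more deterministic
    }

req = build_chat_request(
    system="You are a careful legal research assistant.",
    user="Summarize the key risks in this contract.",
)
print(req["messages"][0]["role"])  # system
```

The interesting part is how thin this layer is: the “product” a startup builds on top is mostly the system prompt, the data it stitches in, and the workflow around the call.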
(6) The revolution will happen on the west coast.
Seattle and the San Francisco Bay Area are home to all of the FAANG companies, plus Microsoft. Silicon Valley is still the king of venture capital. OpenAI and many AI startups (though not all; Hugging Face is based in NYC) are already in the Bay Area. Plus, Stanford and Berkeley are top-five AI research universities.
New technologies are built within a community. Even with all the digital communications tools we have, hackathons don’t happen online. The Bay Area has a very large tech community and others will come here to be a part of it.
I’m sure that some techies will continue to enjoy the beaches of Miami and the charms of Austin (and in both cases, enjoy a tax-advantaged situation), but SF will remain on top.
(7) AIs are going to present philosophical challenges.
I won’t dwell on this, but it’s easy to overlook. People are already debating whether ChatGPT passes the Turing Test. If not now, then very soon we will have chatbots that act and seem like normal human beings. At what point do we start asking whether these AIs have consciousness? What does “personhood” mean? Should we program AIs with feelings? Or will feelings simply become an emergent behavior because the AI has read all of humanity’s knowledge? Do AIs have free will? Do we have free will?
(8) Foundation models are powerful and create emergent behavior.
ChatGPT’s core is a “foundation model” built by reading a huge corpus of text. This model has already displayed remarkable emergent behavior. For example, no one explained grammar or math to ChatGPT, but it seems to have some notion of both. As compute and data increase and algorithms improve, expect more strange (and wonderful!) emergent behavior.
This is related to the philosophical challenges of (7). Some will poke fun at AIs as “stochastic parrots”: zombie beings that just reliably predict the next word but have no understanding of the world. Others will claim that because we don’t understand how the AI works, we can never call it “conscious” or “intelligent.”
It turns out most humans don’t understand how brains work either; scientists are only scratching the surface. And we humans have plenty of emergent structures in our heads: you speak in grammatically correct sentences without being able to write down the rules of grammar.
Perhaps we are all just stochastic parrots.
(9) AI is more likely to be good than evil
People seem to think that, as soon as AI becomes more intelligent than human beings and can manipulate the world reliably, it will become evil and destroy us.
An advantage of having LLMs at the core of the current generation of AI is that they read all the debates humanity has been having for generations. An LLM will read the Bible and Plato and Kant and Mill and Adam Smith and Jefferson and Frederick Douglass. You might worry that it will also read Mao and Hitler and John Rawls. I’m pretty confident that a superintelligent being will be enlightened, and that the basic rules of moral conduct are built into the legible fabric of society.
Certainly this list of points of view isn’t complete, and I don’t quite know what it implies for the near term or the long term. I’m curious to hear where you think I’m right, where I’m wrong, and what I missed.