Are deep learning models the forerunners of superintelligence?


Learn about the pre-trained language model, GPT-3, and how its neural network compares to the human brain.

Have you ever wanted to write an original alternative ending to your favorite film? Sheepishly curious whether the third-generation Generative Pre-trained Transformer (GPT-3) could do a better job than the screenwriters of my favorite action sci-fi film, I asked it to "write an alternative ending to 'The Matrix' trilogy." In mere seconds, GPT-3 answered, "Neo wakes up in his bed, realizing that the Matrix was just a dream. He gets up and goes to work like normal, not knowing that the world he knows is a computer-generated simulation." An unimaginative but properly formulated line.

How does this state-of-the-art pre-trained language model, developed by the AI research lab OpenAI, create such near-human-quality content? It certainly did not paraphrase or plagiarize an alternative ending from a fan blogger. It was trained on a large corpus, including Wikipedia, Common Crawl web pages, and a collection of books. GPT-3 has read more text than any human ever could, and "The Matrix" was buried somewhere in its dataset.

The deep learning model has 175 billion parameters, a massive number by any standard. Parameters are the weights on the connections between artificial neurons. When information travels from one neuron to the next, it is amplified or dampened by the corresponding parameter. During training, the values of the parameters are adjusted until the model produces the best possible result.
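To make that concrete, here is a minimal sketch of a single artificial neuron in Python. It is nothing like GPT-3's actual architecture, just an illustration of what a parameter does: each input is multiplied by a weight, and a simple gradient-descent loop nudges those weights until the output matches a target.

import numpy as np

# One artificial neuron: three inputs, three weights (parameters), one bias.
rng = np.random.default_rng(0)
weights = rng.normal(size=3)   # the "parameters" that amplify or dampen each input
bias = 0.0

def forward(x):
    # Each input is scaled by its weight, then everything is summed.
    return np.dot(weights, x) + bias

# Toy training loop: nudge the weights so the neuron's output approaches a target.
x, target = np.array([0.5, -1.0, 2.0]), 1.0
learning_rate = 0.1
for step in range(100):
    error = forward(x) - target           # how far off the prediction is
    weights -= learning_rate * error * x  # gradient of the squared error w.r.t. the weights
    bias -= learning_rate * error         # gradient w.r.t. the bias

print(round(forward(x), 3))  # ~1.0 after training

GPT-3 does the same thing in principle, only with 175 billion such weights being adjusted at once.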

175 billion is a significant expansion over the second-generation GPT-2, which achieved good results with 1.5 billion parameters. The original GPT model had 117 million parameters. Even larger models exist, such as Google's 1-trillion-parameter language model.

How does the GPT-3 neural network compare to the human brain? Neuroscientists believe the human brain contains around 100 billion neurons. The number of dendrites (i.e., the tree-like branches that receive rapid-fire messages from other neurons) is hard to estimate because neural connections vary by neuron type, so let us assume each neuron has, on average, 1,000 dendritic branches. That means the human brain has 100 trillion connections. If the human brain has an estimated 1 × 10^14 dendrites, then the GPT-3 model, with its 1.75 × 10^11 parameters, has roughly 570 times fewer connections.
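The back-of-the-envelope arithmetic behind that comparison, using the assumed figures above:

# Back-of-the-envelope comparison, using the author's assumed numbers.
neurons = 100e9             # assumed neurons in the human brain
branches_per_neuron = 1e3   # assumed dendritic branches per neuron
brain_connections = neurons * branches_per_neuron   # 1e14 connections
gpt3_parameters = 175e9                             # 1.75e11 parameters

print(f"{brain_connections:.2e} connections vs. {gpt3_parameters:.2e} parameters")
print(f"ratio: {brain_connections / gpt3_parameters:.0f}x")   # ~571x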

However, given that each GPT generation has increased the parameter count by a factor of 10 to 100, and new generations have arrived roughly annually, the model could reach parity with the human brain as early as next year. If my estimate of the number of dendritic branches is too low, AI might not surpass it until 2024 or 2025.
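A quick sketch of the arithmetic behind that extrapolation, using the published parameter counts (the pace of future generations is, of course, pure speculation):

import math

# Parameter counts of the three published GPT generations.
gpt1, gpt2, gpt3 = 117e6, 1.5e9, 175e9
print(gpt2 / gpt1)   # ~12.8x growth from GPT-1 to GPT-2
print(gpt3 / gpt2)   # ~116.7x growth from GPT-2 to GPT-3

# At 10x-100x per generation, how many more generations until ~1e14 parameters?
gap = 1e14 / gpt3                             # the ~570x gap from the comparison above
print(math.log(gap, 10), math.log(gap, 100))  # ~2.8 generations at 10x, ~1.4 at 100x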

My comparison is grossly simplistic. The human brain can do more than just advanced linguistics. Nevertheless, once an AI model is developed that compares to the human brain (a monumental milestone), what happens next?

GPT-3 is an example of artificial narrow intelligence (ANI) imitating human linguistics. Computer vision and reinforcement learning are other examples of ANI technology. Real-world applications of ANI are all around us: think of voice assistants like Siri, Alexa, and Cortana, or self-driving cars and targeted ads on social media. Now think about this: if artificial general intelligence (AGI) were to become a reality (in other words, if AI models could mimic human intelligence and apply it to solve any problem), such machines would be conscious and would therefore learn and plan for the sake of their own survival. A scary thought.

Big tech companies, as well as large academic teams, are making strides toward AGI. US and Chinese tech companies burn through billions of dollars in R&D budgets each year, and the most highly cited research is in AI. It seems a matter of when, not if, AI will surpass human intelligence. A question that often races through my mind is, "Will this happen in my lifetime?" My fear is a resounding, "Yes."

Development is unlikely to stop at AGI. Artificial superintelligence (ASI), computer intelligence that greatly exceeds the cognitive performance of humans, and humanity's eventual displacement by it, is a common dystopian trope. The 1968 movie "2001: A Space Odyssey" introduced the supercomputer HAL; "The Terminator" introduced Skynet; and a machine intelligence imprisoned people in pods in "The Matrix."

The renowned futurologist Ray Kurzweil is optimistic about our future. In his book "The Singularity Is Near," he envisions a posthuman world in which human-machine hybrids evolve to be god-like. Conversely, Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, has a rather bleak outlook on ASI. In his book "Superintelligence: Paths, Dangers, Strategies" (a must-read for anyone interested in learning about intelligent machines), he argues that superintelligence poses an existential threat to humanity. The doomsday philosopher thinks humans could be squashed like ants under the heel of superintelligence. Such a machine may not be inherently evil; it simply does not care about ants.

Will humanity enjoy eternal life or be squashed into oblivion in the not-so-distant future? I posed this quandary to the only expert out there: GPT-3. I asked, "Will artificial superintelligence (ASI) be benign?" and it replied, "There is no guarantee that artificial superintelligence will be benign, and there is a risk that it could become a threat to humanity."

Oh dear, we might be in a world of trouble. Joking aside, while I am genuinely impressed by the GPT-3 model's ability to generate content indistinguishable from human writing, it is still devoid of logic and creativity. Let me know what you think. The GPT-3 text-generator API is accessible to everyone, so try it out (https://beta.openai.com/docs/quickstart).
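For anyone who wants to repeat the Matrix experiment, here is a minimal sketch following the quickstart linked above. It assumes the pre-1.0 openai Python package; the model name and settings are placeholders, so check the current documentation before running it.

import os
import openai  # assumes the pre-1.0 openai Python package (pip install "openai<1.0")

# Authenticate with the API key from your OpenAI account, read from an environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the model for an alternative ending, as in the experiment at the top of this post.
response = openai.Completion.create(
    model="text-davinci-002",   # placeholder GPT-3 model name; see the docs for current options
    prompt="Write an alternative ending to 'The Matrix' trilogy.",
    max_tokens=100,             # cap the length of the generated ending
    temperature=0.7,            # higher values give more varied, less predictable text
)

print(response.choices[0].text.strip())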

Must-read books and journal article:

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Brown, T. B., Mann, B., Ryder, N., … Sutskever, I., & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.

Kurzweil, R. (2006). The singularity is near: When humans transcend biology. Penguin Books.
