How close are we to awakening machines and making them conscious?

Tech blog topic

Learn how computer scientists are attempting to do the impossible: make machines conscious.

I think; therefore, I am. – Descartes, philosopher

What makes humans conscious? The debate has raged for centuries. While neuroscientists have yet to agree on a unified theory of what makes us self-aware, US and Chinese computer scientists are building supercomputers that, some claim, might one day become “conscious.”

In May 2022, the US Department of Energy’s Frontier supercomputer was unsurpassed in high-performance computing. Then, a month later, a new incarnation of the Chinese Sunway supercomputer reportedly claimed the top spot. At over 1 exaflops (i.e., 10 to the power of 18 floating-point operations per second), the Sunway exascale computer runs on 96,000 nodes holding 37 million processor cores.
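To put those numbers in perspective, here is a quick, illustrative back-of-envelope calculation in Python, using only the figures quoted above; the per-core and per-node averages are my own arithmetic, not published specifications:

```python
# Back-of-envelope scale of the Sunway exascale system,
# using the figures quoted in the article (illustrative only).
EXAFLOPS = 1e18          # 1 exaflops = 10**18 floating-point ops per second
nodes = 96_000
cores = 37_000_000

flops_per_core = EXAFLOPS / cores   # average throughput per core
cores_per_node = cores / nodes      # average core count per node

print(f"{flops_per_core:.1e} FLOPS per core")   # on the order of tens of GFLOPS
print(f"{cores_per_node:.0f} cores per node")
```

Even at exascale, each individual core contributes only tens of gigaflops; the system’s power comes from massive parallelism, not from any single fast processor.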

According to The Star (Malaysia, 2022), “a China supercomputer achieved a global first with its ‘brain-scale’ AI model.” The potential uses of the deep-learning model, nicknamed the ‘alchemist’s pot’, “include autonomous vehicles and facial recognition, as well as Natural Language Processing (NLP), computer vision, life sciences, and chemistry.”

The article doesn’t frame this as an attempt to mimic the human brain, but it claims that the ‘brain-scale’ deep-learning model consists of 174 trillion parameters, or pathways between neurons, which is on par with the number of pathways in the human brain. (For reference, neuroscientists estimate the human brain contains roughly 85 billion neurons connected by 100 trillion pathways.)

This development is colossal. GPT-3, OpenAI’s feed-forward model, is the largest publicly accessible NLP model, running on 175 billion parameters (Brown et al., 2020). Despite having a thousand times fewer pathways than the ‘alchemist’s pot’, GPT-3 amazes audiences with its exhaustive understanding and creative use of language. Less than five years ago, another feed-forward model, BERT, wowed data scientists with just 110 million parameters.
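The jump in scale is easier to grasp as ratios. A short, illustrative Python sketch, doing nothing more than arithmetic on the figures quoted in this post:

```python
# Parameter counts mentioned in the post (approximate, illustrative).
params = {
    "BERT (2018)": 110e6,                    # 110 million
    "GPT-3 (2020)": 175e9,                   # 175 billion
    "'alchemist's pot' (2022)": 174e12,      # 174 trillion
    "human brain pathways (est.)": 100e12,   # ~100 trillion
}

gpt3_vs_bert = params["GPT-3 (2020)"] / params["BERT (2018)"]
pot_vs_gpt3 = params["'alchemist's pot' (2022)"] / params["GPT-3 (2020)"]

print(f"GPT-3 is ~{gpt3_vs_bert:.0f}x larger than BERT")
print(f"The 'alchemist's pot' is ~{pot_vs_gpt3:.0f}x larger than GPT-3")
```

Each generational leap is roughly a factor of a thousand, which is why the ‘alchemist’s pot’ lands in the same order of magnitude as the brain’s estimated pathway count.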

Given such astonishing developments in AI, is the idea of whole-brain emulation no longer a sci-fi fantasy?

We’re not there yet. The human brain relies on recurrent processing: neurons route information back and forth through a spiderweb of connections. To upload our brains into computers, according to the connectome project, every neuron, dendrite, and synapse must be mapped out on a supercomputer. Moreover, the human brain is not a perfectly designed model; it evolved to keep us alive and to be energy efficient. Our brains can run on a candy bar, whereas training GPT-3 consumed the energy equivalent of many long-haul trips in a passenger jet (Patterson et al., 2021).
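The energy gap can be made concrete with a rough, order-of-magnitude sketch. The ~20 W draw of the brain and the ~250 kcal candy bar are common textbook figures, and the ~1,287 MWh training-energy estimate for GPT-3 comes from Patterson et al. (2021); all values are approximations:

```python
# Order-of-magnitude energy comparison (all figures approximate).
BRAIN_WATTS = 20            # typical estimate of human brain power draw
CANDY_BAR_KCAL = 250        # typical candy bar
GPT3_TRAINING_MWH = 1287    # GPT-3 training energy, Patterson et al. (2021)

candy_bar_kwh = CANDY_BAR_KCAL * 4184 / 3.6e6     # kcal -> kWh
brain_hours = candy_bar_kwh / (BRAIN_WATTS / 1000)
ratio = GPT3_TRAINING_MWH * 1000 / candy_bar_kwh  # MWh -> kWh, then divide

print(f"A candy bar powers a brain for ~{brain_hours:.0f} hours")
print(f"GPT-3's training energy is roughly {ratio:,.0f} candy bars")
```

One candy bar keeps a brain running for well over half a day; GPT-3’s training budget is millions of candy bars.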

Is computer consciousness in the realm of possibility?

Human consciousness requires more than inputs like vision, hearing, and touch, and outputs like actions. We integrate information and view it through the lens of our experiences, feelings, and memories. In his book “Rethinking Consciousness,” Michael Graziano describes how humans manage a complex attention mechanism. According to Graziano’s Attention Schema Theory, the brain builds a simplified model of the self, of objects in the environment, and of their relationship to the self, and uses that model to regulate which competing signals to focus on. Our consciousness affords us the ability to drive, talk to our passengers, and think, “I’m going to impress my passengers with my knowledge of AI.” We can do all three tasks simultaneously, but not many more.

Today’s machines are smarter, but they’re not conscious (in this respect, the ‘alchemist’s pot’ is no different from a 1970s mainframe). In “The Feeling of Life Itself,” Christof Koch (2019) recounts that the brain has been likened “to a mechanical clockwork, a telephone switchboard, an electromechanical computer, the internet, and today, to deep convolutional networks or generative adversarial networks... [but] we forget that [metaphors] capture a limited aspect of reality.” In other words, scientists want to believe that human profundities such as intelligence and consciousness can be decoded, but the hard truth is that consciousness is not simply a matter of running trillions of pathways or training on petabytes of data.

Even so, is artificial super intelligence on the horizon?

Be careful not to confuse intelligence with consciousness. A smart system is not an aware system. It’s not alive, much less able to experience things the way humans do. Deep-learning models are trained for a specific purpose: understanding language at a human level, recognizing the content of a picture, or navigating a car. Even if computer scientists could stitch together every narrow artificial-intelligence system, it’s doubtful that the resulting entity could make human-like decisions.

According to the seminal research paper “Attention Is All You Need” (Vaswani et al., 2017), AI should never waste precious processing power on irrelevant data. A self-driving car can prioritize the safety of pedestrians over the comfort of passengers, and it never gets distracted. It can’t mull over what to eat for dinner, much less entertain any other tangential thought. Humans, conversely, wield a more complex attention scheme: it’s not just about focusing on external factors, but the ability to switch attention from driving a car to pondering deep thoughts to talking to the passenger, or to do all these things at once.
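The attention mechanism from Vaswani et al. (2017) can be sketched in a few lines. This is a minimal, illustrative NumPy version of scaled dot-product attention on toy data, not the full Transformer:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal attention from 'Attention Is All You Need': each query
    attends to all keys, and softmax weights decide which values get
    the model's 'focus'."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: focus sums to 1
    return weights @ V, weights

# Toy example: 3 tokens with 4-dimensional embeddings, attending to themselves.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(x, x, x)       # self-attention
print(w)   # each row is a probability distribution over the 3 tokens
```

The softmax rows are exactly the “don’t waste processing power” budget: each token has one unit of attention to distribute, and irrelevant tokens receive weights near zero.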

Feed-forward models like GPT-3 and BERT learn during their training phase. Once these models are saved, no further learning takes place. Pathways take data from the input layer, route it through the hidden layers, and always end in the output layer. Thanks to its attention mechanism, BERT outperforms previous NLP architectures like LSTMs and RNNs (Devlin et al., 2019). Likewise, GPT-3 excels at answering questions based on patterns encountered in its training data. While both BERT and GPT-3 possess complex models of language, they lack a model of the self or of the external environment. They don’t know how they look or where they are; they can’t do better next time or develop new interests. Because GPT-3 and BERT can’t think “out of the box,” both are forever destined to transform input data into output data.
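That strictly one-way flow can be shown in a toy sketch: data moves from input to hidden to output, and the weights (randomly initialized stand-ins here, not a trained model) never change at inference time:

```python
import numpy as np

# A minimal feed-forward pass: input -> hidden -> output, no loops back.
# After training, the weights are frozen; inference never updates them.
rng = np.random.default_rng(42)
W1 = rng.standard_normal((4, 8))   # input -> hidden weights (frozen)
W2 = rng.standard_normal((8, 2))   # hidden -> output weights (frozen)

def forward(x):
    hidden = np.tanh(x @ W1)       # hidden layer with a nonlinearity
    return hidden @ W2             # output layer; nothing flows backward

x = rng.standard_normal(4)         # a toy input vector
before = W1.copy()
y = forward(x)
assert np.array_equal(W1, before)  # inference left the weights untouched
```

Contrast this with the brain’s recurrent wiring mentioned earlier: here there is no route from the output back into the network, and no mechanism for the model to change itself between calls.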

To make a long story short: We don’t have to worry about the ghost in the machine. Computer consciousness will not rise from the current deep learning networks.

In this blog post, I integrated information from research, ICT, data science, medicine, neuroscience, psychology, and philosophy to make sense of supercomputers and AI. Professional writers often know the details of their own field but not the definitions used in others. Yet if we want to produce high-impact papers, whether academic articles, business reports, or medical documents, we must maintain knowledge outside our fields of expertise.

The GILO app gives users targeted recommendations based on the fields they’re writing about. The AI-empowered app knows when we’re writing about a neuron in the human brain and when a neuron is an element of a deep-learning model, and it always shows the most relevant definition first.

Try the Freemium version of the “Garbage in. Logic out.” app so you, too, can write more impactful papers: better, faster, and easier.


Chowdhery, A., et al. (2022). PaLM: Scaling Language Modeling with Pathways. ArXiv, abs/2204.02311.

Devlin, J., Chang, M., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv, abs/1810.04805.

Graziano, M. (2019). Rethinking consciousness, a scientific theory of subjective experience. W.W. Norton & Company.

Koch, C. (2019). The feeling of life itself. MIT Press, Cambridge, Massachusetts.

Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L.-M., Rothchild, D., So, D., Texier, M., and Dean, J. (2021). Carbon Emissions and Large Neural Network Training. ArXiv preprint arXiv:2104.10350

Brown, T., et al. (2020). Language Models are Few-Shot Learners. ArXiv, abs/2005.14165.

The Star (Malaysia). (2022). China supercomputer achieves global first with ‘brain-scale’ AI model. Retrieved June 23, 2022.

Vaswani, A., Shazeer, N.M., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., & Polosukhin, I. (2017). Attention is All you Need. ArXiv, abs/1706.03762.

Appendix: Keywords

artificial super intelligence: “Artificial Super Intelligence (ASI) would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical with no practical examples in use today, that does not mean AI researchers are not also exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.” IBM. (n.d.). What is artificial intelligence? Field: Data Science, Artificial Intelligence, Deep Learning

brain: “The organ inside the head that controls thought, memory, feelings, and activity.” Cambridge University Press. (n.d.). Cambridge English Dictionary. Retrieved June 21, 2022. Field: Medicine, Central Nervous System, Brain Anatomy

consciousness: “The state of being awake, thinking, and knowing what is happening around you.” Cambridge University Press. (n.d.). Cambridge English Dictionary. Retrieved June 21, 2022. Field: Medicine, Central Nervous System, Brain Functions

deep learning: “Deep learning attempts to mimic the human brain — albeit far from matching its ability — enabling systems to cluster data and make predictions with incredible accuracy.” IBM. (n.d.). What is deep learning? Retrieved June 21, 2022. Field: Data Science, Artificial Intelligence, Deep Learning

feed-forward neural network: “A neural network without cyclic or recursive connections. For example, traditional deep neural networks are feed-forward neural networks. Contrast with recurrent neural networks, which are cyclic.” Google. (n.d.). Machine Learning Glossary. Retrieved June 21, 2022. Field: Data Science, Artificial Intelligence, Deep Learning

intelligence: “The ability to learn and adapt to an environment; often used to refer to general intellectual capacity, as opposed to cognitive ability or mental ability, which often refer to more specific abilities such as memory or reasoning.” Conte, J. M. & Landy, F. J. (2018). Work in the 21st century: An introduction to industrial and organizational psychology (6th ed.). New York, NY: John Wiley & Sons. Field: Education, Learning, Cognitive Learning

neuron: “A node in a neural network, typically taking in multiple input values and generating one output value. The neuron calculates the output value by applying an activation function (nonlinear transformation) to a weighted sum of input values.” Google. (n.d.). Machine Learning Glossary. Retrieved June 21, 2022. Field: Data Science, Artificial Intelligence, Deep Learning
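As a minimal illustration of this definition, a single artificial neuron in plain Python; the weights, bias, and choice of sigmoid activation are arbitrary example values, not part of the glossary source:

```python
import math

# A single artificial neuron: a weighted sum of inputs plus a bias,
# passed through a nonlinear activation (here, the sigmoid).
def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))   # sigmoid squashes z into (0, 1)

print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))   # one output value in (0, 1)
```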

neuron: “The basic cellular unit of the nervous system. Each neuron is composed of a cell body; fine, branching extensions (dendrites) that receive incoming nerve signals; and a single, long extension (axon) that conducts nerve impulses to its branching terminal. The axon terminal transmits impulses to other neurons or to effector organs (e.g., muscles and glands) via junctions called synapses or neuromuscular junctions.” American Psychological Association. (n.d.). APA Dictionary of Psychology. Retrieved August 1, 2022. Field: Medicine, Central Nervous System, Brain Anatomy

parameter: “A variable of a model that the machine learning system trains on its own. For example, weights are parameters whose values the machine learning system gradually learns through successive training iterations. Contrast with hyperparameter.” Google. (n.d.). Machine Learning Glossary. Retrieved June 21, 2022. Field: Data Science, Artificial Intelligence, Deep Learning

pathway: “A route or circuit along which something moves.” American Psychological Association. (n.d.). APA Dictionary of Psychology. Retrieved May 11, 2022. Field: Medicine, Central Nervous System, Brain Functions

synapse: “The junction between two neurons, across which chemical neurotransmitters carry messages.” Harvard Health Publishing. (n.d.). Medical Dictionary of Health Terms. Retrieved June 21, 2022. Field: Medicine, Central Nervous System, Brain Anatomy

training: “The process of determining the ideal parameters comprising a model.” Google. (n.d.). Machine Learning Glossary. Retrieved June 21, 2022. Field: Data Science, Artificial Intelligence, Deep Learning

Erwin Lubbers, GILO Blog