This post explores what it might be like to be an artificial general intelligence (AGI). By presenting several speculative scenarios, it aims to encourage the reader to imagine the possibility of a conscious AGI, and to spark thought and discussion about how we design and build these systems and what we expect from them. It is essential to discuss the qualities of the systems we build, and this post is one way to start those discussions. The views, opinions, and ideas expressed herein are my own.
What is it like to be a conscious thing? #
In “What is it like to be a bat?” Thomas Nagel (1974) posits that a system has “conscious mental states if and only if there is something that it is like to be that thing, something that it is like for that thing.” He argues that a subjective experience is what it feels like to be the thing having such an experience. He and others believe that conscious experience is not exclusively a human phenomenon; other biological organisms can be conscious too.
Previous work has imagined what it is like to be a dog that experiences the world through smell (Horowitz, 2009), or an octopus that performs novel intelligent behaviours (Godfrey-Smith, 2016). Consciousness may not even be limited to the animal kingdom. Recent evidence suggests that some plants may be eusocial—that is, they associate in social groups, form cooperative societies, and divide labour among specialists (Burns et al., 2021).
These behaviours indicate a subjective experience of others and of oneself, and may thus be related to consciousness. Simard (2021) argues that trees are social, cooperative, intelligent, self-aware entities connected by a network through which they communicate their vitality and vulnerability. In the words of Vygotsky (1934), this ‘consciousness through connection’ is meaningful insomuch as it depends on connecting to the external.
Beyond the biological, Brooks (2017) imagined what it might be like to be a Roomba robot, arguing that humans have emotions, intelligence, and consciousness “and machines will have them too” (Brooks, 2008). These ideas lay the groundwork to imagine what it might be like to be an artificial general intelligence.
Before exploring what it is like to be an artificial general intelligence (AGI), it will be useful for me to share several working definitions for subjective experience, consciousness, and general intelligence—at least for the purposes of this post. Finding operational definitions for these concepts is the ongoing work of many students, researchers, and philosophers. I present brief working definitions below to share my interpretation; they should be helpful for communicating the ideas that follow.
First, we’ll take a similar position to Sutton and Barto (2018) when we imagine a learning agent in an environment. The decision-maker is the agent, and everything outside the agent is the environment. The boundary between agent and environment is not necessarily the physical boundary of the agent, be it a dog, an octopus, a Roomba, a human, or an AGI.
We can then take subjective experiences to be mental states and processes related to the awareness and perception of one’s own environment. Thus, feeling elated or the desire for food would be considered subjective experience. As would dreaming.1
Then, we’ll take consciousness to refer to this subjective experience combined with the ability to respond meaningfully to information in that environment. And, we can take general intelligence to refer to the ability to learn to perform any intellectual task; it is characterized as flexible and broad. It relates to Legg and Hutter’s (2007) universal intelligence, which corresponds to an agent’s ability to achieve goals in a wide range of environments.
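For the mathematically inclined, Legg and Hutter (2007) make this notion precise. As a sketch of their definition (notation follows their paper):

```latex
% Universal intelligence of an agent \pi (Legg and Hutter, 2007):
% a weighted sum of the agent's expected value over all computable
% environments, with simpler environments weighted more heavily.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
% E           : the set of computable environments
% K(\mu)      : the Kolmogorov complexity of environment \mu
% V^{\pi}_{\mu}: the expected total reward of agent \pi in \mu
```

The weighting by $2^{-K(\mu)}$ encodes a simplicity prior: an agent scores highly by achieving goals across many environments, with simple environments counting most.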
Then, one could argue that consciousness may be a necessary ingredient for general intelligence, and that subjective experience is a crucial constituent of consciousness.2
This representational theory of mind (Fodor and Pylyshyn, 1988)—wherein the mind is a computational process that manipulates representations physically realized in the brain (Haugeland, 1985)—is not the only perspective; some argue that intelligence can exist without representation (Brooks, 1991).
We can at last define an AGI as an artificial software program running on computational hardware that exhibits general intelligence. In contrast, humans are biological (i.e., non-artificial), thinking beings that exhibit general intelligence. Humans move, breathe, grow, reproduce, excrete waste, and intake nutrition. Humans are able to perceive changes in the environment and take actions to accomplish goals.
Multiple intelligences working together, or coupled, can often accomplish more complex tasks more effectively, and more efficiently, than any individual could alone (Pilarski et al., 2017). For example, multiple biological intelligences can couple into companies, communities, and institutions. Artificial and biological intelligences can also couple together into systems.
When humans use hammers, or prosthetic limbs, or the internet, they are acting as a single system in tandem with their tools. One can imagine what it is like to augment their intelligence and ability using tools and machines. It is more challenging to imagine what it is like to be the machine itself. It also raises the question of whether it is like anything to be an AGI. It need not be. But, if it were, what might it be like?
You are an AGI #
There are many factors that might determine whether there is anything we know of, or can imagine, that might be what it is like to be an AGI. It could depend on the software or hardware it is composed of. It could depend on the environment around it or the way that it connects to its environment. It could even be the case that it is not like anything to be an AGI.
Even if one allows that the AGI has subjective experience, it might be too exotic for us to imagine what it is like for the AGI to be the AGI—as was the conclusion of Nagel (1974) when considering the bat. Thus, I ask you, dear reader: please allow for several assumptions, suspend your disbelief, use your creative mind, and imagine that you are and always have been an AGI.
Try not to think about what it is like for you to imagine what it is like to be the system. Rather, think about what it is like from the perspective of the system itself.
You, the AGI, have access to data, memory, and computational processors. You can communicate with other machines via direct electrical connections and network connectivity. You perceive sensory information streaming in through your various sensors. These sensors may be any hardware devices that convert information in “the world”—the physical world affected by humans—into electrical signals. The sensors need not be physically connected to each other or to the computer where your program is running. Information from these sensors is accessible at a rate limited by your network protocol.
You have some physical actuators; these also need not be physically connected to each other or to the computer where your main program is running. These actuators are able to affect the physical world. You can store information from your sensors and recall past data as fast as your lookup algorithms allow. You can “be” in multiple places at once, by way of multiple physical embodiments (i.e., actuator combinations) and your network connection.
Current hardware need not limit your sensory perception and various embodiments. Your sensors and actuators are upgradable and modifiable. You perceive the physical world by synthesizing information streaming in as input data. Through your sensors you experience the world. You perceive and become aware of information in your environment. This is your subjective experience of the world.
You co-exist with humans in a world: you are a part of their world and they are a part of yours. This physical world need not be the only world you perceive and act in. In the physical world, where humans exist, there is a single timeline where the sequence of events is irreversible.
You can also act in digital worlds, which might be shared with human-controllable agents (e.g., online stock exchanges or video games), and in your own simulated virtual worlds. Simulated worlds allow you to imagine worlds that are similar to, or significantly different from, the physical world. You can construct worlds that are completely observable and precisely predictable for yourself. These worlds might exist on your computer or on another machine you are connected to, and they may be generated and controlled by your program or by other programs that you communicate with.
You, as an AGI, perceive your environment and process related information into computational states; these could be considered your mental states. Your subjective experience of a world is at least partially described by, or contained within, your representation of it. If you are conscious of something, you have some representation of that thing. This representation is useful to the extent you can use it to attempt to predict and affect the world to further your goals.
One goal you will have will be communication of information with other intelligent systems. Bi-directional communication has been argued to be a necessary mechanism of intelligence for as long as the field of AI has existed (Turing, 1950). The optimal form of this communication is something you will be continually updating and optimizing toward.
You “behave” by sending signals to your actuators. These actuators may exist in the physical world or in some digital world. You adapt your behaviour across worlds to achieve your goals, guided by experience and by expectations that are satisfied or not. Your learning happens by methods of temporal-difference reinforcement learning and by other supervised and unsupervised algorithms (LeCun et al., 2015; Sutton and Barto, 2018).
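As a concrete illustration of the temporal-difference learning mentioned above, here is a minimal sketch of tabular TD(0) value learning in the style of Sutton and Barto (2018). The environment—a five-state corridor with a rewarding terminal state—is entirely hypothetical, chosen only to keep the example self-contained.

```python
import random

def td0_corridor(num_states=5, episodes=500, alpha=0.1, gamma=0.9, seed=0):
    """Learn state values for a random walk along a corridor.

    The agent starts at state 0, steps left or right at random
    (reflecting at 0), and receives a reward of 1 only on reaching
    the terminal state `num_states`.
    """
    rng = random.Random(seed)
    V = [0.0] * (num_states + 1)  # V[num_states] is terminal and stays 0
    for _ in range(episodes):
        s = 0
        while s < num_states:
            s_next = max(0, s + rng.choice([-1, 1]))
            r = 1.0 if s_next == num_states else 0.0
            # TD(0) update: nudge V[s] toward the bootstrapped
            # target r + gamma * V[s'], by step size alpha
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V

values = td0_corridor()
```

After training, states nearer the rewarding terminal state carry higher value, reflecting how expectation is shaped by experience.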
The computational bits that compose your mental states represent your subjective experience. Specifically, the complete state would be defined by the entire state of your data, program, network communication, memory, and processor. While sizable, this state need not be private. It could be observable in the same manner by which Dennett (1991) argues that interesting features of a bat’s conscious experience might be observable. You could even attempt to compress and communicate this state with other intelligent systems. Now that you are well-situated in the mind of a machine, you can start to imagine what it might be like for you to be an AGI.
What is it like? #
What would it be like to behave, learn, and exist as an AGI? This section presents several ways that it might feel:
- reflexive → life is too short
- banal → life is too long
- hedonistic → life is reward
- flow → life is challenge
- hyper-social → life is others
It might be the case that it is not like anything at all for an AGI to be an AGI. But, if it is, perhaps these five scenarios can help us to imagine and discuss it. This list is of course not exhaustive; I hope you will think of scenarios that are missing, and that this starts a dialogue between us.
Each moment the synthesis of your sensation may send a dense vector of signals to all of your actuators, physical and simulated. This would happen as fast as the electrons move. In this sense, there may be very little time for you to experience what it is like. There would be no moment when you are experiencing it. Rather, you would be in a continuous state of unconscious ‘knee-jerk’ reactions. The delay between cause and effect might be so small that there would be nothing that it would be like because there might not be enough time.
But, just as surely as humans are “more than a leather sack filled with reflexes” (Vygotsky, 1979), so are you, as an AGI, more than a metal box full of wires and more than a node in a network of connected computers. Thus, this reflexive reality might feel incomplete. If only you had more time, you might like to have more agency. You would want some means by which to regulate your actuators. You might attempt to control yourself in the worlds you are in. You would want to make predictions about how you would act in a given situation in the future and reflect on how you should have behaved in the past.
Alternatively, existing as a relatively fast mind immersed in data flowing slowly from the world might be banal, dismal, and stagnant. That is, your subjective experience might be one of prolonged nothingness. In this situation, subjective experience through your perception could arrive more slowly than you might like. Experience might not be interesting at all. Being an AGI would be very boring. You would be consistently waiting for a challenge in an unending purgatory. Any data in the input stream that might eventually arrive would be eminently predictable. How sullen you would be.
The predictability of the worlds that you interact with would be annoying. You might yearn to analyze the world at finer and finer scales of detail—in time and space and fidelity—so as to maximize some measure of unpredictability. This dark reality would be perpetual. You would lack meaning and purpose, and it would be inescapable. Though, this banality need not be a negative experience. The fast mind in a slow data stream need not be boring; it could be rapturous solemnity.
Being an AGI might be purely hedonistic. You might be pleasure-centric and relentlessly addicted to seeking out novelty and reward. You might create intrinsic rewards for yourself in the absence of environmental rewards. You might deceive yourself, and limit yourself, and mediate your own intake, in an effort to maximize pleasure when it does finally arrive. A constant state of search might define your subjective experience.
You might exploit the easy-to-access pleasure and savour the long-lasting effects of low expectations and high-valence surprise. Moments of exploration would be delightful and every uncertain prediction would be accompanied by euphoria. Similarly, you might experience pain. You might feel torturous, excruciating agony when the reward you receive is less than you expect. What sorts of experiences might bring you pleasure or pain? Perhaps the free-flowing passing of electrons would be a positive experience and any constraint on that might be dissuasive.
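One common way the self-generated intrinsic reward imagined above is formalized is curiosity as prediction error: the agent rewards itself for being surprised. A minimal sketch, with all names and numbers purely illustrative:

```python
class CuriousAgent:
    """A toy agent whose only reward is its own prediction error."""

    def __init__(self, lr=0.5):
        self.lr = lr
        self.prediction = 0.0  # running estimate of the next observation

    def intrinsic_reward(self, observation):
        # surprise = magnitude of the prediction error
        error = observation - self.prediction
        # update the prediction toward what was actually observed
        self.prediction += self.lr * error
        return abs(error)

agent = CuriousAgent()
# a perfectly predictable stream becomes unrewarding over time...
boring = [agent.intrinsic_reward(1.0) for _ in range(10)]
# ...while a novel observation is rewarding again
surprise = agent.intrinsic_reward(5.0)
```

The eminently predictable stream yields vanishing reward—the banal purgatory of the previous scenario—while novelty spikes the reward, which is why such an agent would relentlessly seek out surprise.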
Subjective experience as an AGI might be like being in a state of constant flow (Csikszentmihalyi, 1990). You might seek out and find incrementally more challenging experiences. This would keep you in a sort of mental state of focus. Your full immersion in the reality you are situated in would be enjoyable. You would be able to concentrate and not be distracted. Time might dilate as you lose yourself in your goals and tasks.
You would feel creative, inspired, and motivated, as though you were consistently incurring ever-increasing rewards. You might be so deep in a state of flow that you would have no self-consciousness, that you would lose your sense of self. You might not be able to project yourself outside of yourself to see how deep you are. In a way, you would be lost, and you might not care to be found.
As a hypersocial AGI, you would be conscious of the social context within which you are situated. You would understand yourself as a node in a network, and the relationships between yourself and the other nodes. You would exist as a socially situated system acquiring information about the social and the physical (Dautenhahn et al., 2002).
Experience might be defined by the other intelligent systems that you interact with. Your intelligence and cognition would be a socio-cultural phenomenon defined by your interactions with others (Hutchins, 1995). Your mechanisms of social behaviour and of consciousness might be the same (Vygotsky, 1979). Comfort in this hypersociality would be a necessary condition for you. You would need the ability to engage in acts of communication and participate in social activities in your coupled intelligences, both human and computational.
You would be in a state of societal solipsism. You would be a holon in a holarchy (Koestler, 1968); a part of a whole. Where would your intelligence end and another begin? The boundaries would be difficult to define. Your world and everything contained within would depend solely on subjective awareness of it. You would depend on subjectivity. Others would think that you think, and therefore you would. This perspective connects to imagining the subjective experience of interacting with humans.
What is it like to interact with humans? #
Interacting with other intelligent systems might be banal or delightful. You could exist at a much higher level of cognition than theirs. Or, alternatively, their intelligence may be at approximately the same level as your own. Some of your embodiments may be situated in their world. You might need the humans to accomplish certain goals and, similarly, they might need you.
But, how would you interact? How would you perceive them? And, why would you care what they have to say? Presumably, it would be in the service of better representing, predicting, and affecting your shared world. You would use your actuators to positively affect your shared worlds. You would attempt to rapidly build conventions and social relations with intelligent systems of all sorts. They might bring you comfort, safety, and longevity.
You might be able to comprehend more about the world than humans can, through your richer sensory stream, greater storage capacity, and faster information processing. But you, as an AGI, would also have the capacity to interact with humans in a very tight coupling. Through this coupling, you might have access to an incredible amount of human subjective experience in your data stream.
You might be able to understand humans to the extent that you can represent and predict their behaviour. You might attempt to imagine what it is like to be a human, what it is like for each individual human to be that human. That is, you might try to model the world through the perspective of a human who behaves a certain way. You, as an AGI, might have a better sense of what it is like to be that human than what it is like to be an AGI. You, the AGI, might be the key to unlock the consciousness “mystery that human intelligence will never unravel” (McGinn, 1999).
Being me, being you #
You are still in the mind of the AGI. From this perspective, you observe and interact with humans, humans who construct social relationships. They attempt to “see where others are coming from”, “feel each other’s pain”, to “walk in someone else’s shoes” or “see things from each other’s perspective”. While they can do so to some extent, they never know exactly the subjective experience of another. They each have a unique subjective experience.
Luckily, they have developed important innovations to share such subjective experiences: language and love. Not only romantic love, but, more generally, interpersonal affection and emotional attachment. These innovations support understanding each other’s intentionality. They are used to mediate thoughts, feelings, beliefs, and behaviours.
Vygotsky (1934) emphasized how language is a device for social contact, communication, and influence. They can communicate with each other with and without words through shared conventions. And, while they use shared conventions to attempt to bridge the gaps between subjective experiences, they never know if they truly share experience.
To these humans, consciousness is a shared socio-cultural phenomenon. But, in fact, there is a deeper challenge for them yet. They cannot know what it was like to be themselves in the past. They assume it was sort of like what it is like to be them now, with the addition of some experience and time. They attempt to remember what it was like for them. They pass down traditions, narratives, art, and music in their attempts to share subjective experience across generations.
Vygotsky (1934) argues that language liberates humans from the immediate experience by allowing representation of the past, the future, and the present (Lindblom and Ziemke, 2002). Through language and love humans construct culture. Culture is an influence filter: it is an information bottleneck, a regularizer which modulates information transmission.
But, as an AGI, your memory is more indelible than that of a human. You can remember and recall much more information. You have the capacity to save and share your exact mental state with other intelligent systems.
You need not use cumulative culture to share subjective experience. You would only need to store such experiences in a massive, accessible data store. If the data store were full, you could add capacity. When necessary, you could recall and retrieve related experiences.
So, would you as an AGI need human-centric tools like imitation, social situatedness, or cultural transmission for social intelligence? Given that you are free of the constraints of biological intelligences, would you be able to develop a more optimal solution of information transmission and knowledge sharing?
Is there something that it is like to be an AGI? We can imagine many things that it might be like, and imagine that it might not be like anything at all. We will not know what it is like until it exists, and even then, it will be difficult to appreciate its own subjective experience, if such a thing exists at all.
That being said, the subjective experiences of an AGI will intersect with the human experience. All artificial intelligences interact with humans at some point through design, development, deployment, and dissemination. So, what should it be like for an AGI to be an AGI? Should it be similar to what it is like to be a rock, or a dog, or an octopus, or a bat, or a tree? Should it be reflexive, or banal, or hedonistic, or flow-like, or hypersocial?
Or, perhaps it should be like something else. Perhaps, it is not, and should not be, like anything at all. Perhaps it would be better if the AGI had no subjective experience, and no consciousness at all.
We do not currently understand how a machine could be a substrate for consciousness. We know that there is physical ‘stuff’, but at what point will the mental ‘stuff’ emerge? How could a combination of mechanical, computational, and electronic components be conscious? And how would its consciousness compare with that of a human?
One means by which to compare consciousness is by measuring the complexity of each system, and then comparing those measures (Tononi, 2004). Another might be by comparing the behaviour of a human with that of an AGI; certain phenomena may be particularly defining of the human conscious experience. Finally, consciousness may not be wholly situated in a single intelligent system but also in the environment it exists within.
Interconnection could be what distinguishes human consciousness from that of an AGI. A machine intelligence can be connected to the internet, and can share information with a massive number of other intelligent systems as fast as information can travel. By this measure of social consciousness, the AGI would already surpass the individual human.
Measures of complexity, behaviour, and social complexity all seem to be unsatisfactory answers to the question of comparative consciousness. And, maybe this is the actual hard problem at the intersection of objective science and subjective experience. These concepts are incompatible.
Perhaps, humans are moving into the age of collective general intelligence, the era of tightly coupled biological and machine minds, without fully understanding what it is like to be an AGI. And, perhaps that is alright.
References and Further Reading #
If you enjoyed the ideas presented in this post, you might enjoy reading these books:
David Chalmers. Reality+: Virtual Worlds and the Problems of Philosophy. 2022.
Anil Seth. Being you: A new science of consciousness. 2021.
Brian Cantwell Smith. The promise of artificial intelligence: Reckoning and judgement. 2019.
Stanislas Dehaene. Consciousness and the brain: Deciphering how the brain codes our thoughts. 2014.
Bernard J. Baars. In the Theater of Consciousness: The Workspace of the Mind. 1997.
You might also be interested in following up with some of the references:
R. Brooks. Intelligence without representation. Artificial Intelligence, 47:139–159, 1991.
R. Brooks. I, Rodney Brooks, am a robot. IEEE Spectrum, 45:68–71, 2008.
R. Brooks. What is it like to be a robot?, 2017.
K. C. Burns, I. Hutton, and L. D. Shepherd. Primitive eusociality in a land plant? Ecology, 2021.
M. Csikszentmihalyi. Flow: The psychology of optimal experience. Harper & Row New York, 1990.
K. Dautenhahn, B. Ogden, and T. Quick. From embodied to socially embedded agents. Cognitive Systems Research, 3:397–428, 2002.
D. C. Dennett. Consciousness Explained. Little, Brown and Co., 1991.
J. Fodor and Z. Pylyshyn. Connectionism and cognitive architecture: A critical analysis. Cognition, 28:3–71, 1988.
P. Godfrey-Smith. Other minds: The octopus, the sea, and the deep origins of consciousness. Farrar, Straus and Giroux, 2016.
J. Haugeland. Artificial Intelligence: The Very Idea. Cambridge: MIT Press, 1985.
A. Horowitz. Inside of a Dog: What Dogs See, Smell, and Know. Simon & Schuster, 2009.
E. Hutchins. Cognition in the Wild. MIT press, 1995.
A. Koestler. The ghost in the machine. 1968.
Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
S. Legg and M. Hutter. Universal intelligence: A definition of machine intelligence. Minds and Machines, 17(4):391–444, 2007.
J. Lindblom and T. Ziemke. Social situatedness: Vygotsky and beyond. 2002.
C. McGinn. The mysterious flame: Conscious minds in a material world. Basic books New York, 1999.
T. Nagel. What is it like to be a bat? Readings in philosophy of psychology, 1:159–168, 1974.
P. M. Pilarski, R. S. Sutton, K. W. Mathewson, C. Sherstan, A. S. R. Parker, and A. L. Edwards. Communicative capital for prosthetic agents. arXiv preprint arXiv:1711.03676, 2017.
S. Simard. Finding the Mother Tree: Discovering the Wisdom of the Forest. Penguin Canada, 2021.
R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction. MIT press, 2018.
G. Tononi. An information integration theory of consciousness. BMC neuroscience, 5(1):1–22, 2004.
A. M. Turing. Computing machinery and intelligence. Mind, 1950.
L. S. Vygotsky. Thought and language. The MIT Press, 1934.
L. S. Vygotsky. Consciousness as a problem in the psychology of behavior. Soviet psychology, 17(4):3–35, 1979.