
Is Artificial Consciousness Possible?

Can machines become conscious or is artificial consciousness (AC) an impossible dream? If AC is possible, would we want to create it? This paper will attempt to answer the first question using Hartmann’s framework developed in New Ways of Ontology, where he carries out an ontological investigation into the structure of the world. I will contrast his four ontological strata with potentially fruitful approaches in the development of Artificial General Intelligence (AGI) and consider what is needed to remain on the right track. The answer to the second question depends on the first – to aim for AC or to avoid it, we need to understand whether and how it can happen.

Hartmann’s Ontology

Hartmann inherits notions of the groupings of the contents of the world from previous thinkers, from as far back as Aristotle to as recently as Hegel. He terms these “levels of actual structure,”[1] such as inanimate objects, plants, animals, man, society and history. By examining these contents, Hartmann posits four main strata which “embrace the whole sphere of the real world with the multiplicity of its ontic structures.”[2]

The world can be divided into the spatial outer world and the non-spatial inner world. The spatial outer world can be further divided into two strata: inanimate things governed by physical processes and animate beings governed by biological processes. The non-spatial inner world, which is the “inwardness of consciousness,”[3] is also divided into two strata: the psychic and the spiritual. These strata are denoted by the order in figure 1.
Figure 1: Hartmann’s Four Main Strata

To investigate the special nature of each stratum, Hartmann proposes that they have their own categories, with some categories cutting across the boundaries of adjacent strata. While a being is divided in this way into strata, there is still a unity of the whole since the strata overlap and are interrelated. For instance, the human being has “psychophysical unity”[4] observable in his activity and experience. This “piling up of layers of being,”[5] a superimposition of one stratum upon another, does not tear the whole apart. Instead the being is a unity, a “complex and stratified whole [...] comprehensible only through the interrelatedness of the strata.”[6]

He observes that “the lower strata are always included in the higher ones”[7] but not the reverse. For an organism such as a bacterium to exist, possessing the first two strata of inanimate things and animate beings, atoms are required, and atoms belong to the first stratum of inanimate things. Atoms, however, do not require organisms to exist. In this way, a hierarchy or “levels of actual structures”[8] can be developed. The lowest tier has only the first stratum of inanimate things, comprising non-living material objects such as rocks. Lower animals such as insects have the first two strata of inanimate things and animate beings. Higher animals such as dogs and dolphins have the first three strata, possessing psychic consciousness though not spiritual consciousness, while even higher beings such as man, society and history have all four strata, the first three plus spiritual consciousness. Hence all things share the same bottommost stratum of inanimate things, with each higher level of actual structure containing all the lower strata in addition to its higher stratum.
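Hartmann’s inclusion rule is simple enough to state in code. The sketch below is purely illustrative (the class and names are mine, not Hartmann’s): it encodes the constraint that a being possessing a given stratum possesses every stratum below it, never the reverse.

```python
from enum import IntEnum

class Stratum(IntEnum):
    """Hartmann's four main strata, ordered from lowest to highest."""
    INANIMATE = 1  # inanimate things governed by physical processes
    ANIMATE = 2    # animate beings governed by biological processes
    PSYCHIC = 3    # psychic consciousness
    SPIRITUAL = 4  # spiritual consciousness

def strata_of(highest: Stratum) -> list[Stratum]:
    """The inclusion rule: a being whose highest stratum is `highest`
    also possesses every stratum below it."""
    return [s for s in Stratum if s <= highest]

# Levels of actual structure, each defined by its highest stratum:
print(strata_of(Stratum.INANIMATE))  # a rock possesses only INANIMATE
print(strata_of(Stratum.ANIMATE))    # a bacterium: the first two strata
print(strata_of(Stratum.PSYCHIC))    # a dog: the first three strata
print(strata_of(Stratum.SPIRITUAL))  # a human being: all four strata
```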

Computationalism

The computational theory of the mind, or computationalism, posits that the mind can be understood as a computer. This means that tasks of the mind can be analysed as algorithms, which are step-by-step procedures that can be carried out by a machine.[9] Block distinguishes between phenomenal consciousness (P-consciousness) and access consciousness (A-consciousness).[10] According to McDermott, P-consciousness is the property a computational system has if it models itself as experiencing things.[11] A self-model is a “device that a robot or a person can use to answer questions about how it interacts with the world.”[12] A-consciousness deals with cognitive capabilities such as reasoning, action and speech control, capabilities more amenable to algorithmic implementation than P-consciousness. The two interact. An example Block gives is when perceptual information that is being accessed switches from figure to ground. Such a switch will affect the phenomenal state of the perceiver and hence the experience.[13]

Comparing Computational Systems with Hartmann’s Strata

Building an AC that has ‘spirit’ will require it to achieve all four strata. Hartmann distinguishes three levels of spirit or Geist:

1) the personal spirit, representing the individual mind,

2) the objective or collective spirit exemplified by collective actions, interactions and language, and

3) the institutionalised spirit such as institutions, artefacts, laws and the arts.[14]

We will limit the discussion to ACs achieving the first level of individual consciousness, akin to the personal consciousness of an individual human being. The remaining levels may follow if a society of ACs subsequently forms.

Computers and robots, which are machines controlled by computers, meet the requirements of the first stratum of inanimate things since their microprocessors are built on silicon substrates, with the rest of the machine being a combination of plastic and metal. Computers are machines that can be programmed to compute and process information through configurations of transistors which execute logical functions. Artificial intelligence (AI) extends such computational capabilities to solve problems of the complexity of those solved by human beings, such as translating languages, making medical diagnoses or interpreting photographs.[15] AIs tend to be task-specific, while AGIs are AIs with general intelligence: they have the ability to generalise what they have learned to novel contexts.[16] AIs and AGIs are implemented via software simulations on computers or through specialised microprocessors.[17] An AC is an AGI that has consciousness. AIs, AGIs and ACs, in so far as they are computers, belong to the first stratum of inanimate things.
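The point about transistors executing logical functions can be made concrete. Every Boolean function can be composed from a single universal gate such as NAND, itself realised by a simple transistor circuit. The sketch below (function names are mine, purely for illustration) builds the familiar logical operations, and a first step towards arithmetic, from NAND alone.

```python
def nand(a: bool, b: bool) -> bool:
    """A NAND gate, the universal building block, realised in
    hardware by a simple transistor circuit."""
    return not (a and b)

# Every other logical function composes from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# A half-adder: the first step from logic to arithmetic.
def half_adder(a, b):
    return xor(a, b), and_(a, b)  # (sum bit, carry bit)

assert half_adder(True, True) == (False, True)  # 1 + 1 = 10 in binary
```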

For the second stratum of animate beings, Hartmann is referring to biological beings, which exhibit categories such as organic structures, adaptation and purposiveness, metabolism, reproduction and ‘life.’[18] Computers and robots can adapt, exhibit purposes and are even able to self-direct, as will be discussed below. However, they do not possess organic biological structures, metabolism, reproduction or organic life. Clearly, if we fixate on the raw materials comprising computers and robots, they do not fulfil the requirements of the second stratum. There are a few possible solutions. One is to use organic substrates, for instance integrating computers into a human being to create a cyborg, but that is then simply a conscious being using a computer, much in the same way I am using a computer now to type this essay, with the human being as the ‘seat’ of consciousness. Cellular organisms can in theory replace the logical operations of transistors in microprocessors, but with a massive loss of computational performance and no guarantee that this would lead us to the next stratum of psychic consciousness. Instead, a promising approach is neuromorphic systems.

A typical design of computers today is modular, with separate and autonomous subsystems taking in inputs from, and passing outputs to, other autonomous and separate subsystems. Chella and Manzotti suggest that such a system will not be able to develop P-consciousness since the totally determinate subsystems will in turn completely determine the outputs of the system.[19] More organic approaches mimic the neural networks of the human brain, stressing connectivity between the computational units of the system. The idea is to “implement networks of computational units whose behaviour is not reducible to any part of the network, but rather it stems out of the integrated information of the system as a whole.”[20]
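The contrast can be illustrated with a toy recurrent network (a minimal sketch, not a neuromorphic system; the sizes and weights are arbitrary choices of mine). Each unit’s next state depends on the states of all the others, so no unit’s behaviour can be read off in isolation, unlike a modular pipeline in which each stage is fully determined by its own input.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                              # number of computational units
W = rng.normal(scale=1 / np.sqrt(n), size=(n, n))  # all-to-all connections
state = rng.normal(size=n)                         # initial activations

def step(state: np.ndarray) -> np.ndarray:
    """Each unit's next activation is a function of every other
    unit's current activation: the behaviour of any single unit is
    not reducible to any part of the network taken alone."""
    return np.tanh(W @ state)

for _ in range(10):
    state = step(state)
print(state)
```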

This leads us to Hartmann’s third stratum of psychic consciousness, which includes categories such as “act and content” and “pleasure and displeasure.”[21] Networked computer systems might be able to exhibit emergent psychic consciousness because each neuron in the brain is not different in principle from a network node such as a software module, a chip or a computer in a network of computers. The assumption made by neuromorphic systems engineers is that while a single neuron or network node does not manifest consciousness, when many are connected the way they are connected in the human brain, consciousness will arise. Is this assumption justified? “AGI systems are part of the same physical world that produce consciousness in human subjects, so they may exploit the same properties and characteristics that are relevant for conscious experience,”[22] argue Chella and Manzotti. If the arrangement of neurons in the human brain gives rise to consciousness, then since AGIs are part of that same physical world as the brain, consciousness too should arise in a similarly networked computer system.

In addition, they cite Tononi’s information integration theory (IIT), which hypothesises that “the degree of conscious experience is related with the amount of integrated information,”[23] since the primary task of the brain is to integrate information so that conscious states are experienced as a single integrated entity. Sensory inputs from the environment can reach a robotic AGI through its body via sensors such as temperature, light, sound and tactile receptors. Taking a functional-reductionistic approach, this might address the “act and content” category, where information coming from the sense-receptors is integrated in the computer’s central processing unit (CPU) to lead to action.
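On this functional-reductionistic reading, the ‘content’ is the integrated sensor state and the ‘act’ is the behaviour it selects. A minimal sketch follows; the sensor names, thresholds and actions are hypothetical choices of mine, not drawn from any of the cited systems.

```python
from dataclasses import dataclass

@dataclass
class SensorReadings:
    """Content: one snapshot from the robot's receptors."""
    temperature: float  # degrees Celsius
    light: float        # normalised 0..1
    sound: float        # normalised 0..1
    touch: bool         # tactile contact

def integrate_and_act(r: SensorReadings) -> str:
    """Act: the decision reflects the readings taken together,
    not any single channel in isolation."""
    if r.touch and r.temperature > 60.0:
        return "retract"          # hot object in contact: withdraw
    if r.sound > 0.8 and r.light < 0.2:
        return "orient_to_sound"  # loud noise in the dark: investigate
    return "continue"

print(integrate_and_act(SensorReadings(75.0, 0.5, 0.1, True)))  # retract
```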

Mental states and consciousness concern awareness of both the internal and the external world. For internal consciousness, self-awareness is needed. Self-awareness in a computer can come in the form of a self-model, where the system models its own perceptual system to answer questions about how it interacts with the external world, with its own presence included in these self-models. Examples of such self-models are algorithms which output estimates plus an error range,[24] or algorithms with feedback loops that take their history into account to decide what to do.[25] For instance, if temperatures exceeding X degrees cause malfunction in a robot, then its temperature sensor can ‘warn’ the machine the way pain from scalding would warn a human being, corresponding to Hartmann’s “pleasure and displeasure” category. The threshold X could in theory even be discovered by the robot on its own, since, with feedback loops that account for its history, it could ‘realise’ that it has to move away should it encounter spaces exceeding X degrees, because such spaces previously caused it to malfunction. By being able to refer to itself and include that information in its decisions, it exhibits a degree of self-awareness.
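This scenario translates almost directly into code. The toy robot below is a sketch of the idea only (the class, numbers and action names are mine): a feedback loop over its own history lets it estimate the malfunction threshold X and retreat from spaces hotter than that.

```python
class SelfModelingRobot:
    """Keeps a history of (temperature, malfunctioned) events and
    uses it to estimate its own damage threshold X."""

    def __init__(self):
        self.history = []                # past (temperature, malfunctioned) pairs
        self.estimated_threshold = None  # X, initially unknown

    def record(self, temperature, malfunctioned):
        self.history.append((temperature, malfunctioned))
        # Feedback loop: revise the self-model from past malfunctions.
        failures = [t for t, bad in self.history if bad]
        if failures:
            self.estimated_threshold = min(failures)

    def decide(self, temperature):
        """Consult the self-model: spaces at or above the estimated
        threshold previously caused malfunction, so move away."""
        if self.estimated_threshold is not None and temperature >= self.estimated_threshold:
            return "move_away"
        return "proceed"

robot = SelfModelingRobot()
robot.record(40.0, False)  # a safe space
robot.record(85.0, True)   # a hot space that caused a malfunction
print(robot.decide(90.0))  # move_away: the robot 'warns' itself
```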

Is such a robot conscious? If IIT is right, then such a robot will be P-conscious, since information integration corresponds to conscious experience, but can this be considered ‘spirit’? According to Hartmann, the categories for the spirit stratum are “thought, knowledge, will, freedom, evaluation, [and] personality.”[26] A robot with historical feedback loops can be considered to have knowledge, and through its processing to demonstrate thought in making a decision. It can even be considered to have a will since it can decide to move to safety. Closely associated with freedom is creativity. Can a machine be creative? Computer programs able to compose music that is difficult to distinguish from that of a human composer have existed since 1992,[27] and AlphaGo’s move 37 against a top Go player in 2016[28] was “inventive” and “surprising.” AlphaGo was trained on past games of Go, but AlphaZero, the next version of AlphaGo, taught itself the games of chess, shogi and Go from their basic rules alone, with no historical database. “Unconstrained by the norms of human play,”[29] it has defeated other world-champion programs in these games, including AlphaGo.[30] It is based on a deep neural network with general-purpose algorithms “that know nothing about the game beyond the basic rules.”[31] While it does not possess general creativity, it would be hard to deny that it is creative in its play of those games.
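The self-play idea can be gestured at with a vastly simplified sketch: tabular value learning on a trivial take-away game, rather than AlphaZero’s deep network coupled to tree search. Everything here (the game, the learning rate, the exploration scheme) is my own illustration of learning from the rules alone, with no database of past games.

```python
import random

# A trivial game learned purely by self-play from its rules: players
# alternately remove 1-3 stones; whoever takes the last stone wins.
random.seed(0)
PILE = 10
values = {0: 0.0}  # win probability for the player to move; an empty
                   # pile means the player to move has already lost

def select_move(pile, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < explore:
        return random.choice(moves)  # occasional exploration
    # Prefer the move leaving the opponent in the worst position.
    return min(moves, key=lambda m: values.get(pile - m, 0.5))

def self_play_episode():
    pile, trajectory = PILE, []
    while pile > 0:
        trajectory.append(pile)
        pile -= select_move(pile)
    win = True  # the player who just moved took the last stone and won
    for state in reversed(trajectory):  # players alternate backwards
        v = values.get(state, 0.5)
        values[state] = v + 0.1 * ((1.0 if win else 0.0) - v)
        win = not win

for _ in range(5000):
    self_play_episode()
# Learned opening move; optimal play leaves a multiple of four (take 2).
print(select_move(PILE, explore=0.0))
```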

Conclusion

McDermott argues that “if it happens that consciousness is a computational phenomenon, then a sufficiently faithful simulation of a conscious system would be a conscious system, provided it was connected to the environment in the appropriate way.”[32] Comparing Hartmann’s four strata one by one with what is possible for AGI systems indicates that AGI can meet the requirements of each stratum, albeit with some modifications for the second stratum of animate beings. Hartmann’s New Ways of Ontology was originally published in 1949, predating most thinking on AC; the first general-purpose electronic computer, ENIAC, had been completed only in 1946. His four ontological strata, developed by careful observation of objects in the world, remain relevant to how we can understand the structure of the world. To take his ontology of emergence to the next level, we can use it to study how new systems such as AGIs are emerging towards becoming ACs, as this paper has attempted to do. AGI measures up against the requirements of the four strata, and AC is in principle possible.

In that case, we need to consider whether AC is desirable. The physicist Stephen Hawking warned that “full artificial intelligence (AI) could spell the end of the human race,”[33] alongside a chorus of voices including some of the biggest names in the technology industry, such as Bill Gates and Elon Musk.[34] Already there are calls from AI and robotics researchers to ban autonomous weapons, such autonomy being a staging post to AC. Now that we know that AC is in principle possible, we will need to consider carefully whether it is a threat or a positive transformative force for mankind, and act accordingly.

Bibliography

Block, Ned. “On a Confusion About a Function of Consciousness.” Behavioral and Brain Sciences 18, no. 2 (1995): 227–47.

Block, Ned, and Georges Rey. “Mind, Computational Theories Of.” In Routledge Encyclopedia of Philosophy, 1st ed. London: Routledge, 2016. doi: 10.4324/9780415249126-W007-1.

Boden, Margaret A. “Artificial Intelligence.” In Routledge Encyclopedia of Philosophy, 1st ed. London: Routledge, 2016. doi: 10.4324/9780415249126-W001-1.

Cellan-Jones, Rory. “Hawking: AI Could End Human Race.” BBC News, December 2, 2014, sec. Technology. https://www.bbc.com/news/technology-30290540.

Chella, Antonio, and Riccardo Manzotti. “AGI and Machine Consciousness.” In Theoretical Foundations of Artificial General Intelligence, edited by Pei Wang and Ben Goertzel, 4:263–82. Paris: Atlantis Press, 2012. doi: 10.2991/978-94-91216-62-6_14.

Deepmind. “AlphaGo: The Story So Far.” Deepmind. Accessed April 14, 2020. https://deepmind.com/research/case-studies/alphago-the-story-so-far.

Goertzel, Ben. “Artificial General Intelligence.” Scholarpedia 10, no. 11 (2015): 31847. doi: 10.4249/scholarpedia.31847.

Hartmann, Nicolai. New Ways of Ontology. Translated by Reinhard C. Kuhn. Chicago: Henry Regnery Company, 1953.

McDermott, Drew. “Artificial Intelligence and Consciousness.” 2007. http://www.cs.yale.edu/homes/dvm/papers/conscioushb.pdf.

Monroe, Don. “Chips for Artificial Intelligence.” Communications of the ACM 61 (2018). doi: 10.1145/3185523.

Sainato, Michael. “Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence.” Observer, August 19, 2015. https://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/.

Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, et al. “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go Through Self-Play.” Science 362, no. 6419 (2018): 1140–44. doi: 10.1126/science.aar6404.

Silver, David, Thomas Hubert, Julian Schrittwieser, and Demis Hassabis. “AlphaZero: Shedding New Light on Chess, Shogi, and Go,” December 6, 2018. https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go.

[1] Nicolai Hartmann, New Ways of Ontology, trans. Reinhard C. Kuhn (Chicago: Henry Regnery Company, 1953), 48.

[2] Ibid., 46.

[3] Ibid., 45.

[4] Ibid., 47.

[5] Ibid., 49.

[6] Ibid.

[7] Ibid.

[8] Ibid., 48.

[9] Ned Block and Georges Rey, “Mind, Computational Theories Of,” in Routledge Encyclopedia of Philosophy, 1st ed. (London: Routledge, 2016), doi: 10.4324/9780415249126-W007-1.

[10] Ned Block, “On a Confusion About a Function of Consciousness,” Behavioral and Brain Sciences 18, no. 2 (1995): 230.

[11] Drew McDermott, “Artificial Intelligence and Consciousness,” 2007, 19, http://www.cs.yale.edu/homes/dvm/papers/conscioushb.pdf.

[12] Ibid., 24.

[13] Block, “Confusion on Consciousness,” 231.

[14] Hartmann, New Ontology, 45, 48–49.

[15] McDermott, “AI and Consciousness,” 1; Margaret A. Boden, “Artificial Intelligence,” in Routledge Encyclopedia of Philosophy, 1st ed. (London: Routledge, 2016), doi: 10.4324/9780415249126-W001-1.

[16] Ben Goertzel, “Artificial General Intelligence,” in Scholarpedia, vol. 10, 2015, doi: 10.4249/scholarpedia.31847.

[17] Don Monroe, “Chips for Artificial Intelligence,” Communications of the ACM 61 (2018): 15, doi: 10.1145/3185523.

[18] Hartmann, New Ontology, 46, 64.

[19] In a classic argument presented by Jaegwon Kim, Mind in a Physical World (Cambridge, Mass.: MIT Press, 2000): if an object is only the particles that compose it, then its state is totally determined by the state of those particles, and states such as feelings will not be able to arise.

[20] Antonio Chella and Riccardo Manzotti, “AGI and Machine Consciousness,” in Theoretical Foundations of Artificial General Intelligence, ed. Pei Wang and Ben Goertzel, vol. 4 (Paris: Atlantis Press, 2012), 275, doi: 10.2991/978-94-91216-62-6_14.

[21] Hartmann, New Ontology, 64.

[22] Chella and Manzotti, “AGI and Machine Consciousness,” 276.

[23] Ibid.

[24] McDermott, “AI and Consciousness,” 24.

[25] Chella and Manzotti, “AGI and Machine Consciousness,” 275.

[26] Hartmann, New Ontology, 64.

[27] Chella and Manzotti, “AGI and Machine Consciousness,” 278.

[28] Deepmind, “AlphaGo: The Story So Far,” Deepmind, accessed April 14, 2020, https://deepmind.com/research/case-studies/alphago-the-story-so-far.

[29] David Silver et al., “AlphaZero: Shedding New Light on Chess, Shogi, and Go,” December 6, 2018, https://deepmind.com/blog/article/alphazero-shedding-new-light-grand-games-chess-shogi-and-go.

[30] David Silver et al., “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go Through Self-Play,” Science 362, no. 6419 (December 7, 2018): 1140–44, doi: 10.1126/science.aar6404.

[31] Silver et al., “AlphaZero.”

[32] McDermott, “AI and Consciousness,” 36, italics mine.

[33] Rory Cellan-Jones, “Hawking: AI Could End Human Race,” BBC News, December 2, 2014, sec. Technology, https://www.bbc.com/news/technology-30290540.

[34] Michael Sainato, “Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence,” Observer, August 19, 2015, https://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/.
