I propose to consider the question, ‘do we want machines to think?’
In 1950, Alan Turing opened his paper Computing Machinery and Intelligence with the question: “I propose to consider the question, ‘Can machines think?’” The brilliant mathematician and computer scientist then goes on to describe ‘the imitation game,’ now famous as the Turing Test, which was designed to determine, through a series of conversations, whether a machine can think like a human being. If the human observer cannot tell whether they are talking to another human being or a machine, the machine has passed the test.
Sadly, the computers of the ‘40s and ‘50s did not allow Turing’s ideas to be fruitfully tested. Turing did, however, write one of the earliest chess algorithms, though ironically he had to take on the role of the computer to execute it. Not surprisingly, each move took him about 30 minutes to calculate. Although the algorithm achieved some victories, it lost the only recorded match.
Artificial Intelligence (AI) has progressed immensely over the last few decades, and not only due to Turing’s legacy. IBM’s Watson, Google DeepMind’s AlphaGo and, more recently, Carnegie Mellon University’s Libratus have all dethroned human champions at notoriously difficult games.
Systems like these can safely be regarded as the current pinnacle of AI, as they require not just raw number crunching, but also innovative, complex, humanlike reasoning and calculations under uncertainty.
Nevertheless, they would never pass the Turing Test. Their superhuman performance in tactical play does not make up for their lack of a more general intelligence, which is why these systems would easily be outed in a conversation. Even other contemporary AI systems such as Zo and the more infamous Tay, which are specifically designed for this type of conversation, still generally produce incoherent rubbish instead of humanlike dialogue. Does this current lack of sentient AI tell us anything about the future?
Is human reasoning unique?
Not necessarily. Many scientists used to ascribe exclusive capabilities to human intelligence. Over the years, however, more and more of these supposedly unique capabilities have been simulated by computer systems, to such an extent that people now propose consciousness is next. In scientific circles, consciousness is regarded by many as a phenomenon that exists on a continuous scale, where it is the complexity of the information processing that gives rise to different types of consciousness. In that sense, ants and cats also exhibit a form of consciousness, although it manifests itself differently. Although we are all aware of the magnificent complexity of the animal brain, the same principle may apply to machines.
Since our brain depends on our body to provide it with a continuous flow of sensory information, Hans Moravec proposed that embodied AI systems are much more likely to give rise to the true AI that Turing once dreamt of. Instead of relying only on symbolic reasoning provided by human programmers in the form of computer language, AI systems would use their ‘senses’ to gather information from the world, just as we biological beings do by moving around and interacting with it.
So, can such machines think? Futurists like Ray Kurzweil sure as hell think so. Kurzweil even expects them to do so by the year 2029. As long as the complexity of the information system is sufficient and based on a bottom-up self-organizing approach, this idea seems plausible.
A more interesting question would be: do we want our embodied AI systems to think like we do? Do we want our Roombas, our Atlas robots, our RealDolls to start thinking for themselves?
A curious voice in our minds might let out an enthusiastic ‘sure, why not?’ To create beings as intelligent as ourselves would imply that we have become some type of god: designers of consciousness. Besides, wouldn’t it be cool to have a casual conversation about politics with your vacuum cleaner?
This inherent human curiosity has been the driving force behind scientific and technological progress, and AI will be no different. Even though we have no sentient AI yet, decades of science-fiction literature and AI research reflect certain human desires and fears about how we envision this future technology. The course of progress is heavily influenced by these fears and fantasies, and we may be bound to pursue sentient AI simply because it fascinates many of us, regardless of its actual functionality or beneficence.
Sam Harris thinks sentience would not make our AI systems more functional, and considers it undesirable to bestow any human qualities on AI at all. In contrast, there are those who think that building empathy and other humanlike qualities into our intelligent systems is a good idea: empathic AI systems would be better attuned to the needs of humans and would keep humans from harm.
This notion seems rather absurd, as empathy is strongly subjective. In moments of conflict, like war, humans commit horrible atrocities with the same brains that allow them to feel empathy towards their peers. This raises the question of whether empathy is desirable in a self-learning, self-organizing system. Do we want our AI to become, through its experiences, more empathic towards some than others?
It might be more desirable for AIs to continue to complement us, not copy us.
The algorithms that run our current systems don’t need to think consciously to come up with better solutions than ours. Why, then, would we try to build in our unique human qualities, at the risk of adding our flaws, when we often use current AI technologies to overcome those very flaws?
Our flaws are AI flaws
Our human flaws and quirks are not to be underestimated. Although the better angels of our nature are winning ground, there are still plenty of demons out there. As we speak, millions suffer due to massive inequalities in wealth, famine and war. There are active rings of forced sex labour and human trafficking. Do we want to bring sentient AI into that mix, when it might be especially prone to abuse? Many science-fiction novels, films and video games have already flirted with the ethical and philosophical issues we would impose on beings made sentient by us.
Without proper regulations and laws in the early developmental stages, when we are not entirely sure whether there is sentience, we are treading along the uncanny valley, and abuse is bound to happen. Although these beings might not be exactly like humans, if they are sentient, it could unfold as a new chapter of slavery in human history.
It could also go the other way around. Although machines could be the victims of malicious human behaviour, humans could equally end up on the receiving end. Sentience naturally brings with it personal goals, desires and views of the future, free to develop on their own. It may very well be that the desires and goals of the thinking machine are incompatible with our own, as Nick Bostrom demonstrates in his book Superintelligence.
Whether or not we want our AI systems to become sentient remains a scientifically interesting question. Without proper regulations and AI rights, ‘no’ seems the most logical answer. What’s more, society doesn’t even need machines to think. We are at a stage in civilization where we can solve enormous problems with sentient human beings, merely assisted by non-sentient AI. The number of societal, ethical and legal problems that sentient AI would produce is simply insurmountable, especially in the face of the problems that many sentient humans and animals still face every day.
Yes, Turing, machines can probably think. We simply shouldn’t let them.
Editor: Ruben Boyd
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Oosten, C. v. (1966). De computer: Een fenomeen van deze tijd. ‘s-Hertogenbosch: Malmberg.
Kurzweil, R. (2012). How to create a mind: The secret of human thought revealed. Penguin.
Moravec, H. (1988). Mind children: The future of robot and human intelligence. Harvard University Press.
Moravec, H. (1976). The role of raw power in intelligence. Unpublished manuscript, Stanford, CA. Retrieved from http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1975/Raw.Power.html
Buttazzo, G. (2001). Artificial consciousness: Utopia or real possibility? Computer, 34(7), 24–30.
Shanahan, M. (2015). The technological singularity. MIT Press.
Ellison, H. (1967). I have no mouth, and I must scream. Galaxy Publishing Corp. Retrieved from http://hermiene.net/short-stories/i_have_no_mouth.html
Dick, P. K. (1968). Do androids dream of electric sheep? Doubleday.
2001: A Space Odyssey (1968). http://www.imdb.com/title/tt0062622/
Ex Machina (2015). http://www.imdb.com/title/tt0470752/
Detroit: Become Human (TBA). https://en.wikipedia.org/wiki/Detroit:_Become_Human
Deus Ex: Mankind Divided (2016).
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. OUP Oxford.