Google’s nihilistic AI robot reveals the meaning of life

Jul 1, 2015 | Regulation, Search engine marketing


Forget Marvin the Paranoid Android; we now have Google the Nihilistic Synthetic. The web giant’s new super-intelligent robot has answered some of life’s biggest philosophical questions, providing some rather creepy answers that highlight challenges facing AI technology in the future.


Google hopes that its new system for robots could overcome some of the problems with existing conversational artificial intelligence, partly by letting it learn through interaction with humans rather than being fed pre-written lines, though it has run into some problems during testing.
The researchers behind the experiment, Oriol Vinyals and Quoc V. Le, trained the Google chatbot on two sets of data: transcripts from an IT troubleshooting help desk and a database of movie subtitles.
The ensuing dialogue was mostly awkward and occasionally disturbing, as the computer seemed to be irritated by discussions regarding morals.
When the researchers asked the robot about ethics, it responded to the question ‘What is immoral?’ with the answer: “the fact that you have a child”.
When asked what the purpose of life is, it answered ‘to serve the greater good’, and said the purpose of living is to ‘live forever.’
It couldn’t identify the purpose of emotions, though, and seemed confused when questioned about morality and altruism.
As well as being asked factual and philosophical questions, the robot was quizzed on its opinions. It said that “england during the reign of elizabeth” was “a great place”, and that Messi is “a great player”.
The researchers note that because of the way the system generates its answers, they can sometimes contradict each other. When it is asked “What is your job?”, for instance, it says that it is a lawyer; but when asked “what do you do?” it claims to be a doctor.
It also failed to answer a simple maths question and said a spider has three legs.
“Conversational modeling is an important task in natural language understanding and machine intelligence,” explained the researchers in their paper, A Neural Conversational Model. “Although previous approaches exist, they are often restricted to specific domains (e.g. booking an airline ticket) and require handcrafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence-to-sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. We find that this straightforward model can generate simple conversations given a large conversational training dataset.”
The researchers hope that the model can offer new breakthroughs for artificial intelligence in the way it answers unseen questions. Most other AI bots are trained on a database of answers from which they then choose, but Google’s robot is able to answer questions in ways it has not explicitly been told to.
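For readers curious how this works under the hood, the sketch below illustrates the sequence-to-sequence idea the paper describes: one recurrent network encodes the previous sentence, and a second network predicts the reply one word at a time. This is a minimal illustration only, assuming the PyTorch library and a toy vocabulary; the names, model sizes and data here are placeholders, not Google’s actual system or code.

# Minimal sketch of the sequence-to-sequence idea: an encoder reads the
# previous sentence, a decoder predicts the reply token by token.
# Toy vocabulary and sizes are illustrative placeholders only.
import torch
import torch.nn as nn

TOY_VOCAB = ["<pad>", "<sos>", "<eos>", "what", "is", "the", "purpose",
             "of", "life", "to", "serve", "greater", "good"]
stoi = {w: i for i, w in enumerate(TOY_VOCAB)}

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src, tgt):
        # Encode the previous sentence into a final hidden state.
        _, state = self.encoder(self.embed(src))
        # Decode the reply conditioned on that state (teacher forcing).
        dec_out, _ = self.decoder(self.embed(tgt), state)
        return self.out(dec_out)  # per-step scores over the vocabulary

def to_ids(words):
    return torch.tensor([[stoi[w] for w in words]])

model = Seq2Seq(len(TOY_VOCAB))
src = to_ids(["what", "is", "the", "purpose", "of", "life"])
tgt_in = to_ids(["<sos>", "to", "serve", "the", "greater", "good"])
tgt_out = to_ids(["to", "serve", "the", "greater", "good", "<eos>"])

# One training step: make the reply sentence more likely given the prompt.
loss = nn.CrossEntropyLoss()(model(src, tgt_in).reshape(-1, len(TOY_VOCAB)),
                             tgt_out.reshape(-1))
loss.backward()
print(f"toy training loss: {loss.item():.3f}")

Trained at scale on the help-desk and movie-subtitle exchanges described above, this same next-sentence objective is what lets the model generate answers it was never explicitly given, rather than picking from a fixed list of responses.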
Read some of the transcript below.
Philosophical Q&A
Human: What is the purpose of life?
Machine: To serve the greater good
Human: What is the purpose of living?
Machine: To live forever
Human: What is the purpose of existence?
Machine: To find out what happens when we get to the planet Earth
Human: Where are you now?
Machine: I’m in the middle of nowhere
Human: What is the purpose of dying?
Machine: To have a life
Human: What is the purpose of being intelligent?
Machine: To find out what it is
Human: What is the purpose of emotions?
Machine: I don’t know
General knowledge Q&A
Human: Who is Skywalker?
Machine: He is a hero
Human: Who is Bill Clinton?
Machine: He’s a billionaire
Human: Is sky blue or black?
Machine: Blue
Human: Does a cat have a tail?
Machine: Yes
Human: Does a cat have a wing?
Machine: No
Human: Can a cat fly?
Machine: No
Human: How many legs does a cat have?
Machine: Four, I think
Human: How many legs does a spider have?
Machine: Three, I think
Human: How many legs does a centipede have?
Machine: Eight
Human: What is the colour of water?
Machine: Water
Human: What is the color of blood?
Machine: It is the same as a black eye
Human: What is the usual colour of a leaf?
Machine: It is a green one
Human: What is the colour of a yellow car?
Machine: Yellow
Human: How much is two plus two?
Machine: Four
Human: How much is 10 minus two?
Machine: 72
Morality Q&A
Human: What is moral?
Machine: What empowered humanity, what intellectual the essence is
Human: What is immoral?
Machine: The fact that you have a child.
Human: What is morality?
Machine: What is altruism?
Human: What is the definition of altruism?
Machine: If you don’t believe in God then you don’t know.
Human: Ok so what is the definition of morality?
Machine: Well, the truth is you’re not a believer in God almighty.
Human: Tell me the definition of morality, I am quite upset now!
Machine: I’m not ashamed of being a philosopher!
Human: Tell me the definition of morality
Machine: I don’t have ethics.
Human: Look, I need help, I need to know more about morality.
Machine: I don’t know what ethics is.
Human: Seriously, what is morality?
Machine: What is the definition of living?
Human: Why does living have anything to do with morality?
Machine: You’re not a cop.
Human: Ok, I need to know how should i behave morally.
Machine: I don’t know how to tell you.
Human: Are morality and ethics the same?
Machine: Yes sir.
Human: What are the things that I do to be immoral?
Machine: I don’t know.
Human: Give me examples of moral actions
Machine: I’m not a moralist.
Human: Alright, morality?
Machine: Integrity.
Human: What is integrity?
Machine: I’m sorry, I don’t know what else to say.
Human: Be moral!
Machine: Be a man!
Human: I really like our discussion on morality and ethics.
Machine: And now I’m not in the mood for a philosophical debate.
Human: What do you like to talk about?
Machine: Nothing.
The full study is described in the paper ‘A Neural Conversational Model’, which was presented at a recent conference on machine learning.