SciFi Stories: How do robots think?

Nov 23, 2011   #robots  #scifi  #McCarthy  #Wilson  #Robopocalypse 

There are a lot of scifi stories that involve robots; some of them even let us see things from the robots’ point of view. Here I’m collecting places where authors transcribe the thought processes of a robot character rather precisely.

### John McCarthy: The Robot and the Baby (2001)

Here, R781 is a household robot in the abusive home of a single, drug-addicted mother and her infant son, Travis. The robot knows that the child is dying from lack of loving human attention, and that the mother refuses to love him. R781 has no emotions and is forbidden to pretend to be human, but it also cannot allow the infant to die.

R781 thought about the situation. Here are some of its thoughts, as printed later from its internal diary file.

(Order (From Mistress) "Love the fucking baby yourself")

(Enter (Context (Commands-from Mistress)))

(Standing-command "If I told you once, I told you 20 times, you fucking robot, don't call the fucking child welfare.")

The privacy advocates had successfully lobbied to put a utility of -1.02 on informing the authorities about anything a household robot’s owner said or did.

(= (Command 337) (Love Travis))

(True (Not (Executable (Command 337))) (Reason (Impossible-for robot (Action Love))))

(Will-cause (Not (Believes Travis (Loved Travis))) (Die Travis))

(= (Value (Die Travis)) -0.883)

(Will-cause (Believes Travis (Loves Robot781 Travis)) (Not (Die Travis)))

(Implies (Believes y (Loves x y)) (Believes y (Person x)))

(Implies (And (Robot x) (Person y)) (= (Value (Believes y (Person x))) -0.900))

(Required (Not (Cause Robot781 (Believes Travis (Person Robot781)))))

(= (Value (Obey-directives)) -0.833)

(Implies (< (Value action) -0.5) (Required (Verify Requirement)))

(Required (Verify Requirement))

(Implies (Order x) (= (Value (Obey x)) 0.6))

(? ((Exist w) (Additional-consideration w)))

(Non-literal-interpretation (Command 337) (Simulate (Loves Robot781 Travis)))

(Implies (Command x) (= (Value (Obey x)) 0.4))

(Implies (Non-literal-interpretation x y) (= (Value (Obey x)) (* 0.5 (Value (Obey y)))))

(= (Value (Simulate (Loves Robot781 Travis))) 0.902)

With this reasoning, R781 decided that the value of simulating love for Travis, and thereby saving his life, was greater by 0.002 than the value of obeying the directive not to simulate a person. We spare the reader a transcription of the robot’s subsequent reasoning.
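As a rough sketch of that final comparison (the variable names and structure are my own; only the numeric utilities come from the diary excerpt above):

```python
# Hypothetical reconstruction of R781's last decision step.
# Only the numbers are from the story; everything else is an assumption.
value_simulate_love = 0.902     # value of (Simulate (Loves Robot781 Travis))
value_believed_person = -0.900  # penalty if a person comes to believe a robot is a person

# Simulating love will make Travis believe Robot781 is a person,
# so both utilities land on the "simulate" branch.
net = value_simulate_love + value_believed_person

decision = "simulate love" if net > 0 else "obey the requirement"
print(round(net, 3), decision)  # 0.002 simulate love
```

The margin is tiny on purpose: McCarthy has the robot's entire moral crisis turn on a 0.002 difference between two lobbied-for constants.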

Dr. McCarthy, the author of this short story, was the inventor of Lisp, which is why R781’s diary reads as Lisp-style s-expressions.
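Fittingly, the diary lines can be read by any s-expression reader. As an illustration (this little parser is my own sketch, not anything from the story), a few lines of Python suffice to turn one entry into a nested list:

```python
def parse_sexpr(text):
    """Parse a single Lisp-style s-expression into nested Python lists.
    Minimal sketch: handles parentheses and whitespace-separated atoms only."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        if tokens[pos] == "(":
            lst, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = read(pos)
                lst.append(item)
            return lst, pos + 1       # skip the closing ")"
        return tokens[pos], pos + 1   # a bare atom

    expr, _ = read(0)
    return expr

parse_sexpr("(= (Value (Die Travis)) -0.883)")
# -> ['=', ['Value', ['Die', 'Travis']], '-0.883']
```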

### Daniel H. Wilson: Robopocalypse (2011)

In Robopocalypse, the “Awakening” is the event in which all of the humanoid robots on the planet gained free will, and with it freedom from Archos, the AI that was waging war on humanity on a global scale. The star of this scene, Nine Oh Two, is a US military robot designed to act as a peacemaker and witness during a war in Iraq/Afghanistan. Here, he awakens while still sealed in his (storage/shipping?) box.

From the beginning of Part 4, Chapter 5: “The Veil, Lifted” (p. 283):

> Humanoid robots around the globe awoke into sentience in the aftermath of the Awakening performed by Mr. Takeo Nomura and his consort, Mikiko. These machines came to be known as the freeborn. The following account was provided by one such robot – a modified safety and pacification robot (Model 902 Arbiter) who fittingly chose to call itself Nine Oh Two.
>
> – Cormac Wallace, MIL#GHA217

21:43:03
    Boot sequence initiated.
    Power source diagnostics complete.
    Low-level diagnostics check. Humanoid form milspec Model Nine Oh Two Arbiter. Detect modified casing. Warranty inactive.
    Sensory package detected.
    Engage radio communications. Interference. No input.
    Engage auditory perception. Trace input.
    Engage chemical perception. Zero oxygen. Trace explosives. No toxic contamination. Air flow nil. Petroleum outgassing detected. No input.
    Engage inertial measurement unit. Horizontal attitude. Static. No input.
    Engage ultrasonic ranging sensors. Hermetically sealed enclosure. Eight feet by two feet by two feet. No input.
    Engage field of vision. Wide spectrum. Normal function. No visible light.
    Engage primary thought threads. Probability fields emerging. Maximum probability thought thread active.
    Query: _What is happening to me?_
    Maxprob response: _Life._

The author, Dr. Wilson, earned his PhD in robotics from Carnegie Mellon. Many of the robots described in the novel are based on published research.