
Post History

Characterizing a sentient robot: inhuman PoV

posted 5y ago by Amadeus · last activity 5y ago by System

Answer
#4: Attribution notice removed by System · 2019-12-19T22:13:43Z (almost 5 years ago)
Source: https://writers.stackexchange.com/a/43947
License name: CC BY-SA 3.0
License URL: https://creativecommons.org/licenses/by-sa/3.0/
#3: Attribution notice added by System · 2019-12-08T11:29:20Z (almost 5 years ago)
Source: https://writers.stackexchange.com/a/43947
License name: CC BY-SA 3.0
License URL: https://creativecommons.org/licenses/by-sa/3.0/
#2: Initial revision by (deleted user) · 2019-12-08T11:29:20Z (almost 5 years ago)
One way this has been done is by using a human foil; perhaps somebody who doesn't trust the robot.

The way this works is that the robot presents as a caring human, somebody the reader could come to care about. But the robot will answer honestly if the foil asks how it came to a decision, and the explanation can be quite unlike anything a human would give: coldly analytical, even manipulative.

> Foil: "I'm thinking of taking a walk."
> 
> AI: "This is an excellent time to take a walk. You should wear the light tan jacket; shall I bring it?"
> 
> Foil: "Wait. Tell me why this is an excellent time to take a walk."

When queried, the AI explains: it has checked the latest satellite weather data and extrapolated it to the local area; it has checked traffic patterns and local air-quality sensors; and it is monitoring the police and city-services channels, so it knows the walk will be safe, traffic will be light, air quality is decent, and they won't encounter any trouble or obstacles. It has also been monitoring the apartment cameras: Mrs. Razwicky returned with groceries twenty minutes ago, and based on her normal schedule they should not encounter her in the halls until after four fifteen. It suggests the light tan jacket because the temperature on the walk will be three degrees below what the foil usually finds comfortable at this time of day, and that jacket will come closest to keeping him at his comfortable temperature.

You probably need only one instance of something like this: show an inhuman amount of academic and technical knowledge and thought going into a simple reply, delivered without any delay. Readers will understand, from this throwaway conversation, that this is the **norm** for this AI. Kind of like IBM's Watson (the real thing) referencing 100,000 works and tens of millions of words in a few hundred milliseconds to answer a simple Jeopardy! clue.

Then at the end of all this, your human foil can say, "Yeah, get my jacket."

Or, "I don't really feel like a walk."

Currently, at least, this is how AIs differ from humans: they process _enormous_ amounts of data and models to come up with relatively simple answers. We aren't sophisticated enough, in either measurement or understanding, to truly mimic a brain **in the same way that a brain works.** We have only a very superficial understanding of how the brain works; some researchers (like me) would say no real understanding of how it all works together. (It is like understanding 90% of the parts of a machine without understanding how they all actually come together to make the machine.)

Human brains do not work like Watson; we know that. IBM found a way to use a million times the processing power of a human brain to simulate the _results_ of a tiny part of a brain, without simulating how the brain actually does it.

That is the key to your desired alienation: the AI manages to appear human while internally doing a million or a billion times as much work as a human would to achieve the charade. And, like Watson, it can do that faster and better than any human ever could, just as computers can do arithmetic faster and better than any human ever could.

#1: Imported from external source by System · 2019-03-22T22:20:39Z (over 5 years ago)
Original score: 5