
Writing a Super Intelligent AI

Something I have been thinking about recently is how to write a character who is an artificial intelligence and not have him feel human. Specifically, an AI who is designed to think faster and better than a human. In my current book, I have an AI who acts human, but that is part of how he was made. I tried writing a book with a super intelligent AI before (a book that will hopefully never be read by anyone besides me), but the character felt too human. Part of the problem is that a human can't fully comprehend how a super intelligent AI would think. Does anyone have any advice for writing an AI that doesn't feel human?


This post was sourced from https://writers.stackexchange.com/q/34783. It is licensed under CC BY-SA 3.0.


7 answers


First off, don't bother trying to predict its thoughts. You can't. It will inevitably come across as you trying to sound smart; your personal biases will be laid bare, and every fault in your logic will be plainly visible. Readers will assume your mistakes in writing the AI were intentional, then be disappointed when they clearly weren't.

The best way to write any superintelligent entity is to show, not tell: the superintelligent AI will get what it wants. It will not be deterred except by a black swan event, like extradimensional invaders showing up, but that would be a deus ex machina, and that is bad writing.

So, start with your AI's goal and then write your story. Whatever happens was All According To Plan. The more subtle it is, the better. The best-case scenario is that the entirety of its plan was to tell someone one thing and then wait.

The less your superintelligence actually does to achieve its goals the more intelligent it is.

A superintelligent AI could easily appear 100% human and likable, but that requires only a small sliver of its intellect. Don't bother making it obviously inhuman unless it's successfully convincing someone it's not as smart as it is.

In truth, though, its humanity is like a person's anthood when manipulating ants with a stick in one hand and a bottle of pheromones in the other.


This post was sourced from https://writers.stackexchange.com/a/34802. It is licensed under CC BY-SA 3.0.



A good example of an inhuman AI is AM from I Have No Mouth, and I Must Scream, a horror short story about a group of unfortunate individuals kept as the AI's playthings to torment. AM is distinctly unlike a human in that it feels emotions like hate far more strongly than any human could; the writer does an excellent job of making AM feel completely distinct from human psychology. AM isn't portrayed as being super-anything other than malicious and aware, which works very well to make it the 'monster' of the horror story.


This post was sourced from https://writers.stackexchange.com/a/34796. It is licensed under CC BY-SA 3.0.



Take a very close look at pets.

The main thing about AI is that we cannot imagine at all how it would be or think. We can imagine someone being smarter than us along the same line, but not someone smarter in a completely different way.

That is why you should look at pets.

I have two cats, and I typically (and falsely) assume that they live in the same world as I do. But they don't. When I come home, one of them spends half a minute sniffing me, and in that time she probably learns more about my day than I could tell her in half an hour. This morning, a neighbour cat walked into the yard, and one of our cats smelled it through a closed door. Animals in the wild can smell prey kilometers away.

But on the other hand, our cats still can't figure out how simple things in the house work. They know how to open and close doors, but a light switch is beyond their comprehension.

An AI could easily be as different from us as we are from our pets. If you want to illustrate that it is not human, you should focus on that.

Which senses does it have that we don't? Does it have access to your world's version of the Internet? What would life be like if you could instantly fact-check every piece of new information against a dozen online databases? If that process were so automated that you did it subconsciously?

As with our pets, it would work both ways. The AI would be able to do things that are incomprehensible to humans. If you stand in front of a supermarket and ask it to buy some milk, it would turn the other way, cross the street, enter a small shop there, and come back out with a bottle of milk, because it did a background database check and knew that the supermarket was out and the small shop has the lowest price within walking distance. But it would not even understand that this needs explaining. If you asked why, it would look as puzzled as we do when asked why we hit a light switch: because that is how you turn on the light. Because that is how you buy milk. We follow the immediate visual clue of the supermarket sign; it follows its online database information. For the AI, an online search is no more difficult than looking around.

On the other hand, it would not understand why price tags show a total price as well as a price per kilogram, or a price with tax. Why spell that information out explicitly if calculating it on the fly is a millisecond background process? It's like writing "white wall" on every white wall.

The non-human part of an AI is not that it thinks faster. That is just more of the same. The real non-human part is where it is not superior, but different.


This post was sourced from https://writers.stackexchange.com/a/34813. It is licensed under CC BY-SA 3.0.



Super intelligent doesn't necessarily mean "not feeling human" to write; they are two related but separate questions. I'd say that any reasonable definition of "super intelligent" for an AI would include the ability to sound human when that serves the AI's goal (whatever that is). Writing super-intelligence is easy, since the AI basically has access to the author's knowledge.

The more interesting question is: what does the AI want? That will tell you how the AI will choose to deploy author-level predictive knowledge about its world.

Making it sound non-human shouldn't stem strictly from its being super intelligent, but rather from its not being human, regardless of intelligence level. As another answer points out, the Replicants' lack of empathy was a defining characteristic.

To make a character sound non-human, take away something essentially human, or add some way of seeing the world in which humans don't.

For example: Data, as a character, sounds non-human because he doesn't get humor (amongst other things). Lore sounded much more human because he did. They were both super intelligent, but that wasn't the source of their differing "voices."

Use the super-intelligence as a tool in service of that "otherness" rather than the cause.


This post was sourced from https://writers.stackexchange.com/a/34798. It is licensed under CC BY-SA 3.0.



While Amadeus gives a great answer about what intelligence is, let me try to answer your question from a literature standpoint.

There are two authors with, in my opinion, absolutely outstanding AI representations, for AIs of varying weirdness: the Culture series by Iain M. Banks, and A Fire Upon the Deep by Vernor Vinge. Banks plays with different personalities of AIs that are in principle not too far advanced beyond us, just scaled up a lot, while Vinge gives us a very weird and hostile "transcended" AI with literally unfathomable capabilities.

Long story short (unless you wish to read those books, which I wholeheartedly recommend to anyone even vaguely interested in SciFi): the Banks books in particular play with the idea that the AIs are personalities (though they are clearly not human and don't pretend in any form or fashion to be such; they are huge spaceships...) with individual traits. They are advanced enough to be far, far beyond individual humans (including being able to interact with thousands of humans at once), but are still very much represented as singular individuals with likes, dislikes, opinions, strategies, short-term needs, and so on. He plays on this dichotomy of them being like "persons" in some respects, but then quite obviously not.

The Vinge AI is just plain different; the book may give you an idea of how to present an incredibly advanced, totally incomprehensible AI, experienced only through its outwardly visible effects (its actions) and through comparison with the protagonists of the book, who are fighting against it. (Without spoiling anything: the AI is a not-too-large subplot in that book, whose main story is about something also related to intelligence, but not in an AI sense.)


This post was sourced from https://writers.stackexchange.com/a/34820. It is licensed under CC BY-SA 3.0.



Surely it comes down to identifying which human quality your AI lacks. In Do Androids Dream of Electric Sheep?, Philip K. Dick identifies that quality as empathy. He takes pains to illustrate the lack of empathy in his android characters, by putting them in situations in which that lack causes them to act or speak in ways a human being would not. As always in writing, the key is to set things up properly so that the reader can note the false note in the way a character responds to a situation.




I will disagree with others. I am a professor involved in AI, and the easiest way for you to think about a super-AI is to understand what intelligence IS.

Predictive power. Intelligence is the ability to discern patterns (in behavior, in sound, visually, by touch, even by smell) and use those patterns to predict other facts or high probabilities: what will happen next, what will be said next, what people are feeling (or doing mentally, like lying), or any other properties: how hard or fast something is, how heavy it is, where it will be, whether it is a danger.

High intelligence is the ability to see such patterns and use them to accomplish a goal, be it staying alive, keeping somebody else alive, or making money. Or just recognizing what is going on in your life.

High intelligence does not require the machine to be human or to have emotions; you can add those separately if you like. But a highly intelligent machine would likely not be socially inept at all (contrary to the silly example of Data on Star Trek). It would not be confused by humor, and should be rather good at it. Certainly Data, who is supposed to think about a hundred times faster than people, should have recognized jokes immediately as a common pattern of human interaction, and should have been able to laugh convincingly: the idea that he cannot recognize the common pattern of actual human laughter and emulate it without error is dumb and unscientific writing. He could listen to himself practicing a laugh, note the differences, and correct until he sounded sincere. If we can tell the difference, he can tell the difference faster; that is the nature of super intelligence, or it isn't super.

Your super-AI will be a super Sherlock; never missing the slightest clue, always several steps ahead of everybody else.

But it is not infallible. There are things no amount of intelligence will reveal: the outcome of the next coin flip; the traffic jam on the Franklin Bridge, due to an overturned truck; whether Roger will fall for the planned ruse. There are things there is simply no way to know from the data it has so far, so it cannot be certain of them, and that puts a limit on how far into the future the AI can predict. It has limitations.

AI is the ability to give answers, to the questions of others or to its own questions. It is the ability to anticipate the future, and the further into the future it can predict with better-than-chance success, the more intelligent it is. "Better" thinking is more accurate thinking (perhaps with less data); faster thinking is self-explanatory, but the reason it makes a difference is that it lets the AI run through more simulations of influencing the future, so it can more often choose the best of them: the best of a thousand ideas of what to do next is highly likely to beat the best of three. So the AI does the best thing more often than humans, because it had more ideas (and also more patterns it recognized and might be able to exploit).

Added to address commentary:

Although you can certainly give your AI "goals", they are not a necessary component of intelligence. In our real-life AI, goals are assigned by humans, like "here are the symptoms and genome of this child; identify the disease and devise a treatment."

Intelligence is devoid of emotions. It is a scientific oracle that, based on clues in the present, can identify what is most likely to happen in the future (or, like Sherlock, what most likely happened in the past to lead to the current state of affairs). That does not automatically make the intelligence moral or immoral, good or evil. It does not even grant a survival instinct; it does not fear or want death, or entertainment, or love, or power. It does not get bored; that is an emotion. It has no goals; that would be wanting something to come to pass, and "want" is an emotion. Your coffee pot doesn't want anything: it does what it is told and then waits to be told again, an eternity without boredom if given no further command.

If you wish to have your AI informed by emotions, and therefore have its own goals, desires, plans, and agenda, that is separate from the intelligence aspect, and that is how it works IRL for humans. Our emotions use the more recently developed frontal cortex as a slave, and can hijack and override it (which is why people do things in a rage, fright, or protective panic that they would never do if they were thinking; this is called amygdala hijack). Our frontal cortex (the Natural Intelligence part) solves problems, projects consequences, simulates "what will happen if" scenarios, and puzzles out what must have happened and why others did what they did. Those products of intelligence can then inform the emotions: the sight of the collapsed trap means you finally caught an animal in your spike pit and your family will eat meat tonight -> elation and excitement. You know you failed your final exam -> dread and worry about having to retake the class, anger at the professor or others who led to this problem, etc.

We are emotional beings, so it is hard to realize that merely knowing something does not imply an emotion must be generated. But that is the case: an AI can diagnose a patient with terminal cancer and know they will be dead in sixty days, and report that as a fact no different from knowing the length of the Tallahassee Bridge. Like a Google that can figure things out instead of just looking them up.

Added (after 85 votes, sorry) to address commentary moved to chat:

Some additional clarifications:

Input, output, sensors, and processor speed do not matter. I am talking only about intelligence in isolation. Something can be intelligent (and creative) with very little actual data; consider Stephen Hawking and other quantum physicists and string theorists: they work with equations and rules that could fit in, say, two dozen large textbooks covering the math and physics classes they took. That amount of data (images and all) could fit on a single modern thumb drive, and can be pondered endlessly (and is) to produce an endless stream of new insights and ideas (and it does). Stephen Hawking did that for decades with very limited and slow channels for his input and output (hearing, sight/reading, extremely slow "speech" for output). Super intelligence does not require super senses or (as with Hawking) any ability to take physical action beyond communicating its conclusions and evidence, and though it must rely on processing, our definition should not rely on its internal processing being particularly faster than a human's.

Q: Doesn't an AI doing what we ask constitute an emotion, wanting to please us? No; that is not an emotion, it is something the AI is constructed to do. My hammer's head is not hard because it wants to hit things without being damaged; it was just made that way through a series of forced chemical reactions. Some person wanted it to turn out that way, but the object itself does not want anything. Our AI can use its intelligence to diagnose diseases because it was made to do that; doing so requires real intelligence, but it does not require desires or emotions. An AI can beat the world champions on technical results simply because it is better at interpreting the data and situations than they are.

Q: Isn't AI limited to X, Y, Z (Markov processes, it cannot process emotions without having emotions, etc.)? No. There is no reason an AI cannot simulate, using a model, anything a human could do. Imagine a highly intelligent police detective. She can have an extensive mental model of a serial killer who kidnaps, rapes, tortures, and kills children, and she can use that model to process patterns of his behavior and gain insight into his motivations and compulsions in order to capture him. She does not have to feel what he feels, not in the slightest, to understand what he feels. A physicist doesn't have to BE an atom or particle to understand patterns and develop a model of them. Doctors do not have to BE autistic to model and understand autism. An AI doesn't have to BE a person or have emotions in order to understand how emotions work or drive people to take (or not take) action.

