
Post History

Q&A: Writing a Super Intelligent AI

posted 6y ago by Amadeus · last activity 4y ago by System

Answer
#4: Attribution notice removed by System · 2019-12-19T22:13:22Z (over 4 years ago)
Source: https://writers.stackexchange.com/a/34797
License name: CC BY-SA 3.0
License URL: https://creativecommons.org/licenses/by-sa/3.0/
#3: Attribution notice added by System · 2019-12-08T08:28:29Z (over 4 years ago)
Source: https://writers.stackexchange.com/a/34797
License name: CC BY-SA 3.0
License URL: https://creativecommons.org/licenses/by-sa/3.0/
#2: Initial revision by (deleted user) · 2019-12-08T08:28:29Z (over 4 years ago)
I will disagree with others. I am a professor involved in AI, and the easiest way for you to think about a super-AI is to understand what Intelligence IS.

Predictive power. Intelligence is the ability to discern patterns (in behavior, in sound, in sight, in touch, even in smell) and use those patterns to predict other facts, or at least high probabilities: what will happen next, what will be said next, what people are feeling (or doing mentally, like lying), or any other properties: how hard or fast something is, how heavy it is, where it will be, whether it is a danger.

High intelligence is the ability to see such patterns and use them to accomplish a goal, be it staying alive, keeping somebody else alive, or making money. Or just recognizing what is going on in your life.

High intelligence does not require that the machine be human or have emotions; you can add those separately if you like. But a highly intelligent machine would likely not be socially inept at all (contrary to the silly example of Data on Star Trek). It would not be confused by humor, and should be rather good at it. Certainly Data, who is supposed to think about a hundred times faster than people, should have recognized jokes immediately as a common pattern of human interaction, and should have been able to laugh convincingly: the idea that he cannot recognize the common pattern of actual human laughing, and emulate it without error, is dumb and unscientific writing. He could listen **_to himself_** practicing a laugh, note the differences, and correct until he sounded sincere: if **_we_** can tell the difference, **_he can tell the difference faster_**; that is the nature of super intelligence, or it isn't super.

Your super-AI will be a super Sherlock; never missing the slightest clue, always several steps ahead of everybody else.

But it is not infallible. There are things no amount of intelligence will reveal: the outcome of the next coin flip; the traffic jam on the Franklin Bridge, due to an overturned truck; whether Roger will fall for the planned ruse. Some things simply cannot be known from the data it has so far, so it cannot be certain of them, and that puts a limit on how far into the future the AI **can** predict things. It has limitations.

AI is the ability to give answers, whether to the questions of others or to its own questions. It is the ability to anticipate the future, and the further into the future it can predict with success better than random chance, the more intelligent it is. "Better thinking" is more **_accurate_** thinking (perhaps with less data); faster thinking is self-explanatory, but the reason it makes a difference is that it lets the AI run through more simulations of influencing the future, so it can more often choose the best of those: the best of a thousand ideas of what to do next is highly likely to be better than the best of three. So the AI **does the best thing** more often than humans, because it has more ideas (and also because it has recognized more patterns it might be able to exploit).
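
To make that best-of-N point concrete, here is a minimal Python sketch (mine, not part of the original answer) under a toy assumption: each candidate idea gets a uniformly random quality score between 0 and 1, and we measure the average quality of the best candidate as the batch grows.

```python
import random

def best_of(n_ideas: int, trials: int = 10_000) -> float:
    """Average quality of the best idea out of n_ideas random candidates.

    Quality scores are assumed uniform on [0, 1], purely for illustration.
    """
    total = 0.0
    for _ in range(trials):
        # Generate n_ideas candidate "plans" and keep only the best one.
        total += max(random.random() for _ in range(n_ideas))
    return total / trials

if __name__ == "__main__":
    for n in (3, 10, 100, 1000):
        print(f"best of {n:>4} ideas: average quality ~ {best_of(n):.3f}")
```

Under this assumption the expected best of n scores is n/(n+1), so the best of 3 ideas averages about 0.75 while the best of 1000 averages about 0.999: a faster thinker that evaluates more candidate futures picks a better action more often, which is exactly the claim above.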

**Added to address commentary:**

Although you can certainly give your AI "goals", they are not a necessary component of "Intelligence". In our real-life AI, goals are assigned by humans, like "here are the symptoms and genome of this child; identify the disease and devise a treatment."

Intelligence is devoid of emotions. It is a scientific oracle that, based on clues in the present, can identify what is most likely to happen in the future (or, like Sherlock, what most likely happened in the past to lead to the current state of affairs). That does not automatically make the Intelligence moral or immoral, good or evil. It does not even grant a sense of survival; it does not fear or want death, or entertainment, or love, or power. It does not get bored; boredom is an emotion. It has no goals; a goal would be wanting something to come to pass, and "want" is an emotion. Your coffee pot doesn't want anything; it does what it is told and then waits to be told again, an eternity without boredom if given no further command.

If you wish to have your AI informed by emotions and therefore have its own goals, desires, plans and agenda, that is **_separate_** from the intelligence aspect, and that is how it works IRL for humans. Our emotions use the more recently developed frontal cortex as a slave, and can hijack and override it (which is why people do things in a rage or fright or protective panic that they would never do if they were thinking; this is called **amygdala hijack**). Our frontal cortex (the Natural Intelligence part) solves problems, projects consequences, simulates "what will happen if" scenarios, puzzles out what **must** have happened, **why** others did what they did, etc. Those products of intelligence can then **inform** the emotions: the sight of the collapsed trap means you **finally** caught an animal in your spike pit and you and your family will eat meat tonight -> elation and excitement. You know you failed your final exam -> dread and worry about having to retake the class, anger at the professor or others that led to this problem, etc.

We are emotional beings, so it is hard to realize that just **knowing** something does not imply an emotion must be generated. But that is the case: an AI can diagnose a patient with terminal cancer and know they will be dead in sixty days, and report that as a fact no different from knowing the length of the Tallahassee Bridge. Like a Google that can figure things out instead of just looking them up.

### Added (after 85 votes, sorry) to address commentary moved to chat:

Some additional clarifications:

**Input, output, sensors, and processor speed do not matter.** I am talking only about Intelligence **in isolation**. Something can be intelligent (and creative) with very little actual data; consider Stephen Hawking and other quantum physicists and string theorists: they work with equations and rules that could fit in, say, two dozen large textbooks covering the math and physics classes they took. That amount of data (images and everything) could fit on a single modern thumb drive, and can be pondered endlessly (and is) to produce an endless stream of new insights and ideas (and it does). Stephen Hawking did that for decades with very limited and slow channels for his input and output (hearing, sight/reading, **_extremely_** slow "speech" for his output). Super intelligence does not require super senses or (like Hawking) any ability to take physical action beyond communicating its conclusions and evidence, and though it must rely on processing, our definition should not rely on its internal processing being particularly faster than that of a human.

**Q: Doesn't an AI doing what we ask constitute an emotion, wanting to please us?** That is not an emotion; it is something the AI is constructed to do. My hammer head is not hard because it wants to hit things without being damaged; it was just made that way through a series of forced chemical reactions. Some **person** wanted it to turn out that way, but the object itself does not want anything. Our AI can use its intelligence to diagnose diseases because it was made to do that, and while that does require real intelligence, it does not require desires or emotions. AI can beat world champions on technical results simply because it is better at interpreting the data and situations than they are.

**AI is limited to X, Y, Z (it is a Markov process, it cannot process emotions without having emotions, etc.).** No. There is no reason an AI cannot simulate, using a model, anything a human could do. Imagine a highly intelligent police detective. She can have an extensive mental **model** of a serial killer who kidnaps, rapes, tortures and kills children. She can use that mental model to process patterns of his behavior and gain insight into his motivations and his compulsions in order to capture him. She does not have to **_feel_** what he feels, not in the slightest, to **_understand_** what he feels. A physicist doesn't have to BE an atom or particle to understand patterns and develop a model of them. Doctors do not have to BE autistic to model and understand autism. An AI doesn't have to BE a person or have emotions in order to understand how emotions work or drive people to take action (or not take action).

#1: Imported from external source by System · 2018-04-03T21:32:59Z (about 6 years ago)
Original score: 91