## Answer
Source: https://writers.stackexchange.com/a/33343 (license: CC BY-SA 3.0, https://creativecommons.org/licenses/by-sa/3.0/)
For what it's worth, I am a professor in CS and a mathematician; I've published original work in statistics in peer-reviewed academic journals, and I've worked extensively in AI.

> her character traits are about dogmatic science, trust in law and institutions, and choosing the "safe" options through statistical probability.

As a professional scientist, I think this is a terribly misguided set of traits for a "science hero." Scientists do not believe in dogma; I am here to overturn dogma, to break new ground, to develop new understandings that **improve** upon the old. Discovering the structure of DNA was not accomplished by adhering to dogma; the general theory of relativity, Quantum Chromodynamics, and Darwinian evolution all flew in the face of dogma. **We still do not trust even these models.** I am 100% certain Relativity and QCD are incompatible, both have a dozen major problems, gravity is still unexplained and probably poorly modeled, and the Big Bang is almost certainly B.S.

We do NOT trust in "law" or "institutions" either. The scientist trusts neither: they are human enterprises, humans are flawed in myriad ways, and the law and our institutions, even our own universities, are demonstrably flawed, corrupt, self-serving, unfair, and often just plain unbelievably stupid and misinformed.

And speaking as a statistician: statistics are terribly flawed too. The vast majority of distributions are barely approximations, idealized mathematical objects that do not exist in reality. The standard normal distribution extends to infinity in both directions, including into negative numbers. That means if you fit, say, the ages and heights and weights of first graders with a normal distribution, it gives a non-zero chance of someday discovering a first grader who is negative one year old, negative one meter tall, and weighing negative one kilogram, and also a non-zero chance of one who is 150 years old, 3 meters tall, and 200 kilograms (see the numerical sketch below). Not only that, but it is virtually impossible to pick the **right** distribution from a list of hundreds without tens of thousands of samples, and even then the error rate prevents a great many inferences from being drawn at all, because in reality measurements are inexact and the influences are never, ever uniform.

Finally, "safe" is a relative and value-loaded term, and safety may not be an option at all. Does it do me any good whatsoever if I am "safe" and alive, but the rest of humanity dies? Relative because: safety for **whom?** Her? Her crew? The world? All the worlds? What is her value system? Presuming there are two and only two choices, would she sacrifice herself to save the life of a baby, or would she choose to live and let the baby die? How about her crew? Would she choose to kill herself and her crew to save a planet of a billion from certain death (say, by preventing a rogue moon from crashing into it)?

## You have to give her a value system.

Begin with "fairness." She needs some system that lets her evaluate new situations and make non-trivial moral choices. Typically such value systems are based upon self-evident truths (self-evident to the entity contemplating them, at least). For your android to have a character crisis, she must have a belief system that works for her in most circumstances, but in one special circumstance presents a conundrum.
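An aside, to make the negative-first-grader point above concrete. This is a minimal sketch assuming SciPy is available, and the mean of 22 kg and standard deviation of 4 kg are made-up numbers for illustration, not real survey data: even a reasonable-looking normal fit assigns non-zero probability to a physically impossible negative weight.

```python
# Minimal sketch with hypothetical numbers: a normal distribution "fitted"
# to first-grader weights still puts non-zero probability on negative weights.
from scipy import stats

mean_kg, sd_kg = 22.0, 4.0  # assumed mean/spread, not real data
p_negative = stats.norm.cdf(0, loc=mean_kg, scale=sd_kg)
print(p_negative)  # ~1.9e-08: astronomically small, but not zero
```

Vanishingly unlikely, yes, but the model sincerely believes negative first graders are possible; that is the gap between the idealized mathematical object and reality.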
Here is a belief system you could use. From the scientific point of view, the universe and reality are certainly unfair to all life; there is no consideration in physics whatsoever of what is "deserved" and "undeserved," or of who is guilty or innocent. Bombs do not discriminate between who deserves to die and who does not, nor do asteroids or supernovae. But the value of science is that we can recognize where the unfairness is and what in nature causes it, and perhaps avert it or correct it, to make things more fair. More safe, more kind, more productive, more enjoyable. We can extend life, alleviate fear, and maximize our collective potential by eliminating the unfair random obstacles in its way.

The android may see that as her mission in life, as many scientists do: to enjoy her life while helping others enjoy theirs, and to contribute to the collective body of knowledge in order to make that more likely in the future. (I see no reason an android cannot enjoy life. I would have her refer to herself not as an "artificial intelligence" or an "artificial sentient," but as a "constructed" sentient; she considers herself every bit as sentient and intelligent as any biological human constructed in a womb. And like such humans, she can have pleasures.)

Then, for your setup, she doesn't need a crisis of character so much as a belief system. She has heretofore always chosen to pursue her life mission safely, playing the odds in her favor, and planning to continue her life as long as possible to accomplish the most good for others and enjoyment for herself. But then she is confronted with a statistical anomaly that challenges this approach. There is a 1% chance she can preserve the lives of trillions of sentients, biological and constructed, but failure may well mean her own death.

When she multiplies out the game theory, the answer is clear. On the 99% leg she dies, and all the people she might have helped in a typical 10,000-year android lifespan amount to X. On the 1% leg she survives, and the trillion people she helps amount to a million times X. The math says she should take the 99% chance of early death (the arithmetic is worked out at the end of this answer).

But like humans, her survival instinct is built in; it isn't software that can simply be rewritten. She does not want to take this chance. Yet refusing to take it may sentence trillions to an early death. She doubts herself. Has she been truly selfish all these centuries? Does she believe **at all** in the greater good she has praised all this time? Is she so afraid of the death she thought she had accepted as inevitable? Does she so covet her experiences over the next 10,000 years that she would sacrifice a trillion sentients to have them? Would she really get to have them at all, anyway?

There is a very obvious safe answer for her, one that is unsafe for everybody else. She is virtually certain her friends will die without her. This isn't a matter of arithmetic; this is choosing a near certainty of dying for her beliefs. Because if she doesn't, they aren't true beliefs and she is a liar. She is just another machine, not the real sentience she claimed to be and believed herself to be. Because a real sentient can choose to die fighting instead of choosing to live cowering, even if the cause is hopeless, because they choose not to live in the universe as a coward without values.

She chooses to die fighting, if that is how it must be. Because she may be a _constructed_ sentient, but just like any biological human constructed in a womb, she is a **_real_** sentient.
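A footnote on the arithmetic, for anyone who wants to check the game theory above. This is a minimal sketch in which X is just a placeholder unit of "good done"; the 99%/1% odds and the million-fold multiplier come from the scenario above:

```python
X = 1.0  # "good done" over a typical 10,000-year lifespan if she plays it safe

# Refuse the mission: she lives and does X worth of good.
ev_refuse = X

# Attempt the mission: 99% she dies having accomplished nothing,
# 1% she survives and does a million times X worth of good.
ev_attempt = 0.99 * 0 + 0.01 * (1_000_000 * X)

print(ev_refuse, ev_attempt)  # 1.0 vs 10000.0 -- attempting wins 10,000 to 1
```

By expectation alone the choice is lopsided; the whole crisis is that a built-in survival instinct does not reason by expectation.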