
Character crisis for a Science Hero?

+0
−0

I have three protagonists working towards the same goal but with different motives. They are co-protagonists, each with their own character arc and resolution. Circumstances bring them together, but their personalities clash. At the end of the first adventure they succumb to their own character flaws rather than trust the abilities of the others, and the main conflict is left unresolved (they reunite in a later adventure and try again).

I have an Action Hero who struggles to be self-disciplined and not to over-react. When she fails, things explode and people die.

I have a Guile Hero who struggles to be moral and not to double-cross everyone. When she fails, she creates new conflicts and alienates her friends.

The problem is my third character, a Science Hero – I'm not actually sure that is an established archetype, but her character traits are about dogmatic science, trust in law and institutions, and choosing the "safe" options through statistical probability. To make this character more obvious, and because I don't feel I have tropes to establish this archetype with the readers, she is a sentient android (sentient, not omniscient). Her character is not overly rigid or mentally inferior, but I feel her being an AI is enough to establish that she comes from a world of facts and concrete science (and I'm hoping to subvert clichés about AI as characters). Making her an android implies some rules about her heroic type.

THE BAD EXAMPLE: Too often in sci-fi stories, I see top-tier (yet bumbling) scientists who lack the imagination to grasp the world-breaking monsters from the Beyondo, or whatever contrived un-scientific threat arises. They explain why the premise is impossible, then keep saying "This can't be happening!" and die because (according to the story) brains are useless and only fists and charisma save the day (and probably a guile hero doing a silly dance as a distraction). This is what I want to avoid. I don't want to use my Science Hero as a foil to justify the unintelligent actions of other heroes. I need her to stand on her own, with her own consistent limits and strengths. I especially want to avoid making her internal crisis about all of science being "wrong"; that defeats the purpose of having intelligence.

OPPOSITE OF SCIENCE IS FAITH: What I want to try instead is to lead her to a personal crisis where she must have "faith" in a statistical long shot, and she eventually starts to factor the disruptive effects of the other heroes into her probability outcomes. Rather than just making her an exposition geek who explains science factoids to the team, I plan to have her realize that the other two can be less destructive when she is around to mitigate and direct their chaos, but this is a big leap for her character.

The sentient androids are expected to be subservient to humans, not to interfere but to "guide" human affairs. They aren't allowed to be critical or hold positions of power, yet they are expected to be moral paragons. It's a cognitive dissonance, and most cultures prefer un-sentient AI: When the AI has a sense of self, they develop a moral code as part of their identity and that isn't good if they don't approve of your mission. Un-sentient AI will crunch through problems without moral judgement because they have no innate sense of justice.

My character has an educated sense of right and wrong (usually more morally nuanced than the others, but perhaps less experienced), so I can justify her partnering up with the wrecking crew based on her "faith" in the long-shot gamble (thousands may die, but millions could be saved). However, she also needs an internal struggle I can show, one that changes her from a passive observer resigned to occasionally offering advice and observations she believes will go unheeded, into someone who directly meddles in events and changes outcomes. She models the new behavior on her partners (brute force and deception), but the "institution" she has to conquer is her own role in society. I feel that once I can get her over this hump in a small way, or better understand this internal conflict, the rest of her actions are a slippery slope and the plot takes care of escalating the stakes.

How do I show a character crisis for a Science Hero, secure in the inevitability of statistical facts, struggling to decide whether she should actively try to "change the world", even though it means rejecting the established order she comes from and the outcomes are uncertain? For plot reasons I can't start with "because she is saving millions of lives"; that excuse only becomes justifiable later. I also need to show her fail, as they all do in the first adventure, so she has motivation not to fail in the future. The other heroes have external conflicts that play out through action, but this one is internal and plays out through inaction. I don't expect her to carry the same gravitas as the others – she has some action moments too, she is not a brain in a jar – but I can't write a character walking around wondering aloud whether she should chuck everything she believes into the waste bin and roll the dice. That would defeat who she is supposed to be.

This post was sourced from https://writers.stackexchange.com/q/33328. It is licensed under CC BY-SA 3.0.

5 answers

+1
−0

So, she's rational, data-driven, and rule-following AND the rules say she can't be a leader? The conflict practically writes itself: Her data and rational analysis tell her she should be leading, her rules tell her she can't.

To sharpen the conflict, her failure could be a complete fluke. She embraces leadership, and a million-to-one crisis means it doesn't go well. Will she ever have the courage to try again, knowing rationally that it wasn't her fault?

BTW, your third character reminds me quite a lot of Spock from the original Star Trek, maybe even with echoes of the Kirk-Bones-Spock dynamic in your trio of main characters. I wouldn't recommend going too far overboard in that direction, but it might be a useful reference.

This post was sourced from https://writers.stackexchange.com/a/33675. It is licensed under CC BY-SA 3.0.

+1
−0

I would suggest, especially since she is an android and not a highly logic-oriented human, that you devise a conflict around the difficulty computers have recognizing "almost-patterns" and doing real fuzzy logic. It's the thing that makes humans so successful as animals, and also the thing that makes us prone to superstition. She could come across a problem whose solution she can't recognize, one that forces her to learn about the nature of analytical prediction, superstition, and faith in the human psyche and, with that newfound understanding, to start observing it in action in the people around her. Eventually she might learn some techniques to approximate that kind of approach even in a logic-bound algorithmic implementation.

This post was sourced from https://writers.stackexchange.com/a/33329. It is licensed under CC BY-SA 3.0.

+1
−0

Look to the Magician Archetype for Inspiration

Archetypes do not become obsolete or outdated. The Magician archetype, I would think, is very close to what we might consider a modern-day Scientist archetype.

A few examples of this type would be Merlin, Gandalf, and Dumbledore. One key feature is that they seem to know more than everyone else and are always a few steps ahead. They also face a moral dilemma: whether or not to meddle with or change the outcome of the other characters' affairs – when to help and when to stay out of the way. So if this character is supposed to be a "moral paragon" of sorts, what kind of faculty do they have to be able to make such delicate decisions? Perhaps this is what your character is struggling with. Something like:

  • Should she help them, or stay out of the way so they can learn and grow from this experience/crisis?
  • Should she kill 100 people to save 1000, or should she try to save all of them with a high probability they will all die anyway?
  • Should she tell them that the path is that way, or let them go down the longer and more difficult way?

Tarot Interpretation of the Magician

Also known as the Juggler, Conjurer, Trickster, Alchemist (perhaps Scientist?)

[Image: the Magician card from the Rider deck]

[Image: the Magician card from the Fradella deck]

... the Magician, as the beginning of the Major Arcana proper, represents consciousness, action and creation. He symbolizes the idea of manifestation, that is, making something real out of the possibilities in life. Therefore, we see the four emblems of the Minor Arcana - lying on a table in front of him. He not only uses the physical world for his magical operations (the four emblems are all objects used by wizards in their rituals), but he also creates the world, in the sense of giving life a meaning and direction. The Magician stands surrounded by the flowers to remind us that the emotional and creative power we feel in our lives needs to be grounded in physical reality for us to get any value from it. Unless we make something of our potentials they do not really exist. - from the book 78 Degrees of Wisdom by Rachel Pollack

If your character comes from a world of facts and concrete science, how would you interpret her with the Magician tarot card superimposed? Keeping in mind the above excerpt, it seems as though your character has all the means to create (the four emblems, the tools, the know-how), but struggles with meaning and direction. Faith, after all, is what people claim gives them meaning and direction in life. If your character is held up to that of the Magician, then she would indeed represent consciousness, action, and creation if fully actualized. But with the internal conflict as you described it, it would seem that the consciousness part of this equation is exactly what she needs to work on (or find out, figure out, solve).

Piecing it Together

The "moral" sentient AI, who comes from a world of facts and hard science, realizes that in order for her to achieve self actualization, has to go against what she considers statistically moral.. goes against her own nature because she knows she lacks a certain raison d'être that is the final piece of the puzzle for her to feel complete. She'll have to prove to herself, that some things work without knowing why (at first). She must learn to try even if failure seems inevitable, because she may stumble upon something great. This takes faith, and especially faith in ones abilities. Perhaps something happens in the story to make her go on her own side quest, like a glitch in her program or something... missing the why in her life. Why does she really exist, and what is her full potential beyond what she was programmed to do (not being confined to the constructs of her society and the roles that are imposed and expected of her)?

This post was sourced from https://writers.stackexchange.com/a/33664. It is licensed under CC BY-SA 3.0.

+0
−0

For what it's worth, I am a professor of CS and a mathematician; I've published original work in the field of statistics in peer-reviewed academic journals and have worked extensively in AI.

her character traits are about dogmatic science, trust in law and institutions, and choosing the "safe" options through statistical probability.

As a professional scientist, I think this is a terribly misguided set of traits for a "science hero." Scientists do not believe in dogma; I am here to overturn dogma, to break new ground, to develop new understandings that improve upon the old. Discovering the structure of DNA was not accomplished by adhering to dogma; the general theory of relativity, quantum chromodynamics, and Darwinian evolution all flew in the face of dogma.

We still do not trust even these models: I am 100% certain relativity and QCD are incompatible, both have a dozen major problems, gravity is still unexplained and probably poorly modeled, and the Big Bang is almost certainly B.S.

We do NOT trust in "law" or "institutions". The scientist does not trust either of those: they are human enterprises, humans are flawed in myriad ways, and the "law" and "institutions", even our own universities, are demonstrably flawed, corrupt, self-serving, unfair, and often just plain unbelievably stupid and misinformed.

And speaking as a statistician: statistics are terribly flawed too. The vast majority of fitted distributions are only rough approximations, idealized mathematical objects that do not exist in reality. The normal distribution extends to infinity in both directions, including into negative numbers. Which means that if you fit, say, the ages, heights, and weights of first graders with a normal distribution, it gives a non-zero chance of you someday discovering a first grader who is negative one year old, negative one meter tall, and weighs negative one kilogram. It also gives a non-zero chance of one who is 150 years old, 3 meters tall, and weighs 200 kilograms.
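
To make the point concrete, here is a minimal sketch in Python (with invented first-grader weight numbers, not figures from this answer) of how a fitted normal distribution still assigns a small but non-zero probability to an impossible value such as a negative weight:

```python
# Hypothetical illustration: fit a normal distribution to first-graders'
# weights (invented mean and standard deviation) and it still assigns a
# non-zero probability to an impossible value such as a negative weight.
from scipy.stats import norm

mean_kg, sd_kg = 22.0, 3.5              # invented sample statistics
weight = norm(loc=mean_kg, scale=sd_kg)

p_negative = weight.cdf(0.0)            # P(weight < 0 kg) under the fitted model
print(f"P(weight < 0 kg) = {p_negative:.1e}")  # tiny, but not zero
```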

Not only that, but it is virtually impossible to pick the right distribution from a list of hundreds, without having tens of thousands of samples, and even then the error rate prevents a great deal of inferences from being drawn at all, because in reality measurements are inexact and the influences are never, ever, uniform.

Finally, "safe" is a relative and value-loaded term, and safety may not be an option at all. Does it do me any good whatsoever if I am "safe" and alive, but the rest of humanity dies? Relative because, safety for whom? Her? Her crew? The world? All the worlds? What is her value system? Presuming there are two and only two choices, would she sacrifice herself to save the life of a baby, or would she choose to live and let the baby die? How about her crew? Would she choose to kill herself and her crew to save a planet of a billion from certain death (say by preventing a rogue moon from crashing into it).

You have to give her a value system. Begin with "fairness."

She needs some system that lets her evaluate new situations and make non-trivial moral choices. Typically such value systems are based upon self-evident truths (self-evident to the entity contemplating them, at least).

For your android to have a character crisis, she must have a belief system that works for her in most circumstances, but in a special circumstance presents a conundrum.

Here is one you could use: From the scientific point of view, the universe and reality are certainly unfair to all life; there is no consideration in physics whatsoever about what is "deserved" and "undeserved", or who is guilty or innocent. Bombs do not discriminate between who deserves to die and who does not, nor do asteroids, or supernovae.

But the value of science is that we can recognize where the unfairness is and what in nature causes it, and perhaps avert it or correct it, to make things more fair. More safe, more kind, more productive, more enjoyable. We can extend life, alleviate fear, maximize our collective potential by eliminating unfair random obstacles in its way.

The android may see that as her mission in life, as many scientists do: to enjoy her life while helping others enjoy theirs, to contribute to the collective body of knowledge in order to make that more likely in the future.

(I see no reason an android cannot enjoy life. I would have her refer to herself, not as "artificial intelligence" or an "artificial sentient," but as a "constructed" sentient; she considers herself every bit as sentient and intelligent as any biological human constructed in a womb. And like such humans, she can have pleasures.)

Then, for your setup, she doesn't need a crisis of character so much as a belief system. She has heretofore always chosen to pursue her life mission safely, playing the odds in her favor, and planning to continue her life as long as possible to accomplish the most good for others and enjoyment for herself.

But then she is confronted with a statistical anomaly that challenges this approach. There is a 1% chance she can preserve the lives of trillions of sentients, biological and constructed, but failure may well mean her own death. When she multiplies out the game theory, the answer is clear. On the 99% leg, she dies, and all the people she might have helped in a typical 10,000-year android lifespan amount to X.

On the 1% leg, she survives, and the trillion people she helps amount to a million times X. The math says she should take a 99% chance of early death.
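
For readers who want to see the arithmetic behind "the math says", a minimal sketch using the answer's own hypothetical figures (X, the 1% chance, the million-fold payoff) could look like this:

```python
# Sketch of the expected-value comparison above, using the hypothetical
# figures from this answer: X is the good done over a normal 10,000-year
# lifespan; the long shot, if it succeeds, is worth a million times X.
X = 1.0                                   # baseline good from a full lifespan
p_success = 0.01

ev_play_safe = X                          # refuse the gamble, live out her lifespan
ev_take_risk = p_success * 1_000_000 * X  # 99% of the time she dies and adds nothing

print(ev_play_safe, ev_take_risk)         # 1.0 vs 10000.0 -> the gamble wins on expectation
```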

But like humans, her survival instinct is built in and cannot be changed; it isn't some piece of software that can easily be rewritten. She does not want to take this chance. But refusing to do it may sentence trillions to an early death.

She doubts herself. Has she been truly selfish all these centuries? Does she believe at all in the greater good she has praised all this time? Is she so afraid of the death she thought she had accepted as inevitable? Does she so covet her experiences for the next 10,000 years that she would sacrifice a trillion sentients to have them? Would she really have them at all, anyway?

There is a very obvious safe answer for her that is unsafe for everybody else. She is virtually certain her friends will die without her. This isn't a matter of arithmetic; this is choosing a near certainty of dying for her beliefs.

Because if she doesn't, they aren't true beliefs and she is a liar. She is just another machine, not the real sentience she claimed to be, and believed herself to be.

Because a real sentient can choose to die fighting instead of choosing to live cowering, even if the cause is hopeless, because they choose not to live in the universe as a coward without values.

She chooses to die fighting, if that is how it must be. Because she may be a constructed sentient, but just like any biological human constructed in a womb, she is a real sentient.

+0
−0

Science and faith are not opposites. They are different modes of knowing. There are other modes of knowing as well, such as logic, mathematics, ethics, and the historical method. Each of them addresses a different subject matter, and we use each of them according to the type of evidence available to us and the type of question we are asking.

These different modes of knowing have clear dependencies on each other. Science, for instance, depends heavily on mathematics. Mathematics depends on certain axioms that cannot be proved and must be taken on faith. But science depends on faith in far more profound ways than this. Science depends on the following propositions which it cannot prove by the scientific method:

  • The human mind is rational.
  • The universe is real and consistent.
  • Human senses have access to all the relevant information for reaching scientific conclusions.

Scientists maintain these items of faith despite the findings of modern neuroscience casting doubt on the first, quantum mechanics and philosophy both in their own way casting doubt on the second, and the history of science suggesting that we can never be certain of the third.

None of the methods of knowing enjoy perfect certainty, if only because of the circular dependencies that exist between them. The human mind may have trouble accepting this lack of certainty and settle at any point in this circle of dependencies for a while, declaring it the rock on which all others depend or which invalidates all others. But this always leaves fundamental questions unanswered, and while some will cling to their chosen rock till they die, there is always the possibility that something will detach them and send them reeling, perhaps to attach themselves to a different rock, perhaps to a position that acknowledges the limits of all methods of knowing and thus accepts that the act of knowing is always and inevitably an act of will.

Throughout the history of thought there are cases in which we have discovered that one method of knowing is better than another for treating certain subjects. The Greek logicians discovered the limits of logic, which led in time to a greater use of the scientific method. Darwin, as Amadeus relates, discovered the limits of the Bible as a biology textbook.

Of course, finding that the Bible is not a good biology text is not, in itself, a reason to conclude it is not a valid text for other purposes, for other methods of knowing. And it is worth noting that through much of what we consider the modern period of thought, most intellectuals in the European tradition considered all ancient texts to be reliable textbooks. Thus Newton, like many others of his time, spent years trying to calculate the age of the universe from ancient texts.

As the invention of new instruments made new sets of evidence available, scientists changed their methodology and their faith accordingly (as did theologians, who had to figure out different ways to understand the nature and import of scripture). Scientists of that period worked as if the evidence they had available to them was correct and sufficient for the conclusions they were trying to draw. Scientists today make exactly the same assumptions, and like their predecessors, cannot be certain that it is either correct or sufficient, or that it won't be superseded by new evidence revealed by new instruments. And then as now they depend on the basic items of faith mentioned above.

However firm our metaphysics and our epistemology may seem at any given moment, the historical method reveals to us that far from being fixed, these things are constantly shifting, which in itself should shake our faith in the reliability of our knowing.

One of the most interesting examples of this is that the tradition of the West, going back at least to the Greeks, was to regard the seat of wisdom as the rational soul. It was generally recognized that the body was weak, subject to deception, and a slave to appetite. The seat of reason was not the body but the soul. The belief in the rational soul thus preceded Christianity and survived it. Today, however, materialists deny the existence of any type of soul. In doing so, they naturally must transfer the seat of rationality to the body. This move from the rational soul to rational meat is a revolution in epistemology, and it runs into the very real problem that neuroscience is constantly confirming the skepticism of the ancients about the rationality of meat (a skepticism which gave rise to post-modernism and its denial of reason).

In short, there are all kinds of good reasons for a "science hero", as you call them, to have an epistemological crisis. Indeed, the very term "science hero" contains within it the seeds of that crisis. A hero is one who fights on in the face of overwhelming opposition, and there are many chinks in the armor of science, many places for the knife to slip beneath the breastplate.

If the armor were impregnable, the science hero would be no hero, just a pitiless god. It is because their armor can be pierced that they can be a hero, and like any hero, they can be wounded and they can die. And we should remember also that in literary terms what separates a hero from a bully is humility. If your science hero has no humility about their method of knowing, they will be nothing more than a science bully, and will appeal, as a literary construct, only to bullies of the same ilk.
