The nice AI theme continues in “Today I am Paul” by Martin L. Shoemaker

This is the fourth review of the 2015 short story nominees for the Nebula Awards. Spanish translation below by Daniela Toulemonde.

“Today I am Paul” by Martin L. Shoemaker is the third Nebula-nominated short story I’ve read this year in which the main character is an AI and the story is told via first-person (ha!) narration by that AI. Whether the popularity of this theme is mere chance or a reflection of growing concern about the rise of the machines, this AI story, like the other two, features a friendly, helpful intelligence.

In “Today I am Paul” we enter a near-future world where an embodied android provides medical support and care for an elderly woman who is losing her memory. The AI can change its appearance depending on the amount of information downloaded about the person it is trying to emulate. This is a comfort to the patient, Mildred, who is drifting in and out of the past and present, seeking out various loved ones to talk to. In the course of a day, the AI emulates Mildred’s son Paul, her deceased husband, her daughter-in-law, and her granddaughter, but the number of people who can be emulated via the “emulation net” is limitless as long as data is available.

We are informed that this is a new feature for androids, as is this AI’s empathy subnet, which directs the AI to avoid upsetting Mildred and to actively find ways to comfort her. The two programming nets do not always align. While the AI does a very convincing job of emulating her argumentative son, its empathy net warns that Mildred’s anxiety is increasing, forcing it to resolve the competing directives. In this conflicted space, self-awareness is born, and the android develops an understanding of its programming, analysis, and actions as separate from the roles it plays. Shoemaker does a good job of creating a character that is no character and any character, but what is particularly outstanding about the story is that it convincingly depicts how the AI’s consciousness seems to emerge, almost inevitably, from the tension in its programming.

Review of Nebula short story nominee “Damage” by David D. Levine

Note: This is the second review of the 2015 Nebula Award nominees for best short story. Spanish translation by Daniela Toulemonde.

While reading “Damage” by David D. Levine, I was reminded that motifs in literature may rise and fall in popularity, but they come back into style again and again because there is a fan base that gets something important from that type of story. And that “something” is what I was looking for while reading this story about the tormented inner life of a spaceship designed for war.

The story unfolds over a couple of weeks during which the AI-operated ship narrates its experiences in simulations, repairs, and a couple of battles. Readers get the barest description of a war between a human rebel group at a space base and humans representing Earth’s government. We learn the war has reached its final stages, with bitter costs on both sides, when the ship, called Scraps, is informed it will be carrying out a secret final mission for the rebels.

Rebuilt from two damaged ships, Scraps differs from other Frankenstein-like monsters because of its memories of the earlier ships, including its two deaths in battle, and because it has been programmed to be loyal to its pilot. Memory, however, complicates loyalty, and that creates the story’s tension.

The secondary characters, Commander Ziegler and Specialist Toman, are the only two humans who interact directly with Scraps, but they are barely more than types created to serve the story’s architecture. I appreciated Levine’s inversion of roles, in which the ship appears to be developing human reasoning and morality while the humans appear to be devolving into non-thinking machines. Still, in my opinion, the story could just as easily have been told by a morally conflicted human co-pilot.

Worrying our way through the 21st century

Which is worse? A world where human beings engineer a more successful intelligence, or one where we engineer more successful biological life forms? What if both things are happening simultaneously? Most of us are not scientists who happen to live in Cambridge or Boston, which evidently are the main hotspots for thinking about these sorts of problems. What if you’re just a humble writer who gets itchy when near-future probabilities seem to be outpacing the cataclysmic conditions in your imagined worlds? I think we need a CalamityCon. Seriously. We need a place to learn about these issues in greater depth and to contribute to the thinking about preventive measures. Note: This is not the duty of writers, and no one should be obligated to do any such thing, but it surely wouldn’t harm the world’s science community to hear from us, no?