So, I asked Bing AI to write a character backstory

This is a bit of a departure from my usual content, but I could not resist sharing the results. I asked Bing AI to write an example character backstory for a Rogue in Diablo IV. I do not know what data set it was trained on or whose work it may have dredged up to create this, but I was impressed by how specific it got. What follows is the output from my single-sentence prompt, with no edits.

Your name is Zara, a female human rogue who specializes in Exploit Weakness and Shadow Imbuement. You were born in the city of Westmarch, where you grew up as an orphan on the streets. You learned to survive by stealing, lying, and fighting. You also developed a keen sense of observation and a knack for finding and exploiting your enemies’ weak spots.

One day, you caught the eye of a mysterious woman named Lilith, who offered you a place in her secret cult. She promised you power, wealth, and freedom in exchange for your loyalty and service. You accepted, hoping to find a new purpose and a new family. You became one of her most trusted agents, carrying out her orders and spreading her influence across Sanctuary.

You learned to imbue your weapons with shadow magic, making them more deadly and unpredictable. You also mastered the art of creating shadow clones, which you used to confuse and ambush your foes. You enjoyed the thrill of hunting down and eliminating Lilith’s enemies, whether they were demons, angels, or humans. You felt no remorse or pity for your victims, only satisfaction and pride.

However, as time went on, you began to notice that Lilith was not what she seemed. She was cruel, manipulative, and obsessed with her own agenda. She cared nothing for you or the other cultists, only for her own power and glory. She also had a dark secret that she kept hidden from everyone: she was the daughter of Mephisto, the Lord of Hatred, and the mother of the Nephalem, the first humans.

You realized that Lilith was using you and the other cultists as pawns in her grand scheme to overthrow the High Heavens and the Burning Hells, and to create a new world order under her rule. You also learned that she was planning to unleash a terrible evil upon Sanctuary: the Worldstone, a powerful artifact that could shape reality itself. She intended to corrupt it with her blood and use it to create an army of loyal Nephalem who would serve her as gods.

You decided that you had enough of Lilith’s lies and tyranny. You resolved to stop her before it was too late. You gathered some allies who shared your doubts and suspicions, and prepared to confront Lilith at her lair. You knew it would be a dangerous and suicidal mission, but you had nothing left to lose. You hoped to find redemption for your sins, or at least a worthy death.

Bing AI

Thoughts on the state of AI in 2022

AI certainly gets a bad reputation from Hollywood, where it is often shown in ways that either threaten mankind or subvert social norms in a way that makes us uncomfortable. If that weren’t bad enough, real-life applications of AI and machine learning are often of cruel intent, like the automated machine gun turrets in the occupied West Bank and software written to create facial recognition “digital fences” targeting immigrants and other vulnerable populations. Add in a scare about police robots with “lethal force” authorization, coupled with a personal data privacy crisis, and people have every right to be wary of semi-autonomous and fully autonomous AI and robot systems today and in the near future. But not all machine learning, AI, and similar algorithms are out to do harm.

In the tail end of the second year of one of the worst pandemics in human history, the need to limit close human interaction has many of us talking to and interacting with software, from simple kiosks to automated tech support. That gap between labor and purpose-built software has enabled AI to move out of theoretical spaces and into real jobs with more success than ever. Many of these projects leverage OpenAI, specifically those that have to interpret text input, like tech support chat bots that can automate ticket creation and some troubleshooting tasks. Other chat bots do just that, chat, like the for-profit AI friend mobile apps such as Replika. Although the concept isn’t new, with predecessors like Microsoft’s ill-fated “Tay” bot and the earlier scripts that inspired it, the software has evolved to stay on topic and sound more authentic, avoiding frustration from users who may not want to “talk to a computer”.

Other proprietary learning AI, like Google’s LaMDA, are sophisticated enough to spark new debate on what constitutes a sentient AI. However, I don’t think sentience should be the end-all goal of AI research, and not just because of the Hollywood factor. Purpose-built AI like service desk chat bots can focus development resources on a specific set of tasks and can integrate ticket data and customer responses into cycles that improve the chat bot over time. Other fields, like medicine and other forms of patient care, could also benefit from improved AI applications, especially in parts of the world where loneliness is a growing concern.

Real-world research may have inspired troubled Hollywood interpretations like “Her”, where a man falls in love with his phone’s voice assistant; the dystopian holographic girlfriend in “Blade Runner 2049”; and the hyper-realistic cyborgs of “Ex Machina”. Each illustrates important topics in its own way, but none handles well the stigma attached to developing relationships between humans and machines. Of course, a lot of this comes from an attempt to replicate human intimacy, which is worth examining, but it also perpetuates negative stereotypes in a way that deflects attention from potentially valuable applications of AI, like augmenting staff in rehabilitation and elder care facilities, which in some parts of the world face dire shortages of nurses and other skilled staff. Coupled with the pandemic’s restrictions on close contact with other people, the opportunity for robot help has never been greater. But “cyborgs” and virtual companions are hardly the limits of good AI development. I think common use applications are just as important.

I would argue that the learning algorithm at Spotify qualifies as AI, and not just because the music recommendations based on my listening “feel” personal, but precisely because that personalization is itself a valuable service. It learns entirely from the collective listening habits of its users. This illustrates a boundary between fair data use and data that users may not want used inappropriately. All user data is ultimately private data, especially when it includes anything about daily habits and, oftentimes, location. With AI this is as important as ever. With that comes a rant.

Amid calls from lawmakers to create “encryption backdoors” to “combat terrorism” and “protect law-abiding citizens”, academics and individuals need to push back and demand more laws to protect user privacy, not fewer. Every company is responsible for the security of its user data, and every week there is a new story about security breaches that expose that data; AI is no exception.

As a sidebar here, any call to weaken privacy and encryption to “protect law-abiding citizens” should be met with suspicion, as this almost always excludes journalists, activists, political rivals, and most recently, women seeking essential health care services like cancer screenings. Demand better from your representatives.

AI developers must put user privacy and security at the forefront of their product designs. The power of machine learning depends on the trust of its users that their data will not be misused and will be protected.

Finally, for those who went into the field dreaming of the future only to find their work tainted by racial bias in the data or leveraged to create weapons systems, I am sorry for you.
