AI certainly gets a bad reputation from Hollywood, where it is often shown in ways that either threaten mankind or subvert social norms in a way that makes us uncomfortable. If that weren’t bad enough, real-life applications of AI and machine learning are often cruel in intent, like the automated machine gun turrets in the occupied West Bank and software written to create facial recognition “digital fences” targeting immigrants and other vulnerable populations. Add in a scare about police robots authorized to use “lethal force”, coupled with a personal data privacy crisis, and people have every right to be wary of semi-autonomous and fully autonomous AI and robot systems today and in the near future. But not all machine learning, AI, and similar algorithms are out to do harm.
At the tail end of a second year of one of the worst pandemics in human history, the need to limit close human interaction has many of us talking to and interacting with software, from simple kiosks to automated tech support. The gap between available labor and purpose-built software has let AI move out of theoretical spaces and into real jobs with more success than ever. Many of these projects leverage OpenAI, specifically ones that have to interpret text input, like tech support chat bots that can automate ticket creation and some troubleshooting tasks. Other chat bots do just that: chat, like the for-profit AI friend mobile apps such as Replika. Although the concept isn’t much newer than earlier iterations like Microsoft’s ill-fated “Tay” bot and the scripts that inspired it, the software has evolved to stay on topic and sound more authentic, to avoid frustrating a user who may not want to “talk to a computer”.
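To make “automating ticket creation” concrete, here is a minimal sketch using the OpenAI Python SDK. The model name, system prompt, and the draft_ticket helper are my own illustrative assumptions, not any particular vendor’s implementation; a real help desk bot would add validation, categorization rules, and an actual ticketing system integration.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_ticket(user_message: str) -> str:
        """Ask the model to condense a support chat into a draft help desk ticket."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; any chat-capable model works
            messages=[
                {"role": "system",
                 "content": "Summarize the user's problem as a one-paragraph "
                            "help desk ticket and suggest a category."},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content

    print(draft_ticket("My laptop won't connect to the office VPN since the update."))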
Other proprietary learning AI like Google’s LaMDA are sophisticated enough to spark new debate on what constitutes a sentient AI. However, I don’t think sentience should be the be-all and end-all of AI research, and not just because of the Hollywood factor. Purpose-built AI like service desk chat bots can focus development resources on a specific set of tasks and can integrate ticket data and customer responses into feedback cycles that improve the chat bot over time. Other fields, like medicine and other forms of patient care, could also benefit from improved AI applications, especially in parts of the world where loneliness is a growing concern.
Real-world research may have inspired troubled Hollywood interpretations like “Her”, where a man falls in love with his phone’s voice assistant, the dystopian holographic girlfriend in “Blade Runner 2049”, and the hyper-realistic cyborgs of “Ex Machina”. Each illustrates important topics in its own way, but none handles well the stigma attached to developing relationships between humans and machines. Of course, much of this comes from an attempt to replicate human intimacy, which is worth examining, but it also perpetuates negative stereotypes in a way that deflects attention from potentially valuable applications of AI, like augmenting staff in rehabilitation facilities and elder care, which in some parts of the world face dire shortages of nurses and other skilled staff. Coupled with the pandemic’s restrictions on close contact with other people, the opportunity for robot help has never been greater. But “cyborgs” and virtual companions are hardly the limits of good AI development. I think common use applications are just as important.
I would argue that the learning algorithm at Spotify qualifies as AI, not just because the music recommendations based on my listening “feel” personal, but precisely because that in itself is a valuable service. It learns entirely from the collective listening habits of its users. This also illustrates the boundary between fair use of data and uses that users never agreed to. All user data is ultimately private data, especially when it includes anything about daily habits and, often, location. With AI this is as important as ever.
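Spotify’s actual recommender is proprietary and far more sophisticated, but a toy item-item collaborative filter sketches the core idea of learning from collective listening habits. Everything here, the tiny listen matrix and the recommend helper, is made up for illustration.

    import numpy as np

    # Rows are users, columns are tracks; a 1 means that user listened to that track.
    # This toy matrix stands in for the "collective listening habits" of all users.
    listens = np.array([
        [1, 1, 0, 0],
        [1, 1, 1, 0],
        [0, 1, 1, 1],
        [0, 0, 1, 1],
    ])

    # Item-item cosine similarity: tracks played by the same people score high.
    norms = np.linalg.norm(listens, axis=0)
    similarity = (listens.T @ listens) / np.outer(norms, norms)

    def recommend(user_index: int, top_n: int = 2):
        """Score unheard tracks by similarity to the tracks this user already plays."""
        user = listens[user_index]
        scores = similarity @ user      # aggregate similarity to the user's history
        scores[user > 0] = -np.inf      # never recommend tracks already heard
        return np.argsort(scores)[::-1][:top_n]

    print(recommend(0))  # tracks most similar to what user 0 already listens to

Note that nothing in this sketch needs a name, a location, or any single person’s history in isolation; it only needs the aggregate, which is exactly where the fair-use boundary sits.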
With that comes a rant. Amid calls from lawmakers to create “encryption backdoors” to “combat terrorism” and “protect law-abiding citizens”, academics and individuals need to push back and demand more laws to protect user privacy, not fewer. Every company is responsible for the security of its user data, yet every week brings a new story about a breach that exposes it, and AI companies are no exception.
As a sidebar here, any call to weaken privacy and encryption to “protect law-abiding citizens” should be met with suspicion, as that protection almost always excludes journalists, activists, political rivals, and, most recently, women seeking essential health care services like cancer screenings. Demand better from your representatives.
AI developers must put user privacy and security at the forefront of their product designs. The power of machine learning depends on users trusting that their data will be protected and not misused.
Finally, for those who went into the field dreaming of the future only to find their work skewed by racial bias in the data or leveraged to build weapons systems, I am sorry for you.
My take is that as long as AI cannot distinguish between facts and wrong or misleading information, attempts to turn it into something that can write will not work. For AI to be accurate, one would need a retrieval model and a generative model working together seamlessly. Instead, I think AI developers should work on improving conversational AI models, such as Replika. Unfortunately, there’s not much money to be made with those models. But imagine conversational AI models that could talk with people in nursing homes and provide comfort to otherwise lonely people.
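To show what a retrieval step buys you, here is a minimal sketch of the retrieval half of that pairing, using scikit-learn for the lookup. The “knowledge base”, the helper names, and the stubbed generative step are my own illustrative assumptions; the point is only that the reply gets grounded in a stored fact rather than whatever the model improvises.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # A toy "knowledge base" of verified facts; a real system would use a vetted corpus.
    facts = [
        "Ticket escalations go to the level-2 queue after 24 hours.",
        "Password resets require a verified recovery email.",
        "The service desk is closed on public holidays.",
    ]

    vectorizer = TfidfVectorizer()
    fact_vectors = vectorizer.fit_transform(facts)

    def retrieve(question: str) -> str:
        """Retrieval step: find the stored fact most similar to the question."""
        q = vectorizer.transform([question])
        scores = cosine_similarity(q, fact_vectors)[0]
        return facts[scores.argmax()]

    def answer(question: str) -> str:
        """Generative step (stubbed): condition the reply on the retrieved fact only."""
        fact = retrieve(question)
        # In a real system this string would be passed to a generative model as context,
        # so the model writes from retrieved facts instead of inventing them.
        return f"Based on our records: {fact}"

    print(answer("How do I reset my password?"))

The same pattern applies to a companion bot: retrieve something true about the person or the world, then let the generative model phrase it warmly.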