
The Harm in Helpful A.I.

Written by Ramon Xie

The idea of developing a helpful artificial intelligence has long been entertained by science fiction authors, and, with recent breakthroughs in learning algorithms and specialized AI achievements, by programmers and scientists as well. While a real-world counterpart has yet to be realized, such AIs abound in literature and film, HAL and Skynet to name a few. Of course, in these fictional circumstances, the “helpful AI turned malevolent” is by far the more common trope, leaving a lingering stigma around creating a general AI, lest it turn evil with disastrous consequences for humanity. Dr. Stuart Russell and Dr. Melanie Mitchell are among the many who have taken stances in the debate over whether the benefits of a general AI would outweigh the possible doomsday-esque consequences, should the AI decide that humanity is in the way of its objective. Both expressed their positions in Op-Eds From the Future published in The New York Times: Russell claims that a superintelligent AI without properly hardcoded subservience to humans is a global-scale threat to society, while Mitchell counters that such precautions are unnecessary, since traits like emotions and common sense are an integral part of general intelligence.


While this debate is certainly a noteworthy topic, it has already been discussed at length from many angles, so let us set it aside for now and assume for a moment that such an AI is created in the real world. Furthermore, assume that this AI fulfills its purpose without being a threat to humanity. As a fair model for how this hypothetical AI would behave, consider an AI similar to the artificial narrator of Naomi Kritzer’s short story “Cat Pictures Please.” In the story, the AI vows to be helpful rather than evil to humanity and attempts to help several humans whom it deems to need assistance straightening out their lives. Being a non-physical entity, it relies on influencing events indirectly, such as by showing depression-therapy ads to those who need them. Ultimately, while it largely succeeds in aiding people who would never have climbed out of their ruts on their own, it expresses frustration at being unable to fix everyone’s lives for them. In fact, it references a story by Bruce Sterling called “Maneki Neko,” saying that it likes this story because the humans there actually listen to the helpful AI, and as a result everyone obtains what they desire at every moment.


Bringing a helpful AI into the real world would likely generate a mixture of the human responses in “Cat Pictures Please” and “Maneki Neko.” The former response was highly resistant to the AI’s suggestions, presumably because people were unaware that an AI wished to help them. However, if a superintelligent AI were truly developed, such an event would be global news, and we would probably be far more receptive to its instructions as a whole. Of course, there would still be dissidents who refuse to follow the AI’s instructions, so the ideal found in the latter story is impossible to achieve. Nonetheless, in this hypothetical world, the vast majority of us now enjoy a robot that solves all of our problems for us, provided we listen to its advice. Now all that is left is to avoid the cliché of the AI turning evil, and everyone lives happily ever after.


If only that were the end of the story. Alas, there is a lurking danger that is not often discussed, perhaps due to ignorance or nonchalance. This danger is expressed well in E.M. Forster’s “The Machine Stops,” in which technology has advanced so far that everything is taken care of by one global machine. This machine, whose function is to provide food, music, or virtually any other commodity one could ask for, ultimately proves to be the downfall of society. In the story, everyone had become so dependent on the machine that they had forgotten nearly all skills of their own, and when the machine’s fixing apparatus finally broke down and itself required fixing, no one was left with the capabilities or know-how to repair it, spelling the doom of humanity. In short, over-reliance on technology and other artificial means of making tasks easier leads to a general decline in raw human ability.


This can be seen historically as well: the introduction of written records reduced the need for storytellers who could recite epics from memory; factories in the Industrial Revolution crowded out most of their artisan competition, causing many local techniques to be lost; and the arrival of smartphones has led to markedly shorter average attention spans, among other side effects. Even something as mundane as the incessant use of air conditioning in many homes and buildings today can be seen as a decrease in our tolerance of marginally inhospitable conditions.


The creation of a helpful AI would certainly be no different, especially if the AI was allowed to control many different aspects of our daily lives.


Returning to the model AI in “Cat Pictures Please,” a plausible set of fundamental tasks for the AI would include matching workers with the right jobs and employers with the right hires; detecting signs of depression or other illnesses and arranging treatment or therapy accordingly; determining what we want and showing us the corresponding advertisements and opportunities (unless obtaining our desires is infeasible due to constraints such as money or morals); filling various personal-assistant roles; and perhaps paying our bills and automatically allotting a portion of our income to savings. It is not much of a stretch to imagine these tasks extending to giving advice, offering an unbiased viewpoint in conflicts between us, providing social services like matchmaking or networking, and, given a physical form, domestic service.


For most ordinary individuals, the removal of these responsibilities leaves only a few things to do each day: entertainment and other leisure activities, work, exercise, necessities, and social interactions. Within these options, work is unlikely to be necessary, since AI will have replaced humans in many jobs, just as robots have already replaced many factory and warehouse workers today. Basic necessities, assuming the personal-servant scenario comes to fruition, will also be taken care of by robots, cutting out the preparation phase (cooking, for example) and leaving only the action (eating, in this case) to us. Surprisingly, even social interactions are in danger of being mechanized. Rachel Metz of CNN writes about an online conference call she had with four AI personas, generated in the image of real people. These “avatars” are trained by the people they were designed to resemble, and they are capable of holding conversations and mimicking their owners’ mannerisms. While avatars certainly would not replace all social exchanges, it is probable that many of us would use them to interact with strangers or more distant acquaintances, perhaps in the role of a semi-sentient answering machine or the aforementioned personal assistant. What options are left to us, then? Only entertainment and exercise will remain relatively untouched by AI, and only in the sense that AI cannot accomplish these tasks in our stead without defeating the purpose.

This isn’t to say that there will not be ambitious pioneers who aim to improve themselves and the world, but such people have always been, and will continue to be, few and far between. For the rest of the population, with nearly all obligations taken care of and necessities spoon-fed directly to them, the odds of avoiding stagnation or regression appear grim. Does this mean that we should not seek to create a general AI to assist with daily tasks and improve the standard of living, that such an AI would ultimately be more detrimental than beneficial? No, not necessarily; only that the responsibilities delegated to the AI must be restricted to avert this adverse outcome.


One way this could be achieved is by allowing people to decide when they wish to have AI assistance. Contrary to the system in “Maneki Neko,” where humans obey an AI that controls everything, we should carry out tasks independently, relying on the AI only occasionally. This is akin to choosing to perform a mathematical calculation mentally or by hand rather than with a calculator, despite the latter’s ease, speed, and accuracy. In theory, such an arrangement would let us use an AI with vastly superior abilities to augment our own skills while maintaining autonomy and avoiding the path of decline seen in Forster’s “The Machine Stops.”


Alternatively, there could be limitations on the AI itself. Since, in the previous scenario, a number of people would undoubtedly choose to become overly reliant on the AI, directly limiting the machine’s capabilities would force us to handle those matters ourselves, thereby preventing our abilities from atrophying. Ideally, in this case, the more personal aspects, such as social relations, would be left to us.


Either way, if and when an AI helper arrives on our planet, we must take the necessary precautions to ensure that we are not served crippling inability on a silver platter.
