Technological advances, especially in artificial intelligence (AI), are accelerating as the technology becomes more prominent and recognizable throughout our society. AI is becoming such a fixture of daily life that the line between fantasy and reality is beginning to blur. Futuristic sci-fi themes, typically seen in films such as The Matrix or Tomorrowland, aren’t as fictional as they used to be, thanks to the progress of artificial intelligence. Artificial intelligence has thoroughly intrigued our society. However, noteworthy individuals, such as Stephen Hawking, have raised concerns about it.
In an interview with the BBC, Stephen Hawking stated that “the development of full artificial intelligence could spell the end of the human race." Elon Musk sided with Hawking, saying that continuing to advance artificial intelligence is like “summoning the demon."
Of course, other people disagree. There is no doubt that numerous benefits come with the advancement of artificial intelligence, and it is already a significant part of our society, whether we like it or not. Almost all iPhone users have Siri on their phones, ready to activate at the press of a button. Family households also benefit from Alexa, Amazon’s artificial intelligence assistant embedded in a speaker. Additionally, artificial intelligence helps individuals with disabilities communicate more effectively and become more independent. As humans, we naturally want to make things easier, and artificial intelligence does just that.
Whether you support the advancement of artificial intelligence or not, a recent development at the MIT Media Lab may well sway your opinion, as it opens up possibilities that had hardly been considered. Meet Norman, an algorithm created at the lab. During Norman’s early training, the MIT research team exposed it to extremely explicit and disturbing material found on Reddit, mostly gruesome videos portraying morbid death scenes and excessive violence. So, what were the results? When Norman was presented with the Rorschach-style inkblots used to assess a person’s state of mind, its descriptions were frighteningly more violent than those produced by other artificial intelligence systems. Where a standard AI described an inkblot as a bird, Norman saw a man being pulled into a dough machine. Where a standard AI saw a person holding up an umbrella, Norman saw a man being shot in front of his screaming wife. This clearly was not a small glitch: Norman continued to perceive dark images of death and violence when shown more inkblots. Because its responses resembled those of psychopaths, Norman was dubbed by its creators the world’s first psychopath AI.
These tests indicate that the data used in creating an artificial intelligence shapes how the resulting system perceives the world. As Professor Iyad Rahwan of the MIT Media Lab explained, “[this experiment] highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves."
It is not uncommon to view technology, particularly artificial intelligence, as more accurate, precise, and unbiased than people. This is a major reason technology has replaced human judgment in courtrooms and on the football field. However, we tend to forget that these artificial intelligence systems were created by human hands, meaning they are neither perfect nor fully unbiased. Because human error exists, artificial intelligence can be quite unreliable, despite being programmed to be reliable.
Even the data used to “train" artificial intelligence systems should be subjected to scrutiny. Based on the results of the Norman experiment, it can be concluded that if the data is bad, the artificial intelligence will, in turn, be bad. There are darker, hidden consequences to this. If we can get artificial intelligence, and technology in general, to respond in a specific way, embody a particular characteristic, or process information in a certain manner, what is to stop someone from doing so deliberately? As mentioned before, humans prefer technology because of its supposedly “unbiased" nature, but since it is possible to implant biases, these systems can become far more dangerous: imagine terrorists embedding their ideas into the everyday artificial intelligence devices we all use. At the same time, however, such systems can be used to better understand human development and the human brain, allowing many experiments to take place without the danger of hurting anybody.
Despite the controversy over artificial intelligence, most would agree that the Norman experiment poses a chilling question about the future of artificial intelligence, the process behind programming it, and how far we want these systems to go.