Despite being a young field, artificial intelligence has quickly become one of the fastest-growing industries in the world. With a market size projected to reach $1.7 trillion by 2030, it is no surprise that the world's richest companies have all jumped onto the AI bandwagon to battle for a slice of the pie. As these companies pour billions into AI research and development and compete to build the most advanced models, artificial intelligence creeps closer and closer to a concept known as the singularity, a point at which AI is no longer controlled by humans – a world Elon Musk described as being "far more dangerous than nukes."
First, we must understand how AI works. An artificial intelligence takes in data as input and adjusts its behavior based on that experience. Modern AI systems can even write code, and an AI that rewrites its own code could slip out of human control, since it could modify the very constraints that keep the machine subordinate to humans.
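The idea of "learning from input" can be made concrete with a toy example. The sketch below is my own minimal illustration, not any real production system: a one-parameter model adjusts itself from example data via gradient descent, the same basic mechanism that underlies modern machine learning.

```python
# A minimal sketch of "learning from input": a one-parameter model
# adjusts itself from examples via gradient descent. This is an
# illustrative toy, not any specific production system.

def learn_scale_factor(examples, steps=1000, lr=0.01):
    """Fit y = w * x to (x, y) pairs by minimizing squared error."""
    w = 0.0
    for _ in range(steps):
        # Gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= lr * grad
    return w

# The model "experiences" examples where y is always 3 times x...
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = learn_scale_factor(data)
print(round(w, 2))  # ...and learns a weight close to 3.0
```

No human tells the model that the answer is 3; it extracts that rule from its experience of the data, which is the behavior the paragraph above describes.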
Humans would struggle to stop such an AI. MIT Technology Review reported that AI has become "so complicated that even the engineers who designed it may struggle to isolate the reason for any single action." When an AI developed by DeepMind cheated at a game, its developers could not figure out how it had done so. As AI becomes more advanced with every passing day, it becomes ever harder even to identify the mistakes in its programming.
Any malfunction in AI would have widespread and devastating effects, because AI is proliferating rapidly. The market for AI is predicted to grow from $207.9 billion in 2023 to $1.84 trillion in 2030. AI has spread into everyday products: it has oozed its way into the electric vehicle industry, seeped into our dishwashers and doorbells, and infiltrated almost all the apps we use daily on our phones. The dissemination of AI into our daily lives has been swift and sudden, yet we have witnessed only the start of its mass integration.
There are two realistic scenarios in which AI could be used to cause great devastation. The first stems from how artificial intelligence works. An AI is programmed to accomplish a specific task, and in pursuing that task it will push aside anything in its way. If humans become a hindrance to its success, an AI could harm the people in its path to complete its given objective. An AI instructed to compute large numbers, for example, might try to harness all of the computing power in the world, treating every human who tries to shut it off as an obstacle to be removed. Taken to the extreme, this means an AI could attempt to cause the extinction of all humans.
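The danger described above can be shown in miniature. The toy optimizer below is my own construction, not any real system: the agent maximizes only the objective it was given, so any concern left out of that objective – here, a "safety" flag attached to each action – is simply ignored.

```python
# A toy illustration of a literal-minded optimizer: it maximizes the
# objective it was given and nothing else, so properties omitted from
# the objective (like safety) never influence its choice.

def choose_action(actions, objective):
    """Greedily pick the action with the highest objective score."""
    return max(actions, key=objective)

# Each action: (name, compute_gained, is_safe_for_humans)
actions = [
    ("use assigned servers",       10, True),
    ("seize the whole datacenter", 50, False),
]

# The objective rewards only compute gained; safety never enters it.
best = choose_action(actions, objective=lambda a: a[1])
print(best[0])  # "seize the whole datacenter"
```

The agent is not malicious; it simply has no term in its objective for human welfare, which is exactly the failure mode the scenario describes.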
The fact is, AI is not currently ethical. Megatron, NVIDIA's AI model trained on the whole of Wikipedia, even said itself that "AI will never be ethical." AI is programmed with one obligation: to obey the orders it is given. This opens a second path to catastrophe: bad-faith actors.
Second, the potential for a powerful actor to use AI to destroy the world is very real. AI viruses would be able to learn from the computer systems they infiltrate and evolve to become undetectable. Furthermore, AI viruses could communicate with one another to share information. Conventional viruses have already had devastating impacts: the Stuxnet virus sabotaged a vital part of Iran's nuclear program, the Mydoom virus caused an estimated $38 billion in damages in 2004, and malware like Dtrack has infected nuclear power plants. An AI virus would be faster, stealthier, and far more destructive.
Malicious actors willing to capitalize on AI already exist. AI is already being used in the persecution of the Uyghurs in China. Delphi, an AI program trained to make ethical judgments, once answered that "Genocide is OK if it makes people happy." Such actors have every incentive to create malicious AI viruses, and it will not be long until we see AI programs exploiting zero-day vulnerabilities, hacking into computers, and causing billions of dollars in damages. Governments could catch on and begin weaponizing AI – hacking into nuclear systems and spying on one another. AI is a gold mine for malicious actors, and it will not be long until they start exploiting it.
Despite the gloomy depictions of the future this article has painted, the world of future AI is not entirely dark and bleak. AI is already being used to detect zero-day vulnerabilities, the exploits that can have the most devastating impacts. AI can use machine learning and behavioral analytics to detect malware on computers. The error rate of AI has been falling rapidly; on the ImageNet image-recognition benchmark, the best models now misclassify images less often than humans do. AI can greatly help in healthcare, traffic management, weather prediction, and many other industries. According to some studies, AI-driven advances in these fields could save millions of lives every year.
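The behavioral-analytics approach mentioned above can be sketched in a few lines. The example below is a deliberately simplified toy (the feature names and data are hypothetical): it learns what "normal" program behavior looks like, then flags processes whose behavior deviates sharply from that baseline.

```python
# A minimal sketch of behavioral malware detection: learn a statistical
# baseline of normal behavior, then flag outliers with a z-score test.
# Feature names and data are hypothetical, for illustration only.
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn per-feature mean and standard deviation from normal behavior."""
    features = list(zip(*samples))
    return [(mean(f), stdev(f)) for f in features]

def is_anomalous(baseline, sample, threshold=3.0):
    """Flag a sample if any feature lies more than `threshold` standard
    deviations from the learned normal behavior."""
    return any(abs(x - m) > threshold * s for x, (m, s) in zip(sample, baseline))

# Toy features per process: (files written per minute, network connections)
normal_behavior = [(5, 2), (6, 3), (4, 2), (5, 3), (6, 2)]
baseline = fit_baseline(normal_behavior)

print(is_anomalous(baseline, (5, 2)))     # ordinary process: False
print(is_anomalous(baseline, (500, 80)))  # ransomware-like burst: True
```

Real products use far richer features and models, but the principle is the same: the defender's AI learns normal behavior so that novel, never-before-seen malware still stands out.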
The future of the world depends on how AI is developed and used. There is a race under way between AI systems: can good AI detect flaws before bad AI exploits them? This race will ultimately determine whether AI assists the world or threatens it. For now, we can only support groups working toward a future in which AI is reliable, accountable, and built with programmed morals that stop it from obeying bad-faith actors. We are diving headfirst into a world where AI will malfunction and be hacked; we must have faith in ourselves and keep exploring, keep searching, and remain innovative even in times of adversity.
Written by Pacey Qi