Artificial Intelligence (AI) has become a significant part of everyday life. Many people are familiar with using the technology to perform tasks and simulate human behavior. One branch of AI, generative AI (gen AI), is on the rise, becoming increasingly useful due to its ability to create original content (text, images, video) in seconds.
One of the many dilemmas that arise as AI evolves is ethics. Keeping AI a positive influence on humanity involves a number of complex challenges. Some of these concerns include AI's ability to access large amounts of data and break into personal information at a user's command, or the possibility that AI could slip away from human control and eventually replace people. Challenges like these are a main reason why AI and other technology systems are now being brought before lawmakers.
California is one of the first states in the nation to take a step forward and create safety measures for AI systems, hoping the law will help regulate the pace at which AI technology is evolving. The proposal, made by Democratic Senator Scott Wiener, was challenged by tech companies that work with AI daily; they argued that regulation should be handled by the federal government and should target developers only when AI is actually misused, rather than the company as a whole. Some of California's House members also opposed the bill, calling it vague. But other companies, including Anthropic and later Amazon and Google, backed the need for the bill, even helping to improve it with suggestions and input, and it gained online support from Elon Musk.
The proposal's main aim is to reduce the risks of AI by requiring companies to follow protocols before releasing their systems to the world. Some of these protocols include testing the models and restricting users from manipulating the technology to perform harmful or dangerous tasks. For example, an untested, unregulated system released with no boundaries could enable users to build chemical weapons or hack into company databases. The bill also accounts for capabilities that may come with rapid advancements in the technology industry.
Wiener also mentioned that the bill is meant as a precautionary measure in case AI is misused. He argued that even people who don't believe AI could be manipulated and used harmfully should support the bill, because if AI is never misused, its provisions would simply never come into play. Wiener's precaution is to establish ground rules for AI companies. The bill itself is currently with Governor Gavin Newsom. According to U.S. News, "… [he] has until the end of September to decide whether to sign them into law, veto them or allow them to become law without his signature." Newsom has also stated that he does not support overregulation of AI.
Written by Divya Saha