
Korbel Professor Attends Innovative Conference on the Future of Artificial Intelligence


“The further development of Artificial Intelligence (AI) will provide numerous benefits to humankind; however, researchers must remain vigilant about the potential risks associated with such an undertaking,” said one professor who studies the development of autonomous weapons. 

Heather Roff Perkins, a Visiting Professor at the Josef Korbel School of International Studies, attended a conference in early January in Puerto Rico, sponsored by the Future of Life Institute. The attendees explored the favorable effects of AI’s expansion in the future, while also deliberating about the potential adverse consequences should that growth go unchecked. 

“AI is very beneficial in a number of metrics, and in a number of ways,” she said, “but we also have to balance how much power we’re giving over to this.”

The three-day conference, titled “The Future of AI: Opportunities and Challenges,” brought together around 80 of today’s leading scholars and representatives of technology companies, including executives from Google, Skype co-founder Jaan Tallinn, and Elon Musk of Tesla and SpaceX.

Roff Perkins participated in the conference’s panel on law and ethics, contributing her expertise on the advancing weaponization of AI and robotic systems. “My job was to say, ‘These are the types of weapons that we currently have, and here is where the (Department of Defense) says it really wants to go in the next 20 or 30 years,’” she said.

According to Roff Perkins, two human-supervised autonomous weapons systems are currently deployed. One is the well-known Israeli Iron Dome, and the other is the U.S. Aegis system employed on 40 combat ships worldwide. Both are defensive systems that constantly scan their environments, identify and prioritize threats, and then decide when and how to counter an attack, she said. However, a human can always override these systems whenever it is deemed necessary.

“The types of autonomous systems that most worry people now are systems in which a human being cannot intervene at any time,” she said, “and that the system will decide what is an acceptable target and then will choose to fire on the target by itself.”

Roff Perkins noted that Samsung is already developing such a weapon, the SGR-A1, a sentry bot designed to be placed along a defensive perimeter that can track and fire upon a target up to two miles away without human intervention. Also in development is Lockheed Martin’s Long-Range Anti-Ship Missile (LRASM), which can seek and navigate autonomously and then choose which targets to fire on from a preselected area.

Such systems display a level of autonomy that Roff Perkins said worries her and motivates her to raise awareness and to promote legislation at the United Nations prohibiting their use.

However, Roff Perkins said the possible hazards posed by the refinement of AI and its deployment in weapons systems should not overshadow the likely good that will come from the technology.

“As AI becomes stronger with time,” she said, “it could better assist with finding patterns in crime, predicting weather patterns, and translating unfamiliar languages while traveling abroad or communicating with someone online. So these could really foster social relationships…and build kind of a global community.”

“I think there are lots of beneficial aspects of AI.”

The conference was a success, she said, as evidenced by these and similar discussions throughout its duration, but its primary achievement was starting a dialogue between academics and the developers currently engaged in AI research and development.

“There was a potential for these two groups not to talk to one another, but in reality they did…they saw one another’s concerns. And I think that is really positive – building those bridges and building that community.

"This conference did a great job at doing that.”