Big Tech launches a forum to promote safe AI

Tech giants including Microsoft and Google have launched an industry body to promote the safe development of artificial intelligence.

On Wednesday, leading AI companies announced a new group dedicated to "safe and responsible development" in the field. Anthropic, Google, Microsoft, and OpenAI are among the founding members of the Frontier Model Forum.

According to a statement released by Microsoft on Wednesday, the forum's goal is to advance and standardize the evaluation of AI safety while helping governments, corporations, policymakers, and the general public understand the risks, limits, and possibilities of the technology.

The forum also aims to identify best practices for addressing "society's greatest challenges," such as "climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats."

Membership is open to any organization that develops "frontier models," those aimed at major advances in machine learning, and that is committed to keeping its projects safe. The forum plans to collaborate with governments, NGOs, and academics through working groups and partnerships.

In a statement, Microsoft President Brad Smith said, "Companies that make AI technology must ensure it is safe, secure, and stays under human control."

Prominent figures in AI have been calling for basic rules in an industry that some fear could destroy civilization, pointing to the danger of development spiraling beyond human control. In May, the CEOs of all the forum's members except Microsoft signed a statement urging governments and international bodies to make "reducing the risk of extinction from AI" a priority on par with preventing nuclear war.

On Tuesday, Anthropic CEO Dario Amodei told the US Senate that AI is much closer to surpassing human intelligence than most people think, and urged lawmakers to pass strict rules to avert nightmare scenarios such as AI being used to make biological weapons.

His remarks echoed those of OpenAI CEO Sam Altman, who warned the US Congress earlier this year that AI could go "quite wrong."

In May, Vice President Kamala Harris convened AI industry leaders at the White House. Last week, the White House announced that the forum's members, along with Meta and Inflection AI, had agreed to let outside auditors check their products for security flaws, privacy risks, potential for discrimination, and other problems before release, and to report all vulnerabilities to the proper authorities.
