Stanford's FMTI Reveals the Dark Side of AI
Transparency and accountability in AI development have become paramount concerns in a world increasingly reliant on artificial intelligence. A recent study by Stanford University, conducted in collaboration with researchers from MIT and Princeton, has cast a critical light on some of the tech giants in the field, with OpenAI, Meta, and Google all scoring well short of adequate on the Foundation Model Transparency Index (FMTI).
A New Era of Transparency Assessment
The FMTI is a groundbreaking initiative designed to evaluate the transparency of the most significant foundation models. It assesses each model against 100 indicators of openness, covering how the model was constructed, the data it was trained on, the computational resources used, and the developer's policies around data protection and risk mitigation. Understanding these facets is essential in a world where AI increasingly impacts society.
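To make the scoring concrete, below is a minimal sketch of how an index built from pass/fail indicators can be tallied into a score out of 100. The indicator names, domain grouping, and sample data are illustrative assumptions for this sketch, not the study's actual indicators or results.

```python
# Sketch of FMTI-style scoring: each model is checked against binary
# indicators grouped by domain, and the overall score is the number of
# indicators satisfied. Indicator names and data here are hypothetical.

from collections import defaultdict

# Hypothetical indicators grouped into the broad areas the index covers.
INDICATORS = {
    "data": ["training_data_sources_disclosed", "data_curation_described"],
    "compute": ["hardware_disclosed", "energy_use_reported"],
    "policies": ["usage_policy_published", "risk_mitigations_described"],
    # The real index defines 100 indicators across more domains than this.
}

def score_model(satisfied: set[str]) -> tuple[int, dict[str, int]]:
    """Return (total score, per-domain breakdown) for one model."""
    per_domain = defaultdict(int)
    for domain, indicators in INDICATORS.items():
        for indicator in indicators:
            if indicator in satisfied:
                per_domain[domain] += 1
    total = sum(per_domain.values())
    return total, dict(per_domain)

# Example: a model that discloses its data sources and publishes a usage policy.
total, breakdown = score_model({"training_data_sources_disclosed",
                                "usage_policy_published"})
print(total, breakdown)  # 2 {'data': 1, 'policies': 1}
```

Because every indicator is a simple disclosed/not-disclosed check, the per-domain breakdown also shows where a developer loses points, which is how the index distinguishes, say, weak data disclosure from weak policy disclosure.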
Alarming Transparency Deficiencies
The results of the FMTI are cause for concern. The mean score across all models was 37 out of 100, signalling a widespread lack of transparency, and none of the models scored high enough to be considered adequately transparent. The study highlights the urgency of addressing this issue, particularly as AI models play an increasingly integral role in everyday life.
Meta's Llama 2 Claims the Top Spot
Meta's Llama 2 model led the pack with a score of 54, which, while the highest in the study, still falls far short of acceptable transparency. Rishi Bommasani, a Stanford PhD student involved in the project, emphasises that Meta's score should not be treated as the goalpost: the aim should be far higher levels of transparency, ideally reaching 80, 90, or even 100.
OpenAI's Controversial Position
OpenAI's GPT-4 placed third but was criticised for its lack of transparency. Despite its name, OpenAI has explicitly stated that it will not be transparent about most aspects of its flagship model, GPT-4. This stance raises questions about the company's commitment to transparency and accountability, especially given the technology's widespread use.
Open Models Shine
An interesting trend in the FMTI results is that open models outperformed their closed counterparts on transparency. Models such as Llama 2 and BLOOMZ, which are openly released for public download and scrutiny, scored notably higher. This speaks to a central question in the AI policy debate: whether models should be open or closed.
Policy Implications and the Road Ahead
Stanford University intends the FMTI to drive positive policy changes in AI. It is set to be published annually, with 2023 marking its inaugural year. The initiative could also play a pivotal role in shaping the European Union's AI Act, offering policymakers invaluable insights into the current state of transparency and where improvements are needed.
Nine companies evaluated in the study have pledged to support the White House's responsible AI initiatives. The hope is that the FMTI will serve as a motivator, encouraging these companies to uphold their commitments and advance transparency in AI.
In conclusion, Stanford's Foundation Model Transparency Index has shed light on a pressing issue in the AI landscape. OpenAI, Meta, and Google may be leaders in the field, but they have considerable ground to make up on transparency. As AI continues to shape our lives, the call for greater transparency and accountability grows louder, and the FMTI offers a vital starting point for that transformation.