Why Mark Zuckerberg has intensified the debate over the future of AI: Here’s what the Meta founder said

The CEO of the tech giant shared his views in a letter that addresses open source systems and what they could mean.

At the end of July 2024, Meta founder and CEO Mark Zuckerberg shared a letter in which he asserted that open source artificial intelligence (AI) is “the way” to go.

In his letter, the tycoon laid out his arguments for why he believes these systems should be freely accessible, rather than kept closed and subject to strict vetting aimed at identifying imminent risks.

His letter came hand in hand with the presentation of Llama 3.1, a system the company describes as “the first next-generation open-source AI model.”

In Zuckerberg’s vision, open source AI will benefit not only Meta and developers, but will also contribute “to the world.”

“AI has more potential than any other modern technology to increase human productivity, creativity and quality of life, and to accelerate economic growth while enabling progress in medical and scientific research,” he said.

In this sense, he argued that “open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power is not concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society.”

Zuckerberg’s statements come amid a debate in which accelerated progress in the field has raised questions about how the technology should be regulated.

Meanwhile, several voices from the industry have defended the Meta founder’s position.

American investor Bill Gurley shared the letter on his X (formerly Twitter) account, saying that in it “Zuck weighs in on others trying to use government to protect themselves and ‘capture’ the AI market.”

“It is somewhat heroic of Mr. Zuckerberg. Great for the entire ecosystem of entrepreneurs and developers,” he added.

Gurley’s post drew comments from, among others, Elon Musk, who wrote “+1 for Zuck,” signaling his support for the position.

Similarly, British programmer and investor Paul Graham shared the letter on his social media account and called it “an eloquent manifesto.”

It should be noted that “when we talk about an open source AI system, we are talking about a pre-trained algorithm or model whose source code is accessible to everyone, and all of this usually for free,” explained Claudia del Pozo, founder and executive director of the Eon Institute, in a column she wrote for Wired in March 2024.

Why Mark Zuckerberg has intensified the debate over the future of AI

According to an analysis published in Time magazine, Meta’s strategy with Llama puts pressure on its competitors while winning the support of many developers in the technology field, who tend to oppose regulations that might limit the spread of open source AI systems.

However, it also draws criticism from many cybersecurity specialists, who believe that open access to cutting-edge AI models could cause greater harm to society.

Andrea Miotti, executive director of Control AI, suggested to the aforementioned outlet that Zuckerberg’s letter “is part of a broader trend by some Silicon Valley CEOs and venture capitalists who refuse to take responsibility for any harm their AI technology may cause.”

“Including catastrophic results,” he added.

As expected, Zuckerberg is aware of the criticism and also addresses it in his letter.

“There is an ongoing debate about the safety of open source AI models, and my opinion is that they will be safer than the alternatives. I think governments will conclude that it is in their interest to support open source because it will make the world more prosperous and more secure.”

Along these lines, he explained that he distinguishes between two categories of harm: “unintentional” and “intentional.”

The former refers to “when an AI system may cause harm even if that was not the intention of those running it.”

The latter, on the other hand, corresponds to “when a bad actor uses an AI model with the intention of causing harm.”

According to him, the “unintentional” category covers “most of the concerns people have about AI,” from the influence AI systems will have on the billions of people who will use them to the most catastrophic science-fiction scenarios for humanity.

“On this front, open source should be significantly more secure, as the systems are more transparent and can be widely scrutinized. Historically, open source software has been more secure for this reason,” said Meta’s founder and CEO.


Regarding what he calls “intentional” harm, he said it is useful to “distinguish between what individual or small-scale actors may be able to do and what large-scale actors, such as nation-states with vast resources, may be able to do.”

Along the same lines, he continues: “At some point in the future, individual malicious actors will be able to use the intelligence of AI models to create entirely new harms from the information available on the internet. At that point, the balance of power will be fundamental to AI safety.”

“I think it would be better to live in a world where AI is widely deployed, so that larger players can check the power of smaller, malicious actors. This is how we have managed security on our social networks: our more robust AI systems identify and stop threats from less sophisticated actors, who often use AI systems at a smaller scale.”

“More broadly, large institutions deploying AI at scale will promote security and stability across society. As long as everyone has access to similar generations of models, which is what open source promotes, governments and institutions with more computing resources will be able to check bad actors with less compute,” he added.

In response to this argument, Hamza Tariq Chaudhry, a US policy specialist at the Future of Life Institute, told Time that Zuckerberg fails to consider that not all “big players” are benevolent.

“The most authoritarian states will likely reuse models like Llama to perpetuate their power and commit injustices,” he said.

He also stressed: “Coming from the Global South (the expert is from Pakistan), I am acutely aware that AI-powered cyberattacks, disinformation campaigns and other harms pose a much greater danger to countries with nascent institutions and severe resource constraints.”

Similarly, Andrea Miotti of Control AI said that “Zuckerberg’s statements show a disturbing disregard for basic safety in Meta’s approach to AI.”

“When it comes to catastrophic hazards, it’s a simple fact that the offense only needs to get lucky once, but the defense needs to get lucky every time. A virus can spread and kill in days, while a treatment can take years,” he added.

However, Meta’s founder and CEO is convinced that “it seems more likely that a world of closed models will result in a small number of large companies, in addition to our geopolitical adversaries, having access to the most advanced models, while startups, universities and small businesses miss out on opportunities.”

“Furthermore, limiting American innovation to closed development increases the possibility that we will not be leaders at all. Rather, I believe our best strategy is to build an open and strong ecosystem and have our leading companies work closely with our government and our allies to ensure they can make the most of the latest advances and gain a sustainable long-term advantage,” Zuckerberg stressed.

Source: Latercera
