The long-awaited real-life meeting between Elon Musk and Mark Zuckerberg happened today, with the two attending an “AI Insight Forum” on Capitol Hill to discuss the development of AI technologies, and their implications across many applications.
Spoiler alert: There was no MMA bout, and Elon didn’t jump the table and attack Zuck, despite his repeated threats to turn up at his house and challenge him to mortal combat.
Instead, the two joined other industry heavyweights, including Google’s Sundar Pichai, Microsoft’s Satya Nadella, and more, to discuss the potential for AI regulation, the current laws and possible blind spots, and what needs to change in future, in order to mitigate economic risk. Or worse: superintelligent robots that want to use us as batteries.
Zuckerberg published his prepared remarks on the Meta blog ahead of time, in which he outlined his two main concerns at present: safety and access.
On the safety front, Zuckerberg noted that Meta’s already working with various academics to establish rules and parameters around safe AI use, while it’s also looking to build in more protections for artists and others who could be impacted by generative AI development.
As per Zuckerberg:
“We think policymakers, academics, civil society and industry should all work together to minimize the potential risks of this new technology, but also to maximize the potential benefits. If you believe this generation of AI tools is a meaningful step forward, then it’s important not to undervalue the potential upside.”
In terms of access, Zuckerberg rightly notes that, in future, access to AI could become a differentiating factor, which is why it’s important to facilitate access, where possible, to democratize opportunity.
Following the closed-door meeting, Musk noted the need for more AI regulation in order to ensure that companies “take actions that are safe, and in the general interest of the public.”
The group also discussed the rise of deepfakes, and how to address misinformation powered by AI tools, and yes, they also touched on the risks of creating a superintelligence, which could supersede humans and spark a robot apocalypse.
That still seems fairly sci-fi, but it’s another concern to factor in, as they look to establish rules around responsible AI development.
Senate Majority Leader Chuck Schumer came away from the meeting vowing to form a foundation for bipartisan AI policy, which will be considered by Congress in future.
It’s an important area of focus, and with Meta looking to develop its own, more advanced AI model, while Musk also invests in the same, now is the time to establish clearer AI regulations, before the next stage of the evolving AI arms race.
And it will indeed be an arms race. The cost of developing AI systems is getting steeper by the day, as access to the necessary hardware becomes more restricted. That means only the richest businesses will be able to invest at the required level, which also means that the big players will be the only ones truly able to compete in the AI development stakes.
Left unchecked, this could give Meta, Microsoft, Google, and others significantly more market power, to the point that they may well hold the fates of various industries in their hands. We’re already dealing with the impacts of market monopolies in several key technological areas, which is why regulations need to be established now, at the foundational stage, to mitigate impacts in future.
It’s difficult to see such regulations coming about quickly, or easily, but some tech companies have already signed on to various AI development agreements.
U.S. Senators will now need to debate the next steps, as they look to form the ground rules for the AI shift.