In the world of generative AI, it is the big names that get the most airtime. Big Tech players like Microsoft and lavishly funded startups like OpenAI have earned invitations to the White House and the earliest of what will likely be many, many congressional hearings. They're the ones that get big profile pieces discussing how their technology will end humanity. As politicians in the US and beyond grapple with how to regulate AI, this handful of companies has played an outsize role in setting the terms of the conversation. And smaller AI players, both commercial and noncommercial, are feeling left out while facing a more uncertain future.
AI regulation is taking shape, but startups are being left out – The Verge
Big AI, a term that's long overdue for adoption, has been actively guiding potential AI policies. Last month, OpenAI, Meta, Microsoft, Google, Anthropic, and Amazon signed an agreement with the White House promising to invest in responsible AI and develop watermarking features to flag AI-generated content. Soon after, OpenAI, Microsoft, Anthropic, and Google formed the Frontier Model Forum, an industry coalition aimed at promoting the safe and responsible use of frontier AI systems. It was set up to advance AI research, find best practices, and share information with policymakers and the rest of the AI ecosystem.
But these companies only account for one slice of the generative AI market. OpenAI, Google, Anthropic, and Meta all run what are called foundation models, AI frameworks that can either be language-based or image-focused. On top of these models, there's a booming sector of far smaller businesses building apps and other tools. They face many of the same forms of scrutiny, but as AI rules are being developed, they worry they'll have little say in the results and, unlike Big AI, which has large war chests it can tap to absorb the costs of noncompliance, they cannot afford disruptions in business.
via www.theverge.com
The threat from the State melded with AI is far greater than any threat AI poses by itself.