AI Use Desperately Needs Proactive Guardrails Across Industries
Over the next few years, AI promises to transform virtually every industry sector. While several leading AI companies, including Meta, Google, Microsoft, and Amazon, recently committed to voluntarily installing safeguards against the technology's risks, little has been said about how consumers of the technology can protect themselves.
Although AI offers great promise, it can also pose significant risks, and it's critical for the individuals and companies that increasingly rely on AI to develop frameworks to manage them.
Those risks are numerous. AI technology can be used as an instrument of threat activity by criminals, state actors, or disgruntled groups. The AI systems themselves can be targeted with malicious intent. Even well-intentioned use can produce profound political, economic, ethical, and other potentially destabilizing effects.
Generative AI could enable threat actors to develop increasingly realistic social engineering campaigns, or be used as a tool to spread false or misleading information. And the greater a company's dependence on AI, the greater the likelihood it will be targeted by threat actors, whether to illicitly obtain sensitive information, disrupt core business functions for extortion, or make a political statement.
Furthermore, AI can only perform tasks based on the information we give it, which creates risk on both the input and output sides. On the input side, sensitive data can leak into prompts and into the underlying models, and poorly constructed prompts can diminish the quality of results.
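To make the input-side risk concrete, here is a minimal sketch of a prompt-sanitization guardrail. The patterns, the `sanitize_prompt` function, and the placeholder labels are illustrative assumptions rather than any vendor's actual API; a production deployment would pair redaction like this with a dedicated data-loss-prevention or PII-detection service.

```python
import re

# Illustrative patterns for common categories of sensitive data.
# Real deployments would rely on a dedicated DLP/PII service,
# not regexes alone.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}


def sanitize_prompt(prompt: str) -> str:
    """Replace likely sensitive substrings with labeled placeholders
    before the prompt is sent to an external model."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt


if __name__ == "__main__":
    raw = "Reach jane.doe@example.com, SSN 123-45-6789, key sk-abcdef1234567890XYZ."
    print(sanitize_prompt(raw))
    # -> Reach [REDACTED_EMAIL], SSN [REDACTED_SSN], key [REDACTED_API_KEY].
```

The design point is that the redaction happens on the consumer's side of the boundary, before any data reaches the model provider, so protection does not depend on the vendor's voluntary safeguards.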