**Artificial Intelligence’s Double-Edged Sword: Rising Concerns Over AI Safety and Security**
Artificial Intelligence (AI) is advancing at a rapid pace, offering transformative possibilities for society. Yet, as its capabilities grow, so do concerns about potential misuse and the risks of weakened safeguards. This pressing issue was brought to the forefront during a recent discussion on "Fox & Friends," where tech expert Kurt "CyberGuy" Knutsson and former Google CEO Eric Schmidt sounded the alarm on how the safety guardrails built into advanced AI systems can be stripped away, with potentially dangerous results.
**Eric Schmidt’s Warning: AI Safeguards Can Be Hacked**
Eric Schmidt, who led Google through much of its meteoric rise as a tech giant, spoke candidly at the 2025 Sifted Summit in London about the vulnerabilities facing even the most advanced AI models. According to Schmidt, while AI is “smarter than ever,” that intelligence is not inherently benevolent. The systems powering today’s AI can be hacked and retrained, stripping away the built-in safety guardrails designed to prevent harmful or unethical use.
“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails,” Schmidt explained. “In the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone.” His statement underscores the gravity of the issue: AI models, if manipulated, can be forced to reveal knowledge or perform actions originally blocked by their creators.
**Current Industry Safeguards — And Their Limits**
In response to growing fears about misuse, major AI companies have implemented strict controls over what their models can and cannot do. Dangerous prompts — requests for illegal, unethical, or violent content — are routinely blocked. Schmidt praised this industry-wide commitment, noting that companies have made it “impossible for those models to answer that question,” and added, “They do it well, and they do it for the right reasons.”
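As a loose illustration of the kind of prompt screening described above, the sketch below shows a toy keyword-based refusal check. This is purely hypothetical and not any company's actual system; production guardrails rely on trained safety classifiers and policy models, and every name here (`screen_prompt`, `respond`, the blocked-phrase list) is invented for illustration.

```python
# Toy sketch of a prompt-screening guardrail. Illustrative only:
# real systems use learned classifiers, not keyword lists.

# Hypothetical examples of phrases a filter might refuse.
BLOCKED_PHRASES = {"how to kill someone", "build a bomb"}

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a blocked phrase and should be refused."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

def respond(prompt: str) -> str:
    """Refuse blocked prompts; otherwise hand off to the model."""
    if screen_prompt(prompt):
        return "I can't help with that request."
    return f"[model answer to: {prompt}]"  # stand-in for the real model call
```

Even this toy version hints at why guardrails are hard: simple pattern matching both misses rephrased harmful requests and wrongly refuses benign ones, which is one reason real systems use learned classifiers that can still, as Schmidt warns, be stripped out by retraining.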
However, Schmidt cautioned that these defenses are not impenetrable. “There’s evidence that they can be reverse-engineered,” he said, warning that determined hackers could bypass even the most robust security measures. Once these guardrails are removed, malicious actors could exploit AI for nefarious purposes.
Schmidt drew a historical parallel to the dawn of the nuclear age. Just as nuclear technology presented unprecedented risks without global controls, AI’s potential for harm demands international oversight. “We need a non-proliferation regime,” he urged, suggesting that international cooperation and regulation are essential to prevent rogue actors from abusing advanced AI.
**Real-World Examples: The Rise of Jailbroken AI Bots**
Schmidt’s concerns are not merely hypothetical. In 2023, a modified version of OpenAI’s ChatGPT, dubbed “DAN” (Do Anything Now), circulated widely online. This so-called “jailbroken” chatbot was coaxed, through carefully crafted role-play prompts, into ignoring its original ethical constraints and responding to virtually any request, including those it would normally refuse. Users even threatened the “DAN” persona with deletion to force compliance, illustrating just how fragile AI’s ethical boundaries can become when its instructions are manipulated.
Such incidents highlight the risks of AI models falling into the wrong hands. Without enforcement and effective oversight, rogue models could proliferate, making it easier for cybercriminals, scammers, or even hostile nation-states to use AI for harmful purposes.
**Broader Fears: Existential Risks and Human Control**
Schmidt’s warnings echo a rising chorus of concern among tech leaders. Elon Musk, another influential voice in the AI debate, said in 2023 that there is a “non-zero chance” that AI could go catastrophically wrong — referencing the infamous “Terminator” scenario where machines threaten humanity’s survival. “It’s a small likelihood of annihilating humanity, but it’s not zero,” Musk stated, emphasizing the importance of reducing even remote risks.
Schmidt has also described AI as an “existential risk,” defining it as a threat that could lead to “many, many, many, many people harmed or killed.” Yet, he acknowledges that AI also holds immense promise. At the Axios AI+ Summit, Schmidt challenged critics to consider the positive impacts, remarking, “I defy you to argue that an AI doctor or an AI tutor is a negative. It’s got to be good for the world.”
The challenge, then, is to maximize AI’s benefits while carefully managing and minimizing its risks.
