Hackers and other criminals can easily commandeer computers running open-source large language models outside the guardrails and constraints of the major artificial-intelligence platforms, creating security risks and vulnerabilities, researchers said on Thursday.

While some of the open-source models include guardrails, the researchers identified hundreds of instances in which those guardrails had been explicitly removed. One of the researchers, Guerrero-Saade, likened the situation to an "iceberg" that is not being properly accounted for across the industry and the open-source community.

STUDY EXAMINES SYSTEM PROMPTS

The research analyzed publicly accessible deployments of open-source LLMs served through Ollama, a tool that lets people and organizations run their own instances of various large language models.

"Ultimately, responsible open innovation requires shared commitment across creators, deployers, researchers, and security teams," the researchers concluded.
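For illustration only: the sketch below is not the researchers' methodology, merely a minimal example, under stated assumptions, of why a publicly reachable Ollama server is easy to probe. It assumes an unauthenticated host on Ollama's default port (11434); the IP address is a hypothetical placeholder, and the standard Ollama /api/tags and /api/show endpoints are used to list hosted models and read each model's Modelfile, where any system prompt (the guardrail text) is configured.

```python
# Illustrative sketch only -- NOT the study's actual tooling.
# Assumes a hypothetical, unauthenticated Ollama server on its default port.
import json
import urllib.request

HOST = "http://203.0.113.10:11434"  # hypothetical host (TEST-NET documentation IP)

def get_json(url, payload=None):
    """Fetch a URL (POST if a JSON payload is given) and decode the JSON reply."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# /api/tags lists every model pulled onto the server.
for model in get_json(f"{HOST}/api/tags").get("models", []):
    name = model["name"]
    # /api/show returns the model's Modelfile; any SYSTEM directive in it is
    # the deployer-configured system prompt, i.e., the guardrail text.
    info = get_json(f"{HOST}/api/show", {"name": name})
    print(name)
    print(info.get("modelfile", "")[:500])  # first 500 chars of the Modelfile
```

The sketch deliberately uses only the Python standard library so it runs without dependencies; a real survey of exposed deployments would add concurrency, rate limiting, and responsible-disclosure handling.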
Source: The Telegraph, January 29, 2026, 14:53 UTC