Hackers and other criminals can easily commandeer computers running open-source large language models outside the guardrails and constraints of the major artificial-intelligence platforms, creating security risks and vulnerabilities, researchers said on Thursday.

The research analysed publicly accessible deployments of open-source LLMs run through Ollama, a tool that lets people and organisations operate their own versions of various large language models. While some of the open-source models include guardrails, the researchers identified hundreds of instances in which those guardrails had been explicitly removed.

Guerrero-Saade, one of the researchers, likened the situation to an “iceberg” that is not being properly accounted for across the industry and the open-source community.

“Ultimately, responsible open innovation requires shared commitment across creators, deployers, researchers, and security teams,” the researchers wrote.

Ollama did not respond to a request for comment.
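Some context on why such deployments are discoverable at all: by default, Ollama serves an unauthenticated HTTP API on port 11434, so an instance bound to a public interface can be queried by anyone who finds it. A minimal sketch of that exposure, using Ollama's documented /api/tags endpoint and a hypothetical host address:

```python
# Sketch: enumerating the models served by a publicly reachable Ollama
# instance. The host below is a hypothetical placeholder (TEST-NET range);
# Ollama's HTTP API listens on port 11434 by default and requires no auth.
import json
import urllib.request

HOST = "203.0.113.7"  # hypothetical exposed deployment

with urllib.request.urlopen(f"http://{HOST}:11434/api/tags", timeout=5) as resp:
    for model in json.load(resp)["models"]:
        print(model["name"])  # e.g. "llama3:latest"
```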
Source: The Hindu, January 31, 2026, 06:19 UTC