Recently, DevOps professionals were reminded that the software supply chain is rife with risk, or, as I like to say, that it's a raging dumpster fire. Sadly, this risk now extends to open source artificial intelligence (AI) software. Investigations into Hugging Face (think GitHub for AI models and training data) uncovered as many as one hundred potentially malicious models hosted on its platform. The incident is a reality check on the ever-present vulnerabilities that can catch unsuspecting dev teams by surprise as they acquire machine learning (ML) or AI models, datasets, or demo applications.
Hugging Face does not stand alone in its vulnerability. PyTorch, another open-source ML library developed by Facebook’s AI Research lab (FAIR), is widely used for deep learning applications and provides a flexible platform for building, training, and deploying neural networks. PyTorch is built on the Torch library and offers strong support for tensor computation and GPU acceleration, making it highly efficient for complex mathematical operations often required in ML tasks.
However, PyTorch's recent compromise raises a specific concern: AI models pulled blindly from open-source sites may already have been poisoned by malicious actors.
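The article does not prescribe a specific mitigation here, but one common risk with model artifacts downloaded from public hubs is unsafe deserialization of pickle-based checkpoints. The sketch below, assuming PyTorch and the safetensors package (file names are hypothetical), shows two safer loading habits.

```python
# Minimal sketch (not the author's own tooling): prefer loaders and formats
# that do not execute arbitrary code when pulling third-party weights.
import torch
from safetensors.torch import load_file  # pip install safetensors

# Hypothetical local paths for weights downloaded from a model hub.
PICKLE_CKPT = "downloaded_model.bin"
SAFETENSORS_CKPT = "downloaded_model.safetensors"

# torch.load on a pickle-based checkpoint can run attacker-supplied code;
# weights_only=True restricts deserialization to plain tensors and containers.
state_dict = torch.load(PICKLE_CKPT, map_location="cpu", weights_only=True)

# Better still: safetensors stores raw tensors only, with no code-execution path.
state_dict = load_file(SAFETENSORS_CKPT)
```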
This fear, while justified, stands in stark contrast to the long-standing belief in the benefits of open source, such as fostering community through collaboration and building on other people's ideas. Those benefits for communities forming around large language models (LLMs) and AI seem to evaporate as malicious actors gain more opportunities to enter the supply chain, corrupt CI/CD pipelines, or alter components that were believed to come from trusted sources.
Software security evolves from DevOps to LLMOps
LLMs and AI have expanded supply chain security concerns for organizations, particularly as interest grows across sectors in incorporating LLMs into product portfolios. Cybersecurity leaders whose organizations are adapting to the broad availability of AI applications must stand firm against supplier-introduced risk, not just in traditional DevSecOps but now in ML operations (MLOps) and LLM operations (LLMOps) as well.
CISOs and security professionals should be proactive about detecting malicious datasets and responding quickly to potential supply chain attacks. To do that, you must be aware of what these threats look like.
Introduction to LLM-specific vulnerabilities
The Open Worldwide Application Security Project (OWASP) is a nonprofit foundation working to improve the security of software through community-led open-source projects, including code, documentation, and standards. It is a genuinely global community of more than 200,000 users in over 250 local chapters, and it runs industry-leading educational and training conferences.
This community's work led to the creation of the OWASP Top 10 vulnerabilities for LLMs. As one of its original authors, I know how these vulnerabilities differ from traditional application vulnerabilities and why they are significant in the context of AI development.
LLM-specific vulnerabilities, while initially appearing isolated, can have far-reaching implications for software supply chains, as many organizations are increasingly integrating AI into their development and operational processes. For example, a Prompt Injection vulnerability allows adversaries to manipulate an LLM through cleverly crafted inputs. This type of vulnerability can lead to the corruption of outputs and potentially spread incorrect or insecure code through connected systems, affecting downstream supply chain components if not properly mitigated.
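As a deliberately naive illustration of the problem (my own example, not a complete defense), the sketch below screens user-supplied text for obvious instruction-override phrases before it is spliced into a prompt. Real prompt-injection defenses require layered controls such as privilege separation, output validation, and human review.

```python
# Heuristic pre-filter for obvious prompt-injection phrases. The pattern list
# and function names are illustrative only; attackers can easily evade such checks.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard the system prompt",
    r"you are now",
    r"reveal your (system prompt|instructions)",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known instruction-override pattern."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

PROMPT_TEMPLATE = "Summarize the following customer ticket:\n---\n{ticket}\n---"

def build_prompt(ticket_text: str) -> str:
    if looks_like_injection(ticket_text):
        raise ValueError("Possible prompt injection detected; route to manual review.")
    # Delimiters encourage the model to treat the ticket as data, not instructions.
    return PROMPT_TEMPLATE.format(ticket=ticket_text)
```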
Other security threats stem from an LLM's propensity to hallucinate, generating inaccurate or misleading information. This can introduce vulnerabilities into code that downstream developers or partners trust. Malicious actors could exploit hallucinations to introduce insecure code, potentially triggering new types of supply chain attacks that propagate through trusted systems and creating severe reputational or legal risks if these vulnerabilities are discovered after deployment.
Further vulnerabilities involve insecure output handling and the challenge of distinguishing intended from dangerous input to an LLM. Attackers can manipulate inputs to an LLM, leading to the generation of harmful outputs that may pass unnoticed through automated systems. Without proper filtering and output validation, malicious actors could compromise entire stages of the software development lifecycle. Implementing a Zero Trust approach is crucial to filter data both from the LLM to users and from the LLM to backend systems. This approach can involve using tools like the OpenAI Moderation API to ensure safer filtering.
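As a hedged sketch of that Zero Trust filtering step, the snippet below gates LLM output behind the OpenAI Moderation API before it reaches a downstream system. It assumes the openai Python SDK (v1.x), an OPENAI_API_KEY in the environment, and a moderation model name that may differ for your account; note that moderation covers content-policy categories, so security-sensitive outputs such as generated code still need separate validation.

```python
# Hedged sketch: block LLM output that the moderation endpoint flags
# before it is passed to users or backend systems.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def release_to_downstream(llm_output: str) -> str:
    """Only pass LLM output onward if the moderation endpoint does not flag it."""
    result = client.moderations.create(
        model="omni-moderation-latest",  # assumed model name; check availability
        input=llm_output,
    )
    if result.results[0].flagged:
        raise ValueError("LLM output flagged by moderation; blocking downstream use.")
    return llm_output
```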
Finally, when it comes to training data, this information can be compromised in two ways: label poisoning, which means inaccurately labeling data to provoke a harmful response; and training data compromise, which skews the model's judgments by tainting a portion of its training data. While data poisoning implies that a malicious actor is actively working to contaminate your model, it can also happen by mistake, especially with training datasets distilled from public internet sources.
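Hash pinning is one simple provenance control that helps here. The sketch below (my own example, with a hypothetical manifest file; it detects tampering after vetting, not poisoning that was present from the start) refuses to train if any dataset file no longer matches the SHA-256 recorded when it was reviewed.

```python
# Sketch of a dataset integrity check: pin the SHA-256 of each vetted training
# file and fail fast if a file changes before training.
import hashlib
import json
from pathlib import Path

MANIFEST = Path("dataset_manifest.json")  # hypothetical: {"relative/path.csv": "<sha256>", ...}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(data_dir: Path) -> None:
    expected = json.loads(MANIFEST.read_text())
    for rel_path, pinned_hash in expected.items():
        actual = sha256_of(data_dir / rel_path)
        if actual != pinned_hash:
            raise RuntimeError(f"{rel_path} changed since it was vetted; possible tampering.")
```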
There is also the possibility that a model "knows too much," regurgitating information on which it was trained or to which it had access. For example, in December 2023, researchers from Stanford showed that LAION-5B, a highly popular dataset used to train image generation models such as Stable Diffusion, contained over 3,000 images related to child sexual abuse material. The finding sent developers in the AI image generation space scrambling to determine whether their models used this training data and what impact that might have on their applications. A development team that hadn't carefully documented its training data would have no way of knowing whether its models were at risk of generating immoral and illegal images.
Tools and security measures to help build boundaries
To mitigate these threats, developers can incorporate security measures into the AI development lifecycle to create more robust and secure applications. A secure process for building LLM apps can be broken into five simple steps:
1) foundation model selection; 2) data preparation; 3) validation; 4) deployment; and 5) monitoring.
To enhance the security of LLMs, developers can leverage cryptographic techniques such as digital signatures. By digitally signing a model with a private key, a unique identifier is created that can be verified using a corresponding public key. This process ensures the model’s authenticity and integrity, preventing unauthorized modifications and tampering. Digital signatures are particularly valuable in supply chain environments where models are distributed or deployed through cloud services, as they provide a way to authenticate models as they move between different systems.
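To make that concrete, here is a minimal sketch of signing and verifying a model artifact with Ed25519 using the cryptography package. The file name is hypothetical, and in practice the private key would live in a KMS/HSM or a signing service rather than in the same process.

```python
# Minimal sketch: sign a model artifact with a private key and verify it with
# the corresponding public key before deployment.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

model_bytes = open("model.safetensors", "rb").read()  # hypothetical artifact

# Publisher side: sign the artifact bytes with the private key.
private_key = ed25519.Ed25519PrivateKey.generate()
signature = private_key.sign(model_bytes)

# Consumer side: verify with the public key; refuse to deploy on failure.
public_key = private_key.public_key()
try:
    public_key.verify(signature, model_bytes)
    print("Signature valid: artifact matches what the publisher signed.")
except InvalidSignature:
    raise SystemExit("Signature check failed: refuse to deploy this model.")
```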
Watermarking is another effective technique for safeguarding LLMs. By embedding subtle, imperceptible identifiers within the model’s parameters, watermarking creates a unique fingerprint that traces the model back to its origin. Even if the model is duplicated or stolen, the watermark remains embedded, allowing for detection and identification. While digital signatures primarily focus on preventing unauthorized modifications, watermarks serve as a persistent marker of ownership, providing an additional layer of protection against unauthorized use and distribution.
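As a toy illustration of the idea (not a production watermarking scheme, and far cruder than the imperceptible techniques used in practice), the sketch below embeds a key-derived pattern into a weight matrix and later detects it by correlation, even on a copied model.

```python
# Toy white-box watermark: add a secret-keyed +/-1 pattern to the weights,
# then detect ownership by correlating against the same pattern.
import numpy as np

STRENGTH = 0.05  # perturbation size; real schemes are subtler and more robust

def _pattern(key: int, shape) -> np.ndarray:
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed_watermark(weights: np.ndarray, key: int) -> np.ndarray:
    return weights + STRENGTH * _pattern(key, weights.shape)

def detect_watermark(weights: np.ndarray, key: int, threshold: float = 0.02) -> bool:
    # Correlation is about STRENGTH for a watermarked model, near zero otherwise.
    score = float(np.mean(weights * _pattern(key, weights.shape)))
    return score > threshold

weights = np.random.default_rng(0).normal(size=(256, 256))
marked = embed_watermark(weights, key=1234)
print(detect_watermark(marked, key=1234))   # True: fingerprint present
print(detect_watermark(weights, key=1234))  # False: clean weights
```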
Model Cards and Software Bills of Materials (SBOMs) are also tools designed to increase transparency and understanding of complex software systems, including AI models. An SBOM is essentially a detailed inventory of a software product's components, listing and detailing every piece of third-party and open-source software it includes. SBOMs are critical for understanding the software's composition, especially for tracking vulnerabilities, licenses, and dependencies. Note that AI-specific versions are currently in development.
A key innovation in CycloneDX 1.5 is the ML-BOM (Machine Learning BOM), a game-changer for ML applications. This feature allows for the comprehensive listing of ML models, algorithms, datasets, training pipelines, and frameworks within an SBOM, and captures essential details such as model provenance, versioning, dependencies, and performance metrics, facilitating reproducibility, governance, risk assessment, and compliance for ML systems.
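For a sense of what this looks like, here is a hedged sketch that emits a minimal CycloneDX 1.5-style ML-BOM describing one model and its training dataset. The component and field names follow my reading of the 1.5 specification, and the model and dataset names are hypothetical; validate any real ML-BOM against the official CycloneDX schema.

```python
# Hedged sketch: build a minimal CycloneDX 1.5-style ML-BOM as JSON.
import json
import uuid

ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "serialNumber": f"urn:uuid:{uuid.uuid4()}",
    "version": 1,
    "components": [
        {
            "type": "machine-learning-model",
            "name": "ticket-summarizer",      # hypothetical model name
            "version": "2.1.0",
            "modelCard": {
                "modelParameters": {
                    "task": "summarization",
                    "architectureFamily": "transformer",
                },
            },
        },
        {
            "type": "data",
            "name": "support-tickets-2023",   # hypothetical training dataset
            "version": "2023-12",
        },
    ],
}

print(json.dumps(ml_bom, indent=2))
```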
For ML applications, this advancement is profound. The ML-BOM provides clear visibility into the components and processes involved in ML development and deployment, helping stakeholders grasp the composition of ML systems, identify potential risks, and consider ethical implications. In the security domain, it enables the identification and remediation of vulnerabilities in ML components and dependencies, which is essential for conducting security audits and risk assessments and contributes significantly to developing secure and trustworthy ML systems. It also supports adherence to compliance and regulatory requirements, such as GDPR and CCPA, by ensuring transparency and governance of ML systems.
Finally, strategies that extend DevSecOps to LLMOps are essential: careful model selection, scrubbing training data, securing the pipeline, automating the ML-BOM build, building an AI Red Team, and properly monitoring and logging the system with tools to help you. Together, these measures provide the guardrails for safe LLM development while still embracing the inspiration and broad imagination of what is possible with AI, with an emphasis on maintaining a secure foundation of Zero Trust.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro