Google, Microsoft, NVIDIA, Intel, and others establish Coalition for Secure AI (CoSAI) to improve security of AI products



The Coalition for Secure AI (CoSAI), an open source initiative aimed at providing guidance and tools for security-focused AI development, was announced at the 15th Aspen Security Forum held in Aspen, Colorado, USA, on Thursday, July 18, 2024 (local time). The initiative is hosted by OASIS Open, a computer and communications standards organization.

Introducing the Coalition for Secure AI, an OASIS Open Project - OASIS Open
https://www.oasis-open.org/2024/07/18/introducing-cosai/



Google announces the Coalition for Secure AI
https://blog.google/technology/safety-security/google-coalition-for-secure-ai/

Intel Welcomes the Coalition for Secure AI
https://www.intel.com/content/www/us/en/newsroom/opinion/intel-welcomes-coalition-for-secure-ai.html

Introducing the Coalition for Secure AI (CoSAI) - Cisco Blogs
https://blogs.cisco.com/security/introducing-the-coalition-for-secure-ai-cosai

Chainguard joins Coalition for Secure AI with OpenAI, Google, Anthropic
https://www.chainguard.dev/unchained/chainguard-joins-coalition-for-secure-ai

CoSAI was founded to foster a collaborative ecosystem for sharing open-source methodologies, standardized frameworks and tools. CoSAI brings together diverse stakeholders, including industry leaders, academics and other experts, to address the fragmented landscape of AI security.

CoSAI's founding Premier Sponsors are Google, IBM, Intel, Microsoft, NVIDIA, and PayPal. Other founding sponsors include Amazon, Anthropic, Cisco, Chainguard, Cohere, GenLab, OpenAI, and Wiz.

OASIS Open, which hosts CoSAI, describes it as 'an initiative to strengthen trust and security in the use and deployment of AI.' CoSAI's scope covers the secure building, integration, deployment, and operation of AI systems, with a focus on mitigating risks such as model theft, data poisoning, prompt injection, scaled abuse, and inference attacks.

OASIS Open states CoSAI's goal as 'developing comprehensive security measures that address traditional and inherent risks in AI systems,' and describes it as an open source community led by a Project Governing Board, which drives and manages the overall technical agenda, and a Technical Steering Committee of AI experts from academia and industry, which oversees its workstreams.



Regarding the need for CoSAI, OASIS Open explains, 'AI is rapidly changing the world and has great potential to solve complex problems. To ensure trust in AI and promote responsible development, it is important to prioritize security, identify and mitigate potential vulnerabilities in AI systems, and develop and share methodologies that lead to the creation of systems that are secure by design.'

OASIS Open points out that the AI industry's existing approach to securing AI applications and services is fragmented; developers have described it as 'inconsistent and siloed.' Because the industry lacks clear best practices and standardized approaches, assessing and mitigating common AI-specific risks is a significant challenge even for experienced organizations. CoSAI was established to address these challenges.

'CoSAI was founded out of a need to democratize the knowledge and advancements essential to the safe integration and deployment of AI,' said David LaBianca of Google, Co-Chair of CoSAI's Project Governing Board. 'With the support of OASIS Open, we look forward to continuing this work and collaboration among leading companies, experts, and academia.'

Co-chair Omar Santos of Cisco said, 'We are committed to working with organizations at the forefront of responsible and safe AI technology. Our goal is to eliminate redundancies and amplify our collective impact through key partnerships focused on important topics. At CoSAI, we will combine our expertise and resources to fast-track the development of robust AI security standards and practices that will benefit the entire industry.'



CoSAI will launch with three workstreams, with plans to add more over time:

・Software supply chain security for AI systems
Enhancing composition and provenance tracking to secure AI applications.

・Preparing defenders for a changing cybersecurity landscape
Addressing investment and integration challenges in AI and traditional systems.

・AI security governance
Developing best practices and risk assessment frameworks for AI security.

in AI, Security, Posted by logu_ii