
Today Google announced the release of a conceptual framework to help collaboratively secure AI technology. SAIF is inspired by security best practices the company has applied to software development, such as reviewing, testing and controlling the supply chain, while incorporating its understanding of security mega-trends and risks specific to AI systems, says Google.
“The potential of AI, especially generative AI, is immense. However, in the pursuit of progress within these new frontiers of innovation, there needs to be clear industry security standards for building and deploying this technology in a responsible manner. That’s why today we are excited to introduce the Secure AI Framework (SAIF), a conceptual framework for secure AI systems.
“A framework across the public and private sectors is essential for making sure that responsible actors safeguard the technology that supports AI advancements, so that when AI models are implemented, they’re secure-by-default. Today marks an important first step.”
Google Secure AI Framework
“The latest AI innovations can improve the scale and speed of response efforts to security incidents. Adversaries will likely use AI to scale their impact, so it is important to use AI and its current and emerging capabilities to stay nimble and cost effective in protecting against them. Consistency across control frameworks can support AI risk mitigation and scale protections across different platforms and tools to ensure that the best protections are available to all AI applications in a scalable and cost efficient manner.
“At Google, this includes extending secure-by-default protections to AI platforms like Vertex AI and Security AI Workbench, and building controls and protections into the software development lifecycle. Capabilities that address general use cases, like Perspective API, can help the entire organization benefit from state of the art protections.”
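To give a sense of how a general-purpose capability like Perspective API can be wired into an application, here is a minimal Python sketch that scores a piece of text for toxicity. It assumes you have an API key with the Comment Analyzer API enabled (the YOUR_API_KEY placeholder is yours to fill in); the endpoint and attribute names follow Google's public Perspective API documentation, and the threshold you act on is your own policy decision.

```python
# Minimal sketch: screening user-generated text with the Perspective API.
# Endpoint and attribute names follow Google's public docs; the API key
# below is a placeholder, not a real credential.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    f"?key={API_KEY}"
)

def toxicity_score(text: str) -> float:
    """Return the Perspective API's TOXICITY probability for `text`."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(ENDPOINT, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    score = toxicity_score("You are a wonderful person.")
    # Applications would typically flag or block content above a chosen threshold.
    print(f"Toxicity: {score:.3f}")
```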
“Constant testing of implementations through continuous learning can ensure detection and protection capabilities address the changing threat environment. This includes techniques like reinforcement learning based on incidents and user feedback and involves steps such as updating training data sets, fine-tuning models to respond strategically to attacks and allowing the software that is used to build models to embed further security in context (e.g. detecting anomalous behavior). Organizations can also conduct regular red team exercises to improve safety assurance for AI-powered products and capabilities.”
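Google doesn’t publish its internal tooling, but as an illustration of the “detecting anomalous behavior” idea, the sketch below trains a simple outlier detector on normal request traffic and flags an unusual burst of prompts. The feature set and numbers are hypothetical, chosen only for the example, and a production system would of course use richer signals.

```python
# Illustrative sketch (not Google's implementation): flagging anomalous
# prompt traffic with an IsolationForest. All feature names and values
# are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-request features: [prompt_length, special_char_ratio,
# requests_per_minute], sampled to stand in for normal traffic.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(
    loc=[200, 0.05, 3], scale=[50, 0.02, 1], size=(1000, 3)
)

# Fit the detector on baseline behavior; ~1% of traffic assumed anomalous.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A burst of very long, symbol-heavy prompts at a high rate stands out.
suspect = np.array([[4000, 0.60, 120]])
print(detector.predict(suspect))  # -1 means flagged as an outlier
```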
For more information on the new Google Secure AI Framework, jump over to the official Google blog by following the link below.
Source: Google