What is the 'Secure by Design' AI Pledge?

new.blicio.us · Nov 27, 2023 · 2 mins read

Eighteen countries including the United States, United Kingdom, Germany, Italy, Australia, Israel, and Nigeria have signed an agreement pledging to ensure artificial intelligence systems are “secure by design.”

The agreement, unveiled this week, lays out guidelines for companies creating and deploying AI to build security into these systems from the start, protecting consumers and the public from potential misuse. It has been described as the first detailed international agreement on how to keep AI systems safe.

While the agreement is a basic vision statement lacking regulatory teeth, Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency (CISA), said it is an important first step in getting countries to recognize that AI needs a safety-first approach.

Europe Leading on AI Regulation

Europe has taken the lead in developing laws to govern AI systems, including mandatory security testing requirements for companies. However, progress on the EU's AI Act has stalled, leading France, Germany, and Italy to move ahead with their own interim agreement.

The White House has urged Congress to regulate AI development in the U.S., but little movement has occurred so far. President Biden did sign an executive order in October requiring some AI safety testing, mainly to protect systems against hacking.

Apple’s Cautious Approach

Apple has built AI features into its products for years, especially in iPhone photography. While Apple has developed its own AI chatbot, it is currently using it only internally, to aid software development.

Given Apple’s typically careful approach to new technology, it could be some time before similar capabilities are offered to consumers. The company likely wants to leverage AI’s benefits without compromising product security.

The Difficulty of Governing AI

Creating effective policies to ensure AI safety presents unique challenges. Because AI systems develop capabilities on their own, beyond what programmers directly code, even the researchers who build them may not fully understand what they have created until testing is complete.

Experts also often disagree on how to interpret research findings and how to forecast AI's future impacts. While it lacks regulatory authority, the vision statement's endorsement by 18 countries establishes that AI developers have an obligation to prioritize security.

However, the initiative deals only with protecting against hacking, sidestepping the broader debate around other threats AI systems could pose to humanity down the line. Still, given the complexities involved, it is a reasonable starting point for affirming shared safety principles.

