New Coalition Seeks To Make AI Trustworthy, One Standard At A Time

Seeking a better AI.
Artificial intelligence, now ubiquitous and cheap, is spinning out of control. It’s growing increasingly difficult to identify fake images and content. AI has become a breeding ground for deepfakes and fabricated content, along with bias, intellectual property theft, and hallucinations. This may destroy trust in images and content produced by even the most legitimate sources.
Standards may offer a way through this – with watermarking, digital verification, and traceability that can help sort what is real from the fakery. At the same time, attempting to assign standards to today’s emerging AI systems and their many vendors may be akin to herding cats – the market is extremely diverse, still relatively immature, and evolving in new directions every few months.
At the recent UN “AI for Good” Summit in Geneva, Switzerland, I had the opportunity to discuss the latest initiatives to rein in AI’s chaos with Philippe Metzger, secretary-general and CEO of the International Electrotechnical Commission (IEC). His organization, historically known for creating standards around electricity usage and a host of other day-to-day modern services, is teaming up with other international standards bodies to form the AI and Multimedia Authenticity Standards Collaboration (AMAS). Other participants include the International Organization for Standardization (ISO), and the International Telecommunication Union (ITU).
Categories of standards under development within AMAS include content provenance, trust and authenticity, asset identifiers, and human rights declarations. The first foundational standard, covering AI trustworthiness, was published in 2020, providing guidelines for assessing the reliability and integrity of AI systems.
Earlier this year, IEC and ISO published the first part of a new JPEG Trust series of international standards for media, including video and audio — a key weapon against the rise of deepfakes and manipulated images.
Among these recently published standards is JPEG Trust Part 1, which addresses trust and authenticity in JPEG images through provenance, detection and fact-checking, and content credentials. Standards still in the pipeline include digital watermarking, trust guidelines, and a framework for authenticating multimedia content.
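The provenance and content-credential ideas behind these standards can be illustrated with a toy sketch: bind a cryptographic hash of the media to a signed manifest, so that any later edit to the file breaks verification. Everything below (the manifest fields, the use of a shared HMAC key rather than public-key certificates) is a hypothetical simplification, far leaner than the actual JPEG Trust or Content Credentials formats:

```python
import hashlib
import hmac
import json

def make_manifest(media_bytes: bytes, signing_key: bytes, creator: str) -> dict:
    # Hypothetical, minimal manifest: a hash of the media plus a creator claim,
    # authenticated with an HMAC tag. Real standards use certificate chains.
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({"creator": creator, "sha256": digest}, sort_keys=True)
    tag = hmac.new(signing_key, payload.encode(), "sha256").hexdigest()
    return {"payload": payload, "hmac": tag}

def verify_manifest(media_bytes: bytes, manifest: dict, signing_key: bytes) -> bool:
    # Recompute the tag; editing either the media or the manifest fails the check.
    expected = hmac.new(signing_key, manifest["payload"].encode(), "sha256").hexdigest()
    if not hmac.compare_digest(expected, manifest["hmac"]):
        return False
    claimed = json.loads(manifest["payload"])["sha256"]
    return claimed == hashlib.sha256(media_bytes).hexdigest()
```

Even in this toy form, the core property the standards aim for is visible: a consumer who trusts the signer can mechanically distinguish untouched media from anything altered after signing.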
While all the standards proposed or created are voluntary, Metzger says market pressures may help ensure conformity. “None of these standards are mandatory, and that’s not going to change. That’s core DNA of what we do and stand for.”
Conformance with emerging AI standards can be driven in at least three ways: market forces, formal assessments conducted by IEC, and government mandates. “It’s fine to have a standard, but if nobody tells you that you’re living up to that standard, well probably the effect is going to be very, very limited,” he said.
The ability to demonstrate adoption of standards may help companies integrate with broader ecosystems, as well as show consumers they are keeping up. “To have fair dealings, fair trade, and fair exchanges, you need a reference point,” said Metzger. Formal assessments can come in the form of ad-hoc audits, he explained. For example, IEC launched a new service that offers conformity assessments on quality and carbon footprints. “All these companies make big claims: ‘our product is super green, with minimal CO2, etc.’” Companies that have their adherence to standards assessed can point to third-party verification, he said. Expect this to be applied to AI quality standards as well.
A big question is whether standards bodies – which move deliberately because they are consensus-based – can keep up with the fast-moving world of AI. For example, agentic AI started percolating less than a year ago, yet it now consumes development activity. By this time next year, some other new variation of AI may be roiling the landscape.
“The speed is really a challenge,” said Metzger. “How quickly can we address these dimensions, and how many dimensions are there if they’re growing so quickly.”
While IEC committees are already working on standards for agentic AI, such development must still move through formal processes. “There are some arrangements where you can get to not fully baked standards,” he pointed out.
“Of course, there are forces, science and market drivers which run very fast,” Metzger said. “But I tend to be more optimistic than pessimistic. There is new technology which will change quickly. But fundamentally, there are also some auto-corrective mechanisms built into society, including governance systems, where developments will meet resistance at different levels, through different channels, by different stakeholders.”