Advancing Safe Machine Learning: “The Community Now Owns This”

At the recent SAE World Congress, Torc took the stage to share something big: a new framework for safely applying machine learning (ML) in high-stakes areas like self-driving trucks. Paul Schmitt, Torc’s Senior Manager for Autonomy Systems, presented a paper titled “The ML FMEA: A Safe Machine Learning Framework.” The work, co-authored with experts from Torc and safety partner TÜV Rheinland, tackles a central challenge in applying AI to safety-critical systems: how do you know the AI is safe?

Machine learning models are often described as “black boxes”: it’s hard to see how they make decisions, which makes it hard to verify they’re making the right ones. As Schmitt explained during the talk, existing safety standards stress the importance of managing risk but offer few clear, practical tools for actually doing it. That’s what inspired the team to create the ML FMEA.

ML FMEA stands for Machine Learning Failure Mode and Effects Analysis. It builds on a well-known tool, FMEA, that industries have used for decades to catch potential problems before they happen. Torc and its partners adapted this trusted method to fit the unique challenges of machine learning systems—like those used in autonomous trucks.
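
For readers new to the underlying tool: a classical FMEA assigns each potential failure three ratings, severity, occurrence, and detection (each typically on a 1-to-10 scale), and multiplies them into a Risk Priority Number (RPN) that ranks where mitigation effort should go first. The Python sketch below illustrates that standard scoring convention with a hypothetical, ML-flavored failure mode; the field names and ratings are illustrative and not taken from the paper or Torc’s template.

```python
# Minimal sketch of classical FMEA risk scoring (standard RPN convention).
# The failure mode and ratings below are invented for illustration.
from dataclasses import dataclass

@dataclass
class FailureMode:
    process_step: str   # where in the pipeline the failure can occur
    failure: str        # what goes wrong
    effect: str         # consequence if it does
    severity: int       # 1 (negligible) .. 10 (catastrophic)
    occurrence: int     # 1 (rare) .. 10 (frequent)
    detection: int      # 1 (almost certain to catch) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk Priority Number: higher means address it sooner."""
        return self.severity * self.occurrence * self.detection

mode = FailureMode(
    process_step="data collection",
    failure="training set underrepresents nighttime driving",
    effect="degraded detection of pedestrians after dark",
    severity=9, occurrence=4, detection=6,
)
print(mode.rpn)  # 216 -> flag for mitigation before deployment
```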

What makes this approach special is how it brings two very different groups—machine learning engineers and safety experts—into the same conversation. “My favorite benefit is that it gives both teams a shared language to understand and reduce risk,” Schmitt said. The framework helps teams walk through each step of the ML process and think through what could go wrong, why it might go wrong, and how to prevent it.

The team didn’t stop at the idea—they created a working template to help others put the approach into action. It includes real examples of possible failures and how to fix them, from the moment data is collected to the time the ML model is deployed and monitored in the real world.
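
To make the idea concrete, here is a hypothetical sketch of the kind of worksheet rows such a template might contain, spanning the ML lifecycle and ranked by RPN so the riskiest items surface first. The stages, failure descriptions, and ratings below are invented for illustration and are not Torc’s published examples.

```python
# Hypothetical ML FMEA worksheet entries spanning the lifecycle, ranked by RPN.
# All entries are invented for illustration, not taken from Torc's template.
entries = [
    {"stage": "data labeling", "failure": "inconsistent class labels",
     "severity": 7, "occurrence": 5, "detection": 4},
    {"stage": "training", "failure": "silent distribution shift in a data refresh",
     "severity": 8, "occurrence": 3, "detection": 7},
    {"stage": "deployment", "failure": "monitoring misses slow accuracy drift",
     "severity": 9, "occurrence": 3, "detection": 8},
]

def rpn(e: dict) -> int:
    # Standard FMEA convention: severity x occurrence x detection
    return e["severity"] * e["occurrence"] * e["detection"]

# Highest-risk items print first, mirroring how teams prioritize mitigations.
for e in sorted(entries, key=rpn, reverse=True):
    print(f'{e["stage"]:14} RPN={rpn(e):3}  {e["failure"]}')
```
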
And in the spirit of industry collaboration, Torc and TÜV Rheinland made the framework public. “We see this as a first step toward safety-certified machine learning systems,” Schmitt said. “These challenges don’t just affect self-driving trucks. They affect healthcare, manufacturing, aerospace—you name it. So we open-sourced the method and template, and we’re excited to see how others improve it.”

Partnership

Schmitt also highlighted the importance of partnership: “We were thrilled to work with TÜV Rheinland on this project. Bodo Seifert instantly brought depth and credibility to the work.”

The presentation drew strong interest, with attendees snapping photos of slides and downloading the paper on the spot. During the Q&A, co-authors Krzysztof Pennar and Bodo Seifert joined Schmitt on stage to take questions. “We heard great ideas on how to expand the approach from automakers, safety experts, and standards committee members,” said Schmitt. “Seeing that level of engagement—especially from the standards community—was honestly a dream come true.”

The paper was co-authored by Bodo Seifert, Senior Automotive Functional Safety Engineer at TÜV Rheinland; Jerry Lopez, Senior Director of Safety Assurance; Krzysztof Pennar, Principal Safety Engineer; Mario Bijelic, AI Researcher; and Felix Heide, Chief Scientist.

As AI becomes more common in critical systems, tools like the ML FMEA will be key to making sure it’s not just powerful, but also safe.
