From FMEA to AI: Adapting Functional Safety for Machine Learning in Vehicles
As vehicles become increasingly reliant on AI-powered systems, particularly for perception, decision-making, and control, the question facing the industry is no longer just “Is it safe?” but “How can we prove that it’s safe?”
Traditional functional safety frameworks, such as ISO 26262, were designed for deterministic systems. Their tools, like FMEA (Failure Mode and Effects Analysis) or FMEDA (Failure Mode, Effects, and Diagnostic Analysis), assume we can predict, document, and mitigate every potential fault in a system. But what happens when the system is a black box neural network that changes with data?
In short, traditional methods fall short.
The Limitations of Traditional Functional Safety Methods
FMEA and FMEDA are built to identify single-point failures and ensure that systems have diagnostic or redundant paths to manage them. This works effectively for sensors, ECUs, and communication lines like CAN buses. But in AI-driven systems, such as computer vision models for object detection, faults don’t always occur in hardware or firmware.
They can occur in the data itself.
For example:
- A vision model trained mostly on sunny weather may misclassify pedestrians in rain.
- A mislabeled training dataset might teach a vehicle to ignore stop signs that are partially covered by foliage.
- A rare event, like a child chasing a ball into the road, might not be represented in the training data at all.
These issues are not failures in the traditional sense. They are vulnerabilities in how the system behaves, and traditional tools are not well suited to identifying or resolving them.
Bridging the Gap: What the Industry Is Doing
To meet these new challenges, the industry is evolving its approach and supplementing existing safety frameworks with new tools and practices.
- Data Set Validation
Engineers now treat datasets as critical safety artifacts. New tools help assess the completeness, balance, and relevance of data to ensure safety-critical scenarios are properly represented (see the first sketch after this list).
- Explainability and Interpretability
Efforts are underway to make AI systems more transparent. Explainable AI (XAI) techniques help engineers understand why a system made a specific decision. This supports traceability, which is a key requirement under ISO 26262 (see the second sketch below).
- Scenario-Based Testing
Simulated environments such as CARLA or IPG CarMaker are used to evaluate system behavior across millions of scenarios. These include edge cases and rare conditions that would be difficult or unsafe to recreate in the real world (see the third sketch below).
- Safety Envelopes and Redundant Systems
Rather than relying entirely on AI outputs, developers are implementing safety envelopes and fallback strategies. These may include parallel rule-based systems or limiters that constrain behavior when uncertainty is detected (see the fourth sketch below).
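To make the first of these practices concrete, here is a minimal Python sketch of a dataset coverage check. Everything in it is illustrative: the metadata tags, condition names, and coverage targets are assumptions, and a production pipeline would derive them from the operational design domain and the labeling toolchain rather than hard-coding them.

```python
from collections import Counter

# Hypothetical per-frame metadata for a perception training set.
# In practice these tags come from a labeling pipeline; the names
# and thresholds below are illustrative only.
frames = [
    {"weather": "sunny", "lighting": "day",   "pedestrian": True},
    {"weather": "rain",  "lighting": "night", "pedestrian": False},
    # ... thousands more entries ...
]

REQUIRED_COVERAGE = {
    # (attribute, value): minimum share of the dataset (assumed targets)
    ("weather", "rain"): 0.15,
    ("weather", "snow"): 0.05,
    ("lighting", "night"): 0.20,
}

def coverage_report(frames):
    """Flag safety-relevant conditions that are under-represented."""
    total = len(frames)
    gaps = []
    for (attr, value), min_share in REQUIRED_COVERAGE.items():
        share = sum(1 for f in frames if f.get(attr) == value) / total
        if share < min_share:
            gaps.append(f"{attr}={value}: {share:.1%} < target {min_share:.0%}")
    return gaps

for gap in coverage_report(frames):
    print("coverage gap:", gap)
```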
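For explainability, the next sketch implements occlusion sensitivity, one of the simpler XAI techniques: it masks image regions one at a time and records how much the model's confidence drops, so the regions whose removal hurts the score most are the ones the model relied on. The `score_fn` argument and the toy brightness-based model are stand-ins for whatever perception network is actually under analysis.

```python
import numpy as np

def occlusion_sensitivity(image, score_fn, patch=16, stride=16, fill=0.0):
    """Occlusion sensitivity map: mask one patch at a time and record
    how much the model's confidence for the detected class drops.
    `score_fn(image) -> float` stands in for any perception model."""
    h, w = image.shape[:2]
    baseline = score_fn(image)
    heat = np.zeros(((h - 1) // stride + 1, (w - 1) // stride + 1))
    for i, y in enumerate(range(0, h, stride)):
        for j, x in enumerate(range(0, w, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            heat[i, j] = baseline - score_fn(occluded)  # large drop => important region
    return heat

# Toy usage: a stand-in "model" whose confidence is the mean brightness
# of the image centre, purely to make the sketch runnable.
def toy_score(img):
    return float(img[24:40, 24:40].mean())

img = np.random.rand(64, 64)
print(occlusion_sensitivity(img, toy_score).round(3))
```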
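For scenario-based testing, the following tool-agnostic sketch enumerates a combinatorial grid of scenario parameters and tallies failures. The parameter lists and the `run_scenario` stub are placeholders; a real campaign would drive CARLA or CarMaker through their own APIs and would use smarter sampling than a plain grid.

```python
import itertools
import random

# Scenario parameters are illustrative; a real campaign would be driven by
# the ODD definition and executed through the simulator's own interface.
WEATHER   = ["clear", "rain", "fog", "snow"]
LIGHTING  = ["day", "dusk", "night"]
ACTOR     = ["none", "adult_crossing", "child_chasing_ball", "cyclist"]
EGO_SPEED = [30, 50, 70]  # km/h

def scenario_grid():
    """Enumerate the full combinatorial grid of test scenarios."""
    for combo in itertools.product(WEATHER, LIGHTING, ACTOR, EGO_SPEED):
        yield dict(zip(("weather", "lighting", "actor", "ego_speed_kph"), combo))

def run_scenario(params):
    """Placeholder for a simulator run; returns pass/fail.
    In practice this would configure the simulator and execute the drive."""
    return random.random() > 0.01  # stand-in result

failures = [s for s in scenario_grid() if not run_scenario(s)]
total = len(WEATHER) * len(LIGHTING) * len(ACTOR) * len(EGO_SPEED)
print(f"{len(failures)} failing scenarios out of {total}")
for s in failures[:5]:
    print(s)
```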
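Finally, a minimal sketch of a safety envelope: the AI command is used only when the model's reported uncertainty is low, and even then it is clamped to deterministic limits; otherwise a rule-based fallback takes over. The `Command` type, the limits, and the uncertainty threshold are illustrative assumptions, not values taken from any standard.

```python
from dataclasses import dataclass

@dataclass
class Command:
    speed_kph: float
    steering_deg: float

# Illustrative limits; in a real system these come from the safety concept.
MAX_SPEED_KPH     = 30.0
MAX_STEERING_DEG  = 20.0
UNCERTAINTY_LIMIT = 0.2

def clamp(value, low, high):
    return max(low, min(high, value))

def safety_envelope(ai_command: Command, uncertainty: float,
                    fallback: Command) -> Command:
    """Pass the AI command through only when it is both confident and
    inside the envelope; otherwise degrade to the rule-based fallback."""
    if uncertainty > UNCERTAINTY_LIMIT:
        return fallback  # e.g. gentle braking planned by a deterministic path
    return Command(
        speed_kph=clamp(ai_command.speed_kph, 0.0, MAX_SPEED_KPH),
        steering_deg=clamp(ai_command.steering_deg,
                           -MAX_STEERING_DEG, MAX_STEERING_DEG),
    )

# A confident but over-aggressive command gets clamped ...
print(safety_envelope(Command(55.0, 35.0), uncertainty=0.05,
                      fallback=Command(10.0, 0.0)))
# ... while a highly uncertain one triggers the fallback entirely.
print(safety_envelope(Command(25.0, 5.0), uncertainty=0.6,
                      fallback=Command(10.0, 0.0)))
```

Keeping the fallback path deterministic means it can still be analyzed with conventional FMEA-style reasoning, even when the primary AI path cannot.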
Functional Safety Is Adapting
ISO 26262 was never intended to be static. In fact, it already includes the concept of a “Safety Element out of Context” (SEooC), which allows components to be assessed based on well-defined assumptions. What is needed now is an expansion of the industry’s toolkit to assess learning systems more effectively.
Functional safety is no longer the responsibility of engineering teams alone. Today, it requires collaboration between engineers, data scientists, systems architects, and even ethicists. Together, they must ensure that AI-powered features are not only functional, but provably safe.
Final Thought
Artificial intelligence introduces new challenges to functional safety, but it also creates an opportunity to redefine what safe design looks like. If the industry can adapt its thinking and its tools, functional safety can continue to serve as a critical framework, not just for compliance, but for public trust.
In the age of autonomous driving, safety is no longer just a requirement. It is the product.