Artificial intelligence is transforming the automobile from a largely mechanical product into a dynamic, software-driven platform. This shift is not simply a matter of improved features; it fundamentally alters how vehicles are tested, approved, and monitored for compliance. Traditional conformity assessment, which once operated as a single pre-market checkpoint, is giving way to a continuous process that spans the entire product lifecycle. AI-enabled cars demand constant vigilance, because the technology that powers them is capable of evolving long after the vehicle leaves the factory floor.
AI innovation and the changing nature of the vehicle
Modern cars increasingly rely on machine learning for perception, decision-making, and driver monitoring. These systems fuse camera feeds, radar data, and sophisticated algorithms to interpret the road environment and decide how the vehicle should respond. Unlike traditional rule-based software, however, machine-learning models are statistical in nature: their behavior is learned from data rather than explicitly specified, and their performance can shift when real-world inputs drift away from the distribution they were trained on.
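To make that sensitivity concrete, consider a deliberately tiny, hypothetical illustration: a fixed linear classifier whose decision flips under a small input perturbation near its decision boundary. The weights and sensor readings below are invented for the example; a real perception stack is vastly more complex, but the boundary effect is the same in kind.

```python
import numpy as np

# Toy illustration only: a linear classifier with made-up weights whose
# decision flips under a small perturbation near its boundary.
w = np.array([1.0, -1.0])   # hypothetical "learned" weights
b = 0.0

def classify(x: np.ndarray) -> str:
    return "brake" if w @ x + b > 0 else "continue"

x = np.array([0.51, 0.50])                  # nominal sensor reading
x_perturbed = x + np.array([-0.02, 0.02])   # tiny shift, e.g. sensor noise

print(classify(x))            # -> "brake"
print(classify(x_perturbed))  # -> "continue": same scene, different decision
```

No classical code review would flag the perturbed input as anomalous, which is precisely why statistical validation over large scenario sets has become central to AI safety assurance.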
This reality clashes with conventional safety assurance approaches such as ISO 26262, which address hazards caused by system faults and assume fixed, specified behavior. Newer frameworks such as the Safety of the Intended Functionality (SOTIF, ISO 21448) have emerged to address performance insufficiencies that occur without any fault at all, focusing on operational design domains, the systematic discovery of unknown hazardous scenarios, and the mitigation of residual risk.
At the same time, the automotive world is embracing the Software-Defined Vehicle (SDV) model, in which features can be added or improved after sale through over-the-air (OTA) updates. These updates bring clear benefits, but they also complicate regulatory compliance. Under UNECE Regulation No. 156, every manufacturer must operate a Software Update Management System (SUMS), and Regulation No. 155 requires a comprehensive Cybersecurity Management System (CSMS).
Together, these frameworks make it clear that updating a car’s software is not just an engineering task; it is a regulated activity that must be auditable and safe. The introduction of Level 3 automated driving systems, formalized in UN Regulation No. 157, underscores the need for rigorous, scenario-based evidence of system performance and fallback behavior.
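As a rough illustration of what "auditable and safe" means at the engineering level, the sketch below shows one widely used pattern: the manufacturer signs the update image, and the vehicle verifies the signature before installing anything. It uses Ed25519 via the Python cryptography library; key provisioning, rollback protection, and the process documentation UN R156 actually requires are all omitted, so treat this as a minimal sketch rather than a SUMS implementation.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Simplified OTA flow: sign the update digest at release time, verify on the
# vehicle before installation. Key handling is deliberately naive here.
private_key = Ed25519PrivateKey.generate()   # held by the manufacturer
public_key = private_key.public_key()        # provisioned in the vehicle

firmware = b"...update payload..."
signature = private_key.sign(hashlib.sha256(firmware).digest())

def vehicle_accepts(payload: bytes, sig: bytes) -> bool:
    """On-vehicle check: install only if the signature verifies."""
    try:
        public_key.verify(sig, hashlib.sha256(payload).digest())
        return True
    except InvalidSignature:
        return False

assert vehicle_accepts(firmware, signature)
assert not vehicle_accepts(firmware + b"tampered", signature)
```

The point is not the cryptography itself but the audit trail it anchors: a verified signature links an installed software version to a release decision a regulator can trace.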
How regulation is adapting to AI in cars
The European Union’s AI Act represents a watershed moment for AI governance, with significant implications for the automotive sector. Many AI functions in vehicles will be classified as high-risk systems, triggering strict requirements for risk management, data governance, human oversight, technical documentation, and post-market monitoring. While some AI systems can be self-assessed through an internal quality management process, others will require review by a notified body. The AI Act’s obligations come into force in stages: governance and general-purpose AI (GPAI) rules from August 2025, and most remaining requirements by August 2026.
In parallel, global automotive regulations are shifting from one-off compliance checks to continuous oversight. UNECE WP.29 regulations now mandate that manufacturers maintain cybersecurity and software update management systems as a precondition for type approval.
Standards such as ISO/SAE 21434 for cybersecurity, ISO/IEC 23894 for AI-specific risk management, and UL 4600 for safety cases in autonomous systems are becoming integral to the conformity assessment process. The combination of these rules reflects a regulatory consensus: AI-equipped vehicles cannot be considered “finished” products, and their compliance status must be actively managed.
Conformity assessment in the age of continuous updates
AI is forcing conformity assessment to evolve from a snapshot to a live feed. Manufacturers can no longer rely on a single set of test results or hazard analyses to prove compliance. Instead, they must maintain a continuous evidence pipeline that captures model versioning, data provenance, performance metrics, and in-field results. Post-market monitoring, explicitly required by the AI Act, turns field data into a compliance obligation, not just an engineering best practice.
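What such an evidence pipeline captures will differ between manufacturers, but a minimal sketch helps fix ideas. The record below is hypothetical; every field name is an assumption chosen to mirror the items listed above (model versioning, data provenance, performance metrics).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelEvidenceRecord:
    """Hypothetical evidence entry for one deployed model version."""
    model_name: str
    model_version: str        # version of the deployed weights
    training_data_hash: str   # provenance: digest of the training-set snapshot
    validation_metrics: dict  # e.g. {"pedestrian_recall": 0.97}
    odd_description: str      # operational design domain the metrics apply to
    approved_by: str          # sign-off identity, for auditability
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def to_audit_row(self) -> dict:
        """Flatten the record for an append-only audit log."""
        return {
            "model": f"{self.model_name}@{self.model_version}",
            "training_data": self.training_data_hash,
            "metrics": self.validation_metrics,
            "odd": self.odd_description,
            "approver": self.approved_by,
            "timestamp": self.created_at.isoformat(),
        }
```

Whatever the concrete schema, the design goal is the same: any deployed model version can be traced back to its data, its test results, and the person who approved it.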
Safety cases are also changing in substance. Regulators expect manufacturers to define and defend the operational limits of AI systems, rather than claiming flawless performance. Data governance has become a first-class compliance requirement, with auditors looking for clear records of how training data was sourced, processed, and screened for bias and distribution shift.
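One widely used statistic for detecting distribution shift between training data and field data is the Population Stability Index (PSI). The sketch below implements it for a single continuous feature; the quantile binning and the conventional 0.1/0.25 thresholds are industry rules of thumb, not regulatory requirements.

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               observed: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference (training) sample and a field sample.

    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant shift worth investigating.
    """
    # Bin edges from the reference distribution (assumes a continuous feature).
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range field values

    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    obs_frac = np.histogram(observed, bins=edges)[0] / len(observed)

    # Clip to avoid log(0) for empty bins.
    eps = 1e-6
    exp_frac = np.clip(exp_frac, eps, None)
    obs_frac = np.clip(obs_frac, eps, None)

    return float(np.sum((obs_frac - exp_frac) * np.log(obs_frac / exp_frac)))
```

Running such a check on incoming field data, per feature and per model version, is one way to turn the AI Act's post-market monitoring obligation into a routine engineering signal.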
Cybersecurity is inseparable from safety, as any compromise to an AI model or OTA infrastructure can undermine the entire safety argument. And because over-the-air updates can alter the intended purpose or risk profile of a system, each significant change may trigger a re-assessment of conformity.
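What counts as a "significant change" is ultimately for regulators and technical services to decide, but manufacturers can encode a conservative first-pass gate in their release process. The criteria below are purely illustrative assumptions, not drawn from any regulation.

```python
# Illustrative release gate: flag OTA updates that should be routed to a
# conformity re-assessment before rollout. Criteria are assumptions only.
SAFETY_RELEVANT_COMPONENTS = {"perception", "planning", "driver_monitoring"}

def requires_reassessment(update: dict) -> bool:
    touches_safety = bool(
        SAFETY_RELEVANT_COMPONENTS & set(update.get("components", []))
    )
    changes_intent = update.get("changes_intended_purpose", False)
    new_weights = update.get("includes_new_model_weights", False)
    return touches_safety or changes_intent or new_weights

# Example: a retrained perception model clearly crosses the threshold.
update = {"components": ["perception"], "includes_new_model_weights": True}
assert requires_reassessment(update)
```

Erring on the side of re-assessment is the safer default: a false positive costs review time, while a false negative can invalidate the vehicle's type approval.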
The automotive industry’s path forward is clear: compliance must be embedded into the lifecycle of AI-enabled products. That means integrating AI-specific controls into quality management systems, pairing functional safety with SOTIF performance assurance, and building the capability to produce up-to-date technical documentation on demand.
By treating every software update as a mini-homologation and every in-field observation as part of the safety case, manufacturers can keep pace with regulators while retaining the flexibility to innovate. The promise of AI in mobility will only be realized if it is matched by a culture of continuous conformity: a mindset that keeps cars both cutting-edge and trustworthy.