We have all seen the advertisements. A person sits in the driver’s seat of a modern car, relaxing, eating, or chatting on the phone while the steering wheel moves on its own. For years, this was the primary image of the future of driving. However, for engineers working in autonomy, the flashy demo is the easy part. The difficult part, the problem that keeps us up at night, is the unpredictable nature of a Tuesday afternoon in a Canadian winter.
Autonomous Vehicles (AVs) are no longer just a research project. They are already being tested and deployed in selected industrial, logistics, and urban use cases. As this technology moves from the testing phase to real-world use, we need to talk about risk with more than just optimism. We need to talk about it with engineering discipline.
The Human Cost of the Status Quo
It is tempting to view AVs only as a tool for innovation, but the primary goal is safety. The numbers are sobering. In Canada, 1,964 people died in motor vehicle collisions in 2023, the highest total in a decade. Globally, road crashes cause about 1.19 million deaths every year.
When we discuss autonomy, we are talking about a way to save lives. However, for these systems to be successful, they must be more than just better than a human in a simulation. They must be credible, measurable, and safe in the chaos of the real world.
Defining the Boundaries: The Operational Design Domain
One of the most important concepts in AV engineering is the Operational Design Domain (ODD). As defined by the National Highway Traffic Safety Administration, the ODD sets the specific conditions under which a system is intended to function. This includes road types, weather, speed, and lighting.
In plain language, every autonomous system has limits. When an engineer defines a clear ODD, they are performing a basic act of safety engineering. Risk increases quickly when these boundaries are vague. An AV that works perfectly in the sunny streets of Phoenix faces a completely different risk level in a snowy construction zone in northern Ontario.
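The idea of an ODD boundary can be made concrete in code. The sketch below is purely illustrative; the attribute names, thresholds, and the `within_odd` check are assumptions for this example, not drawn from any real AV stack or standard. The point is simply that an ODD is a machine-checkable set of limits, not a marketing claim:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ODD:
    """Hypothetical Operational Design Domain limits (illustrative only)."""
    max_speed_kph: float
    min_visibility_m: float
    allowed_road_types: frozenset
    allows_snow: bool

@dataclass(frozen=True)
class Conditions:
    """Current driving conditions as reported by the vehicle's sensors."""
    speed_kph: float
    visibility_m: float
    road_type: str
    snowing: bool

def within_odd(odd: ODD, now: Conditions) -> bool:
    """Return True only when every monitored condition is inside the ODD."""
    return (
        now.speed_kph <= odd.max_speed_kph
        and now.visibility_m >= odd.min_visibility_m
        and now.road_type in odd.allowed_road_types
        and (odd.allows_snow or not now.snowing)
    )

# Example: an ODD restricted to clear-weather, low-speed urban driving.
urban_odd = ODD(max_speed_kph=50, min_visibility_m=200,
                allowed_road_types=frozenset({"urban"}), allows_snow=False)

clear_day = Conditions(speed_kph=40, visibility_m=500,
                       road_type="urban", snowing=False)
snow_squall = Conditions(speed_kph=40, visibility_m=80,
                         road_type="urban", snowing=True)

print(within_odd(urban_odd, clear_day))    # True
print(within_odd(urban_odd, snow_squall))  # False
```

A real system would monitor far more variables, but the engineering discipline is the same: the Phoenix-tuned ODD above correctly rejects the snow squall rather than attempting to drive through it.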
When Systems Do Not Break, But Still Fail
Traditional engineering often focuses on functional safety. This ensures that if a physical part fails, the system stays safe. AVs introduce another type of hazard: situations where no component has technically failed, but the system still encounters unsafe conditions because of sensing, perception, specification, or performance limitations.
Consider an edge case like a pedestrian in a bulky costume or a temporary stop sign held by a construction worker. The sensors detect them, but the logic may not know how to categorize them. This is why AV safety is a systems engineering challenge rather than a branding exercise. A successful pilot test in a controlled area is a proof of concept; it is not, by itself, proof that the vehicle is ready for public roads.
The Professional Engineers Ontario (PEO) Code: Public Welfare is the Priority
For engineers practicing in Ontario, this discussion is about professional identity. Public welfare is the paramount obligation. The PEO guidelines remind us that practitioners must avoid giving engineering opinions unless they are based on adequate knowledge and honest conviction. In the AV context, this means engineers must be the skeptics in the room. We must not confuse technical promise with technical readiness. If the data does not support the safety case, the engineer has a duty to say so.
The Standardized Path Forward
Engineers have several international standards to help manage these responsibilities:
- ISO 26262: This is the standard for functional safety in electronic systems. It addresses hazards caused by malfunctioning behaviour in automotive electrical and electronic systems.
- ISO 21448 (SOTIF): SOTIF stands for Safety of the Intended Functionality. It addresses performance limitations and environmental hazards where no component has actually failed.
- ISO/SAE 21434: In a connected vehicle, a cyber weakness is a safety hazard. This standard ensures that cybersecurity is managed throughout the life of the vehicle.
- UL 4600: This standard uses a safety-case approach, requiring an evidence-based argument that the product is safe for its specific use.
Asking the Hard Questions
Before any autonomous system is deployed, we must focus on how it might fail. Engineers and organizations should ask specific questions, for example:
- What is the exact ODD? What happens the moment the vehicle leaves that domain?
- What is the minimum-risk fallback state? If the system becomes confused, can it reach a safe stop without human help?
- How does the system handle degraded modes? What happens when a camera is blocked by mud or GPS signals drift?
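The fallback and degraded-mode questions above can be sketched as a simple mode supervisor. Everything here is an assumption for illustration: the mode names, the health flags, and the decision logic are invented for this example and do not describe any production system. The sketch shows the shape of the decision, not its real complexity:

```python
from enum import Enum

class Mode(Enum):
    NOMINAL = "nominal"            # full autonomy inside the ODD
    DEGRADED = "degraded"          # reduced speed, tighter operating limits
    MINIMAL_RISK = "minimal_risk"  # pull over and stop safely, no human help

def select_mode(camera_ok: bool, gps_ok: bool, in_odd: bool) -> Mode:
    """Pick an operating mode from simple health flags (illustrative logic)."""
    if not in_odd or (not camera_ok and not gps_ok):
        # Outside the ODD, or both perception and localization impaired:
        # the only defensible choice is the minimal-risk fallback.
        return Mode.MINIMAL_RISK
    if not camera_ok or not gps_ok:
        # One degraded channel (e.g. mud on a camera, GPS drift):
        # keep driving, but conservatively.
        return Mode.DEGRADED
    return Mode.NOMINAL

print(select_mode(camera_ok=True, gps_ok=True, in_odd=True).name)    # NOMINAL
print(select_mode(camera_ok=False, gps_ok=True, in_odd=True).name)   # DEGRADED
print(select_mode(camera_ok=True, gps_ok=True, in_odd=False).name)   # MINIMAL_RISK
```

The essential property is that the minimal-risk state is always reachable and never depends on the failed component, which is exactly what the questions above are probing for.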
This is why we are seeing the most responsible progress in constrained use cases. Low-speed shuttles, fixed-route logistics, and closed-campus industrial vehicles are not lesser forms of autonomy. They are examples of good engineering discipline. They match the complexity of the task to the current maturity of the technology.
Conclusion: The Final Sensor
Autonomous vehicles represent one of the greatest engineering challenges of our generation. They hold the potential to save thousands of lives and reshape our cities. But that potential will only be realized if we maintain public trust.
Technology can build the sensors, the processors, and the actuators. It cannot replace engineering judgment. As professionals, we are the final sensor in the system. Our job is to ensure that when these vehicles finally take the wheel, they do so with a level of rigour that honours our commitment to the public we serve.
The road ahead is autonomous, but the responsibility remains entirely human.
If you are interested in what Omid Sadeghi discussed in this blog, you may also be interested in his upcoming OSPE workshop:
Autonomous Vehicles in Practice: Market Progress, Safety Engineering, and Building Public Trust
June 2 | 12:30–2:00 PM | 1.5 CPD Hours
$50 (Member) | $100 (Non-Member)

This workshop provides a practical, engineering-focused overview of autonomous vehicle progress across North America. It covers key safety challenges and the standards that address them, including ISO 26262, ISO 21448 (SOTIF), ISO 21434, and UL 4600, and explores how engineers can design, validate, and deploy safe, trusted systems.

Register now
By: Omid Sadeghi, Co-Founder and Chief Product Officer at Telebotics