OSPE’s first EngTalks symposium of the year was a dynamic and forward-looking day of discussions on the role of Artificial Intelligence (AI) in engineering practice. Bringing together engineers, industry professionals, and academics, the sessions explored how AI influences engineers’ decision-making, productivity, risk management, and responsibilities, as well as the ethical implications of using AI in engineering work.
Below are some key themes that emerged throughout the event:
AI as an Engineering Tool
AI is best understood as an augmentation tool, enhancing engineers’ ability to process complex data, explore design options, and improve outcomes. It supports, rather than replaces, the expertise and professional judgment that define engineering practice.
Managing Risk in AI Applications
Participants highlighted several key risks associated with AI adoption, including algorithmic bias, limitations in model reliability, and data sovereignty. These challenges underscore the importance of critically assessing AI-generated outputs, particularly in high-risk applications where public safety is at stake.

Evolving Skills and Workforce Readiness
As AI becomes more embedded in engineering workflows, the profession’s skill requirements are evolving. Engineers must develop AI literacy and the skills to apply AI effectively, and continuous professional development will be essential. To support the engineering workforce, experienced engineers must also mentor Engineering Interns (EITs) to ensure they have the skills necessary to succeed in both traditional and AI-enhanced engineering work.
The Need for Modernized Policy and Regulation
Participants emphasized the need to modernize standards and guidance while maintaining the profession’s core mandate of protecting the public. A strong consensus emerged around the value of risk-tiered regulatory approaches, in which high-risk applications require stringent oversight and lower-risk uses allow for greater flexibility. Well-designed regulatory frameworks support competitiveness by providing certainty for investment, encouraging responsible innovation, and preventing inconsistent or unsafe practice.

Building Trust: Explainability and Validation
Trust in AI systems depends on more than transparency alone. While full explainability may not always be feasible, engineers must be able to justify decisions and ensure outputs meet defined safety and performance standards. Validation, testing, and auditability were identified as critical safeguards.
Conclusion: Shaping AI’s Role in the Public Interest
In an AI-enabled future, engineering expertise remains essential to ensure new, smarter systems serve the public good. OSPE will continue to advance these conversations, provide resources, and support engineers as they navigate this transition.
OSPE’s next EngTalks symposium, being held this June in Ottawa, will explore building Ontario’s net-zero future. Learn more at EngTalks.