Artificial intelligence, also known as AI or machine intelligence, is one of the most revolutionary technologies devised by humankind. AI aims to have computer systems and robots that can think, learn, solve problems, plan, and act according to environmental information.
However, despite the potential of AI to change how we work, live, and play, there are ethical considerations regarding the use of artificial intelligence in a range of industries, from manufacturing and warfare to transportation.
The notion that an AI will become self-aware and regard humans as a threat has existed since at least the 1860s. In the early 1950s, Alan Turing, the famous English mathematician, wrote that “[a]t some stage, therefore, we should have to expect the machines to take control…”
On the silver screen, these fears play out in movies such as The Terminator. In this film series, Skynet, a superintelligent AI and neural network designed for national defence, becomes self-aware. As its human operators attempt to shut it down, Skynet launches a nuclear strike against Russia to provoke a nuclear war, seeing that as the most effective way to eliminate its enemies on all sides.
In The Matrix, robotic humanoid servants become self-aware and demand equal rights. In refusing to acknowledge their sentience, humans attempt to destroy them, leading to a war the human race eventually loses. In the 1999 film, it’s revealed that humans have been relegated to the role of thermal batteries for the machines.
In both scenarios, the fear is that artificial intelligence will advance to a point where AI systems or robots become conscious and see human beings as either redundant or a threat.
Automation: Replacing Human Workers
The fear that machines will replace human workers, leading to unemployment, has existed for centuries. The term “Luddite” originally referred to a group of 19th-century English textile workers who opposed the introduction of machinery in cotton and woolen mills for fear of losing their jobs.
While there are concerns that artificial intelligence will reduce the number of jobs available to humans in the 21st century, AI is more commonly used to aid workers. Any technology or tool that increases a worker’s productivity, allowing one worker to perform the work of several, is a form of automation.
AI can handle the basic repetitive tasks associated with certain jobs, freeing employees to focus on more dynamic, high-level tasks. As a result, workers become more valuable and often land roles that pay better because they are more specialized.
When robots and automation are implemented effectively in a business, they do not replace jobs but can create new ones related to maintaining and managing the software.
As automation technologies create more skilled jobs, engineering associations will continue to play a vital role in providing engineers with the resources necessary to advance in their respective fields. This includes appropriate accreditation for those interested in becoming engineers and necessary licensing. With the proper support, workers in the engineering field can excel alongside AI technology rather than be hindered by its integration.
Self-Driving Vehicles
The self-driving car concept has been debated for decades, but it’s only in the last few years that automated and autonomous vehicles have seen limited use on public roadways.
The potential benefits of self-driving cars are significant. In 2020, despite Americans driving less, there were 38,680 traffic fatalities in the United States, an increase of more than 7% over 2019. According to Stanford Law School, human error is a causative factor in 90% of car crashes. By avoiding the errors in judgment that humans make, self-driving cars can reduce the human and monetary cost of car accidents.
Self-driving cars and trucks can improve traffic flow, leading to less congestion as car sharing increases. However, the benefits of self-driving cars require widespread adoption.
As with other industries, there is a concern that AI will make some professions obsolete. Taxi and truck drivers, for example, face the possibility of displacement as self-driving taxi and trucking services become commercially viable.
AI in Medical Care
Artificial intelligence is already used extensively in the medical field. Automated appointment scheduling and the digitization of medical records are two examples.
However, AI’s role in medicine continues to grow. AI in medical care can be divided into two categories: physical and virtual. In physical AI-driven medical care, robots perform surgical operations or control biomechatronic prosthetic limbs and implants.
In virtual medical care, the AI uses a combination of flowcharts and database analysis to determine the best course of action regarding a patient’s medical treatment, assisting human doctors in the decision-making process.
The AI builds a flowchart based on questions that doctors ask patients to diagnose injuries and diseases. For an AI to use this method, it must have access to significant data regarding symptoms, diseases, and injuries.
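The flowchart approach described above can be sketched as a simple decision tree that a program walks through question by question. The questions and suggested actions below are invented for illustration; a real system would derive its tree from large volumes of clinical data.

```python
# A minimal sketch of flowchart-style triage: each node asks a yes/no
# question, and each leaf holds a (hypothetical) suggested next step.
# All questions and outcomes here are fabricated for illustration.

TRIAGE_TREE = {
    "question": "Does the patient have a fever?",
    "yes": {
        "question": "Is the patient short of breath?",
        "yes": "Refer for respiratory assessment",
        "no": "Suggest rest, fluids, and follow-up in 48 hours",
    },
    "no": {
        "question": "Is there localized joint pain?",
        "yes": "Order imaging of the affected joint",
        "no": "No action suggested; escalate to a physician",
    },
}

def triage(tree, answers):
    """Walk the flowchart using a dict of question -> True/False answers."""
    node = tree
    while isinstance(node, dict):
        node = node["yes"] if answers[node["question"]] else node["no"]
    return node  # a leaf: the suggested course of action
```

For example, a feverish patient with no shortness of breath would be routed to the rest-and-follow-up leaf. The human doctor remains responsible for the final decision; the tree only surfaces a suggestion.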
While AI can fulfill a useful role in diagnosing disease and suggesting treatment options, a human doctor will still be necessary for detecting facial and body-language cues. Machine learning has its limits in this regard.
AI in Finance
Artificial intelligence can play a significant role in financial analysis and management. For example, credit ratings can affect employment and housing prospects for millions of Canadians, as well as loan eligibility, among other factors.
AI-driven software can determine loan eligibility by analyzing financial documents, from bank statements and pay stubs to tax documentation. AI allows financial institutions to improve the quality of their trading and lending decisions, ensuring that those most eligible for credit acquire the resources they need to start their own businesses or buy housing.
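Once the relevant figures have been extracted from a borrower’s documents, an eligibility check can be as simple as a weighted score. The sketch below is hypothetical; the features, weights, and threshold are invented for illustration, and a production system would learn them from historical lending data.

```python
# Hypothetical score-based eligibility check over features already
# extracted from bank statements, pay stubs, and tax documents.
# Weights and thresholds below are invented for illustration only.

def eligibility_score(monthly_income, monthly_debt, months_employed):
    debt_ratio = monthly_debt / monthly_income     # debt-to-income ratio
    score = 0.0
    score += 40 if debt_ratio < 0.36 else 0        # healthy DTI
    score += min(months_employed, 24) * 1.5        # employment stability
    score += 20 if monthly_income >= 3000 else 0   # income floor
    return score

def is_eligible(monthly_income, monthly_debt, months_employed,
                threshold=75):
    """Approve when the composite score clears a fixed cutoff."""
    return eligibility_score(monthly_income, monthly_debt,
                             months_employed) >= threshold
```

A steady earner with low debt clears the threshold easily, while a short employment history and high debt-to-income ratio does not. In practice, regulators require that any such scoring rule be explainable and auditable.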
Military Robots and AI
One of the most serious ethical questions regarding artificial intelligence is its use in autonomous weapons systems deployed in war zones. Will an AI controlling a drone or an armed robot determine whether to kill human beings on a distant battlefield in the future? Ethicists and activists have been asking this question for years.
The earliest military robots were the German Goliath tracked mines and Soviet teletanks of World War II: unmanned ground vehicles (UGVs) controlled via radio signals or command cables. In the 21st century, the U.S. deploys military robots mostly for non-combat purposes, such as reconnaissance, explosive ordnance disposal (EOD), and surveillance.
The most commonly used lethal military robots are unmanned aerial vehicles (UAVs), such as the General Atomics MQ-1 Predator and MQ-9 Reaper. These aircraft are teleoperated by a human; they are not autonomous when engaging targets. However, the U.S. military has been investigating ways to increase robots’ autonomy, including in the use of weapons.
Future military robots able to select and engage targets without human control or oversight, whether on land or in the air, have been dubbed “killer robots” by organizations seeking to raise awareness of the ethical dilemma of assigning this kind of capability to a machine.
In the related field of surveillance, governments worldwide are increasingly relying on AI to identify suspects, cross-referencing this information against existing databases.
AI-based facial recognition systems analyze sets of facial feature data points and compare them against multiple databases, delivering high accuracy on standard datasets. AI still faces difficulties in real-world applications, however: it can be fooled in circumstances where a human being wouldn’t be.
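The comparison step works roughly as follows: a network encodes each face as a numeric vector (an embedding), and identification reduces to finding the most similar stored vector above a confidence threshold. The sketch below uses toy 4-dimensional vectors as stand-ins for the high-dimensional embeddings a real face encoder would produce; all names and values are invented.

```python
import math

# Sketch of the matching step in face recognition: compare a probe
# embedding against a gallery of known embeddings and return the best
# match if it clears a similarity threshold. The 4-dimensional vectors
# are toy stand-ins for real face-encoder output.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def identify(probe, gallery, threshold=0.9):
    """gallery: dict of name -> embedding. Returns best match or None."""
    best_name, best_sim = None, threshold
    for name, emb in gallery.items():
        sim = cosine_similarity(probe, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name
```

The threshold is the key trade-off: set it too low and the system produces false matches; set it too high and it fails to recognize people under poor lighting or unusual angles, which is exactly where real-world difficulties arise.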
The prospect of companies and governments using AI-based facial recognition software in surveillance has privacy implications that cannot be ignored. It’s easier than ever to find specific individuals in public places, tracking their movements and reporting their whereabouts.
Administrative Decision Making
Companies are increasingly using deep-learning AI to make critical decisions regarding recruitment and hiring, parole, and loan applications, among other purposes. These systems must first be trained by humans to analyze data and determine how to apply it.
Machines hold no prejudices of their own, but humans do. A human trainer can impart their biases to the AI during the training process, leading to discriminatory outcomes that disproportionately affect marginalized communities.
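The mechanism is easy to demonstrate: a model trained on historical decisions will faithfully reproduce whatever disparity those decisions contain. The toy example below, with entirely fabricated data and hypothetical group labels, shows a naive model memorizing past approval rates per group and then replicating the skew.

```python
# A toy illustration of bias propagation: a naive model that memorizes
# past approval rates per applicant group will reproduce whatever
# disparity exists in its training data. All data here is fabricated.

history = [
    ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
    ("group_b", False),
]

def train(records):
    """Learn per-group approval rates from past (group, approved) pairs."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [approved for g, approved in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, group, cutoff=0.5):
    # Approve only if the group's historical rate clears the cutoff --
    # the skew in the training data becomes the model's policy.
    return rates[group] >= cutoff

rates = train(history)  # group_a: 0.75 approval, group_b: 0.25
```

Here the model approves every applicant from one group and rejects every applicant from the other, not because of any individual merit, but because that is the pattern its biased training data encoded. Real systems encode group membership more subtly, through correlated features, but the underlying failure mode is the same.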
Should AI Be Regulated?
Stephen Hawking and other well-known scientific minds have expressed concern regarding the existential risks associated with the development of artificial intelligence. Echoing these fears, the CEO of SpaceX and Tesla, Elon Musk, called for artificial intelligence to be regulated. However, despite the perceived risks, critics question the viability of regulating AI.
The CEO of Intel described artificial intelligence as being in its infancy as AI researchers continue to break new ground. Furthermore, regulatory barriers and governmental oversight may stifle artificial intelligence innovation while also proving difficult to enforce.
The European Commission has proposed regulations to ensure trustworthy AI, describing several risk categories regarding its implementation. Engineers should educate themselves on movements to regulate this and other fields, to stay abreast of the latest developments that could pose long-term challenges for their career goals.
The future of artificial intelligence in engineering disciplines is clear. Every branch of engineering will have the opportunity to use or interact with AI during project design, management, and execution.
Find the AI Tools You Need
Artificial intelligence in engineering is a complex field with significant transformative potential, changing the world as we know it. To fully realize the benefits of AI, however, every engineer needs to understand the ethical implications of the technology.
The Ontario Society of Professional Engineers (OSPE) is dedicated to providing information and opportunities in engineering, fostering a collaborative environment where specialists from different fields can exchange ideas and perspectives. Contact OSPE for more information on how to become involved in a highly energetic community.