Press "Enter" to skip to content

Ethics, Technology and the Future

By Preston Coleman

Before his Uncle Ben died, Peter Parker, the alter ego of superhero Spider-Man, was told, “With great power comes great responsibility.” This axiom holds true today, and it is one we face as we consider the future of technology.

Technology is evolving at a rapid pace. With spaceflights becoming commercially available, facial recognition software growing more powerful and self-driving cars becoming increasingly ingrained in society, the future certainly seems hopeful as more and more everyday activities are automated.

With these new technologies, however, come questions of ethics, as those who program advances in artificial intelligence are increasingly tasked with building complex ethical frameworks into their software. Here’s a hypothetical situation that demonstrates this: a modified version of the classic trolley problem.

Imagine a businesswoman taking a self-driving Uber from her home on the outskirts of Chicago to a high-rise building downtown. On the way, her route is blocked by a man who has fallen and is trapped under his motorcycle in the middle of the street. To her left is an elderly woman who has just finished crossing the street, and to her right is a pregnant woman.

If we assume that the car is unable to stop itself in time and will end up hitting one of the innocent people in front of it, what should it do? Should it continue on its path and hit the helpless man lying in the road? Should it swerve to the left, hitting the older woman who is out running errands? Or should it swerve to the right, potentially ending two lives if it hits the pregnant woman?

These alternatives, however, don’t even fully address one of the key ethical issues in this situation: should the self-driving vehicle prioritize the lives of its passengers, the lives of pedestrians or the lives of people riding in other vehicles? In short, does the car manufacturer have a duty to program “self-preservation” into its product, both for the vehicle itself and the passengers inside? Or does the manufacturer have a larger duty to the rest of society to protect the lives of innocent pedestrians at all costs? Whatever decision is made will have major implications both for the users of self-driving cars and for everyone else using the streets and crosswalks.

These questions, and more, face programmers of self-driving cars, and that is just one (admittedly unlikely) scenario. Similar dilemmas confront developers of autonomous drones, emotional analytics software (which examines participants’ faces and, by monitoring microexpressions, learns how a group or an individual is reacting to certain stimuli in real time) and automated stock trading tools.

Who, then, is responsible for creating the artificial intelligence that will end up governing these hardware and software systems? At this time, these decisions are made by technology specialists. The closest equivalent here at Oklahoma Christian would be our computer engineering and computer science programs. Graduates of these programs are, in theory, fully equipped to enter the workforce and participate in the development of AI software.

Do these programs equip students to deal with the complex ethical dilemmas they may face in the “real world” outside of Oklahoma Christian? I would argue that they do not. The computer science program offers one class that deals with ethical issues; the computer engineering program offers none. Additionally, the University of Central Oklahoma and Oklahoma State University each have, at most, one course in their respective computer engineering degree plans dealing with the ethical implications of computing.

What does this mean? I would contend that, with the increasing use of artificial intelligence in everyday functions of society, it is imperative that ethical questions be approached head on and that a framework be built to dictate how technology and ethics will intersect. This could take shape in a variety of ways, but one practical solution is to pull together a coalition of government bodies and technology companies working with artificial intelligence and invite them to create rules that would govern technology tasked with making decisions. These regulations could be as simple as Isaac Asimov’s Three Laws of Robotics or more complex, tailored to specific situations as deemed necessary.
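To make the idea concrete, here is a minimal, purely hypothetical sketch (in Python) of how a priority-ordered set of rules, in the spirit of Asimov’s laws, might be encoded and evaluated by an automated decision-maker. The rule names, the outcome fields and the choose_action function are illustrative assumptions for the sake of the example, not a description of any real vehicle’s software.

```python
# Hypothetical sketch: a priority-ordered rule set for an automated decision-maker.
# Rule names, Outcome fields and choose_action are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Outcome:
    harms_pedestrian: bool
    harms_passenger: bool
    breaks_traffic_law: bool


@dataclass
class Action:
    name: str
    predicted: Outcome


# Rules are checked in order; earlier rules take priority over later ones.
RULES = [
    ("avoid harming pedestrians", lambda o: not o.harms_pedestrian),
    ("avoid harming passengers",  lambda o: not o.harms_passenger),
    ("obey traffic laws",         lambda o: not o.breaks_traffic_law),
]


def choose_action(actions: list[Action]) -> Action:
    """Pick the action that satisfies the highest-priority rules.

    Each action gets a tuple of True/False values, one per rule, in priority
    order. Tuples compare element by element, so an action that satisfies the
    first rule always beats one that does not, regardless of the later rules.
    """
    return max(actions, key=lambda a: tuple(check(a.predicted) for _, check in RULES))


if __name__ == "__main__":
    options = [
        Action("continue straight", Outcome(harms_pedestrian=True,  harms_passenger=False, breaks_traffic_law=False)),
        Action("swerve and brake",  Outcome(harms_pedestrian=False, harms_passenger=True,  breaks_traffic_law=True)),
    ]
    print(choose_action(options).name)  # "swerve and brake" under this rule ordering
```

The point of the sketch is not the code itself but the ordering: swap the first two rules and the car’s choice flips, which is exactly the kind of ethical decision this column argues should not be left to programmers alone.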

I believe it’s imperative we apply the adage “With great power comes great responsibility” to the increasingly powerful world of computing. Perhaps the saying, however, needs a timely update: “With exponentially growing power comes increased ethical responsibility.”

 

Preston Coleman is a senior at Oklahoma Christian University. 

The opinions of guest columnists are their own and do not necessarily reflect the opinions of the Talon or Oklahoma Christian University. Guest opinions are presented to foster public debate on important topics and comments should be respectful and signed.
