AI Needs Strong Ethical Standards

Futurist Jesse Hirsch gave the keynote address at this year’s National Technology Forum in Toronto, hosted by CPA Canada. Much of the address focused on the growing role of AI in our world. He pointed to how AI-driven technology is changing our perceptions of the world in terms of three kinds of authority: institutional authority, cognitive authority and algorithmic authority.

Institutional authority derives from important institutions or from the status of particular people within them. For example, the traditional spokespersons for news events have been the top executives of the major corporations or institutions involved in the story.

Cognitive authority, on the other hand, comes from people who have demonstrated a high level of knowledge or expertise on a particular topic, often through writings or social media. It rests on demonstrated expertise rather than on position within an organization.

Algorithmic authority comes from the algorithms embedded in AI systems, which are core to the decisions those systems make.

Modern AI-supported applications have shifted the emphasis away from institutional authority and toward cognitive authority. Everyone has a voice on social media and indeed on the internet.

There is also more emphasis on algorithmic authority, which is built into AI and of which many people are not aware. An example comes from self-driving cars, which must be programmed to make decisions in many possible circumstances. Suppose, for example, an elderly woman suddenly walks across the road in front of a self-driving car carrying four people. To the left is an eighteen-wheeler bearing down in the opposing lane. To the right is a mother pushing her baby carriage along the sidewalk. There is a harsh split-second choice to be made: run over the elderly woman, veer to the right and strike the mother and her baby, or veer to the left and hit the truck head-on, killing everyone in the car.

Decisions like this (not necessarily this exact decision) need to be built into every self-driving car. One issue that arises for the purchaser is the ethical basis of the decisions built into the car: do the ethical standards on which the car's software rests coincide with those of its prospective owner?
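To make the point concrete, here is a minimal sketch in Python of how one such policy might be encoded. Everything in it is a hypothetical illustration: the names, the scenario scoring, and the "minimize lives at risk" rule are assumptions for the sake of argument, not any manufacturer's actual logic. The point is that the ranking rule itself embodies an ethical standard.

```python
# Hypothetical sketch of an ethical decision rule in a self-driving
# car's software. All names and the "minimize lives at risk" policy
# are illustrative assumptions, not any real manufacturer's logic.

from dataclasses import dataclass

@dataclass
class Option:
    """One possible maneuver and the number of people it endangers."""
    name: str
    lives_at_risk: int

def choose_maneuver(options):
    """A crude utilitarian policy: pick the option risking the fewest lives.

    A different ethical standard (say, "never swerve toward a bystander")
    would rank the same options differently. That choice of ranking rule
    is exactly what a purchaser may unknowingly accept.
    """
    return min(options, key=lambda o: o.lives_at_risk)

# The scenario from the text: four occupants, a pedestrian ahead,
# a mother and baby to the right, an oncoming truck to the left.
scenario = [
    Option("brake straight: strike elderly woman", lives_at_risk=1),
    Option("veer right: strike mother and baby", lives_at_risk=2),
    Option("veer left: head-on with truck", lives_at_risk=4),
]

print(choose_maneuver(scenario).name)
```

Under the utilitarian rule sketched here, the car brakes straight ahead, but swapping in a different `choose_maneuver` would change the outcome without the owner ever seeing the difference.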

Buying cars in the future will be much more complicated! The same applies to many other vehicles and devices in our society: airplanes, boats, buses, trucks, trains, and so on.

One thing is clear: algorithmic authority must remain under the close supervision of people, not only those creating it but those using it as well.