
How do we want to live in a digitalized world? How do we want to share data across the globe? These and other questions about privacy and digital ethics arise as we spend more and more time online. Businesses around the world are confronted with highly developed, fast-moving technology without enough rules for how to handle it properly.
Of course, with only a few technology policies in place so far, one might assume that companies cannot go wrong because no one holds them accountable for their decisions. But in fact, the public’s ethical perception will be the judge, and no company wants to risk a damaged brand.
As Chief Privacy Officer at eBay Inc., Dr. Anna Zeiter deals with privacy issues every day, and as a member of the editorial board of Delphi – The Interdisciplinary Review of Emerging Technologies, she has valuable insights into current developments in the tech sector. She hosted a think tank on privacy and digital ethics at the Global Female Leaders summit 2019 and gave us an exclusive interview about how she sees developments in this field. This article summarizes the talk. If you are interested in all the insights, watch the full video below!
Privacy is a hot topic now, but for the next ten years, digital ethics will be way more important.
Dr. Anna Zeiter, Chief Privacy Officer at eBay Inc., Switzerland
Whose life is more valuable?
Dr. Anna Zeiter had an extraordinary education and career in law before she moved to eBay Inc. With this background, she knows the importance of clear rules when it comes to ethical decisions in tech. The difficulty lies in setting up those rules.
As humans, we have to teach robots and Artificial Intelligence how to make ethical decisions. The problem is that we ourselves often do not know what the morally right decision is in a specific situation. Take the example of self-driving cars. Traffic is not always predictable because machines are not the only participants (even with autonomous vehicles); pedestrians and cyclists are part of it too. Today, many car accidents are caused by human error.
Let us assume three kids are crossing the street at a red traffic light, and a self-driving car with four older passengers is going too fast to stop in time. Which decision should the car make? Should it run down the three kids? Or should it swerve and crash into a wall, killing the four passengers? It is a tough decision. If the car were not self-driving, the driver would make a split-second choice based on reflex. With AI, we have to make that decision before we program it.
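To make that point concrete, here is a minimal, purely hypothetical sketch (in Python) of what "deciding before programming" can mean: a toy policy that scores possible outcomes with fixed weights. The scoring rule and the weights are invented for illustration only and have nothing to do with how real autonomous vehicles are built.

```python
# Hypothetical illustration only: a toy "pre-programmed" ethics policy.
# The point is simply that any trade-off must be decided and encoded
# before deployment -- the weights below ARE that decision.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    pedestrians_harmed: int
    passengers_harmed: int

def choose_outcome(options, pedestrian_weight=1.0, passenger_weight=1.0):
    """Pick the option with the lowest weighted harm score."""
    def harm(o):
        return (pedestrian_weight * o.pedestrians_harmed
                + passenger_weight * o.passengers_harmed)
    return min(options, key=harm)

options = [
    Outcome("continue straight", pedestrians_harmed=3, passengers_harmed=0),
    Outcome("swerve into wall", pedestrians_harmed=0, passengers_harmed=4),
]
# The result depends entirely on the weights chosen in advance.
print(choose_outcome(options).description)
```

However the weights are set, someone has made the ethical choice long before the car ever reaches that intersection.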
Cultural differences in ethical dilemmas
Anyone who assumes we could simply set up universal rules for our globalized world is mistaken; it is far more complicated. The online experimental platform “Moral Machine” was created to explore the moral dilemmas faced by autonomous vehicles. If you are interested in more scenarios, you can judge for yourself, as an outside observer, which outcomes you find more acceptable. The platform has gathered 40 million decisions from millions of people across 233 countries and territories and found that the choices vary by country.
The study found that these 233 countries and territories could be divided into three groups. The Southern group (including South and Central America and France), for example, is more likely to spare young people, women, and those of higher status. In contrast, the Eastern group (including countries with strong Confucian or Islamic traditions) would rather spare pedestrians, humans, and the lawful.
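Building on the toy sketch above, one purely hypothetical way to picture per-country adjustment is to give each region its own set of weights. The region names and numbers below are placeholders, not values taken from the Moral Machine study.

```python
# Hypothetical region-specific parameters for the toy policy sketched above.
# The numbers are placeholders, not findings from the Moral Machine study.
REGION_WEIGHTS = {
    "southern": {"pedestrian_weight": 1.2, "passenger_weight": 1.0},
    "eastern":  {"pedestrian_weight": 1.5, "passenger_weight": 1.0},
}

def choose_for_region(options, region):
    """Apply the toy policy with the weights configured for a given region."""
    return choose_outcome(options, **REGION_WEIGHTS[region])
```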
For developers of self-driving cars such as Alphabet, Uber, and Tesla, this means the vehicles’ moral decisions may need to be adjusted country by country. Dr. Anna Zeiter expects this topic to be hotly debated – probably not only behind the scenes but across diverse media channels as well. We are curious to see how it develops and look forward to your discussion at the next Global Female Leaders summit!