Sending digitized information across fiber-optic wires raises no ethical questions.
Image: Liu Zishan/Shutterstock.com

Will widespread adoption of emerging digital technologies such as the Internet of Things and Artificial Intelligence improve people’s lives? The answer appears to be an easy “yes.” The positive potential of data seems self-evident. Yet the issue is being actively debated at international summits and events. The agenda of the Global Technology Government Summit 2021, for instance, is dedicated to questions of whether and how “data can work for all,” emphasizing trust and especially the ethics of data use. Not without reason: at least 50 countries are independently grappling with how to define ethical data use without violating people’s privacy, personal data, and other sensitive interests.
 

Ethics goes online

What is ethics per se? Aristotle proposed that ethics is the study of human relations in their most perfect form; he called it the science of proper behavior. Ethics, he claimed, is the basis for creating an optimal model of fair human relations and lies at the foundation of a society’s moral consciousness: the shared principles necessary for mutual understanding and harmonious relations.

Ethical principles have evolved many times over since the days of the ancient Greek philosophers and have been repeatedly rethought (e.g., hedonism, utilitarianism, relativism). Today we live in a digital world, and most of our relationships have moved online: to chats, messaging apps, social media, and other channels of online communication. We do not see each other, but we share our data; we do not talk to each other, but we give our opinions liberally. So how should these principles evolve for such an online, globalized world? And what might the process of identifying them look like?
 

Digital chaos without ethics

The lockdowns of 2020 demonstrated that our plunge into the digital world is irrevocable. As digital technologies become ever more deeply embedded in our lives, the need for a new, shared data ethos grows more urgent. Without shared principles, we risk amplifying the biases already present in our datasets. Just a few examples:

  • The common exclusion of women as test subjects in much medical research results in a lack of relevant data on women’s health. Heart disease, for example, has traditionally been thought of as a predominantly male disease, which has led to widespread misdiagnosis or underdiagnosis of heart disease in women.
  • A study of AI tools that authorities use to predict the likelihood that a criminal will reoffend found that the algorithms produced different results for Black and white people under identical conditions. This discriminatory effect has drawn sharp criticism and fueled distrust of predictive policing.
  • Amazon abandoned its AI hiring program because of its bias against women. The algorithm was trained on the resumes of candidates for job postings over the previous ten years. Because most of those applicants were men, it learned to prefer men and penalized features associated with women.
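The kind of bias described in the hiring and recidivism examples can often be surfaced with very simple arithmetic, before any deeper audit. The sketch below is purely illustrative (it is not the tool used in any of the studies above): it computes "demographic parity", one common fairness metric, on fabricated model decisions for two hypothetical groups.

```python
# Illustrative sketch (not any real audit tool): measuring a simple
# fairness metric, demographic parity, on hypothetical model outputs.
# All names and data below are invented for demonstration.

def selection_rate(outcomes):
    """Fraction of positive decisions (1 = selected / favorable)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in selection rates across groups.

    A gap near 0 means the model selects all groups at similar rates
    on this one metric; a large gap flags potential bias to investigate.
    """
    rates = {g: selection_rate(o) for g, o in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical hiring-model decisions for two applicant groups.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 selected = 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 selected = 0.25
}

gap, rates = demographic_parity_gap(decisions)
print(rates)           # {'group_a': 0.75, 'group_b': 0.25}
print(f"gap = {gap}")  # gap = 0.5 -> a red flag worth investigating
```

A single metric like this is only a starting point; real audits combine several fairness measures and examine the training data itself, as the Amazon case shows.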

These examples all contribute to distrust or rejection of potentially beneficial new technological solutions. What ethical principles can we use to address the flaws in technologies that increase biases, profiling, and inequality? This question has led to significant growth in interest in data ethics over the last decade (Figures 1 and 2). And this is why many countries are now developing or adopting ethical principles, standards, or guidelines.

Figure 1. Data ethics concept, 2010-2021    



Figure 2. AI ethics concept, 2010-2021

Source: Google Trends
 

Guiding data ethics

Countries are taking widely differing approaches to data ethics. Even the definition of data ethics varies. Consider, for example, three countries with differing geography, history, institutional and political arrangements, and cultures: Germany, Canada, and South Korea.

Germany established a Data Ethics Commission in 2018 to provide recommendations for the Federal Government’s Strategy on Artificial Intelligence. The Commission declared that its operating principles were based on the Constitution, European values, and Germany’s “cultural and intellectual history.” Ethics, according to the Commission, should not begin with establishing boundaries; rather, when ethical issues are discussed early in the creation process, they can make a significant contribution to design, promoting appropriate and beneficial applications of AI systems.

In Canada, the advancement of AI technologies and their use in public services has spurred a discussion about data ethics. The Government of Canada’s recommendations focus on public service officials and processes. It has provided guiding principles to ensure the ethical use of AI and developed a comprehensive Algorithmic Impact Assessment online tool to help government officials explore AI in a way that is “governed by clear values, ethics, and laws.”
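Questionnaire-style assessments of this kind typically map answers about a system to a risk tier. The toy sketch below is loosely inspired by that idea but does not reproduce Canada’s actual tool: the questions, weights, and thresholds are all invented for illustration.

```python
# Toy sketch of a questionnaire-style impact assessment that maps
# yes/no answers to a risk tier. NOT the Government of Canada's
# Algorithmic Impact Assessment; every question, weight, and
# threshold here is hypothetical.

QUESTIONS = {
    "affects_legal_rights": 3,       # weight added if answered "yes"
    "uses_personal_data": 2,
    "fully_automated_decision": 3,
    "explainable_to_public": -1,     # mitigating factor reduces score
}

def impact_level(answers):
    """Sum the weights of 'yes' answers and bucket into a tier."""
    score = sum(w for q, w in QUESTIONS.items() if answers.get(q))
    if score >= 6:
        return score, "high impact: human review required"
    if score >= 3:
        return score, "moderate impact: document and monitor"
    return score, "low impact"

score, tier = impact_level({
    "affects_legal_rights": True,
    "uses_personal_data": True,
    "fully_automated_decision": True,
    "explainable_to_public": True,
})
print(score, tier)  # 7 high impact: human review required
```

The design point such tools embody is that the score is not a verdict but a trigger: higher tiers attach stronger procedural obligations, such as mandatory human review.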

The Korean Ministry of Science and ICT, in collaboration with the National Information Society Agency, released Ethics Guidelines for the Intelligent Information Society in 2018. These guidelines build on the Robots Ethics Charter and call for developing AI and robots that do not have “antisocial” characteristics. Broadly, Korean ethical policies have focused mainly on the adoption of robots into society, while emphasizing the need to balance protecting “human dignity” with “the common good.”
 

Do data ethics need a common approach?

The differences among these initiatives seem to be related to traditions, institutional arrangements, and many other cultural and historical factors. Germany emphasizes developing autonomous vehicles and presents a rather comprehensive view of ethics; Canada focuses on guiding government officials; Korea approaches the questions through the prism of robots. Still, none of them clearly defines what data ethics is, and none is meant to have legal effect. Rather, they stipulate the principles of the information society. In our upcoming study, we intend to explore the reasons and rationale behind the different approaches countries take.

Discussion and debate on data and technology ethics will undoubtedly continue for many years as digital technologies develop and penetrate all aspects of human life. But the sooner we reach a consensus on key definitions, principles, and approaches, the sooner debate can turn into real action. Data ethics is equally important for governments, businesses, and individuals, and should be discussed openly. The process of discussion will itself serve as an awareness-raising and knowledge-sharing mechanism.

Recall the Golden Rule of Morality: Do unto others as you would have them do unto you. We suggest keeping this in mind when we all go online.
