Sergey Korol
OpenCV.ai author

Ethics in artificial intelligence and computer vision

Smart machines are making decisions about people's lives, deaths, and taxes, and let's be honest: they don't always do it well or transparently
October 17, 2024

Ethical problems in artificial intelligence

The ethics of artificial intelligence is more complicated than we tend to think. It's not just a question of whether a powerful AI might threaten us someday: AI ethical issues affect our lives right now.

The Black Box. One of the main concerns in AI ethics is the "black box" problem, where AI systems make decisions that are difficult to understand or explain. Even the developers of these systems may not fully understand how a particular decision was made, leading to a lack of transparency and accountability. This is particularly problematic in critical areas such as healthcare and criminal justice, where understanding why a decision was made is as important as the outcome itself. Without interpretability, it's impossible to build trust in AI systems or challenge their decisions.
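One partial remedy is model-agnostic interpretability tooling that probes what a trained model actually relies on. Below is a minimal sketch using permutation importance on a purely synthetic dataset (not any system mentioned in this article): shuffle one feature at a time and see how much the model's score drops.

```python
# Permutation importance on a toy model: shuffle each feature and measure the
# drop in held-out accuracy. A larger drop means the model leans on that feature more.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Evaluate on held-out data so the importances reflect generalization, not memorization.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```

Techniques like this don't fully open the black box, but they give auditors and affected people something concrete to question.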

Bias in training data. AI systems are trained on historical data, which often reflects societal biases. If the training data contains biased assumptions or imbalances, such as racial or gender disparities, AI will perpetuate and even amplify these biases. This has led to biased hiring systems, unfair risk assessments in criminal justice, and healthcare algorithms that favour certain demographics. Correcting these biases is complex, but it starts with using diverse and representative data sets and regularly auditing systems.
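What "regularly auditing systems" can mean in practice, in its most basic form: compare the rate of favourable outcomes the model produces for each demographic group. The sketch below uses made-up decisions and the common "four-fifths" screening rule from hiring audits; a real audit would use the model's actual outputs, agreed metrics, and legal and domain review.

```python
# Minimal bias audit: compare positive-outcome rates across groups
# (the "demographic parity" check). Decisions here are made up for illustration.
from collections import defaultdict

# Each record: (group, model_decision), where 1 = favourable outcome.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
print("positive rate per group:", rates)

# "Four-fifths" rule often used in hiring audits: flag any group whose rate
# falls below 80% of the best-performing group's rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"potential disparate impact: {group} at {rate:.0%} vs best {best:.0%}")
```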

Projecting human understanding onto AI. AI systems are often designed with the assumption that they should mimic human decision making, but what humans perceive as a 'correct' decision may not align with how AI processes data. For example, humans may prioritise ethical considerations that are not easily coded into an AI system, leading to discrepancies in decision making. This projection of human understanding onto AI can lead the system to make decisions that appear rational from a data-driven perspective, but are ethically questionable or socially unacceptable.

Privacy and consent. The vast amounts of data needed to train AI systems often come from scraping the internet or tracking individuals without their explicit consent. Companies may use user data from social media, online platforms or even internal surveillance of employees to improve AI models. This raises significant concerns about privacy violations and data misuse. Ethical AI development requires strict data governance, where individuals are informed about how their data is being used and have the opportunity to opt out.

Human experts in the loop. Introducing human experts to oversee AI decision making may seem like a solution to address errors or biases. However, as AI systems achieve extremely high levels of accuracy, human experts may become overly reliant on the system and begin to defer to its judgement. Over time, experts may stop critically evaluating AI results, assuming that the system's decisions are always correct. This creates a false sense of security and can lead to serious errors going unchecked, especially in high-stakes environments such as healthcare or autonomous driving.

AI in hiring

One of the most controversial uses of AI today is its application in the recruitment process. Companies are increasingly relying on AI systems to screen candidates, but many of these systems operate as 'black boxes'. This lack of transparency can lead to biased results, where qualified candidates are rejected without understanding why.

For example, Amazon's AI recruitment tool was found to be biased against women because it was trained on data from male-dominated fields. The system learned to favor male candidates, exacerbating gender inequality in the workplace.

This raises significant ethical concerns about fairness and accountability. Without transparency, it's impossible to know whether the AI is perpetuating bias, relying on flawed assumptions, or genuinely picking up on something relevant. Critics argue that companies should be required to disclose their decision-making criteria and allow for human oversight to ensure that the process remains fair. However, in most cases this is simply not feasible, given how complex these systems are and how many decisions they have to make.

The social media dataset

A well-known case of ethical misuse of AI involves a company scraping data from a popular social media platform to train an AI assistant. Users had no idea their personal conversations, images, and posts were being used for this purpose. When the AI assistant launched, it exhibited inappropriate and rude behavior, which was traced back to the informal and sometimes offensive language patterns it had learned from the dataset.

One notable case that fits this pattern involves Microsoft's DialoGPT, which was trained on vast amounts of data from Reddit. Microsoft developed this conversational model with the goal of creating natural-sounding dialogue systems. However, because Reddit contains a wide range of discourse, including offensive and inappropriate content, the model picked up problematic behavior. When deployed, DialoGPT sometimes generated inappropriate and offensive responses because it had learned from unfiltered Reddit conversations, where users frequently engage in unregulated and sometimes toxic discussions.

This case highlights significant ethical issues, particularly regarding the training data used for AI systems. Reddit, while a rich source of human interaction, is not necessarily moderated or suitable for training models without extensive data filtering.
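Filtering scraped data before training is the most direct mitigation this case points to. The sketch below only shows the general shape of such a pass, using a toy blocklist; production pipelines rely on trained toxicity classifiers and human review, since word lists alone are crude and easy to evade.

```python
# Toy pre-training filter: drop corpus entries that trip a simple blocklist.
# Real pipelines combine trained toxicity classifiers with human review.
BLOCKLIST = {"slur1", "slur2", "insult1"}  # placeholder terms

def is_acceptable(text: str) -> bool:
    tokens = {token.strip(".,!?").lower() for token in text.split()}
    return not (tokens & BLOCKLIST)

corpus = [
    "What a great discussion, thanks for sharing!",
    "You are such an insult1, nobody asked.",  # would be dropped
]

filtered = [text for text in corpus if is_acceptable(text)]
print(f"kept {len(filtered)} of {len(corpus)} examples")
```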

AI monitoring in the workplace: gaze-tracking and emotion recognition

An emerging trend in AI applications is the use of surveillance technologies in workplaces to monitor employee behavior. Some companies have implemented AI-driven systems that track where employees are looking during work hours to ensure they are focused on their tasks.

For example, a solution from Lattice analyzes employee eye position and screen content, and generates regular AI reports on employee performance. The data is collected and analyzed to gauge productivity, and some employees have faced penalties for low engagement.

Such practices push the boundaries of acceptable workplace monitoring and surveillance. While companies argue that this technology increases productivity, employees often feel that their privacy is being violated. The ethical debate centers on the balance between efficiency and respect for workers' rights. AI surveillance systems need to be carefully regulated to prevent invasive practices that harm employees' well-being. In addition, people learn to “hack” such systems very quickly, which makes the productivity metrics they produce even less reliable.

AI sentencing tools

In the criminal justice system, AI tools are increasingly used to assess risks associated with recidivism and inform sentencing decisions. However, these tools have been found to be biased against certain racial groups.

One notable example is the COMPAS system, which has been criticised for assigning higher risk scores to black defendants than to white defendants with similar profiles. This bias is the result of training the AI on historical data that reflects systemic racism in the justice system.

The ethical concerns here are profound: AI systems may perpetuate existing injustices rather than mitigate them. The lack of interpretability in such systems makes it difficult for defendants to challenge decisions, raising questions about fairness and due process. Ethicists argue that AI should not replace human judgement, especially in high-stakes scenarios such as criminal sentencing.

Facial recognition in public spaces

The use of AI-driven facial recognition technology in public spaces has sparked widespread debate about privacy and civil liberties. Governments and law enforcement agencies have used this technology for surveillance, often without the consent of the public.

For example, several cities in the US have deployed facial recognition systems to monitor crowds during large events or protests. While touted as tools to enhance security, these systems have raised ethical concerns about privacy, as citizens are often unaware that they are being watched.

The key ethical issue revolves around consent and the right to privacy. Facial recognition systems can track people's movements and interactions without their knowledge, leading to potential abuses of power. Many argue for stricter regulation and transparency in the use of such technologies to ensure that they do not violate fundamental rights.

AI in healthcare: unintended discrimination

AI systems are increasingly being used in healthcare to diagnose and recommend treatment.

However, a study has found that an AI system used to allocate healthcare resources was biased against black patients. The system, which was designed to predict which patients would benefit from certain medical interventions, favored white patients because it used healthcare spending as a proxy for health needs. Because black patients have historically had less access to healthcare, the AI system exacerbated this disparity.
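The mechanism is easy to reproduce on synthetic data: if one group has historically received less care, and therefore generated less spending, at the same level of medical need, then ranking patients by predicted spending will systematically under-select that group. The numbers below are invented purely to illustrate this effect, not taken from the study.

```python
# Synthetic illustration of proxy-label bias: spending understates need for the
# group with less historical access to care. All numbers are invented.
import random

random.seed(0)

def patient(group: str) -> dict:
    need = random.uniform(0, 10)                      # true medical need
    access = 1.0 if group == "group_a" else 0.5       # group_b historically got half the care
    spending = need * access + random.gauss(0, 0.5)   # observed spending (the proxy)
    return {"group": group, "need": need, "spending": spending}

patients = [patient("group_a") for _ in range(5000)] + \
           [patient("group_b") for _ in range(5000)]

# "Model" that ranks patients by the proxy (spending) instead of true need,
# and selects the top 20% for extra care.
cutoff = sorted(p["spending"] for p in patients)[int(0.8 * len(patients))]
selected = [p for p in patients if p["spending"] >= cutoff]

for g in ("group_a", "group_b"):
    share = sum(p["group"] == g for p in selected) / len(selected)
    print(f"{g}: {share:.0%} of selected patients")
# Despite identical need distributions, group_b receives far fewer slots.
```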

This example highlights the ethical challenge of ensuring that AI in healthcare does not exacerbate existing inequalities. Ethical AI in healthcare requires diverse and representative data, transparency in algorithms, and oversight to prevent discrimination based on race or socioeconomic status.

Autonomous vehicles: the trolley problem in real life

The development of autonomous vehicles has revived one of philosophy's most famous ethical dilemmas: the trolley problem. When a self-driving car is faced with an unavoidable accident, how should it choose between two harmful outcomes? Should it prioritise the safety of its passengers or minimise overall damage?

In 2018, a self-driving Uber car struck and killed a pedestrian, sparking a debate about how such vehicles should be programmed to handle life-and-death situations.

This scenario raises ethical questions about responsibility and decision-making in autonomous systems. If AI makes a fatal mistake, who is to blame: the developers, the car manufacturer, or the AI itself? The debate continues as companies and regulators work to define ethical guidelines for the behaviour of autonomous vehicles.
