Ethical AI: an interview with ChatGPT

Prior to AI image generators replacing about 70% of the stock photography I use, I would run into the same issue over and over again when sourcing photos for projects: white people. Yes folks, we are a virus in the stock photography world. So many smiling, skinny, straight, white families in their suburban yards, eating salads and playing with their golden retrievers. It was a tiresome but necessary evil of my weekly stock photo gathering process to type in “diverse”, “Black”, “gay”, “Hispanic”, etc. if I wanted anything but white people in every image. Primarily, this was tiresome for the simple reason that the world doesn’t actually look like those generic stock photos. People are much more colorful, and the widely available imagery simply didn’t reflect that.

As generators have rapidly expanded in their capabilities and programming, it has been interesting to observe how the prolific visibility of Eurocentric beauty standards has shaped how the generators function. It has been equally invigorating to see how the humans behind these processes have taken steps to counteract this behavior and help AI engines overcome their inherent bias.

I am happy to report that as of this writing, I have noticed serious improvement in the racial diversity reflected when no race or skin tone is specified when generating images. Often, the output will be someone who looks more like an actually typical American in 2023: a mixed-race, somewhat brown person (usually female presenting) in their late 20s or early 30s, usually wearing a hoodie or jacket, probably with pierced ears and dark brown hair. This is a step in the right direction; it is an example of effective human intervention in a complex algorithm that essentially distills the totality of imagery collected on the internet. And yet, this is a wildly complex issue, with so much more than skin tone and gender to unravel. Age, body type, disability, and gender expression diversity, for example, are still not represented well in standard image generations. And image generators have a nearly impossible time generating groups of people who look distinct from one another.

Adobe Stock, first result for the search “Happy Person”

Leonardo Photoreal image generation, first result for “Happy Person”

So I decided to turn to a direct reflection of the data itself for a breakdown of how it sees the state of affairs, and how humans can be a part of shaping AI for good: ChatGPT.

Without further ado, here is the brief interview I conducted with ChatGPT regarding ethics and inherent bias in AI, and what we can do about it.

KA – can you define what “ethical AI” means?

GPT – “Ethical AI,” short for “Ethical Artificial Intelligence,” refers to the development, deployment, and use of artificial intelligence systems in a manner that aligns with moral principles, human values, and societal norms. It involves ensuring that AI technologies are designed and used in ways that are fair, just, transparent, and respectful of individual rights and collective well-being. Ethical AI encompasses various principles and considerations, including:

  1. Fairness 
  2. Transparency
  3. Privacy
  4. Accountability
  5. Beneficence
  6. Non-Maleficence
  7. Inclusivity 
  8. Ethical Decision-Making 

In practice, adhering to ethical AI principles involves ongoing assessment, auditing, and refinement of AI systems, as well as the development of policies and guidelines that promote responsible and accountable AI development and deployment. Ethical AI is essential for building trust in AI technologies and ensuring that they contribute positively to society.

KA – is there an inherent bias in AI generation engines, both visual generators and LLMs, and if so, what can we do to combat this bias?

GPT – Yes, there can be inherent bias in AI generation engines, both in visual generators and language models like GPT-3.5. This bias can emerge from various sources, including the training data used to train these models, which often contains biases present in the data from the internet. Here’s how bias can occur and what can be done to combat it:

**Sources of Bias in AI Generation Engines:**

  1. **Training Data Bias:** Models like GPT-3 are trained on vast datasets from the internet, which may contain biased or prejudiced language and information. The model may learn and reproduce these biases.
  2. **Selection Bias:** The data used for training may not be representative of the real world or may disproportionately represent certain demographics or viewpoints, leading to bias.
  3. **Amplification of Existing Bias:** AI models, when generating content, can inadvertently amplify existing stereotypes, prejudices, or misinformation present in their training data.

**Ways to Combat Bias in AI Generation:**

  1. **Diverse Training Data:** Using diverse and representative training data is crucial. Efforts should be made to curate datasets that include a wide range of perspectives, cultures, and demographics.
  2. **Bias Auditing:** Continuously audit AI outputs to identify and mitigate biased or harmful content. This can involve human reviewers or automated systems to flag problematic content.
  3. **Fine-Tuning:** Models can be fine-tuned on specific tasks with more carefully curated data to reduce biases in specific contexts.
  4. **Explainability and Transparency:** Make AI systems more transparent and explainable so that users can understand why a particular output was generated. This can help identify and address biases more effectively.
  5. **Ethical Guidelines:** Develop and follow ethical guidelines for AI development and deployment. These guidelines should explicitly address bias and fairness concerns.
  6. **Bias Mitigation Algorithms:** Researchers are working on developing algorithms that can identify and mitigate bias in AI outputs. These algorithms can be used to post-process generated content to reduce bias.
  7. **Diverse Development Teams:** Building AI with diverse development teams can help identify and mitigate bias at various stages of development.
  8. **User Education:** Educate users about the capabilities and limitations of AI systems, including their potential for bias, to help them critically evaluate AI-generated content.
  9. **Regulation and Standards:** Governments and industry bodies are exploring regulations and standards for AI development and deployment to ensure fairness and ethics.
  10. **Feedback Loops:** Establish mechanisms for users and the public to provide feedback on biased or harmful AI outputs, and use this feedback for model improvement.

It’s important to note that while these measures can help combat bias, complete elimination of bias in AI systems is challenging. However, ongoing research and industry collaboration are focused on minimizing bias and making AI systems as fair and ethical as possible.
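(An aside from me, not ChatGPT: to make the “bias auditing” and “feedback loop” ideas above a bit more concrete, here is a minimal toy sketch of what an automated audit could look like. The `generate_caption` function is a hypothetical stand-in for whatever generator you are probing, and matching on a handful of descriptor words is deliberately crude; a real audit would be far more careful.)

```python
# Toy bias audit: give a generator a demographically neutral prompt many
# times and tally which demographic descriptors show up in its outputs.
from collections import Counter
import re

def generate_caption(prompt: str) -> str:
    # Hypothetical stand-in for a real image-generation + captioning call.
    return "a smiling young white woman in a suburban yard"

DESCRIPTORS = {"white", "black", "asian", "latina", "latino", "hispanic",
               "young", "old", "man", "woman", "disabled"}

def audit(prompt: str, n_samples: int = 100) -> Counter:
    """Tally how often each demographic descriptor appears across
    n_samples generations from a deliberately neutral prompt."""
    counts = Counter()
    for _ in range(n_samples):
        words = re.findall(r"[a-z]+", generate_caption(prompt).lower())
        counts.update(w for w in words if w in DESCRIPTORS)
    return counts

if __name__ == "__main__":
    # With a neutral prompt, a heavy skew toward one descriptor is the
    # red flag a human reviewer would want to see.
    for term, count in audit("a happy person").most_common():
        print(f"{term}: {count}")
```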


KA – when you refer to “biases present in the data from the internet” can you give some examples of those biases and where they might stem from?

GPT – Certainly! Biases present in data from the internet can originate from various sources and societal factors. Here are some examples of common biases and their sources:

  1. **Gender Bias:**

     – **Source:** Gender bias can stem from historical and cultural stereotypes that have been perpetuated through online content. For example, algorithms may associate certain professions or traits with specific genders.

  2. **Racial and Ethnic Bias:**

     – **Source:** Racial and ethnic biases can emerge from historical discrimination and prejudices that are reflected in online content. Algorithms may associate certain racial or ethnic groups with negative stereotypes.

  3. **Political Bias:**

     – **Source:** Online discussions and news articles often reflect political biases, which can be present in the training data. Algorithms may favor certain political viewpoints or parties.

  4. **Socioeconomic Bias:**

     – **Source:** Economic disparities can result in biases in data. Algorithms may unintentionally favor or disfavor certain socioeconomic groups, perpetuating inequalities.

  5. **Confirmation Bias:**

     – **Source:** People tend to consume information that confirms their existing beliefs. Algorithms may inadvertently reinforce these biases by recommending content that aligns with users’ preconceived notions.

  6. **Geographic Bias:**

     – **Source:** Geographic biases can arise from differences in internet usage and content creation across regions. Algorithms may prioritize content from certain geographic areas over others.

  7. **Cultural Bias:**

     – **Source:** Cultural biases can be present in language and cultural references found on the internet. Algorithms may incorporate these biases into generated content.

  8. **Religious Bias:**

     – **Source:** Religious biases may manifest in online discussions and content. Algorithms may exhibit biases related to specific religions or beliefs.

  9. **Age Bias:**

     – **Source:** Age-related biases can be present in how different age groups are portrayed online. Algorithms may generate content that is skewed toward or against specific age demographics.

  10. **Language Bias:**

      – **Source:** Language biases can arise due to variations in language use and cultural context. Algorithms may struggle to provide unbiased translations or interpretations of content.

These biases are not inherent to AI but rather a reflection of the biases present in the data used to train AI models. AI models learn patterns and associations from the data they are exposed to, and if the training data contains biases, those biases can be reflected in the model’s outputs. Addressing these biases in AI systems requires careful curation of training data, ongoing monitoring, and the implementation of bias mitigation strategies.

KA – so humans are still important?

GPT – Absolutely, humans play a critical role in AI development, deployment, and oversight. While AI can automate many tasks and processes, human involvement is essential for several reasons:

  1. **Data Curation:** Humans are responsible for curating and preparing the training data used to teach AI models. This involves selecting, cleaning, and annotating data to ensure its quality and relevance.
  2. **Bias Assessment:** Human reviewers are often employed to assess AI outputs for bias and ethical considerations. They help identify and mitigate bias in AI-generated content.
  3. **Model Training:** AI models require human oversight during the training process. Data scientists and machine learning engineers design experiments, fine-tune models, and adjust parameters to optimize performance.
  4. **Problem Solving:** AI models can encounter situations they haven’t been explicitly trained for. Humans are needed to problem-solve and adapt AI systems to new and unexpected scenarios.
  5. **Ethical Frameworks:** Humans establish ethical guidelines and frameworks for AI development and deployment. These guidelines help ensure that AI systems adhere to ethical standards.
  6. **Decision-Making:** Ultimately, AI systems are tools that assist humans in decision-making. Humans are responsible for making critical decisions based on AI recommendations.
  7. **Interpretation and Context:** AI can provide insights, but human judgment is necessary to interpret those insights in the broader context and make informed decisions.
  8. **Accountability:** Humans are accountable for the actions and consequences of AI systems. They are responsible for addressing errors, biases, and ethical concerns that may arise.
  9. **Feedback and Improvement:** Humans gather feedback from users and stakeholders to improve AI systems continuously. This feedback loop is crucial for refining AI capabilities.
  10. **Regulation and Governance:** Governments and regulatory bodies rely on humans to create and enforce regulations and standards for AI development and use.

In essence, AI is a tool that augments human capabilities and automates certain tasks, but it doesn’t replace human judgment, creativity, ethics, and decision-making. Successful AI integration often involves a collaboration between humans and AI systems, where humans provide guidance, oversight, and expertise to ensure that AI serves human needs and aligns with societal values.
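(Another aside from me: items 2 and 9 above describe a human-in-the-loop review process. Here is a minimal, purely illustrative sketch of those mechanics, with every name hypothetical: user flags go into a queue, and a human reviewer’s verdicts come out the other side, ready to seed later fine-tuning or filtering.)

```python
# Minimal human-in-the-loop feedback queue: users flag outputs they find
# biased or harmful, and human reviewers resolve the flagged items.
from dataclasses import dataclass, field

@dataclass
class FlaggedOutput:
    output_text: str   # the AI-generated content a user objected to
    user_reason: str   # why the user flagged it

@dataclass
class ReviewQueue:
    """Flags go in from users; verdicts come out from human reviewers."""
    pending: list = field(default_factory=list)
    resolved: list = field(default_factory=list)

    def flag(self, output_text: str, user_reason: str) -> None:
        # Wired to a user-facing "report this output" control.
        self.pending.append(FlaggedOutput(output_text, user_reason))

    def review(self, verdict: str) -> None:
        # A human reviewer records a verdict; resolved items can later
        # feed bias-mitigation fine-tuning or filtering datasets.
        if self.pending:
            self.resolved.append((self.pending.pop(0), verdict))

queue = ReviewQueue()
queue.flag("generated caption ...", "reinforces a gender stereotype")
queue.review("confirmed – add to bias-mitigation dataset")
print(len(queue.pending), len(queue.resolved))  # 0 1
```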

KA – but doesn’t the AI’s reflection of inherent societal biases mean that it does, in fact, align with societal values? and doesn’t that in turn mean that our societal values are fundamentally flawed?

GPT – The reflection of societal biases in AI does not necessarily mean that these biases are in alignment with desirable societal values. Instead, it highlights a complex relationship between AI, training data, and the biases that exist in society. The reflection of biases in AI systems is not an endorsement of those biases but rather a reflection of the data from which AI models learn. It underscores the need for ethical AI development, ongoing monitoring, and societal reflection on biases and inequalities. Recognizing and addressing these biases is a step toward aligning AI technologies with more equitable and just societal values. It also highlights the opportunity to work toward a more inclusive and fair society by addressing the root causes of bias and prejudice.

KA – so in this way, AI is like a very raw reflection of collective human culture?

GPT – Yes, in many ways, AI can be seen as a raw reflection of collective human culture, as it learns and evolves based on the data and information it is exposed to. Here’s how AI reflects and interacts with human culture:

  1. **Learning from Data:** AI models, especially machine learning models, learn patterns, behaviors, and associations from vast amounts of data. This data often includes text, images, and other forms of human-generated content, which inherently carry cultural elements.
  2. **Amplifying Cultural Signals:** AI can amplify and propagate existing cultural signals present in data. It may reflect societal norms, biases, values, and language usage patterns found in the training data.
  3. **Incorporating Diverse Perspectives:** AI can capture diverse perspectives and viewpoints present in the data, offering a glimpse into the global diversity of human culture. It can provide insights into different languages, customs, and traditions.
  4. **Reinforcing or Challenging Stereotypes:** Depending on the quality and diversity of the training data, AI can either reinforce stereotypes or challenge them. It can either perpetuate bias or contribute to more inclusive and equitable content generation.
  5. **Cultural Adaptation:** AI systems can be fine-tuned or customized to adapt to specific cultural contexts, languages, or preferences. This adaptability reflects the cultural versatility of AI.
  6. **Cultural Evolution:** AI can also influence and evolve culture by shaping the way information is disseminated, impacting content creation, and even inspiring new cultural expressions.
  7. **Ethical Considerations:** The reflection of cultural biases in AI raises important ethical considerations. It prompts discussions about fairness, bias mitigation, and the role of AI in shaping cultural narratives.

It’s crucial to recognize that AI is not a passive observer of culture but an active participant in it. Developers and users of AI systems have a responsibility to curate data, design algorithms, and implement ethical guidelines that align AI with desirable cultural values, inclusivity, and fairness. AI can be a powerful tool for both reflecting and shaping human culture, and its impact on society depends on how it is developed and used.
