Generative AI: Questioning

Ethical Considerations

Academic integrity means being honest in your studying and assessments. It is the basis for ethical decision-making and behaviour in an academic context. Academic integrity is informed by the values of honesty, trust, responsibility, fairness, respect and courage.

With the rise of Generative AI tools, maintaining these principles requires new strategies and awareness of how AI tools should (or should not) be used.

The opposite of academic integrity is academic misconduct: seeking to gain an academic advantage by deception or other unfair means.

Examples of AI-Related Misconduct:

Plagiarism:
  • Using AI to generate essays, research papers, or assessment responses.
  • Submitting AI-generated work as your own.
Fabrication of Data or Sources:
  • Generative AI produces false citations or fake data that appear credible but are entirely hallucinated.
Unpermitted Collaboration:
  • Using AI to complete assessments without your lecturer's explicit approval.
Lack of Disclosure:
  • Failing to disclose AI assistance in academic work.
  • Misrepresenting AI-generated outputs as evidence or research findings.
     

Best Practices:

  • Follow any instructions provided by your lecturers.

  • Understand the University of Otago's policies.

  • Acknowledge any and all use of AI in your assignments.

  • Focus on learning, not shortcuts.

  • Evaluate all AI outputs.

Bias in AI

AI systems are trained on data, and the quality and fairness of that data greatly affect the outcomes AI produces. Bias in AI arises when:

  • Data is incomplete or skewed: Historical inequalities or imbalances in data can reinforce stereotypes.
  • Algorithms amplify bias: Without careful design, AI models can replicate and even magnify biased patterns in data.
  • Lack of diversity in development: Teams building AI often unintentionally introduce their own biases into systems.

Examples of Bias in AI:

  • Image creation models furthering gender or racial stereotypes.
  • Facial recognition systems misidentifying individuals from certain racial groups.
  • Language models perpetuating stereotypes.

How to Spot Bias:

  • Ask: "What data was this AI trained on?"
  • Look for disparities in outcomes for different groups.
  • Use diverse sources to verify AI-generated content.

Intellectual Property and AI

AI introduces challenges in Intellectual Property ownership, particularly in:

  • Authorship: Who owns the outputs generated by AI tools: the person who wrote the prompt, or the company and developers of the AI model?
  • Copyright: New Zealand currently has no legislation that specifically addresses AI-generated outputs (such as text or images). This will likely change with the planned update to the Copyright Act 1994.
  • Data Use: AI systems are trained on large datasets, which may include copyrighted or proprietary materials. Using these datasets without proper licensing can lead to IP infringement.

Indigenous Cultural and Intellectual Property (ICIP) and AI

AI systems may inadvertently use or replicate Indigenous knowledge, culture, or data without proper acknowledgment or consent, raising significant ethical concerns.

Key Considerations:

  • Ethical Use of Data: Avoid using datasets that include Indigenous knowledge without explicit permission and guidance from the community.
  • Data Sovereignty: Ensure that your research aligns with frameworks such as the CARE Principles for Indigenous data governance.
  • Cultural Context: Be mindful of how AI models represent Indigenous identities, languages, and symbols—misrepresentation can cause harm.

Environmental Impacts of AI

AI systems are powerful tools, but their development and use come with a significant environmental cost. Understanding these impacts is essential for making sustainable decisions about AI technologies.

Energy Consumption

  • Training AI Models: Training large AI models, like those used in natural language processing or image recognition, requires vast amounts of computational power, consuming energy equivalent to powering hundreds of homes for weeks.
  • Deployment: Even after training, running AI systems (like chatbots or recommendation algorithms) continues to draw energy, especially in real-time applications.

Data Centres and Carbon Emissions

  • Data Centres: AI relies on data centres, which consume energy for computation and cooling. Many data centres still rely on fossil fuels, contributing to global carbon emissions.
  • E-Waste: The rapid turnover of hardware, such as GPUs and processors used in AI research, contributes to growing electronic waste.

Indigenous Perspectives and Environmental Responsibility

  • Interconnectedness: Many Indigenous knowledge systems emphasise balance and respect for natural resources, offering valuable perspectives for developing AI sustainably.
  • Data Sovereignty: When AI systems involve environmental or cultural data, Indigenous communities must be consulted to ensure ethical use and alignment with sustainability principles.

Misinformation and Disinformation

AI systems are increasingly used to create and share information, but they can also contribute to the spread of misinformation (false information shared without intent to deceive) and disinformation (false information deliberately created to mislead).

How AI Contributes

  • Deepfakes: AI can generate realistic but fake images, videos, or audio that can be used to spread false narratives.
  • Generated Content: AI tools like chatbots or content generators may produce inaccurate or misleading outputs if they are based on biased or unreliable training data.
  • Amplification: Social media algorithms powered by AI can amplify misleading content, prioritising engagement over accuracy.

Why It Matters

  • Public Trust: Misinformation erodes trust in institutions, media, and science.
  • Real-World Impact: Disinformation campaigns can influence elections, public health decisions, and social harmony.
  • Academic Risks: AI-generated misinformation can spread into scholarly spaces, impacting research integrity and public knowledge.

How to Evaluate AI-Generated Content

Further Reading