
Code of Ethics

The University of Michigan does not have a curated Code of Ethics document; however, here are a few ethical considerations the University has to offer, along with a few we curated as a class.

Here are the University’s ethical considerations

GenAI is not sentient  

 

GenAI models, or large language models (LLMs), might appear to possess sentience or self-awareness as a human would, but they are simply systems trained on large and often biased datasets. LLMs are designed to output the most likely, or most common, results based on their training data, and so they tend to suppress less common or marginalized information.

 

GenAI is biased

 

GenAI models carry implicit biases that make them unsuitable for ethical deliberation and decision-making, and they should not be used in those circumstances. Furthermore, their training data comes from the past, which results in a loss of context for current social changes.


GenAI can mislead 

 

GenAI in its current stage tends to ‘hallucinate,’ or make up information that is not true. Models have no real sense of what is true or false; they are built to output what is most likely, often in a verbose manner, even when there is not enough real information to back it up.

 

GenAI prefers English

 

LLMs are currently heavily biased toward Standard American English. This means that writing styles and dialects used by other cultures and ethnic groups, such as African American or Indigenous English, are at risk of being penalized in favor of a privileged, White-dominated form of writing.

Here are a few of our own ethical considerations

Fairness: How will the AI ensure equal treatment of all users?

  • Ensuring that AI is accessible to ALL stakeholders: making the tools available to all faculty and students and designing them inclusively to support people with disabilities

  • Providing proper training so that students and faculty can prompt AI models accurately

  • ITS should manage access, provide tech support, and ensure the tools remain free of bias

 

Non-discrimination: How will it avoid biases based on race, gender, etc.?

  • Training should be provided for all users to prevent them from prompting AI with possible bias

  • Educate users about generative AI’s tendency to hallucinate and how to recognize the signs of bias in the AI’s responses

  • Ensuring that ALL groups have equal access to AI

 

Consent: How will users' consent be obtained and managed?

  • Terms & conditions signed at the start of the year/semester

    1. Outline the policies set by the university

    2. Possibly an additional outline of policies set by professors for their specific classes

  • Transparency: disclosure statements are important for the university to mitigate privacy violations and to ensure community members always know when AI is being used

 

Access: How can we ensure equitable access?

  • Any paid AI tools or paid versions of AI tools, such as ChatGPT-4 and MidJourney, that students are allowed to use should be provided to all students. If paid versions cannot be provided, students should only use the free versions.

  • UM students and faculty in other geographic locations should be provided the same access to AI tools
