The main audience of our ethical analysis is educational institutions, including both K-12 and higher education. Generative AI like ChatGPT is transforming education, changing the way students and faculty members interact. Students, mainly in higher education, are among the largest and most active user groups for ChatGPT; a study published on ScienceDirect found that 70% of students globally use ChatGPT regularly. This makes educational institutions the ideal external stakeholder for our ethical analysis.
Generative AI is designed to replicate, and at times replace, human intelligence and interaction. This can cloud our judgment: it affects our ability to detect bias, unintentionally spreads misinformation, invades our right to digital privacy, interrupts our learning processes and critical thinking, and fundamentally changes the way we interact with each other face-to-face. Bias and misinformation are the biggest reasons to care about ethics in generative AI; not only can these systems provide users with emotionally charged, non-factual information, but they can also have severe consequences for society, particularly for underrepresented or marginalized groups. Based on our ethical analysis, we need to develop generative AI like ChatGPT in a manner that maximizes its benefits and minimizes its downsides. This is a more practical and achievable approach than abolishing ChatGPT altogether. Generative AI is an emerging technology that will continue to become more capable and widely adopted in our society, so it is important to design and implement ethical use guidelines early in its lifetime to ensure these values are built into the technology going forward.
Integrating AI into education must be done with clear intent in order to preserve the judgment of educational professionals, improve the learning experience, and maintain cognitive effort. Our recommendations to our external stakeholder, educational institutions, are the following:
Anyone who regularly uses ChatGPT should be aware of the risks it poses. By providing accessible AI literacy and ethical use resources, we can raise awareness of ChatGPT's inaccuracies and mitigate the spread of bias and misinformation.
In a school setting, providing additional resources for ethical AI usage gives students a clear example of what acceptable classroom use looks like, such as studying and creating practice tests. At the same time, these resources can define academic dishonesty and its consequences, dissuading students from using ChatGPT for cheating and plagiarism.
Designing generative AI like ChatGPT for ethical usage needs to be a collaborative effort among several entities, who together can examine the social, economic, and political issues that intersect with AI and determine how it can be implemented in the least harmful way possible. Entities such as academic researchers, government officials, and tech companies should work together to enforce ethical AI use and ensure regulatory frameworks are implemented that protect people and allow ChatGPT to coexist with daily life.
Increased funding for ethical AI usage can dramatically improve the state of ChatGPT in education. For starters, increased funding can help develop a clear vision of what generative AI should look like in schools and how it can create opportunities for students. It can also enhance educators' AI competencies by giving them the opportunity to test ChatGPT's functionality and decide how to incorporate it into course curricula. Finally, it can support the design of infrastructure and tools tailored for education that foster critical thinking and problem-solving skills rather than giving the user the answer right away.