Academic Integrity in AI: Avoiding Plagiarism, Contending with Bias, and Forecasting the Future of Scholarly Authenticity
In universities across Kazakhstan, a groundbreaking shift is underway, propelled not by politics or economic progress, but by artificial intelligence (AI). From writing centers to dorm rooms, students are leveraging AI resources such as ChatGPT, Grammarly, and QuillBot to aid their writing. While some utilize these tools for basic editing or idea generation, others depend on them to create whole essays.
As these resources become more accessible, the discussion is no longer about AI's suitability in an academic setting. Instead, it's about how to wisely implement it.
Embracing AI offers promise for enhancing academic life in Kazakhstan. It can provide instant, tailored feedback to multilingual learners dealing with writing requirements in Kazakh, Russian, and English. However, unquestioning adoption of AI in writing brings concerning ethical issues into focus, such as plagiarism and bias.
These aren't abstract concepts; they challenge fundamental educational principles, including originality, critical thinking, and equity.
Plagiarism and AI: Redefining the Norm
Plagiarism has long plagued education, but AI is drastically altering our perception of it. Traditionally, plagiarism refers to taking someone else's work without giving credit. But with AI, the boundaries are murky. If a student asks an AI model to write 500 words on what triggered World War I and submits the content unaltered, is that a case of plagiarism? What about if it's slightly revised? What if AI is used merely for structure and transitions?
This isn't merely a matter of academic rules; it's a matter of education itself. Students who outsource intellectual labor to AI miss out on what writing is meant to teach them: thinking, synthesizing, analyzing. Universities in Kazakhstan, like those everywhere, must adapt their academic integrity policies to address AI. To be effective, these policies should accommodate nuance and recognize that not every use of AI is cheating. What matters is transparency, when and how students disclose their AI usage, and intent.
Most students are aware that copying and pasting AI-generated content amounts to academic misconduct. The real issue is uncertainty: few are confident about their university's particular policy, especially since institutional rules on AI use remain in flux.
The uncertainty is compounded by inconsistency: one professor may welcome modest use of AI tools for idea generation or language assistance, while another forbids it entirely. Without a unified institutional policy, each student must navigate these shades of gray individually. In response, universities in Kazakhstan should learn from international institutions that are now creating transparent, nuanced guidelines and even citation practices for AI-generated content.
However, a punishment-based approach alone will not succeed. Instead, a change in academic culture is needed. Students need to learn not just to avoid plagiarism but also to appreciate the importance of originality and authorship. Faculty must teach in environments where writing is seen as a cognitive process, not a final product. AI may facilitate the process, but it should not replace it.
The Unnoticed Biases of Neutral Technology
Another significant ethical issue that often goes unrecognized is bias. Some believe that because AI is powered by algorithms, it is impartial, but this isn't true. AI models are trained on vast data sets, the majority of which are in English and largely sourced from Western origins. Even OpenAI, the maker of ChatGPT, acknowledges this on its website. As a result, these AI models reflect the Western cultural, linguistic, and ideological assumptions inherent in the data.
For students, this gives rise to two significant, interrelated challenges.
First, there's a genuine threat that AI-assisted writing reinforces Anglo-American scholarly practices at the expense of local systems of knowledge. Content produced by such models tends to prioritize linear, thesis-based argument structures, citation practices, and critical styles that may not align with native or multilingual scholarly traditions. If Kazakh students use AI tools to support their writing, they risk adopting these practices wholesale, depriving themselves of opportunities to develop a distinct academic voice that reflects their regional context.
Equally troubling is how AI can perpetuate and intensify existing disparities. Students from rural areas or those more comfortable using Kazakh or Russian might find that AI tools favor content in English or Western examples. This sets up an uneven playing field based on linguistic ability and access to global discourse, which determines the quality of AI assistance a student receives. Such inequalities risk exacerbating educational disparities, favoring those already fluent in the dominant discourse of global academia.
To address these issues, universities must take action by integrating discussions on these biases into their courses. Assessments can involve local interpretations of regional or global issues in a way that counters the cultural threats arising from AI use.
A Call for Ethical Leadership
With its unique multilingual and multicultural environment, coupled with investment in education, Kazakhstan has the potential to lead on this issue. This will require reforming academic integrity policy to address AI-generated content and investing in widespread training for faculty, staff, and students.
Institutions may hold regular workshops on responsible AI use, develop standardized institutional guidelines for AI disclosure and citation, and embed discussions on digital ethics and algorithmic bias into the curriculum.
Banning AI from classrooms would be neither realistic nor beneficial to students or educators. Instead, we must openly address how it is transforming the way students learn and think. This includes reinforcing the importance of originality and critical inquiry, encouraging educators to treat writing as a thought process rather than a final product, and designing assignments that prioritize individual voice and reflection. AI should be a tool that aids learning, not one that replaces it. A commitment to fairness, equity, and intellectual honesty must remain at the core of education.
The Author
Michael Jones is a writing and communications instructor at the School of Social Science and Humanities, Nazarbayev University, Astana.
- The integration of AI resources such as ChatGPT, Grammarly, and QuillBot into education in Kazakhstan is raising questions about academic integrity, particularly concerning plagiarism and bias.
- In the discussion about plagiarism and AI, it's crucial for universities in Kazakhstan to adapt their academic integrity policies regarding AI usage, ensuring transparency and acknowledging nuances in AI use without penalizing students who reveal their AI assistance.
- There's a need for universities in Kazakhstan to address the unnoticed biases present in neutral technology like AI, by integrating discussions on cultural, linguistic, and ideological biases into their courses and promoting assignments that prioritize individual voice and reflection.
- To navigate these ethical issues effectively, institutions in Kazakhstan should learn from international counterparts developing transparent, nuanced guidelines and even citation practices for AI-generated content, while fostering a change in academic culture that values originality, authorship, and critical thinking.
