
The Responsible Use of Generative AI in Education Technology

As generative AI (GenAI) technologies transition from novelty to commodity, the impact has been felt across businesses in virtually every industry — the education sector is no exception. From adaptive and personalized learning products to new tools fueled by automation, educational technology is a prime candidate for GenAI-enabled change. Modern learners are ready to adopt new digital experiences as part of the larger shift away from institutional education toward lifelong learning, with new formats of education and credentialing. EdTech companies, learning content providers and educational institutions can seize this opportunity to modernize their technical tooling and address the appetite for new learning approaches.

With this opportunity comes potential risk and an increased awareness of it. One example is the European Union’s proposed AI Act, which places “educational or vocational training that may determine the access to education and professional course of someone’s life” in the high-risk category of AI systems, carrying a set of strict obligations for in-market use. Some tangible challenges that must be addressed in EdTech include the unpredictable accuracy of generated responses, hallucinations in LLM output, ambiguity surrounding the legality of copyright and authorship, a lack of explainability in automated systems, and the potential amplification of existing bias, not to mention the overall non-deterministic behavior of LLMs and broader security and privacy concerns.

Below, we discuss how GenAI could enhance the EdTech industry for key stakeholder groups, provided these tools are used responsibly and thoughtfully.

Learning Content Providers

New technologies such as LLMs, along with shifting educational trends, have pushed content creators to dramatically change how content is produced and communicated to students. GenAI can become a force multiplier for learning content providers by accelerating the development of engaging, augmented teaching materials that focus on personalization and continuous improvement.

As mentioned above, one of the main concerns is the accuracy of generated responses, which is difficult to guarantee given the vast amount of uncurated information these models are trained on. By training GenAI models on textbooks and other high-quality materials instead of uncontrolled information, learning content providers can improve the quality, truthfulness and trustworthiness of responses. To ensure accuracy, content creators should always involve subject matter experts (SMEs) in validating AI-generated content; but as with other quality assurance approaches, learning content providers need a reasoned, systematic sampling strategy grounded in sound statistical design.
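As one illustration of what such a sampling strategy might look like, the sketch below uses a standard sample-size calculation for estimating a proportion to decide how many AI-drafted items should go to SME reviewers. The function name, margin of error and batch size are hypothetical; a real program would tune these to its own risk tolerance.

```python
import math
import random

def sme_review_sample(generated_items, margin_of_error=0.05, confidence_z=1.96,
                      assumed_error_rate=0.5):
    """Pick a statistically grounded random sample of AI-generated items
    for subject matter expert (SME) review.

    Uses the standard sample-size formula for estimating a proportion
    (here, the rate of inaccurate content) within a chosen margin of error,
    with a finite-population correction for smaller content batches.
    """
    population = len(generated_items)
    # Worst-case variance (p = 0.5) keeps the sample size conservative.
    n0 = (confidence_z ** 2) * assumed_error_rate * (1 - assumed_error_rate) / (margin_of_error ** 2)
    # Finite-population correction.
    n = math.ceil(n0 / (1 + (n0 - 1) / population))
    return random.sample(generated_items, min(n, population))

# Example: a batch of 2,000 AI-drafted practice questions sends roughly 323 to SMEs.
drafts = [f"question_{i}" for i in range(2000)]
print(len(sme_review_sample(drafts)))
```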

Students

Students have the potential to be the largest users of GenAI in education — not only because of their generally higher comfort level with technology and desire to find alternative solutions to coursework tasks but also because of the opportunities LLMs present for learning assistance.

You may have heard about campuses banning GenAI tools due to concerns about academic honesty. This is not a sustainable solution: it is extremely hard to enforce, and it takes away the benefits GenAI offers, not only from students but from educators as well. GenAI-enabled tools open the door to scalable, iterative, discovery-based learning with the infinite patience afforded by a conversational AI agent. Much like a human instructor or tutor would, the tool replaces simplistic Q&A interactions with a Socratic-style dialogue in which a student finds the answer with guidance from an AI tutor. It can also support adaptive learning by discovering students’ academic weaknesses and addressing them. So, instead of simply telling a student that their answer is wrong, the tool guides them to rethink and re-evaluate why their first answer was incorrect and how to arrive at the right one.
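To make the Socratic pattern concrete, here is a minimal sketch of such a tutoring loop. The `call_llm` stub is a placeholder for whichever chat-completion API or locally hosted model a product actually uses, and the prompt wording is illustrative only, not a reference implementation.

```python
# Minimal Socratic-style tutoring loop (illustrative sketch).

SOCRATIC_SYSTEM_PROMPT = """You are a patient tutor.
Never state the final answer outright. Instead:
1. Ask a guiding question that targets the student's likely misconception.
2. If the student's reasoning is wrong, point to the step where it breaks down.
3. Only confirm an answer after the student has explained it in their own words."""

def call_llm(messages):
    # Placeholder: swap in your model provider's chat-completion call here.
    return "What happens to the units when you multiply both sides by 3?"

def tutor_turn(history, student_message):
    """Append the student's message and return the guided (not direct) reply."""
    history = history + [{"role": "user", "content": student_message}]
    reply = call_llm([{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}] + history)
    return history + [{"role": "assistant", "content": reply}]

conversation = tutor_turn([], "Is the answer 42 because I just tripled the first number?")
print(conversation[-1]["content"])
```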

An effective education often involves empowering students to take responsibility for their own learning paths. When used responsibly, GenAI can simulate peer-to-peer discussions and personalized tutoring and provide a general aid to students.

Teachers

With great power comes great responsibility, and as the education sector leverages GenAI, the name of the game is transparency. Institutions, as well as individual teachers, must disclose where and how AI is used, so anyone can raise concerns and evaluate potential bias and shortcomings. Ensuring that data is stored and used ethically is key to creating a safe teaching environment, one where there is no fear of data misuse.

When GenAI is used safely and responsibly, teachers have an enormous source of ideas at their fingertips to enrich the class curriculum and individual session design. And when used effectively, these tools can free up precious time for teachers to interact with students. Here are some examples of how GenAI can be leveraged: creating supplemental materials (such as lesson plans and presentations), enhancing assessments, analyzing student performance and streamlining daily activities (such as writing emails and generating reports). For these use cases, careful attention must be given to monitoring and avoiding bias; to do this, teams should apply meticulous statistical analysis to their systematic interactions with students. This will help ensure fair and equitable learning experiences for all students.
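As one simple example of what such analysis might involve, the sketch below applies a chi-square test to check whether an AI-assisted feature (here, a hypothetical tool that flags student work for revision) behaves differently across student groups. The counts are entirely made up, and a real fairness audit would go far beyond a single test.

```python
# Sketch of a basic bias check: do flag rates differ across student groups?
# The group labels and counts below are invented for illustration.
from scipy.stats import chi2_contingency

#            flagged, not flagged
observed = [
    [34, 466],   # group A
    [61, 439],   # group B
]

chi2, p_value, dof, expected = chi2_contingency(observed)
if p_value < 0.05:
    print(f"Flag rates differ across groups (p={p_value:.3f}); review for bias.")
else:
    print(f"No statistically significant difference detected (p={p_value:.3f}).")
```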

What’s Next 

Recently, several big players in GenAI agreed to put extra effort into testing LLMs internally and externally before sharing them with the public, in order to improve accuracy, protect privacy and reduce bias. We can expect new legislation on the responsible use of AI to continue emerging across different countries and jurisdictions. But for now, the legislative process cannot keep up with the daily changes in the GenAI world. This means all businesses, including EdTech stakeholders, must develop and embrace their own responsible AI principles to ensure compliance.

Pivoting toward these responsible AI design principles and implementation frameworks, while building organization-wide awareness and culture, should help future-proof AI-powered EdTech initiatives so they are safe, trustworthy and beneficial for teachers and learners in the long run. By integrating AI responsibly into the educational landscape, we can dramatically enhance solutions and products that nurture a generation of learners well equipped to thrive in a rapidly changing digital world.
