The New AI Landscape for Students: Opportunities and Concerns
As exam season approaches, a notable shift in the educational landscape is underway, one shaped predominantly by artificial intelligence (AI). Companies like OpenAI, xAI, and Google are making headlines with initiatives to put AI tools in students’ hands. OpenAI is offering college students two months of ChatGPT Plus free if they sign up before May 31st. Similarly, xAI is giving students with .edu email addresses free access to SuperGrok, while Google is offering college students a full year of Gemini Advanced, along with 2 terabytes of Google One storage, at no cost.
This sudden generosity raises questions about the motivations behind these strategies. Young people, particularly those of college age, are significant users of generative AI tools, with data suggesting they are the largest group of weekly ChatGPT users. However, it’s worth noting that Generation Z has mixed feelings about AI’s role in their lives. A recent survey by Gallup and the Walton Family Foundation found that 41% of 13- to 28-year-olds feel anxious about AI tools, even as most students look to educational institutions for guidance on their use.
The Legislative Landscape
Adding complexity to the narrative, the current administration has waded into the discussion. President Trump recently signed an executive order aimed at advancing AI education in K-12 schools and workforce training programs. This move might seem paradoxical for an administration advocating the dismantling of the U.S. Department of Education, including its Office of Educational Technology. Yet the executive order could pave the way for educational policies that prioritize comprehensive AI literacy and the thoughtful integration of AI into learning environments.
Five Principles for Responsible AI in Education
The road ahead is fraught with challenges, yet the executive order provides a framework that could enhance AI’s positive impact in educational settings. Here are five guiding principles that should inform any initiatives moving forward:
- Getting Corporate Incentives Right: Companies engaging with students and educational institutions should prioritize education over profit. That means adopting a benefit corporation structure or a similar framework that allows an organization to make decisions aimed at long-term social good rather than immediate financial gain. Such a shift is essential for ensuring that AI products are not only safe but genuinely beneficial for young users.
- Ensuring Product Safety: Just as car seats and toys undergo rigorous safety evaluations, AI products used in education must meet stringent safety standards. The potential dangers of commercial AI products have already begun to surface, underscoring the need for a “duty of care” that includes protecting students’ personal data and shielding them from harmful content.
- Defining AI Literacy: While the executive order leaves room for interpretation as to what AI literacy entails, an emerging consensus among education experts holds that it should encompass not just the use of AI tools but also critical and creative thinking about their implications, including ethical questions about how AI fits into our lives.
- Empowering Teachers: Teachers must be actively involved in integrating AI into education. A bottom-up approach to teacher training can empower educators to experiment with safe and innovative AI products, fostering better educational outcomes and student engagement.
- Maximizing Real Student Engagement: The growing concern over student disengagement necessitates a focus on authentic engagement. Tech companies often equate engagement with screen time, but true engagement involves motivation, exploration, and connection—both online and in real life.
The Pushback from Tech Companies
Large tech companies will likely resist these principles, arguing that such requirements add complexity and hinder innovation. But if these companies can harness psychological research to build addictive technology, there’s no reason they can’t apply the science of learning and child development to build safe, supportive AI products that aid children’s growth.
Parental Influence and Public Sentiment
Tech companies must also recognize that parents will not hesitate to push back against unsafe or harmful technology marketed to their children. Grassroots movements advocating healthier tech habits are already gaining momentum: in numerous states, parents have mobilized against unrestricted screen time, calling for cell phone bans in schools and limits on children’s social media use. If companies fail to prioritize safety, a backlash against legislation that would bar states from regulating AI could accelerate this trend.
If implemented thoughtfully, AI in education has the potential to enhance learning, but that requires careful, responsible planning and execution. As we stand at this critical juncture, the task is to develop frameworks that promote well-being alongside technological advancement. Ensuring that the dialogue around AI in education is proactive rather than reactive could produce a win-win for all stakeholders involved.