AI in Education

Good, Bad, or Proceed with Caution?

It is imperative to address AI in education now to realize key opportunities and tackle unintended consequences.
By Moe White

The Urban News recently looked at how AI—Artificial Intelligence—programs are impacting the education world.

Back in May of 2023, when the US Department of Education was still a valued part of our government, its Office of Educational Technology (OET) analyzed both the benefits and potential drawbacks of AI in schools. The office focused on these major questions:

What is a desirable and achievable educational system that leverages automation to advance learning while protecting and centering human agency?

How and on what timeline will we be ready with necessary guidelines and guardrails, as well as convincing evidence of positive impacts, so that constituents can ethically, equitably, and widely implement this vision?

Many educators are already exploring using AI tools like speech recognition to increase support for students with disabilities, multilingual learners, and students who benefit from more personalized tools for learning. “[Teachers] are exploring how AI can enable writing or improving lessons, … [and] for finding, choosing, and adapting material for use in their lessons.”

Comparing potential benefits and drawbacks, the report notes that AI may improve the adaptivity of learning resources to students’ strengths and needs. “Developing resources that are responsive to the knowledge and experiences students bring to their learning—their community and cultural assets—is a priority, and AI may enable greater customizability of curricular resources to meet local needs.”

But AI may also discriminate through its biases: a voice recognition system may not work as well with regional dialects, and an exam monitoring system may unfairly flag some groups of students for disciplinary action.

Another positive is that AI automated assistants or other tools may provide teachers with greater support. “As seen in voice assistants, mapping tools, shopping recommendations, essay-writing capabilities, and other familiar applications, AI may enhance educational services. AI may also enable teachers to extend the support they offer to individual students when they run out of time.”

But the report pointed out that experienced teachers recognize and use “teachable moments”—that is, opportunities to move beyond the planned lesson—that AI models ignore, deem “wrong,” or simply misunderstand. In other words, AI may provide information that appears authentic, but actually is inaccurate or lacking a basis in reality.

The report also noted AI’s specific risks: threats to data privacy and security; output that is inappropriate or wrong; amplification of unwanted biases; and new ways in which students may represent others’ work as their own—i.e., cheating. Additionally, some uses of AI may be invisible, creating concerns about transparency and trust.

There’s also the concern raised decades ago about computers: “Garbage in, garbage out.” Who determines the parameters of a question or issue to address? Is it biased? Who decides? Would recommendations suggested by an algorithm be fair?

Most of all, AI brings new risks: pattern detectors and automated applications, operating at scale, can produce “algorithmic discrimination” (systematic unfairness in the learning opportunities or resources recommended to some populations of students). For example, if AI speeds up the pace of curricula for some students and slows it down for others (based on incomplete data, poor theories, or biased assumptions about learning), achievement gaps could widen.

In some cases, the quality of available data may have unintended consequences. An AI-enabled teacher hiring system might be assumed to be more objective than human-based résumé scoring. But if the system relies on poor historical data, it might de-prioritize candidates who could bring both diversity and talent to a school’s teaching workforce.

“In summary,” the OET concludes, “it is imperative to address AI in education now to realize key opportunities, prevent and mitigate emergent risks, and tackle unintended consequences.”

Adapting to AI

Meanwhile, the American Federation of Teachers (AFT) announced the launch of its new “National Academy for AI Instruction,” a $23 million endeavor funded by Anthropic, Microsoft, and OpenAI, three main players in the generative AI revolution.

“With the creation of the academy, leading artificial intelligence companies are stepping up their efforts to bring AI to schools across the U.S. OpenAI has committed to giving $10 million over five years, while Microsoft will provide $12.5 million. Anthropic, meanwhile, will contribute $500,000 the first year,” said Andrew Crook, a spokesperson for the AFT.

The AFT said it seeks to embrace the technology in a way that protects teachers’ place at the head of the classroom.

“The direct connection between a teacher and their kids can never be replaced by new technologies, but if we learn how to harness it, set commonsense guardrails and put teachers in the driver’s seat, teaching and learning can be enhanced,” AFT President Randi Weingarten said. “We want to do it in a way that teachers can really master the tools,” she told CBS MoneyWatch.

Gerry Petrella, general manager of US public policy at Microsoft, said in a July 7, 2025 interview on CBS News, “We know students are going to benefit the most from this technology when we put teachers at the center of this tool.”

But the use of AI has raised concerns about students abusing the technology to cheat or complete their assignments. Some colleges have already implemented AI detection software in a crackdown on the abuse of AI.


MIT Media Lab’s Research

At the Massachusetts Institute of Technology, researchers are analyzing how AI affects our thinking. A recent study from MIT found that over-reliance on artificial intelligence can reduce brain activity and weaken critical cognitive functions.

Advanced AI systems are trained on massive amounts of text data to understand and generate human-like text. Such a system is called a large language model (LLM); it learns the patterns, relationships, and nuances of human language. The LLM then powers generative AI, so you can think of an LLM as the engine, and ChatGPT as the car that uses that engine to get you where you need to go.
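The idea of “learning patterns from text” can be sketched, in drastically simplified form, as a word-frequency model. This toy Python example is only an illustration of the concept; real LLMs use neural networks trained on vastly larger corpora, not simple counts:

```python
from collections import Counter, defaultdict

# Toy illustration of the core LLM idea: learn which words tend to
# follow which in a corpus, then "generate" by predicting the next word.
# (Hypothetical mini-corpus; real models train on billions of words.)
corpus = "the teacher helps the student and the student asks the teacher".split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Return the most frequent word observed after `word`.
    return following[word].most_common(1)[0][0]

print(predict_next("teacher"))  # → "helps"
```

A real LLM replaces these raw counts with learned probabilities over long stretches of context, which is what lets it produce fluent paragraphs rather than two-word echoes.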

In the MIT study, led by academic researcher Nataliya Kosmyna, essays were written by three groups: people using AI, others using a traditional search engine, and a third group relying on just their own brain power. That last group showed the strongest brain activity—as well as better memory recall. Those showing the lowest brain activity were the people relying on AI.

In an interview on CBS News, Kosmyna described “an additional session … with [the same] participants.” A participant who had originally used AI’s Large Language Model (LLM) for three sessions would, for the final session, be in the “brain-only” group, and vice versa. What Kosmyna found was that subjects who previously relied on LLM but then had to use only their brain to write an essay “showed weaker neural connectivity compared to the (former) brain-only folks.” And those who had originally used only their brains, but then were given access to LLM AI, continued to show high levels of neural connectivity.

In other words, the brain-only writers’ experience stayed with them once they gained access to AI, but the former LLM users still showed lower brain activity when forced to rely on their own thinking alone.

These findings raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning.

Walden University’s Cons and Pros of AI in Education

Walden University, an online institution headquartered in Minneapolis, MN, offers Master of Science in Education and Bachelor of Science in Elementary Education degree programs online. Those programs might give it a vested interest in how AI in classrooms will impact and change education systems in the US.

Its analysis of artificial intelligence (AI) is comprehensive, describing it as “a field of computer science and technology that focuses on creating machines, systems, or software programs capable of performing tasks that typically require human intelligence … including reasoning, problem solving, learning, perception, understanding natural language, and making decisions.”

Walden considers potential uses for AI “exciting—as well as concerning,” referring to concerns such as bias, errors (misinformation), cheating, isolation, and job loss.

Artificial intelligence may generate misinformation: its source data may contain errors or spread falsehoods, so one can’t assume that information provided by AI is accurate.

Students can use ChatGPT to write entire essays, answer quiz questions, or do their homework. Some AI programs can help teachers determine if their students are cheating, but those programs may falsely identify a student’s original work as plagiarism.

If students interact with a software program more than with a teacher, they can begin to feel disconnected and isolated.

Artificial intelligence has the potential to be such a powerful learning tool that some teachers worry it will replace them.

On the Other Hand . . .

The positives Walden identifies are assistance, speed, individualization, context, and personalization.

Some teachers have found that AI can help make their jobs easier.

AI can provide immediate assistance to a student needing help.

AI can help individualize learning opportunities for students.

AI can add information and understanding for individual students, for example by using sources that allow a student to talk to Anne Frank about her life or to Shakespeare about his plays.

Artificial intelligence can personalize student learning.

Walden concludes, “Balancing the advantages of artificial intelligence in education with its potential drawbacks requires careful planning and consideration, as well as ongoing evaluation. AI can empower educators, accelerate learning, and personalize educational experiences, quickly and easily.

On the other hand, the risks of bias, misinformation, and student isolation demand careful scrutiny. Teachers must explore the potential of AI in order to be effective advocates for their students and themselves.”
