10 AI discussions schools should have now

Artificial intelligence | Monday, August 7, 2023

When ChatGPT was released in November 2022, teachers and schools were reeling from the implications of this powerful new AI tool.

This AI assistant – and others like it – can answer questions, write essays, explain ideas, and more.

It caused us to ask lots of questions.

We wondered about cheating. Are students going to use this to cheat? But … is it cheating if everyone has access to it and it’s a commonly used tool? What do we consider cheating anymore?

We wondered how it would change the classwork students have done for decades – and how to use it to plan lessons.

We wondered how it would change the classroom. The school. The teaching profession.

We’re still wondering about all of this.

The shock factor has subsided a little. We’ve had a little time to acquaint ourselves with these AI assistants – as well as other AI-driven tools.

Now, we’re on to the next tier of questions. We’re trying to understand the place of AI in schools and classrooms. We’re trying to create policy – for classrooms, for districts, for education as a whole.

What are some of the essential AI discussions that schools need to address?

Here are 10 important ones that can get the conversation going.

1. What AI-related student behavior do we want to avoid?

This is where the minds of many of us go first – and rightfully so. If we’re in charge of teaching and learning (or supporting those who teach and learn), we want to preserve the learning of our human students. We want them to think. We still want them to develop skills. We need to prepare them for the future world – to be good employees, good friends and family members, good citizens.

To do that, we need to establish some guardrails – like the ones on highways and roads that keep cars where they can operate safely. But before we do, let’s ask ourselves a crucial question: Do these guardrails support students’ ability to thrive in a future that includes AI – or do they prop up a current construct that makes us feel safe and comfortable? Just remember … guardrails don’t help anyone if they block us from where we need to go.

2. How can – and should – students use AI responsibly?

When we start the conversation about AI in schools, many of us want to go first to what we think students shouldn’t do. It’s understandable – and natural – to want to protect what we think is best for our students. But this is only one side of the conversation.

We can’t just create rules and punishments for what we don’t want students to do. We need to provide guidance on how AI can be used responsibly. Because, let’s be honest. It can be used responsibly! Many of us in education are thrilled at the support and efficiency AI gives us – and the time it’ll save us. It’s a game changer for us as educators – and it should be a game changer for students, too! 

(We also have to recognize that whether we’re advocating for it or using it in classrooms, students are probably using it – whether they’re old enough to use it or not!)

Let’s provide a vision of how AI can be used responsibly, ethically and justifiably in a classroom context (and, in turn, a real-world context). If you work with students long enough, you learn that an over-emphasis on rules and punishments leads to students poking holes in the rules – and trying to find loopholes in the punishments. When we don’t share how AI can be used responsibly, we leave a vacuum – and in a vacuum, it’s the students who are left to decide if something is responsible. That’s a tough spot for a child or adolescent choosing between doing the work and finding an easy way out. Let’s discuss and model how AI should be used instead of focusing solely on how it shouldn’t be used.

3. What skills – in general and AI-related – will students need to prepare for their future?

This is a tough question to answer because it involves forecasting the future. Truthfully, we really don’t know yet what skills students will need to prepare them for the future. But we can make some pretty good predictions. 

Try this thought activity: Ask yourself, “What does AI make possible today?” Then, follow that up with a series of questions: “What will it be able to do tomorrow? And after that? And after that?” Create a digital (or paper) flowchart, letting your mind hover on each stop to really consider what might come next. You might surprise yourself with the results.

If you’d rather leave the predictions to the AI futurists, there are tons of them online whose mission is to make guesses about what the next 5, 10, or 20 years will look like.

Either way, let’s look at all of these predictions with a discerning eye – and a focus on how they apply to what we teach, how our students learn, and what we’re preparing our students for.

4. Is any of our classwork focused on skills (or activities) that are becoming obsolete?

Don’t worry. This question isn’t going to imply that all classwork and instruction we’ve ever done is suddenly obsolete. Lots of quality classroom practices still prepare students’ minds – and their skills and abilities – for the future of work. But as things change, some of them won’t. Over time, we should examine our teaching practices to make sure they’re still relevant to our students’ future. 

Let’s take the essay, for example. Large language models (LLMs) can now write reasonable essays in seconds. (I know, English teachers, and I agree … they’re not superb essays. Just reasonable. But they’ll get better with time.) If we want students to continue writing traditional essays as they have for decades (or longer), we need to consider relevance. How do the skills needed to write an essay serve students in their future – in 5, 10, or 20 years? Why are we using the essay as an evaluative task – and what do we gain from it as teachers?

I’m not saying the essay is dead. We just need to go back to the fundamentals to remind ourselves why we’re assigning it – and refocus on the benefits it brings our students. (And, of course, this goes for other classwork beyond essays.)

5. Is there a double standard between how we’re using AI as adults and what we expect from students?

I’ve seen teachers glow about the time AI can save them and the ideas it provides – and, in the next breath, criticize students who might use it in the same way. Sure, we’re adults. And yes, we have more life experience and education than our students. But, for our students’ sake, we have to protect ourselves – and them – from “that’s the way we’ve always done it” thinking.

An example: A teacher told me she didn’t want students to use AI as a writing aid because they haven’t developed their writing skills yet – as she had. This statement didn’t sit well with me. It feels a bit like saying a kid should read the whole instruction manual of a video game before they go play it. This never happens, though! They learn the game as they play it. 

I hope we’re careful with the line of thinking that students aren’t ready to use AI because of a lack of prerequisite skills. It’s dangerous ground. It runs the risk of being a quiet way of preserving a comfortable status quo in our teaching (that might not be relevant anymore).

6. How could the use of AI be harmful for students?

So far, you could argue that this post has been very positive about AI and its place in schools and the classroom. (And that would probably be a fair assessment.) But even though AI is a big part of our students’ future, we have to recognize that it presents some dangers. We can’t completely eliminate the risks those dangers pose – and trying to would fail to prepare students for a world in which AI plays a big part. But it’s our job to mitigate those risks where we can.

For example, responses created by AI models can be inaccurate – and those inaccuracies can create misunderstandings that could follow our students their whole lives. AI models demonstrate bias – based on gender, race, location, age, and other factors – and those biases can shape students’ worldviews whether they realize it or not. AI also poses privacy issues related to the information it collects – and how that information is used.

The existence of potential dangers and risks doesn’t mean we should immediately ban or discourage AI in any form. Let’s be honest. Life is risky. Whenever we drive an automobile, there’s danger of injury from an accident, danger to the environment from pollution, and so on. But when we weigh the benefits against the risks, most of us are willing to drive. We should weigh AI’s benefits against its risks – and we should encourage our students to weigh them, too.

7. How could it be harmful if students don’t use AI?

You might consider two types of errors in this world – errors of commission and errors of omission. Errors of commission are made by doing something. Errors of omission are made by failing to act. It’s easy to consider all of the concerns when it comes to using AI in schools and classrooms. But we must also consider what happens if we don’t use it at all.

Let’s imagine a student who’s 10 years old today. In eight years, this student will graduate high school. Four years later, she will graduate college/university. What disadvantages will she face if she has no experience with AI – and the benefits and advantages it brings – when she enters higher education or the workforce? She might use AI irresponsibly and suffer the consequences, being punished at school or at work. Or she might not know how to use it at all, struggling to gain an advantage over her peers – or simply to do her work efficiently and effectively enough to keep pace.

Of course, this isn’t a carte blanche suggestion to promote any and all use of AI in classrooms and schools. It’s up to us (and to our students) to decide what’s in their best interest. But if our students don’t learn anything about the place of AI in work, in learning, and in life, they’ll likely suffer the consequences once they leave the confines of the school.

8. What’s our stance on AI detectors and their place in teaching?

As AI assistants like ChatGPT began to spread, the cry from teachers was clear: “Is there a detector I can use to know if my students used AI to do their work?” The answer is a qualified “yes, but.” Yes, lots of those tools are readily available online these days – and you can use many of them for free.

Here’s the “but.” Most of them are wildly inaccurate. They’re prone to say that AI-created text was written by a human. They’ll also say that human-created text was written by AI. This fact alone should encourage us to be very, very careful in deciding their place in the classroom.

Concern #1: Imagine a student writes an essay himself and turns it in – and an AI detector tells you a majority of it was generated by AI. What happens if you accuse that student of cheating when he has, in fact, done all of the work himself? Think of the damage it could do to student-teacher relationships – and the cold academic culture it could create as word spreads.

Concern #2: Accusing a student of “cheating with AI” can be a very unclear, messy gray area. What, exactly, constitutes cheating with AI? On the polar ends of the “using AI vs. using the human brain” spectrum, the judgments are easy. We don’t like when students use AI to completely avoid thinking and skill development (see: copy/paste AI responses). We like when students do most of the thinking and skill development with their own brains. But there’s lots of gray area in the middle. What if they ask AI for ideas to help them with their work? What if they ask AI for feedback? What if they generate a passage of text with AI and edit it to their liking? You might see some of these examples as responsible and others as irresponsible. But when there’s a lack of transparency and students don’t know where the line is, vaguely accusing them of “cheating with AI” can be harmful.

The most beneficial use of AI detectors I’ve seen so far is to start a conversation. Sometimes, a student’s writing doesn’t read like their usual writing. Sometimes, something’s amiss and you want to investigate. (Sometimes, the student’s writing includes the phrase “as an AI language model, I’m unable to …” and you know something’s up.) When that happens, you might share your concerns with the student and discuss the root of the issue – and how you can work together to resolve it.

9. Are we comfortable with the practices of the AI companies and tools we’re using?

Many of the points above make a big, big assumption. They assume that, by using these AI tools, we implicitly agree with the way the companies that create them do business. That doesn’t have to be the case. If we have concerns about the fundamental way these AI tools and businesses are constructed, we might choose not to use them.

Case in point: AI image generators. Use text to describe an image you’d like, and they’ll create it. (Granted, some tools create better results than others.) But how do those image generators learn to create images? They’re trained on a dataset – a library containing huge numbers of images. And how did those images get into that dataset? In many cases, without the permission of the creators who own the rights to them.

In essence, image generators learn to create images from intellectual property they haven’t been given permission to use. If we ask an image generator to create art in the style of a contemporary visual artist who’s still alive, it does so for free – without any compensation to the artist whose work the AI learned from. In a way, it’s intellectual property theft.

The same goes for AI assistants based on large language models. Case in point: me. I’ve published hundreds of articles at Ditch That Textbook on teaching with technology. I’m sure some or all of my work is in the dataset of AI assistants. Their product thrives because of people like me, but I’m not compensated in any way. 

Some people aren’t comfortable with that – or with other concerns, like bias or the potential harm that bad actors could cause with AI – so they choose not to use these tools.

10. How will we get and share new ideas to move our practice forward?

We’re still trying to figure out the place artificial intelligence should have in learning, in the classroom, and in schools. Our best hope for figuring out what works – and what doesn’t – is collaboration and sharing.

How might we do that? When educators share with co-workers across the hallway – or across the district. When educators share their best ideas – and lessons learned from their failures – online through blogs, social media, and video. When educators attend conferences and get-togethers to hear from people in other districts and other states – and from those with fresh perspectives and new ideas.

Want to get up to speed quickly? Find a pipeline of new ideas and perspectives. (Of course, I think the Ditch That Textbook newsletter is one of the best sources for this. You can subscribe for free here.) Be willing to share what works for you – and to listen to what works for others. Our very best hope is to work together.
