How to discuss AI tools with students and researchers

Author: Eva Lantsoght
Published: 5 Sep 2024

This month, I am bringing you the final post in this mini-series about generative AI. I have written about its potential use in research and education (as well as some examples of my own use), and today I want to talk about how to talk about it.

If you look at the advice out there on the use of generative AI for students and researchers, you will see that some enthusiasts advocate heavy reliance on these tools. I myself have strongly rejected the idea that you can “100x” your literature review by outsourcing it to a generative AI tool, as I am of the opinion that reading, letting ideas stew in your mind, and writing about your reading are essential for getting a grasp of the literature – which, in turn, is the foundation for research.

So, how can we discuss ethical usage of generative AI with our students and researchers? Here are some aspects to consider:

1. University regulations

The basis for what can and cannot be done at your university, for both students and researchers, should by now be clearly communicated: most universities have developed guidelines on generative AI. I recommend that you check your university’s regulations and take these as your starting point. When in doubt, err on the side of caution.

2. Publisher requirements

In addition to your university’s requirements, you may find that publishers have their own requirements for the use of generative AI in writing and research. Likewise, funding bodies by now also have their requirements. If you are writing for a publication or working on a proposal, you will want to take these into account as well.

3. Student or researcher?

I have observed heavier reliance on ChatGPT among undergraduate students than among graduate students and researchers. It seems that those with some research experience already have a better understanding of what is expected in research. Knowing who you are having the discussion with is therefore important for framing the conversation and setting expectations.

4. When you expect heavy reliance

It is never easy to take a piece of work to a researcher or student and ask whether it is their own or generated by ChatGPT. How to have this difficult conversation also depends on the previous three points. I am currently trying to find ways to incorporate generative AI into my teaching without encouraging my students to rely too heavily on the tool – so far, I don’t have a solution, just ideas that I still need to try out.

5. Discuss genAI gone wrong

Best practices are one thing, but examples of generative AI gone wrong can also be good lessons. Of course, we have all chuckled at The Rat (the anatomically impossible AI-generated rat figure that made it into a published – and since retracted – paper) and other extreme examples, but there are a number of less obvious cases out there that also deserve our attention and discussion.

6. Identify the core of the research

When we talk with our researchers, it is important to identify what constitutes the core of the research. It should be clear that this core comes from us, as researchers. We may get some help from ChatGPT in debugging our Python code, but we cannot outsource the entire theoretical framework of our research to the tool – we cannot expect any original and creative research to come out of it.

7. Data management

If you are going to run data through a tool such as ChatGPT’s Data Analyst, make sure you are working with data that has no privacy or other concerns attached to it. When in doubt, do not upload your dataset. In addition, I would not base my research on a data analysis whipped up by such a tool: it would most likely not be allowed (see points 1 and 2), and it takes much of the fun – the “chewing on the data” that happens in the analysis stage – out of the equation.
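To make the precaution concrete, here is a minimal sketch in Python with pandas of the kind of check I have in mind before a dataset would even be considered for upload. The file and column names are hypothetical, purely for illustration, and dropping obvious identifiers is of course not the same as proper anonymization:

```python
import pandas as pd

# Hypothetical file and column names, purely for illustration.
df = pd.read_csv("survey_responses.csv")

# Drop columns that directly identify participants (adjust to your own data).
direct_identifiers = ["name", "email", "student_id"]
df_shareable = df.drop(columns=[c for c in direct_identifiers if c in df.columns])

# Free-text columns can still leak identities, so flag them for manual review.
text_columns = df_shareable.select_dtypes(include="object").columns
print("Review these free-text columns before sharing:", list(text_columns))

df_shareable.to_csv("survey_responses_deidentified.csv", index=False)
```

Even after a check like this, the university and publisher rules from points 1 and 2 come first.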

8. Prompt engineering training

I think that a basic course in prompt engineering – really understanding the use and function of generative AI – can go a long way. I would therefore encourage students and researchers to set aside some time to learn the basics.
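As a made-up illustration of what such a course covers: instead of a vague prompt like “summarize this paper”, a more structured prompt gives the tool a role, a task, and constraints – for example, “You are a reviewer in structural engineering. Summarize the research question, method, and main limitation of the following abstract in three bullet points, and flag any claim not supported by the reported data.” How well this works will vary per tool, but learning to think in those terms is exactly the kind of basics I mean.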
