HigherEd degrees at risk: Time for Action!

Academic integrity is under pressure from Generative AI. According to research and news articles from across the globe, students' use of AI when it is not allowed appears to be high, degrees are being devalued, and AI could limit students' ability to develop critical thinking skills.

40% of students have used AI in assessment when they were not supposed to, according to a recent AIinHE survey of over 8,000 students at four Australian universities: Deakin University, Monash University, The University of Queensland, and UTS.

The findings are similar elsewhere: "academic misconduct offences involving generative artificial intelligence (AI) have soared at many leading UK universities with some institutions recording up to a fifteenfold increase in suspected cases of cheating," according to the Times Higher Education.

Current detection solutions, however, carry real uncertainty: "AI Detectors falsely accuse students of cheating — with big consequences," as reported by Bloomberg. "Cheating or not, an atmosphere of suspicion has cast a shadow over campuses," according to The Guardian, which gives examples of students being accused of using AI because some of their points "had been structured in a list, which the chatbot has a tendency to do." The impact of such false accusations might explain why a staggering 91% of students in the Australian survey say they are "worried about breaking university rules."

All of this could lead to even more severe outcomes. As one academic told the Guardian, "the combination of commercialised cheating and the rise of AI now threatened to devalue degrees until they were 'handed out like expensive lollies'."

More broadly, universities should be mindful of AI's wider impact: Cambridge University "warns against over-reliance on these tools, which could limit a student's ability to develop critical thinking skills." If we don't train and assess students' ability to think and write without AI, how prepared will they be to judge the output from AI?

How do we solve this growing issue? We refer to Phillip Dawson et al. in their recent publication "Validity matters more than cheating," where they state: "A validity perspective makes the claim: a students' assessment submission is valid if it represents their actual capability." The key, then, is to verify that a submission is authentically the student's own work, so that it can serve as a valid measure of their understanding.

With the above in mind, universities urgently need to examine their assessment security and AI policy. This should be a wake-up call: there is no time, and no excuse, for waiting!

Norvalid delivers precisely what the validity perspective calls for by measuring the authenticity of student writing. Instructors can therefore continue to use open-book home essays and still confidently judge the evidence of why a student deserves their grade, credential, or degree.

Don't believe us? Try Norvalid's method for validating original writing.