AI has revolutionized how we process information, optimize tasks, and conduct research. However, its integration into academia sparks ethical and practical debates. Should we limit its use? How can we assess a student’s true knowledge if they employ these tools? This text explores these questions from the perspective of a technology expert who argues that banning AI is as absurd as rejecting calculators or spreadsheets in the past. The key lies in adapting teaching and evaluation methods to harness its benefits without sacrificing intellectual rigor.
I use AI for everything, starting early in the morning. But how? I have been a programmer my entire life (my daily occupation and source of income since 1979, before the first mass-market personal computer, the IBM PC, arrived in 1981).
Over time, I have witnessed how at the Agrarian University we were forbidden from using slide rules and instead relied on printed tables of logarithms. Later, after I graduated in Animal Science Engineering, slide rules were permitted. At the University of Lima in the 1980s (where I studied Administration and then Systems Engineering), even basic four-function calculators were banned. Yet during a finance specialization there in the 1990s, the syllabus required specific HP financial calculator models. Today, finance courses at UPC are taught exclusively in Excel (whose formulas often differ from manual calculation methods).

My grandfather wrote with pen and ink, my mother used a fountain pen, and I used ballpoint pens. AI is just another tool, and it is here to stay. It is not “intelligent” (it is highly limited); it is merely an evolution of 1960s programs like ELIZA. In the 2010s, PUCP banned phones and laptops in class, while Harvard and MIT embraced them, even live-streaming lectures via EDX.org.
We have been using AI ever since Google perfected machine translation, Yahoo built search engines, and programmers began handling big data. The novelty? No coding skills are needed anymore: AI responds to everyday language, tailored to our instructions, whether colloquial or academic.
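For readers who still program, that same everyday-language instruction can also be sent to a model programmatically. Here is a minimal sketch using OpenAI's Python client; the client library, the `OPENAI_API_KEY` environment variable, and the model name are assumptions for illustration, not part of the original argument:

```python
# Minimal sketch: driving an LLM with a plain-language instruction.
# Assumes the "openai" package (v1+) is installed and OPENAI_API_KEY
# is set in the environment; the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            # The prompt is ordinary prose, colloquial or academic.
            "content": "Explain, in plain colloquial English, why bans "
                       "on calculators at universities in the 1980s "
                       "did not last.",
        }
    ],
)
print(response.choices[0].message.content)
```

The instruction inside `content` is the entire "program": the same request could be phrased formally or casually and the model would follow it either way, which is exactly the shift described above.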
Merely mentioning these tools opens avenues for analysis:
Does Turnitin Accuse Us of Using AI?
The core issue isn't whether to disclose AI use; its use is inevitable. Even Google searches rely on AI to deliver academic abstracts. The real challenge lies in evaluating a student's understanding. Oral exams? Handwritten tests? Defending every paper? It's context-dependent. Turnitin, the “gold standard,” admits that its plagiarism score is just one factor for teachers to weigh. For AI-generated text, it works acceptably only in English. As of 2025, it is barely learning Spanish; other languages remain untrained.
This text, rich in personal anecdotes, could not have been written by AI. Yet I will use AI to draft its abstract, then refine the result with my own critical thinking and heuristics (areas where AI falls short).
The worst flaw? Formal academic language (“Pizarro arrived with ample provisions…”) triggers false positives far more often than colloquial phrasing (“Pizarro brought loads of food…”). Must seasoned researchers dumb down their writing to satisfy Turnitin? Absurd. A 12-year-old writing like a senior student might reasonably raise a flag, but penalizing natural academic fluency is unjust.
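Turnitin does not publish its detector's internals, but the mechanism behind this kind of false positive is easy to sketch. Many AI-text detectors score how predictable a passage is to a language model (its perplexity): polished, formulaic academic prose tends to read as more “machine-like” than idiosyncratic colloquial phrasing. Below is a minimal illustration in Python using the public GPT-2 model as a stand-in scorer; the model choice is an assumption for the sketch, not Turnitin's method:

```python
# Hypothetical sketch of perplexity-based AI-text scoring, NOT
# Turnitin's proprietary algorithm. Lower perplexity = more
# predictable text = more likely to be flagged as machine-generated
# by detectors of this general kind.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated mean token loss of the text under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean
        # negative log-likelihood as out.loss.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

formal = "Pizarro arrived with ample provisions for the expedition."
colloquial = "Pizarro brought loads of food for the trip."

# Formal, formulaic prose usually scores lower (more "AI-like")
# than colloquial phrasing: the false-positive trap described above.
print(f"formal:     {perplexity(formal):.1f}")
print(f"colloquial: {perplexity(colloquial):.1f}")
```

On sentences this short the numbers are noisy; real detectors presumably aggregate over whole documents. But the directional bias is the same: the more polished and conventional the prose, the more “machine-predictable” it looks to a scorer of this kind.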
Students deserve the same tools as teachers to challenge false positives. I have documented why fairness requires balance: grades shouldn't hinge on flawed AI judgments. Turnitin's business model (institutional licenses at roughly $3 per student) offers no way for individuals to check their own work, which exacerbates the imbalance.