LLM Literacy
Dr. Kiron Ravindran holds a Ph.D. in Information Systems and a Post-Graduate Diploma in Management (International Business).
Hi! Tell us about your background and your work.
Spain is what I call home now. I come from India and have lived in various parts of the world.
I am a teacher by profession. I teach courses in technology and innovation management in graduate management programs such as the MBA at IE Business School.
When I am free, I like to play chess, solve crosswords, or go bikepacking.
LLMs can be wrong. What quick checks do you recommend running before accepting an AI-generated answer?
If the question is likely to produce an easily verifiable answer (hyperlinks to sources, facts in the public domain, quotes attributed to people), then verify it.
The challenge is when the answer is not easily verifiable—for instance, the opinion of a group of people on a certain topic, or the consensus in a particular stream of research. My first step would be to read each line with healthy skepticism, asking what would happen if I had made that claim myself and it was later proven false. I have yet to see a dead giveaway, so the key is to stay vigilant: any part of the output could be a hallucination.
Now, if the question is not a search-like query but more of, say, a request for help with editing, then the risks are lower and the output may need no second-guessing.
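A quick way to automate the first of those checks, if you want one: the short Python sketch below pulls the hyperlinks out of an AI-generated answer and confirms that each one at least resolves. It is illustrative only (the check_links helper and the URL pattern are assumptions for this sketch, not part of Dr. Ravindran's workflow). A dead link is a classic sign of a hallucinated citation, while a live link still has to be read.

```python
import re
import urllib.request

def check_links(answer: str) -> dict[str, bool]:
    """Return {url: resolves} for each hyperlink cited in an answer."""
    raw = re.findall(r"https?://\S+", answer)
    urls = [u.rstrip(".,;:!?)]}\"'") for u in raw]  # trim trailing punctuation
    results = {}
    for url in urls:
        try:
            # HEAD keeps the request light. Some servers reject HEAD, so a
            # failure here means "check by hand," not "definitely fabricated."
            req = urllib.request.Request(
                url, method="HEAD", headers={"User-Agent": "link-check/0.1"}
            )
            with urllib.request.urlopen(req, timeout=10) as resp:
                results[url] = resp.status < 400
        except Exception:
            results[url] = False
    return results

if __name__ == "__main__":
    answer = "Sources: https://example.com/ and https://example.com/made-up-study."
    for url, ok in check_links(answer).items():
        print(("resolves:  " if ok else "dead link: ") + url)
```

Even when every link resolves, the second half of the check, whether the page actually supports the claim, still has to be done by a human reader.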
You warn that generative tools can create “experts who look competent but lack understanding.” In the world of AI, why does understanding still matter?
Understanding still matters because we’re usually evaluated not just by what we hand in, but by the choices we make and the judgment we show. In today’s knowledge economy, it’s not enough to produce polished answers—you need to know what they mean, when they apply, and how to act on them. Generative tools can help with the output, but if you can’t explain it or adapt it, you’re not equipped to lead or decide. That’s where the risk of looking competent without truly understanding becomes real.
How can we keep students practicing the slow, messy thinking that builds real skill without banning chatbots altogether?
We tend to blame students for taking shortcuts, but they’re just responding rationally to the system we’ve built. If we reward efficiency and output over process and understanding, it’s no surprise they’ll use tools like LLMs to get the job done quickly.
The real issue isn’t student behavior; it’s that our systems often assess final products, not the learning journey. We have put in place outcome-only goals like grades, and not process goals like learning and effort.
Because measuring learning at scale is hard, we default to blunt instruments of measurement like final essays or exams. Naturally, this pushes students to optimize for results rather than depth.
To encourage slower, more thoughtful thinking, we need to redesign incentives and our systems, not ban the tools.
If AI keeps getting faster and smarter, what human capabilities will matter most, and how can young people start developing them today?
Humans are likely to always use machines as an extension of their capabilities. That doesn’t mean tasks will be equally divided in this hybrid team, nor should they be—the goal is to split the work in a way that produces the best output. But this delegation of tasks comes with a price. Think of it like a three-legged race: if one teammate isn’t pulling their weight, the other has to drag them along, slowing the team down. If the human doesn’t contribute meaningfully, the output from the AI will be of poorer quality and the AI may choose to bypass the human’s input or be throttled by it.
So the real human capability that will matter most is the ability to work well with AI by guiding it, steering it, challenging its output, and pushing it in better directions. When we had the typewriter, we learned the skill of manipulating that weird machine with its scattering of letters to churn out the standard 70 words per minute. Now machines can write on their own, and the skill we need to learn is how to manipulate this new weird machine, one that hallucinates yet can churn out 700 words per minute.
Our job isn’t to hand over the task entirely, but to lead the AI: ask it to rethink, reword, recheck. That kind of active collaboration is what will keep us in the race.
Is AI any different from other types of “extended minds” like calculators and GPS?
Yes, AI is fundamentally different from tools like calculators or GPS or typewriters. Those are classic examples of extended minds, but they have no agency. They do what we tell them.
Generative AI, on the other hand, has a kind of autonomy. It crafts narratives, frames ideas, and can persuade us of the value of its output even when that output might be wrong or meaningless.
With a calculator or a GPS, we delegate a specific function and stay in control. But with GenAI, if we don’t have clarity on what we want, or if we don’t stay critically engaged, we risk being misled. As Yuval Noah Harari points out, GenAI isn’t just a tool; it’s an agent. And agents have agency. If we don’t actively guide them, they can take us off course.
AI is already taking over many entry-level tasks. What can workplaces put in place so that beginners get the hands-on experience necessary to build good judgment?
When companies stop hiring at the entry level and start laying off experienced staff, they create what’s known as an experience gap. This will eventually backfire. Without a pipeline of people who have developed judgment over time, organizations will struggle to critically assess AI-generated output or recognize when something is off.
The real task for companies is to build systems where internal skills can grow and be curated. That might mean offering meaningful internship programs, hiring from within, or consciously de-emphasizing short-term efficiency in favor of the slower, often inefficient process of real learning. It’s that friction of working through problems, not around them, that builds capability.
You endorse greater transparency surrounding AI usage. What simple disclosure habits do you recommend?
I think this is a changing dynamic. Today, no one feels the need to say, “By the way, I used PowerPoint,” when giving a presentation. It’s assumed. The value lies in the narrative, not the tool. I believe we’re heading in the same direction with GenAI. Using tools like ChatGPT to craft or polish a narrative will soon be seen as part of the default workflow, and the focus will shift to the quality of the thinking, not whether AI was involved.
For example, while answering these blog questions, I typed very little. I dictated my responses and asked ChatGPT to clean up my comment. Today, I feel a certain obligation to disclose that. But in the near future, I suspect we won’t need that level of transparency because AI use will be assumed, and what will matter is the clarity, originality, and coherence of the ideas.
Learn More
Read Dr. Ravindran's article in IE Insights, Is AI Creating Incompetent Experts?
Courses

Fallacy Detectors
Develop the skills to tackle logical fallacies through a series of 10 science-fiction videos with activities. Recommended for ages 8 and up.

Social Media Simulator
Teach your kids to spot misinformation and manipulation in a safe and controlled environment before they face the real thing.

A Statistical Odyssey
Learn about common mistakes in data analysis with an interactive space adventure. Recommended for ages 12 and up.

Logic for Teens
Learn how to make sense of complicated arguments with 14 video lessons and activities. Recommended for ages 13 and up.

Emotional Intelligence
Learn to recognize, understand, and manage your emotions. Designed by child psychologist Ronald Crouch, Ph.D. Recommended for ages 5 and up.
Worksheets

Logical Fallacies Worksheets and Lesson Plans
Teach your grades 3-7 students about ten common logical fallacies with these engaging and easy-to-use lesson plans and worksheets.

Symbolic Logic Worksheets
Worksheets covering the basics of symbolic logic for children ages 12 and up.

Elementary School Worksheets and Lesson Plans
These lesson plans and worksheets teach students in grades 2-5 about superstitions, different perspectives, facts and opinions, the false dilemma fallacy, and probability.

Middle School Worksheets and Lesson Plans
These lesson plans and worksheets teach students in grades 5-8 about false memories, confirmation bias, Occam’s razor, the strawman fallacy, and pareidolia.

High School Worksheets and Lesson Plans
These lesson plans and worksheets teach students in grades 8-12 about critical thinking, the appeal to nature fallacy, correlation versus causation, the placebo effect, and weasel words.

Statistical Shenanigans Worksheets and Lesson Plans
These lesson plans and worksheets teach students in grades 9 and up the statistical principles they need to analyze data rationally.