Machine Learning and Prompt Engineering

An interview with Dr. Mahipal Jadeja

Dr. Mahipal Jadeja is an Assistant Professor of Computer Science and Engineering at MNIT Jaipur. He holds a PhD in Computer Science and his research sits at the intersection of artificial intelligence and education. Dr. Jadeja regularly works with students, teachers, and parents to build age-appropriate AI literacy, helping them understand how AI systems work and how to use them thoughtfully and responsibly. He has an active publication record and has been featured by OpenAI.

In simple language, how does machine learning work?

Machine learning is a way of teaching computers to learn from examples instead of fixed rules.

A simple way to understand it is to think of a student during a school year. The student learns from books, lectures, and practice questions. Similarly, a machine learning model learns patterns from a large amount of data. This is called the training phase.

What really matters is not how well the student does in class practice, but how well they perform in the final exam, where the questions are new. In the same way, a good machine learning model is judged during the test phase, by how well it works on unseen data.
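The train/test idea above can be sketched in a few lines of Python. This is a toy illustration, not a real machine learning pipeline: the "model" simply learns the average weight of each animal class from training examples, then is judged only on examples it never saw. The weights and class names are made up for the sketch.

```python
import random

# Toy dataset: each example is (weight_kg, label).
# Cats cluster near 4 kg, dogs near 20 kg (invented numbers).
random.seed(0)
data = [(random.gauss(4, 1), "cat") for _ in range(50)] + \
       [(random.gauss(20, 3), "dog") for _ in range(50)]
random.shuffle(data)

# Training phase: the model sees 80% of the examples and learns a
# pattern from them (here, the average weight of each class).
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

means = {}
for label in ("cat", "dog"):
    weights = [w for w, lab in train if lab == label]
    means[label] = sum(weights) / len(weights)

def predict(weight):
    # Classify by whichever learned class average is closer.
    return min(means, key=lambda label: abs(weight - means[label]))

# Test phase: the model is judged on examples it never saw in training.
correct = sum(predict(w) == label for w, label in test)
accuracy = correct / len(test)
print(f"test accuracy: {accuracy:.0%}")
```

The key point is that `accuracy` is computed only on the held-out `test` examples, mirroring the "final exam" in the student analogy.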

One key difference from humans is that machines need far more examples. A child can learn the difference between a dog and a cat from just a few pictures, but a computer may need thousands.

This reminds us that machines do not think like humans. They only recognize patterns based on the data they are given.

Two broad problems with machine learning models are overfitting and underfitting. Can you explain what these are?

Overfitting happens when a model learns the training data too well, but fails on new data. Using our student analogy, it is like a student who scores very high on practice questions but performs poorly in the final exam. This usually happens because the student memorizes patterns instead of understanding concepts.

For example, imagine a model trained to tell cats from dogs. If, in the training data, most cat images show cats sitting on sofas and dog images do not, the model may wrongly learn that the presence of a sofa means the image is a cat. This works well during training but fails during the test phase, when a dog might also be sitting on a sofa. The model has focused on an irrelevant detail rather than on the true features.

Underfitting is the opposite problem. Here, the model does not learn enough from the training data and performs poorly even on practice examples. In student terms, this is like someone who never understood the subject properly and therefore performs badly in both class tests and the final exam.

In short, overfitting means learning too much of the wrong things, while underfitting means not learning enough of the right things.
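Both failure modes can be made concrete with a toy Python sketch. The task and models are invented for illustration: an "overfit" model memorizes every training example in a lookup table (perfect on practice, clueless on the exam), an "underfit" model always predicts the majority class (mediocre everywhere), and a reasonable model learns the actual rule.

```python
import random
random.seed(1)

# Toy task: the true rule is "label is 1 when x > 0.5",
# but 15% of the labels are noisy.
def make_data(n):
    data = []
    for _ in range(n):
        x = random.random()
        label = int(x > 0.5)
        if random.random() < 0.15:
            label = 1 - label
        data.append((x, label))
    return data

train, test = make_data(200), make_data(200)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

# Overfitting: memorize every training example exactly. Perfect on
# training data, but it can only guess on unseen inputs.
table = {x: y for x, y in train}
def overfit(x):
    return table.get(x, 0)

# Underfitting: always predict the majority class. It never learns
# the pattern, so it is mediocre on training and test data alike.
majority = round(sum(y for _, y in train) / len(train))
def underfit(x):
    return majority

# A reasonable model learns the actual rule (threshold at 0.5).
def reasonable(x):
    return int(x > 0.5)

print("overfit    train/test:", accuracy(overfit, train), accuracy(overfit, test))
print("underfit   train/test:", accuracy(underfit, train), accuracy(underfit, test))
print("reasonable train/test:", accuracy(reasonable, train), accuracy(reasonable, test))
```

The telltale signature of overfitting is the large gap between training and test accuracy; underfitting shows low accuracy on both.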

If you could give just one important tip for prompt engineering, what would it be?

If I had to give just one important tip for prompt engineering, it would be this: Before asking a question, be clear about whom you are asking.

Large language models respond very differently depending on the role you assign them. When you ask a question without context, the answer is often generic. But when you clearly define a role, the response becomes more focused and thoughtful.

For example, if you are asking a question about physics, you can ask the model to respond as a physicist, or as a teacher explaining the idea to beginners. This simple step often improves clarity, depth, and relevance. It is similar to real life. Asking a question to a general librarian gives you a broad answer. Asking the same question to a domain expert gives you a deeper and more precise explanation.
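In code, role assignment usually means setting a "system" message in a chat-style prompt. The sketch below only builds the message structure; `build_prompt` is a hypothetical helper, and no particular LLM provider or API is assumed.

```python
# A minimal sketch of role-based prompting. Chat-style LLM APIs
# typically accept a list of messages, where a "system" message
# sets the role the model should adopt.

def build_prompt(role, question):
    """Pair the user's question with an explicit role for the model."""
    return [
        {"role": "system", "content": f"You are {role}."},
        {"role": "user", "content": question},
    ]

question = "Why is the sky blue?"

generic = build_prompt("a helpful assistant", question)
expert = build_prompt(
    "a physicist explaining ideas to complete beginners", question
)

print(expert[0]["content"])
```

The question is identical in both cases; only the assigned role changes, which is what steers the depth and framing of the answer.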

Using source-first prompting can reduce the chance of being misled. What is it, and how can we implement it?

Source-first prompting means giving the model reliable information first, and then asking your question based on that information. A simple way to understand this is through a human analogy. If you ask someone a question about a topic they studied years ago, they might give a rough or incomplete answer. But if you first ask them to read a specific chapter and then ask the same question, their response is usually clearer and more accurate.

The same idea applies to large language models. When we provide trusted sources such as a document, article, or PDF before asking a question, the model can base its response on that material instead of guessing. For example, in a text summarization task, it is better to upload the document first and then ask for a summary. This reduces the chance of incorrect or made-up information and improves alignment with the actual content.
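As a sketch, source-first prompting just means the trusted material comes before the question in the prompt, with an instruction to answer only from it. The helper name `source_first_prompt` and the wording of the instruction are illustrative assumptions, not a fixed recipe.

```python
# A sketch of source-first prompting: the source text is placed in the
# prompt *before* the question, and the model is told to stay within it.

def source_first_prompt(source_text, question):
    return (
        "Answer using ONLY the source below. If the answer is not in "
        "the source, say you don't know.\n\n"
        f"SOURCE:\n{source_text}\n\n"
        f"QUESTION: {question}"
    )

source = "Photosynthesis converts light energy into chemical energy in plants."
prompt = source_first_prompt(source, "What does photosynthesis convert?")
print(prompt)
```

The explicit "say you don't know" escape hatch matters: it gives the model a sanctioned alternative to making something up when the source does not contain the answer.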

How does bias creep into AI model training?

Bias creeps into AI model training mainly through the data used to train the model.

Large language models such as ChatGPT and Gemini are trained on enormous amounts of text collected from the internet. This data reflects human society, and like society, it contains imbalances, stereotypes, and gaps. When certain patterns appear repeatedly in the data, the model learns them.

For example, a large portion of online content is written in English. As a result, these models tend to perform better in English than in many other languages. This is not intentional bias, but a result of data imbalance.
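The imbalance effect can be sketched with a toy word-frequency "model." The two-sentence corpus and the 90/10 split are invented for illustration: when one language dominates the training data, the model simply accumulates far more evidence about that language.

```python
from collections import Counter

# Made-up corpus: 90% English sentences, 10% French.
corpus = (["the cat sat on the mat"] * 90
          + ["le chat est sur le tapis"] * 10)

# A trivial "model": word-frequency counts learned from the corpus.
counts = Counter(word for sentence in corpus for word in sentence.split())

# How much evidence did the model gather about each language's words?
english_mass = sum(counts[w] for w in ("the", "cat", "sat", "on", "mat"))
french_mass = sum(counts[w] for w in ("le", "chat", "est", "sur", "tapis"))
print(english_mass, french_mass)
```

Nothing in the counting code favors English; the skew comes entirely from the data, which is exactly how unintentional bias enters real models.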

Similar effects occur in other areas as well. If certain viewpoints, cultures, or professions appear more often in the training data, the model may reflect those patterns more strongly in its responses.

Given the current state of technology, are there any specific situations when you advise people not to use AI?

Yes. There are certain situations where I advise people not to use AI, especially when it starts replacing their own thinking.

A simple way to understand this is the difference between information and knowledge. AI is very good at handling information. It can summarize a chapter, organize notes, or help you quickly understand what something is about. But real knowledge comes from thinking, questioning, and understanding things on your own.

One situation where AI should be avoided is using it as a shortcut for thinking. For example, letting AI do all the thinking for schoolwork, problem solving, or forming opinions can slowly reduce a child’s ability to reason, question, and learn deeply.

AI is best used for routine or supportive tasks. It should help learners, not replace their curiosity, judgment, or creativity. This is exactly why courses offered by Critikid are extremely relevant and important today. They focus on building critical thinking skills so that children learn how to question information, think independently, and use tools like AI wisely rather than blindly trusting them.

In short, AI should support learning, but thinking should always remain human.

How can people learn more about you and your work?

People can learn more about my work through my LinkedIn profile, where I regularly share thoughts and practical insights on Generative AI in education. That is also the best place to understand my background and ongoing work.

I am always happy to connect with educators, parents, and learners who are interested in understanding AI in a thoughtful and responsible way. Anyone interested in collaboration, workshops, or learning sessions is welcome to reach out to me via LinkedIn or email. I strongly believe that learning how to use Generative AI wisely, while continuing to think critically, is becoming an essential life skill.

