Machine Learning and Prompt Engineering
an interview with Dr. Mahipal Jadeja
Dr. Mahipal Jadeja is an Assistant Professor of Computer Science and Engineering at MNIT Jaipur. He holds a PhD in Computer Science and his research sits at the intersection of artificial intelligence and education. Dr. Jadeja regularly works with students, teachers, and parents to build age-appropriate AI literacy, helping them understand how AI systems work and how to use them thoughtfully and responsibly. He has an active publication record and has been featured by OpenAI.
In simple language, how does machine learning work?
Machine learning is a way of teaching computers to learn from examples instead of fixed rules.
A simple way to understand it is to think of a student during a school year. The student learns from books, lectures, and practice questions. Similarly, a machine learning model learns patterns from a large amount of data. This is called the training phase.
What really matters is not how well the student does in class practice, but how well they perform in the final exam, where the questions are new. In the same way, a good machine learning model is judged during the test phase, by how well it works on unseen data.
One key difference from humans is that machines need far more examples. A child can learn the difference between a dog and a cat from just a few pictures, but a computer may need thousands.
This reminds us that machines do not think like humans. They only recognize patterns based on the data they are given.
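For readers who would like to see what the training and test phases look like in practice, here is a minimal sketch using the scikit-learn library and its built-in handwritten-digits dataset. The dataset and model are illustrative choices only.

```python
# A minimal sketch of the training and test phases with scikit-learn.
# The digits dataset and logistic regression model are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Load a small dataset of labelled examples (images of handwritten digits).
X, y = load_digits(return_X_y=True)

# Hold back 25% of the examples as the "final exam": data the model never
# sees while it is learning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Training phase: the model looks for patterns in the training examples.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Test phase: the model is judged on examples it has never seen before.
print("Accuracy on practice (training) data:", model.score(X_train, y_train))
print("Accuracy on the final exam (test) data:", model.score(X_test, y_test))
```

The second number, the score on unseen data, is the one that matters, just as the final exam matters more than class practice.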
Two broad problems with machine learning models are overfitting and underfitting. Can you explain what these are?
Overfitting happens when a model learns the training data too well, but fails on new data. Using our student analogy, it is like a student who scores very high on practice questions but performs poorly in the final exam. This usually happens because the student memorizes patterns instead of understanding concepts.
For example, imagine a model trained to tell cats from dogs. If, in the training data, most cat images show cats sitting on sofas and dog images do not, the model may wrongly learn that the presence of a sofa means the image is a cat. This works well during training, but fails during the test phase when a dog might also be sitting on a sofa. The model focused on an irrelevant detail rather than true features.
Underfitting is the opposite problem. Here, the model does not learn enough from the training data and performs poorly even on practice examples. In student terms, this is like someone who never understood the subject properly and therefore performs badly in both class tests and the final exam.
In short, overfitting means learning too much of the wrong things, while underfitting means not learning enough of the right things.
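To see this distinction numerically, here is a rough sketch that trains one overly flexible model and one overly simple model on the same data; the decision trees and dataset below are illustrative choices, not a prescription.

```python
# A rough sketch of overfitting and underfitting, using decision trees on
# the scikit-learn digits dataset (illustrative choices only).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# An unconstrained tree can memorize the training data: the "memorizing
# student" who aces practice questions but slips on the final exam.
overfit_model = DecisionTreeClassifier(random_state=0)
overfit_model.fit(X_train, y_train)

# A tree limited to a single split is too simple to capture the patterns:
# it does poorly on both the practice questions and the final exam.
underfit_model = DecisionTreeClassifier(max_depth=1, random_state=0)
underfit_model.fit(X_train, y_train)

for name, m in [("overfit", overfit_model), ("underfit", underfit_model)]:
    print(name,
          "train:", round(m.score(X_train, y_train), 2),
          "test:", round(m.score(X_test, y_test), 2))
```

Typically the unconstrained tree scores almost perfectly on the training data but noticeably lower on the test data, while the depth-one tree scores poorly on both.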
If you could give just one important tip for prompt engineering, what would it be?
If I had to give just one important tip for prompt engineering, it would be this: Before asking a question, be clear about whom you are asking.
Large language models respond very differently depending on the role you assign them. When you ask a question without context, the answer is often generic. But when you clearly define a role, the response becomes more focused and thoughtful.
For example, if you are asking a question about physics, you can ask the model to respond as a physicist, or as a teacher explaining the idea to beginners. This simple step often improves clarity, depth, and relevance. It is similar to real life: asking a general librarian a question gives you a broad answer, while asking a domain expert the same question gives you a deeper and more precise explanation.
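In practice, assigning a role is often just the first message in the conversation. The sketch below uses the openai Python package with a placeholder model name; the exact library, model, and wording you use may differ.

```python
# A rough sketch of role prompting with the openai Python package.
# The model name is a placeholder; substitute whichever model you use.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The system message assigns the role before any question is asked.
        {"role": "system",
         "content": "You are a physics teacher explaining ideas to complete beginners."},
        {"role": "user",
         "content": "Why does a ball thrown upward come back down?"},
    ],
)
print(response.choices[0].message.content)
```

Changing only the system message, say from "physics teacher for beginners" to "research physicist", typically changes the depth and tone of the answer.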
Using source-first prompting can reduce the chance of being misled. What is it, and how can we implement it?
Source-first prompting means giving the model reliable information first, and then asking your question based on that information. A simple way to understand this is through a human analogy. If you ask someone a question about a topic they studied years ago, they might give a rough or incomplete answer. But if you first ask them to read a specific chapter and then ask the same question, their response is usually clearer and more accurate.
The same idea applies to large language models. When we provide trusted sources such as a document, article, or PDF before asking a question, the model can base its response on that material instead of guessing. For example, in a text summarization task, it is better to upload the document first and then ask for a summary. This reduces the chance of incorrect or made-up information and improves alignment with the actual content.
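A minimal sketch of this pattern, again using the openai Python package with a placeholder model name; chapter.txt stands in for whatever trusted source you want the model to rely on.

```python
# A rough sketch of source-first prompting: supply the trusted text first,
# then ask the question about it. The model name is a placeholder and
# chapter.txt is a hypothetical source file.
from openai import OpenAI

client = OpenAI()

with open("chapter.txt", encoding="utf-8") as f:
    source_text = f.read()  # the trusted material the answer should rest on

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Answer only from the provided source. If the answer is "
                    "not in the source, say you do not know."},
        {"role": "user",
         "content": f"Source:\n{source_text}\n\n"
                    "Question: Summarize the main argument of this source "
                    "in three sentences."},
    ],
)
print(response.choices[0].message.content)
```

The instruction to say "I do not know" when the source is silent is a small extra guard against made-up answers.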
How does bias creep into AI model training?
Bias creeps into AI model training mainly through the data used to train the model.
Large language models such as ChatGPT and Gemini are trained on enormous amounts of text collected from the internet. This data reflects human society, and like society, it contains imbalances, stereotypes, and gaps. When certain patterns appear repeatedly in the data, the model learns them.
For example, a large portion of online content is written in English. As a result, these models tend to perform better in English than in many other languages. This is not intentional bias, but a result of data imbalance.
Similar effects occur in other areas as well. If certain viewpoints, cultures, or professions appear more often in the training data, the model may reflect those patterns more strongly in its responses.
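A toy sketch of how imbalance alone can tilt a model's behaviour: train a simple classifier on data where one group vastly outnumbers the other, then test it on a balanced mix. All numbers and distributions below are invented purely for illustration.

```python
# A toy sketch of data imbalance: the model ends up favouring whatever it
# saw most often during training. All numbers here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 950 training examples of group A and only 50 of group B, drawn from two
# overlapping distributions: an imbalanced training set.
X_a = rng.normal(loc=0.0, scale=1.0, size=(950, 2))
X_b = rng.normal(loc=1.0, scale=1.0, size=(50, 2))
X = np.vstack([X_a, X_b])
y = np.array([0] * 950 + [1] * 50)

model = LogisticRegression().fit(X, y)

# On a balanced set of new examples, the predictions still lean heavily
# toward the majority group from training.
X_new = np.vstack([rng.normal(0.0, 1.0, size=(500, 2)),
                   rng.normal(1.0, 1.0, size=(500, 2))])
predictions = model.predict(X_new)
print("Fraction predicted as the majority group:", (predictions == 0).mean())
```

No one programmed the model to prefer group A; the preference comes entirely from what the training data contained.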
Given the current state of technology, are there any specific situations when you advise people not to use AI?
Yes. There are certain situations where I advise people not to use AI, especially when it starts replacing their own thinking.
A simple way to understand this is the difference between information and knowledge. AI is very good at handling information. It can summarize a chapter, organize notes, or help you quickly understand what something is about. But real knowledge comes from thinking, questioning, and understanding things on your own.
One situation where AI should be avoided is using it as a shortcut for thinking. For example, letting AI do all the thinking for schoolwork, problem solving, or forming opinions can slowly reduce a child’s ability to reason, question, and learn deeply.
AI is best used for routine or supportive tasks. It should help learners, not replace their curiosity, judgment, or creativity. This is exactly why courses offered by Critikid are extremely relevant and important today. They focus on building critical thinking skills so that children learn how to question information, think independently, and use tools like AI wisely rather than blindly trusting them.
In short, AI should support learning, but thinking should always remain human.
How can people learn more about you and your work?
People can learn more about my work through my LinkedIn profile, where I regularly share thoughts and practical insights on Generative AI in education. That is also the best place to understand my background and ongoing work.
I am always happy to connect with educators, parents, and learners who are interested in understanding AI in a thoughtful and responsible way. Anyone interested in collaboration, workshops, or learning sessions is welcome to reach out to me via LinkedIn or email. I strongly believe that learning how to use Generative AI wisely, while continuing to think critically, is becoming an essential life skill.