
Written by Sheik Md Akij, MBA student and graduate assistant in UBalt’s Center for Entrepreneurship and Innovation.
At the University of Baltimore, students are asking a question their professors couldn't have imagined a few years ago: Will I lose my job to AI, or will I use AI to build my career? The answer, according to Frank Xu, professor of cyber forensics and director of the M.S. in Forensic Sciences program, depends entirely on whether students learn to lead AI rather than follow it blindly.
As I sat and listened to him, the conversation moved from AI in education to its broader implications. According to Xu, the future will belong not to people who simply use AI, but to those who learn to guide it. One thought stood out from the discussion: students should use AI but never surrender their critical thinking to it.
Today, students get contradictory signals about AI from the people who teach and lead them. Faculty tell them AI will transform their careers, while course policies warn them that using ChatGPT could violate academic integrity.
Professor Xu's answer is simple: it's not about saying yes or no to AI; it's about taking responsibility. Getting the most out of AI means staying in charge. You should be able to explain what you did, why you did it and whether the result is right. The real question isn't whether a student used AI, but whether the student still owns the thinking behind the work.
This matters most for students outside technical fields. Business students chose their majors to focus on people, markets and ideas rather than science, Xu noted. Now AI is showing up in their world, and for some, its presence feels inescapable.
But AI does not have to be scary. MBA students like me do not need to become machine learning experts; instead, we need to learn how to match real customer needs with the right tools. What problems do people have? What tasks are slow or boring? Which jobs could be done better with help from AI? Those are business questions, not science questions.
That is where AI stops being a worry and becomes an opportunity.
Professor Xu offered a useful way to see it: years ago, every company needed a website. Then every company needed a mobile app. Now every company needs AI. In his view, AI helps small startups most, because it lets small teams do work that used to need bigger staff.
Xu also said technology is moving fast. Most people still think of AI as chatbots. But the frontier has already moved.
The shift from chatbot to agent is a significant one. A chatbot answers one question at a time. An agent can take a goal, break it into steps, use different tools along the way and keep working until the task is done. It can browse a website, read a file and check its own output. That moves AI from being a helper you talk to, to being a partner that gets work done with you.
Students need an honest message too. The old line, “AI will not replace you,” is not helpful anymore. A better version is this: AI alone may not replace you, but someone who knows how to use AI well may. This is not meant to scare anyone. It is meant to push people to act.
Professor Xu's advice is practical. Learn some basic programming. Build your thinking skills. Ask better questions. Always check the answers AI gives you instead of just trusting them.
The same principle sits at the heart of his current project on trustworthy AI. In cybersecurity and digital investigation, AI outputs become useful only when investigators can trace them back to factual evidence, inspect them and validate them. His work helps students learn to use AI not as a black box, but as a partner whose answers must always be checked against the facts. That approach builds real AI literacy.
Many students today use AI only for quick help. They enter a question, copy whatever appears and move on without reflection. But good AI work is deeper. It takes trying, editing and rethinking. The future will belong less to people who can simply use ChatGPT and more to those who can lead AI with a clear goal.

The next wave of founders may not be the ones building huge AI models from scratch. They may be the ones who take existing AI tools and use them to fix real problems in education, media, retail, health and local services. That is where real innovation lives.
AI is not just a tech story. It is a leadership story. It is about whether schools can help students feel confident instead of afraid, and whether business students can see themselves as the bridge between human needs and new tools. If we get this right, the future of AI will not belong only to engineers. It will also belong to students who learn to question, check and lead.