
Artificial intelligence is already deeply embedded in areas like hiring, operations, customer experience, and strategy. But as organizations rush to adopt AI, a gap is emerging: most managers are trained to use AI tools, but not to understand, evaluate, or govern them.
At the University of Baltimore’s Merrick School of Business, we believe that’s not enough.
Our Master of Science in Artificial Intelligence for Business is built for a different kind of leader: one who can design, evaluate, and govern AI systems, not just consume their outputs. This means focusing not only on technical basics (which are becoming more accessible daily), but also on building entire workflows while understanding the legal, ethical, and operational limits of those systems.
Many academic programs focus on helping students sharpen their quantitative skills and learn to turn data insights into business decisions. That's valuable work—but it's only half the equation. The leaders who will thrive aren't just those who can read an AI-generated report or craft an effective prompt. They're the ones who understand what's happening under the hood: how models are built, where bias can creep in, what the limitations are, and how to communicate those realities to the C-suite.
The focus of AI use is rapidly shifting from “text and report production” to agentic task execution. When large language models can generate structured datasets, construct dashboards, write executable formulas, or perform multi-step operational tasks under human direction, the critical questions become: Who designed it? Who constrained it? Who validated it? And who takes responsibility when something goes wrong?
AI isn't just ‘recommending’ anymore; it decides and acts (within constraints).
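To make those questions concrete, here is a minimal sketch in Python of one common guardrail pattern for agentic systems: the agent proposes an action, a human-designed constraint layer approves or escalates it, and an audit record keeps responsibility traceable. The refund scenario, the policy limit, and every name below are illustrative assumptions, not a specific vendor's API or this program's curriculum.

```python
# A minimal, hypothetical sketch of a guardrail pattern for agentic AI:
# the agent proposes, a human-written policy vets, and an audit record
# preserves who designed, constrained, and validated the step.
# The refund scenario and all names here are invented for illustration.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProposedAction:
    kind: str          # e.g., "issue_refund"
    amount: float      # dollar value the agent wants to act on
    rationale: str     # the model's stated reason, kept for review


@dataclass
class AuditRecord:
    action: ProposedAction
    approved: bool
    checked_by: str    # the constraint policy that vetted the action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


REFUND_LIMIT = 200.00  # assumed policy ceiling; larger refunds go to a human


def vet_action(action: ProposedAction) -> AuditRecord:
    """Apply a human-designed constraint before the agent may act."""
    approved = action.kind == "issue_refund" and action.amount <= REFUND_LIMIT
    return AuditRecord(action, approved, checked_by="refund_policy_v1")


# The agent "decides and acts," but only inside the constraint envelope.
proposal = ProposedAction("issue_refund", 175.00, "duplicate charge detected")
record = vet_action(proposal)

if record.approved:
    print(f"Executing {proposal.kind} for ${proposal.amount:.2f}")
else:
    print("Escalating to a human reviewer")  # accountability stays assignable
print(f"Audit trail: {record}")
```

The point is not the code itself but the division of labor it encodes: the model proposes, a policy that a person wrote disposes, and the audit trail keeps responsibility assignable when something goes wrong.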
Consider a striking example from Nvidia, the publicly traded semiconductor company. The company recently reported reducing a critical chipset design task, one that typically requires eight engineers and 10 months, to an overnight process completed by a single GPU. Yet even with this breakthrough, Nvidia's chief scientist emphasizes they're nowhere close to asking AI to simply "design our next chip." Why? Because specialized AIs are needed for each part of production, and each system must be designed, vetted, supervised, and integrated. And critically: What happens when it goes wrong?
AI is already making decisions that affect hiring, financial forecasting, customer interactions, and public policy. Getting it wrong leads to bias, regulatory exposure, misalignment, and loss of trust. These are governance problems and leadership problems, and they are ones AI cannot solve on its own.
Today’s leaders need to understand how models are built and trained, where bias and risk enter the system, what AI can and cannot reliably do, and how to communicate these realities to executives and stakeholders.
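As one small illustration of how bias can be surfaced, the sketch below applies the “four-fifths” adverse-impact check, a standard screen in U.S. hiring-compliance practice, to a tiny dataset. The group labels and applicant counts are invented purely for illustration.

```python
# A minimal sketch of one standard bias check: the "four-fifths"
# adverse-impact ratio, which compares each group's selection rate
# to the highest group's rate. Counts below are invented for illustration.

from typing import Dict, Tuple

# group -> (number selected by the model, number of applicants); assumed data
outcomes: Dict[str, Tuple[int, int]] = {
    "group_a": (45, 100),
    "group_b": (28, 100),
}

selection_rates = {g: sel / total for g, (sel, total) in outcomes.items()}
highest = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest
    flag = "OK" if impact_ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.0%}, ratio {impact_ratio:.2f} -> {flag}")
```

A leader who understands even this simple check knows what to ask for when a vendor claims a hiring model is "fair," and knows that passing one screen does not settle the governance question.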

Dr. Fowler welcomes one-on-one conversations. Email her at dfowler@ubalt.edu.