
In Professor Mikhail Pevzner's recent MBA economics class at the Merrick School of Business at the University of Baltimore, the conversation began with a provocation rather than a conclusion. Pevzner, a professor of accounting, assigned an opinion piece in the Wall Street Journal written by U.S. Senator Bernie Sanders, in which he argued that artificial intelligence poses a serious threat to workers, economic stability, and the broader structure of opportunity in the United States. Sanders framed AI not simply as a technological advancement, but as a force that could displace labor at scale and concentrate economic power in ways that are difficult to reverse.
That framing was taken seriously, but it was not taken at face value. The discussion
moved from what the article said to what it assumed, and how those assumptions needed
to be questioned.
The article’s concern about widespread job displacement was not dismissed. Instead,
it was unpacked. If artificial intelligence reduces the demand for certain types of
labor, what follows from that? Does the labor market adjust through reallocation,
as it has in previous technological transitions, or does it fail to do so in this
case? If workers are displaced, where do they go, and under what conditions would
they struggle to find new opportunities? These questions did not have immediate answers,
but they forced the conversation into a different register—one grounded in mechanisms
rather than assertions.
As the discussion unfolded, the article gradually transformed from a policy statement
into something more analytically useful: a hypothesis about how the economy functions
under technological change. Students began to see that the force of the argument depended
not on its rhetoric, but on a set of underlying assumptions about labor markets, firm
behavior, and the distribution of economic gains.
What emerged over the course of the discussion were two distinct ways of thinking
about the future. One view reflected a familiar historical pattern, in which technological
change disrupts existing jobs but ultimately leads to new forms of employment and
higher overall productivity. The other view, more aligned with the concerns raised
in the article, suggested that artificial intelligence might be different—that it
could outpace the economy’s ability to adjust, leaving a persistent gap between those
who benefit from the technology and those who are displaced by it.
One student asked whether it mattered which view was correct if economists couldn't predict the outcome with certainty. The question shifted the entire discussion. It wasn't about choosing sides; it was about understanding what evidence would be needed to support either position, and what assumptions each perspective required one to accept.
The purpose of the discussion was not to resolve that tension. It was to make it visible,
to articulate the assumptions that separate the two perspectives, and to understand
what kinds of evidence would support one view over the other. In doing so, the conversation
moved beyond agreement or disagreement and toward something more fundamental: the
ability to analyze complex, real-world claims with clarity and discipline.
This approach reflects a broader philosophy of how artificial intelligence is used
in the classroom at Merrick. AI is not treated as a substitute for thinking, nor as
something to be avoided. It is treated as a tool that makes certain tasks easier—summarizing,
organizing, drafting—and, in doing so, raises the stakes for the tasks that remain.
When answers become easier to produce, evaluating those answers becomes more important.
The classroom, in this sense, becomes a place where students learn not how to compete
with AI, but how to collaborate with it without surrendering judgment. This is not
a compromise with technology—it is a commitment to ensuring that technological fluency
never replaces intellectual independence. They learn to question outputs, to interrogate
assumptions, and to distinguish between a well-written response and a well-reasoned
one. These are not new skills, but they have taken on renewed importance in an environment where technology can generate convincing answers at scale.
The discussion that began with a bold claim about the risks of artificial intelligence
ultimately led to a more grounded set of questions. How does technological change
reshape labor markets over time? Under what conditions do workers benefit from innovation,
and when do they fall behind? What determines whether the gains from productivity
are broadly shared or narrowly concentrated?
Artificial intelligence may be reshaping the economy, but it is also transforming
the nature of education. At the Merrick School of Business, that transformation is
being met not with avoidance, but with a deliberate effort to use AI to deepen, rather
than replace, the process of learning. Students are expected to use AI tools for research
and drafting—but they're also expected to defend their conclusions, show the assumptions
embedded in AI-generated responses, and articulate what the technology cannot evaluate
on its own. The goal is not to produce better answers, but better questioners.