This House Believes…

If we demand that AI 'must' create meaningful jobs, we abdicate our own responsibility. Reflecting on my recent Oxford Union debate, I explore why algorithms don't lead organisations, and why human dignity cannot be programmed into a neural network.

Oxford Union

Stepping onto the historic floor of the Oxford Union for the second time, I was ready for an intellectual challenge. But when I received the motion and my assigned side, I was momentarily stunned.

The motion read: "This House believes AI technologies must create opportunities for meaningful work and well-paying jobs, rather than contributing to AI-induced technological unemployment."

My assignment? The Opposition. More specifically, I was the closing speaker. This meant my job was to synthesise the entire debate and systematically dismantle the Proposition's arguments.

Oxford Union

At first glance, opposing the idea that AI should create meaningful work feels deeply counterintuitive, especially for someone who builds and implements tech solutions every day. How could I argue against job creation? How could I advocate for unemployment?

To find the answer, I did what I always do when confronted with a complex, seemingly unsolvable problem: I stepped back, went to the library, surrounded myself with research, and changed my angle of vision.

Sainsbury Library at the Saïd Business School

That is when the core flaw of the proposition revealed itself. The danger wasn't in the desired outcome: meaningful work is a universal good. The danger, and the fatal flaw in our modern tech discourse, was hidden in a single word: 

"must"

By stating that AI must create opportunities, we are anthropomorphising a mathematical model. We are assigning moral agency and societal responsibility to lines of code, thereby abdicating our own.

Therefore, I decided to build the argument around three pillars.

1. The Illusion of Algorithmic Agency (What We Learn)

In the tech industry, we often confuse capability with agency. Yes, AI can process millions of data points, recognise patterns, and generate text, but it has no inherent intent. It does not "want" to create jobs, nor does it "want" to destroy them. It only optimises for the parameters we set.

Transformation is fundamentally a human story: it comes through a change in people, not through obligations placed on algorithms. AI can predict demand, flag risk, and streamline processes. But:

"Technology makes the forecast. The engineer explains it. And then the human makes the decision."

— Prof. Arora

AI is not the decision maker. It never was. It is a tool that informs human judgment. 

Rhodes House

2. The Danger of Historical Data vs. Human Courage (Who We Are)

If we wait for AI to proactively create opportunities, we will find ourselves trapped in the past. Machine learning models are trained on historical data. They look backward to predict the future.

I know the limitations of historical data intimately. When I walked into my first engineering role, I was the only woman in the room, in a field where people who looked like me were not historically expected to stay, let alone lead.

If an algorithm had been in charge of "creating opportunities" based on the data of that era, I would have been filtered away as a statistical error because the model would have optimised for the historical norm, which was overwhelmingly male.

I am here today not because technology created an opportunity for me, but because human beings made human decisions to invest in another human being. Leaders saw potential where an algorithm would have only seen a deviation from the mean. Opportunity does not magically flow from technology; it flows from the conscious choices people make around it to break historical cycles.

Jesus College

3. The Anatomy of "Meaningful" Work (How We Work)

The proposition argued that AI must create meaningful work. But what defines meaning?

Today, I work in a company that operates on a remote model, collaborating with colleagues who are distributed everywhere from New Zealand to the Bay Area. Since the pandemic, this borderless approach has allowed us to work with the best talent on the planet. Yet, when we reflect on what draws people to our work, the answers go far beyond the technology itself. The driver is often the flexibility it affords: the ability to engage deeply in meaningful work whilst also having the time to be present with our families.

Did an AI model create that? Did an algorithm mandate a corporate culture of trust, flexibility, and human dignity? Absolutely not.

Leaders made that decision. Human-centric governance made that possible. That is meaningful work. Not mandated by a motion. Built by humans who chose to do better.

View from Jesus College

The Decision is Ours

To close the argument at the Oxford Union, I looked out at the room filled with brilliant minds from across the globe and reminded them of why we were there.

"And to my friends from Stream 1, our 2025 cohort, do you remember what we were told on the very last day of Module 4? Keep learning, and do it together. But look around this room. 2021. 2022. 2023. 2024. 2025. Every cohort. Every stream. All of us are here together. And that is exactly the point. 

That is exactly why we are all here today. We kept learning. We did it together. Engineers and lawyers, investors and founders, technologists and business leaders. From China, from Germany, from Switzerland, from the United States, from Canada, from Ukraine, from the UK. Every background. Every discipline. And every single one of us knows, because we studied it, debated it, and lived it, that it is not AI that needs to create meaningful work.

It is us.

The technical professionals who build responsibly. The lawyers who govern wisely. The founders who invest in people. The leaders who choose to bring others along. That is where opportunity comes from. 

So when the proposition asks this House to place that obligation on a piece of software, remember that Technology makes the prediction. The engineer explains it. And the human makes the decision.

The decision is ours. It has always been ours.

People. Policy. Governance. Trust. Integrity.

Vote in opposition."

The House voted for the opposition.


As we continue to integrate artificial intelligence into every facet of our global economy, we must stop asking what AI must do for us and start asking what we must do with AI. We cannot outsource our moral obligations to algorithms. The responsibility for the future of work rests exactly where it always has: in our own hands.

Oxford Union Library

Photography by Fisher Studios and from my personal archive.