The Ethics Council

Sarah Doringoe opened Outlook on Tuesday morning. As the recently appointed AI Ethics Lead at TFT, a technology company, she found an urgent email waiting: their hiring algorithm was showing concerning patterns in its recommendations.
"The system is significantly favouring candidates from top-tier universities," wrote Marcus from HR. "We're missing out on diverse talent, and I'm worried we're perpetuating existing inequalities in tech." Sarah was frustrated. The hiring algorithm had been celebrated as a success story just months ago, processing thousands of applications with supposed objectivity. Now, it was revealing the hidden biases in its training data – data that reflected decades of institutional prejudices in the tech industry.
She opened a new message and typed: "Emergency Ethics Council Meeting Required - Hiring Algorithm Bias Detection." The Ethics Council had been her brainchild when she joined TFT. Unlike traditional corporate structures, where responsibility for AI ethics was scattered across departments, Sarah had insisted on assembling a diverse team of experts who would collectively oversee the ethical implications of the company's AI systems.
Two hours later, Sarah sat at the head of the conference table, surrounded by her team: Dr Chen, a full-time philosopher specialising in ethics and technology; Maria Rodriguez, a compliance expert; Dr Williams, a sociologist focusing on technological inequality who served as an external consultant and joined remotely; and several AI engineers and data scientists.
"The problem isn't just technical," Dr Williams explained, pointing to graphs showing the algorithm's bias patterns.
"We're seeing the intersection of social inequalities with technological systems. The algorithm is learning from historical hiring decisions influenced by systemic biases."
Maria added: "And with AI systems operating at this scale, we're not just perpetuating these biases; we're amplifying them at digital speed across thousands of decisions. Moreover, under the EU AI Act, hiring systems are classified as high-risk AI, requiring appropriate human oversight measures and transparency in how decisions are made. We need to ensure our system meets these regulatory requirements."
One of the engineers raised his hand. "But we used standard industry practices in developing this algorithm. What could we have done differently?"
Dr Chen responded: "This is exactly why AI ethics education is so crucial. Knowing how to build these systems is not enough – we need to understand their social impact. Every decision in AI development has ethical implications, from data selection to deployment."
The engineers were learning from the sociologist about hidden biases in data collection. The compliance team was gaining insights from the philosopher about ethical frameworks. Together, they demonstrated why AI ethics couldn't be solved by any single discipline alone.
Within the next week, they developed a roadmap with both immediate and long-term measures. The immediate actions included retraining the algorithm on more diverse data sets, implementing new bias detection tools, and establishing mandatory AI ethics training for all employees. They also adjusted the process to keep human oversight a key component, so that AI-generated decisions could be reviewed and validated by people.
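To make "bias detection tools" concrete: one common check is the "four-fifths rule", which flags a hiring process when the selection rate for any group falls below 80% of the rate for the most-favoured group. The sketch below is a minimal, hypothetical illustration in Python; the data, column names, and functions are invented for the example and are not TFT's actual implementation.

```python
# Hypothetical sketch of a disparate-impact check (the "four-fifths rule").
# Data, column names ("university_tier", "recommended"), and the 0.8
# threshold interpretation are illustrative only.
from collections import defaultdict

def selection_rates(records, group_key="university_tier", outcome_key="recommended"):
    """Share of positive recommendations per candidate group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(bool(r[outcome_key]))
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(records, **keys):
    """Lowest group selection rate divided by the highest.
    Values below 0.8 are a common red flag (not a legal verdict)."""
    rates = selection_rates(records, **keys)
    return min(rates.values()) / max(rates.values())

applications = [
    {"university_tier": "top", "recommended": True},
    {"university_tier": "top", "recommended": True},
    {"university_tier": "top", "recommended": False},
    {"university_tier": "other", "recommended": True},
    {"university_tier": "other", "recommended": False},
    {"university_tier": "other", "recommended": False},
    {"university_tier": "other", "recommended": False},
]

print(f"Disparate impact ratio: {disparate_impact_ratio(applications):.2f}")
# Prints 0.38 here: well below 0.8, so the tool would raise an alert.
```

A check like this is deliberately simple; in practice teams combine several fairness metrics, since no single number captures bias.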
To address the crucial issue of algorithmic transparency, they created a six-month roadmap for implementing interpretable machine learning, or "white-box AI." This longer-term initiative would allow them to examine and improve how the machine learning model arrived at its decisions. While the technical implementation would take time, they could begin with documentation improvements and team training on interpretability techniques.
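As a rough illustration of the "white-box" direction (not TFT's actual system), the sketch below trains an inherently interpretable model, a shallow decision tree built with scikit-learn, on synthetic data and prints its learned rules so reviewers can audit them. The feature names are invented for the example.

```python
# Minimal sketch of interpretable ("white-box") ML: a shallow decision tree
# whose decision rules can be printed and audited directly.
# Features and data are synthetic, for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["years_experience", "skills_match_score", "top_tier_university"]
X = [
    [5, 0.90, 1],
    [6, 0.80, 0],
    [1, 0.40, 1],
    [2, 0.30, 0],
    [7, 0.95, 0],
    [4, 0.70, 1],
]
y = [1, 1, 0, 0, 1, 0]  # 1 = recommended for interview

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the tree as human-readable if/else rules, making it
# easy to spot whether a proxy such as university tier drives the outcome.
print(export_text(model, feature_names=features))
```

Unlike a black-box model, the printed rules show exactly which features the system relies on, which is the property the roadmap's interpretability work aims for.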

"This isn't just about fixing one algorithm," Sarah concluded. "It's about recognising that AI ethics isn't optional anymore. Every AI system we deploy has a real impact on people's lives. We need to build ethics into every stage of development, from conception to deployment."
In her monthly report, Sarah updated the company's leadership on their progress and reflected on how the incident had demonstrated every key principle she'd been advocating for: the critical importance of AI ethics, the need for multidisciplinary approaches, the challenge of algorithmic bias, the importance of unified responsibility, and the fundamental role of ethics education.
The hiring algorithm issue had been a wake-up call, but it also showed that ethical AI development was possible with the right team and approach. As she finished her report, Sarah added one final note:
"The rising power of AI demands rising ethical responsibility. We can't treat ethics as an afterthought anymore – it must be at the core of everything we do."
Six months later, responding to the global recession and declining half-year bottom-line figures, Sarah had to find budget savings: she needed to cut her budget by 30% whilst still delivering her yearly objectives. Facing the difficult prospect of reducing her team, Sarah reached out to Marcus and Barbara, the Chief Legal Officer, to explore alternative solutions. During their discussion, they discovered that the Enterprise Risk & Compliance Manager position in the legal department closely aligned with Maria's compliance and AI ethics expertise. However, it required additional knowledge in enterprise risk management and legal operations.
Maria expressed enthusiasm about the opportunity to expand her skill set. Together, Sarah, Marcus, and Barbara crafted a comprehensive three-month career development plan. The plan included specialised training in enterprise risk management, legal operations fundamentals, and regulatory frameworks beyond AI compliance.
The solution was advantageous for everyone involved. Since the legal department had already budgeted for the position, transferring Maria internally meant significant cost savings compared to external hiring. Her deep understanding of TFT’s culture, AI systems, and existing compliance frameworks meant she could become effective in her new role much faster than an external candidate. This faster onboarding timeline translated into immediate productivity gains for the legal department.
For Sarah's team, this internal mobility solution achieved the required 30% budget reduction while ensuring that Maria's expertise remained within TFT. And because Sarah had launched a company-wide upskilling programme on the ethical implications of AI six months earlier, she had already built a critical mass of AI ethics expertise, enabling the team to continue developing ethical AI algorithms at a reduced cost.
AI Tools Used in This Article
During my studies at Oxford, I work with a vast array of information sources: from academic articles, textbooks, and monographs to specialised curriculum literature, lecture materials, industry reports, analytical reviews, and research publications. In my writing process, I synthesise information from dozens of these diverse sources, not only to develop comprehensive analyses but also to explain complex concepts in a clear and engaging way.
In today's information-rich world, avoiding AI tools out of fear of 'artificiality' means deliberately limiting yourself. I actively integrate AI tools into my daily work, and they help boost my efficiency and productivity:
- NotebookLM — for processing research materials and primary sources; it helps highlight key points and create thorough reviews;
- Claude (AI from Anthropic), Perplexity.ai, and ChatGPT — for article structuring, analogies, and translation;
- Grammarly Premium — I use it for proofreading;
- ChatGPT — for creating AI-generated images;
- ElevenLabs — for creating audio versions in English using an artificial voice based on my (Vira Larina's) voice.