
Fiona Phillips on How to Change the Law

Fiona Phillips has been a Magic Circle restructuring lawyer, a GC, and a key participant in a bank's digital transformation. Now she's building a cybersecurity startup inside a 130-year-old IP firm.

When I sit down with Fiona Phillips, Anthropic has just announced that it won’t be releasing its Mythos model - at least not yet - because of the cybersecurity implications of a system that appears uniquely capable of finding and exploiting software vulnerabilities.

Talk about good timing for a podcast with a leading expert on cybersecurity.

But before we get to that, let’s talk about Fiona’s story, which starts a long way from cybersecurity in the restructuring and insolvency team at a Magic Circle firm, six months before Lehman Brothers collapsed.

She trained at Freshfields, with a six-month stint in The Hague doing arbitration during her training contract. She qualified into restructuring and insolvency, expecting, as she puts it, “a nice quiet corporate support team where I could hide from the really vicious transactional hours.” Then Lehman went down and everything changed. She spent the next stretch of her career working for the administrators of banks and building societies, going in on day one during the most tense period in UK banking history, picking apart what had gone wrong and figuring out how to fix it.

It was fascinating work, but relentless. When HSBC offered her a move in-house, she took it, partly for the quality of life and partly because the bank was so international. She wanted to travel and live abroad, and HSBC delivered on both.

A secondment to Dubai that was meant to last six months turned into four years. She ended up as general counsel for the retail bank across the Middle East and North Africa, dealing with financial crime, M&A across the region, and the complex politics of the Gulf.

The HSBC digital journey

In 2015, Fiona moved to Hong Kong, HSBC’s spiritual home. She tells me that if you get in a taxi in Hong Kong and say “take me to the bank,” you’ll end up at HSBC. She joined the executive committee for the retail and private banks and the team embarked on a serious digital transformation.

The fear at the time was fintechs. Incumbent banks were watching startups build better, faster, more intuitive products, and wondering whether the ground beneath them was about to give way. It’s a dynamic that will sound very familiar to anyone watching legal right now.

HSBC’s response was to go and learn. The exco travelled to Silicon Valley, to China, to Southeast Asia, spending time with big tech companies and innovators. They recruited people from completely different industries. They experimented. They put a team in a WeWork and said: if you were going to disrupt us, what would you build? The lesson, Fiona says, was about giving people inside a big organisation different rules to play by, creating the right environment for experimentation within a business that was built for stability.

As a lawyer watching all of this, she couldn’t help wondering how the same thinking might apply to the legal function. So they tried. And then Fiona became, as she puts it, “really obsessed” with legal design.

Legal tech is the new fintech

When we talk about what law can learn from what happened in banking, Fiona draws a sharp parallel but also flags a crucial difference.

In banking, the fintechs discovered that becoming a bank is hard. Capital requirements, regulatory burden, and consumer expectations around safety and stability all acted as barriers. That’s why the big banks survived. They digitalised fast enough, and the moats held.

In law, those moats may not exist. It’s much easier to become a law firm than a bank. The barriers to entry are low. And clients may not care about the stability and heritage of a big firm if they can get what they need from a tech-enabled alternative. Law, Fiona suggests, may be significantly easier to disrupt than banking was.

The one thing the banking experience made crystal clear, she says, is that you have to obsess about the customer’s point of view. “You have to stop thinking that a customer wants a mortgage. They don’t want a mortgage, they want a house.” The same logic applies to law. Nobody wants a conveyancing lawyer, she says. They want a house. The legal work should be seamless, frictionless, and invisible. If AI-native firms can build that experience from scratch rather than trying to retrofit it onto traditional models, she thinks they may have a genuine structural advantage.

Kill the memo

This leads us to legal design, which Fiona describes simply as making sure that when you deliver a product or service to a client, it’s designed from the beginning for their needs, not yours.

She gives a pointed example. She’s been drafting an AI policy for a client. Most templates she’s seen start with definitions, because they’re written by lawyers for lawyers. Nobody, she says, has ever opened a document as a normal person and thought: what I’d really like first is a dense legal definition. And most AI policies she’s seen are either aggressive or patronising in tone, full of prohibitions and warnings, when what users actually need is clear, practical guidance on a handful of questions. Can I use this tool? What data can I put in? Has the client consented? How do I check the output?

She thinks the legal profession has a deep problem with this. Lawyers don’t think of what they do as a product. They think they give advice. Products feel cheap, beneath them. But if you launched a product in banking or cosmetics, you’d never release it without testing it on users first. The legal profession has, by and large, a complete absence of that kind of testing.

And she’s clear-eyed about the difficulty: making something simple is deceptively hard. Lawyers see a well-designed document and think it looks easy. Actually, she says, getting to simple is a real art, and getting lawyers to respect that is one of the biggest challenges she faces.

At one point in our conversation, we joke about launching KillTheMemo.com. She’s in. I think she’s only half joking.

Back in private practice

After years in-house, Fiona had what she describes as a reflective moment. She went and shadowed a criminal judge for a while. She’d originally wanted to be a criminal barrister and never did, and she wanted to ask herself a basic question: did she still want to be a lawyer?

The answer was yes. She believes in the rule of law. She believes in the power of the law. But she also knew she wanted to be at the cutting edge of where technology was evolving, and she needed to be somewhere that the ethical dimension mattered, somewhere she could say to clients “I don’t think you should do this, even if it’s legal.”

She found that at Marks and Clerk, a 130-year-old IP firm. What drew her in was the people. Patent attorneys, she points out, are the inverse of the usual dynamic: they’re technologists and scientists who became lawyers, rather than the other way around. “It’s kind of the perfect lawyer, in my view.” The firm works at the cutting edge of invention: AI patents, semiconductors, electronics, space. One of her colleagues is on the shortlist to be the UK’s first astronaut.

Within Marks and Clerk, she’s built a new subsidiary focused on cybersecurity, data, AI law, governance, and ethics, with a strong emphasis on education. She describes it as a startup inside a law firm. She doesn’t think she’d have gone back to private practice for traditional transactional work. But she found a place where she can practise law in a way that makes her passionate and lets her build things.

The Anthropic question

The Anthropic announcement has led to a busy week.

She tells me that those defending companies and governments from cyber attacks are in a constant race with criminals, and the criminals have a structural advantage: they don’t have to comply with any law, go through compliance checks, or worry about whose data they’re using. What Anthropic has said, in essence, is that it has built a model that could be transformative for cyber defence, but devastating if it fell into the wrong hands.

Fiona’s question is about who gets to set the red lines. She thinks it’s admirable that Anthropic has drawn them. But in a functioning democratic society, she asks, should it really be a private company that determines what the government can and can’t do with AI? These companies can enforce limits because they control the tools. But is that how it should work?

She’s not arguing against Anthropic’s decision. She’s arguing that we haven’t built the democratic infrastructure to handle decisions of this magnitude.

Regulation is not the enemy

Fiona pushes back on the common argument that regulation kills innovation. She doesn’t buy it, though she’s thoughtful about proportionality. The question, she says, is whether the most powerful AI models are the equivalent of nuclear technology: capable of enormous good, capable of enormous harm, and therefore requiring intergovernmental rules and collaboration, not just one country’s framework. That top tier of AI, the systems that could orchestrate large-scale cyber attacks, probably warrants that level of seriousness. Your contract review tool does not.

In the meantime, she thinks companies should stop waiting for legislation and start self-regulating on substance, not just process. She’s frustrated by the responsible AI conversation as it currently exists, which she sees as too focused on frameworks and tick-box compliance. She wants companies to take positions: what will you ban? What will you never do? What’s your stance on emotional recognition AI? On AI in HR? On recording every call with a transcription tool?

And she makes a powerful point about existing law. Tort law already provides duties of care that could apply to AI harms. In the absence of legislation, she expects to see a lot more litigation. It’s already happening in the US, with cases involving children harmed by chatbot interactions and bias in hiring tools.

The education gap

Underpinning everything is what Fiona sees as a massive education problem. It’s not just judges who don’t understand the technology.

Many AI vendors can’t clearly explain how their own tools handle data. Companies don’t understand the true value or true risk of their data. Senior executives can’t articulate how their organisations use it. In a world where AI governance is becoming critical, she worries about a repeat of what happened with GDPR: a compliance exercise that generated paperwork without generating understanding.

She and her colleague Eleanor, Marks and Clerk’s data partner, are trying to change this by building educational programmes for in-house lawyers. The goal is to help people ask the right questions. When someone says “let’s talk about data,” are they talking about prompts, training data, outputs, or something else entirely? Until people can make those distinctions, she says, governance will remain surface-level.

Final note

Fiona Phillips has built a career that most lawyers wouldn’t have the nerve or the curiosity to attempt: Magic Circle to banking to the Middle East to Hong Kong to a startup inside a 130-year-old patent firm. She’s done insolvency, financial crime, digital transformation, legal design, and cybersecurity.

What comes through most clearly in our conversation is a combination of moral seriousness and creative restlessness. She genuinely believes that lawyers have a responsibility to tell clients not just what’s legal, but what’s right. And she thinks the profession’s resistance to rethinking how it delivers its work, from the 30-page memo to the definition-first policy document, is both a failure of imagination and a disservice to clients.

She closes our conversation with a line from Ernest Shackleton, borrowed via Jacinda Ardern: optimism is true moral courage. It’s brave to stay optimistic, she says. But if we don’t, what else have we got?
