Something interesting happened this week in legal land:

Microsoft admitted that its Copilot AI was summarizing confidential emails…

Without permission...

For weeks…

Even with data loss prevention policies in place.

You're worried about security and confidentiality?

Well, the same company that sells “enterprise-grade security”…

Couldn’t stop its own AI from processing labeled confidential emails.

Now consider this:

If Microsoft can’t perfectly contain it…

What does that mean for small and mid-sized firms experimenting with AI right now?

Let's be honest about the implications here.

Before you paste something into ChatGPT…

Do you know where that data is going?

What information they retain in their systems?

Could you explain your process if a judge asked?

Or are you just hoping and vibing?

A lot of attorneys and firms are in this "just vibing" category.

But something else happened this week too.

Top lawyers raised their billing rates to $3,400 an hour.

Meanwhile, some on the bottom end still make $60 an hour.

Same license and profession...

Very different realities.

But one group understands leverage...

And the other doesn't.

The firms billing at the top of the market aren’t “using ChatGPT.”

They’re doing a few critical things that make all the difference:

  • Building systems

  • Reducing friction

  • Protecting privilege

  • Shortening feedback loops

They’re eliminating the little inefficiencies that silently eat 20–30% of revenue.

Meanwhile, other firms are copy/pasting blindly...

Arguing internally about whether AI is ethical...

Letting associates experiment without guardrails...

Avoiding it completely and hoping it slows down...

But AI is not a toy.

It presents serious liability if you don't know how to use it...

If you have no policy and no one takes ownership of this critical technology.

And right now there are three types of firms:

Type 1: Pretend AI is all hype.

Type 2: Use AI randomly with no structure.

Type 3: Build controlled internal systems that reduce risk and increase leverage.

Most firms are Type 1 or Type 2.

Type 3 firms will start to pull away over the next 12–24 months.

Most won't realize it until they look up one day and those firms are too far ahead to catch.

So if you’re going to use AI…

You need to do it right.

And if you’re going to avoid it…

You better understand exactly what you’re giving up.

You can't afford to just vibe.

Over the next few days, we're going to break down what proper AI infrastructure looks like...

What to automate and what to never automate...

How to account for all the risk of using it...

And how to tap into the leverage that lives buried deep inside AI.

If you’ve been experimenting…

Or avoiding…

You may want to read the next few issues.

Because the divide is happening in real time, behind the scenes...

At thousands of law firms across the country...

And while some lawyers will spend this weekend learning the tools...

Others will continue to pretend that AI doesn't exist.

More next week.

🔎 What Else We’re Seeing Behind The Scenes

Over the last few weeks, we’ve been tracking:

• Judges sanctioning AI misuse
• VC-backed firms moving faster than expected
• Hundreds of plaintiff firms admitting workflow gaps
• Associates using AI without clear supervision
• Intake systems that fall apart under automation

All of it has direct consequences for how firms operate.

There’s nothing theoretical here.

If you want context, browse the archive here:

Hit reply with the word “feature” and we’ll send you a few quick questions.

Prefer to jump the line?

Fill out the feature waitlist here:

No cost or strings attached, just spotlighting great firms.

Until next time,

-The Legal Brief
