01 / 15
For Republic Financial · Two-hour workshop

How to think about AI in your business.

A working session for Republic Financial executives — frameworks, not jargon — built around 25 questions I bring into every executive workshop, and two live engagements pulled from Republic's own portfolio.

Prepared for: Republic Financial
Format: Working lunch
Length: 2 hours
Anchored in: 25 MarginWise questions
Illustrated by: Two Republic engagements
Take-home: A forcing function
02 / 15
The claim to test

Before AI. Before automation.
Before any tool decision at all.

My job today is not to convince you of anything. It's to give you three ways of thinking that will sharpen every AI conversation that hits your desk this year.


Most of what gets called "AI strategy" is really just figuring out how your business actually works. The decisions that matter — AI, automation, or neither — only get good once the questions in your senior people's heads are out, written down, and answered. That's the work.

03 / 15
Three-minute exercise · MarginWise framework

Circle three.
We'll come back to them.

These are 25 questions I bring to every executive workshop — they're not Republic's, they're mine. We won't address all 25 today; we'll hit roughly eight, organically, through two real cases. At the end I'll come back to whichever the room ranked highest. This is your workshop to direct.

01 Looking busy vs being productive

How do I tell the difference between AI work that looks impressive in a demo and AI work that actually saves us time, money, or errors?

02 Our edge in five years

Which of the things we're good at today get weaker as AI tools become widely available, and which actually get stronger?

03 One plan, not ten pilots

How do we run a coordinated company-wide approach instead of a pile of disconnected experiments nobody can tie back to a goal?

04 Whose job is the learning

Is it the company's responsibility to teach employees how to use AI tools, or is it on each employee to figure out on their own time?

05 Redesign vs bolt-on

How do we redesign work from the ground up around AI, instead of adding AI on top of how we already do things and calling it transformation?

06 Talking to our people honestly

How do we explain what's changing to employees in a way that's honest about the impact without setting off panic or sounding like a press release?

07 What does this even mean

"AI transformation" gets thrown around. What does it actually look like in our business? What are the concrete options on the table?

08 If a task drops to a dollar

If something that used to cost us $100 now costs $1, what newly becomes worth doing? Where does that change what we offer or how we operate?

09 How we decide to fund a project

What's our checklist for green-lighting an AI project? What passes, what doesn't, and who gets to decide?

10 Experiment without leaking

How do we let employees try AI tools without exposing client data, IP, or putting compliance at risk? Where are the safe sandboxes?

11 Hearing from the front line

How do the people doing the actual work tell us where AI could help, and how do we decide which of their ideas to pursue?

12 Leaders using the tools themselves

How do executives get hands-on enough with AI tools to credibly lead the rest of the company through this — instead of delegating it?

13 Data ready, but moving today

How do we get our data progressively ready for AI without freezing all AI work for two years waiting for a perfect data project?

14 Where to actually start

Of everything we could do, where do we start in the next 30, 60, 90 days, and why that work and not something else?

15 Avoiding vendor lock-in

How do we build AI tools so we can swap models or vendors when the market shifts, instead of being stuck rebuilding from scratch?

16 Hiring for AI curiosity

How do we test for AI curiosity, comfort, and basic literacy in interviews — including for non-engineering roles?

17 IT, legal, compliance as allies

How do we turn IT, legal, and compliance into partners who help us move faster, instead of the people whose default is to say no?

18 Redeployment, honestly

When we say AI will free people up to do higher-value work, how true is that really? What if higher-value work isn't actually waiting for them?

19 Fear, apathy, and inertia

How do we shift culture when most employees are either afraid AI will replace them or just disengaged from learning it at all?

20 Central team vs embedded

Should one central team own the AI strategy, or should it live in every function? What are the trade-offs we're choosing between?

21 Risk of acting and risk of waiting

What are the real risks of investing aggressively in AI right now, and what are the real risks of waiting? Both lists need to be on the table.

22 Our internal best users

How do we find the people inside our company already using AI well, and turn what they do into a standard others can learn from?

23 What worked, what failed, and why

Which companies have used AI well, which have failed publicly, and what specifically can we apply or avoid from each?

24 If a competitor started today

If a brand-new competitor launched tomorrow and built everything around AI from day one, what would they do differently — and why aren't we?

25 What if vendor prices spike

What happens to our cost structure if the AI vendors we depend on raise prices significantly? How do we hedge that exposure?

04 / 15
How we'll spend the next two hours

Frame, demonstrate, apply.

A predictable rhythm: each mental model gets stated, shown in both cases, then handed to you to apply on a real open question.

0:00

Open & rank your top 3

Pick three from my 25-question card. We'll come back to them.

0:15

Two Republic cases, briefly

An aircraft leasing deal model and Republic's RFC Drilling AP report.

0:30

Three ways to think about it

Understand the work first. Fix what goes in. Match the rules to the task.

1:15

Live consult

The real questions I'm asking my clients this week. How would you decide?

1:45

If a competitor started today

Closing thought experiment, then back to whatever the room ranked highest.

05 / 15
Two engagements · pulled from Republic's portfolio

Two industries.
The same artifact.

Both come from Republic Financial work — specialty aircraft finance on one side, RFC Drilling AP on the other. A spreadsheet that reformats data from a source system, layered with judgment, maintained by an unsung operator who has been doing this for years. This is the universal corporate document — and where most "AI strategy" conversations actually live.

Case 01 · Specialty finance

Aircraft leasing deal model

A Boeing 767 cargo plane on a six-year lease to a freight airline. The seller hands over their version of a spec sheet — an Excel file with 49 photos of contract pages pasted into it. The analyst spends hours retyping numbers into Republic's deal model.

  • One spreadsheet in, one spreadsheet out
  • ~300 cells of engine wear data copied by hand each deal
  • Per-deal cadence; one number wrong can move the answer materially
  • Output: deal returns, leverage, cash-on-cash
Case 02 · Oilfield services

RFC Drilling AP aging

A drilling company runs Microsoft Dynamics GP. Every two weeks an operator, Dana, builds a payment-plan summary: pivot, categorize, layer in actual and projected payments, ship to leadership.

  • 488-row GP export → categorized summary
  • Vendor categorization in shadow lookup tables
  • Bi-weekly cadence, recurring time tax
  • Output: who gets paid, who gets deferred
06 / 15
The work before the work

The most valuable thing
I do for a new client
is ask better questions.

Your senior people have decades of judgment they've never written down. Before I recommend any tool — AI, software, anything — I'm pulling that out: into questions, then answers, then documented steps. Most of my first month on a project is email.

Sent to Dana · Tuesday

"Are vendor names spelled exactly the same way in the aging report and the payment history report?"

Sounds simple. Bob's reply was partial. If yes, a simple script can match them. If no, we need a more complex tool that can handle near-matches — different cost, different risk, different approval process.

one yes-or-no changes the whole project
Sent to Dana · Tuesday

"Is there a master vendor list in GP that assigns a category to each vendor?"

GP has a Vendor Class field on every record. Every single row in the export reads "DEFAULT." The real categories live in Dana's head and in a side spreadsheet she keeps updated by hand.

the field exists. nobody is using it.
Sent to Dana · Tuesday

"How do you update the payment details in the summary today?"

Bob: "I think that happens manually." Even Bob doesn't fully know. Dana's process has never been written down because nobody asked. That's not a Republic problem — that's every company.

knowledge that lives in one person's head
For follow-up Friday

"Where do projected payments come from? Vendor conversations, payment plan terms, cash on hand — is any of that written down?"

The answer tells us whether projecting payments is a clear rule we can code, a guess we can train software to make, or human judgment that should stay with a person.

rule, guess, or judgment?
07 / 15
Three ways to think about it

Three lenses
you'll keep using after today.

Each one filters out a category of bad AI projects before they get funded. Each one shows up in both Republic cases. Each one changes the question you ask before you pick a tool.

01
The work

Understand the work first. The tool follows.

The same job often has parts that need AI, parts that need a simple script, and parts that should stay with a person. You can't tell which is which until the work is documented and the questions are answered.

questions before tools
02
The inputs

Fix what you control. Translate what you don't.

When the source is yours — your CRM, your accounting system — fix it there. When it isn't — sellers, vendors, regulators — build a clean intake on your side so the mess gets caught at the door instead of spreading through everything downstream.

your side, their side
03
The risk

How sure can we be? That tells us how to manage it.

Some tasks have one right answer; others involve judgment. They need different review, different approvals, different oversight. Don't apply one risk policy to everything called "AI."

match the rules to the task
08 / 15
Way of thinking 01 · in both cases

Same project. Different parts.
Different tools, on purpose.

When AI gets pitched, notice the assumption that the whole job needs the same answer. It doesn't. Some pieces benefit from AI, some don't. You only know which is which after you've understood the work.

Inside the seller pack

One document. Three different jobs.

  • The parts table — neat rows and columns, a simple script can copy them
  • The narrative paragraphs — written like a memo, AI can pull the key facts but a human checks
  • 49 photos of contract pages — first scan them, then have AI read the text
  • One file. Three tools. Each picked for the kind of content it handles.
Inside Dana's report

One report. Three different decisions.

  • Sorting invoices into aging buckets — math, no AI needed
  • Matching payments to vendors — depends on whether names match exactly across reports
  • Projecting future payments — judgment that probably should stay with Dana, or another human
  • Same report, same hour of work, three different tool choices required.
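The aging-bucket step really is plain arithmetic, which is why it needs no AI at all. A minimal sketch in Python; the bucket boundaries and function name are illustrative assumptions, not Dana's actual setup:

```python
from datetime import date

def aging_bucket(invoice_date: date, as_of: date) -> str:
    """Classify an invoice into a standard AP aging bucket.

    Pure date arithmetic: the same input produces the same output,
    every time. Nothing to review beyond the code itself.
    """
    days = (as_of - invoice_date).days
    if days <= 30:
        return "Current"
    elif days <= 60:
        return "31-60"
    elif days <= 90:
        return "61-90"
    else:
        return "90+"

# An invoice dated Jan 5, viewed on Mar 1, 2024: 56 days outstanding
print(aging_bucket(date(2024, 1, 5), date(2024, 3, 1)))  # -> "31-60"
```

Deterministic logic like this is the "one right answer" end of the spectrum: any reviewer can trace an invoice to its bucket by hand.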
09 / 15
Way of thinking 02 · in both cases

Fix what you control.
Translate what you don't.

The two cases sit on opposite sides of this. RFC Drilling is Republic's own system — you can fix it. Aircraft sellers will hand over whatever they have, and you can't change that. The mental move is different in each, and getting that right matters.

Aircraft leasing — you don't control the source

Build a clean intake on your side. The seller's mess never reaches the model.

  • The seller will always send you whatever Excel they have — that's not changing
  • So design Republic's intake to be the bridge: scan, extract, validate, normalize
  • Build it once; every future deal flows through it the same way
  • The model only ever sees clean structured data, even when the seller's pack is a 49-photo Excel
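The "validate, normalize" step of that intake can be pictured in a few lines. A sketch under stated assumptions: the field names, required list, and dollar-string cleanup below are invented for illustration, not Republic's actual deal schema.

```python
def validate_deal_record(record: dict) -> list[str]:
    """Return a list of problems found in one extracted deal record.

    The intake flags anything that fails these checks at the door,
    so the deal model only ever sees clean, structured data.
    """
    problems = []
    # Hypothetical required fields for this sketch
    required = ["aircraft_type", "lease_term_months", "monthly_rent"]
    for field in required:
        if field not in record or record[field] in ("", None):
            problems.append(f"missing field: {field}")
    # Normalize numbers that sellers send as text, e.g. "$385,000"
    rent = record.get("monthly_rent")
    if isinstance(rent, str):
        try:
            record["monthly_rent"] = float(rent.replace("$", "").replace(",", ""))
        except ValueError:
            problems.append(f"unparseable monthly_rent: {rent!r}")
    return problems

messy = {"aircraft_type": "B767-300F", "lease_term_months": 72,
         "monthly_rent": "$385,000"}
print(validate_deal_record(messy))  # -> [] ; rent now stored as 385000.0
```

Build the checks once and every future seller pack, however messy, flows through the same gate.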
RFC Drilling — you do control the source

Use the field GP already has, the way it was designed.

  • GP has a Vendor Class field on every record — designed for exactly this purpose
  • Every single row in today's export reads "DEFAULT" because nobody's been filling it
  • Dana keeps the real categories in a side spreadsheet she updates by hand
  • The fix isn't software. It's: assign the categories in GP and use the field
10 / 15
Way of thinking 03 · in both cases

Same job.
Different inputs.
Different tools. Different oversight.

Matching vendor names across reports is the cleanest example we have. The same business need leads to a simple script or to AI software, depending on the answer to one question — and they need very different oversight.

If names match exactly

One right answer every time

  • A simple script joins the two reports on the vendor name
  • Same input always produces the same output, every time
  • Compliance can sign off in ten minutes — they can read the code
  • Oversight: standard IT change control. No new policy needed.
If names drift between reports

Best guess, with a confidence score

  • Software has to decide that "RK Pipe" and "RK Pipe & Supply, LLC" are the same
  • Same input might produce slightly different output as the model evolves
  • Each match needs a confidence score and a human review threshold
  • Oversight: a different policy entirely, with accuracy testing over time
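The fork between those two columns fits in a few lines of Python's standard library. The vendor names are invented examples, and the 0.8 threshold is an arbitrary placeholder; a real fuzzy matcher would need tuning and a review threshold agreed with compliance.

```python
from difflib import SequenceMatcher

def exact_match(name_a: str, name_b: str) -> bool:
    """The 'names match exactly' world: deterministic, auditable in minutes."""
    return name_a.strip().lower() == name_b.strip().lower()

def fuzzy_match(name_a: str, name_b: str, threshold: float = 0.8):
    """The 'names drift' world: a best guess plus a confidence score.

    Anything scoring under the threshold goes to a human review
    queue instead of being matched automatically.
    """
    score = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
    return score >= threshold, round(score, 2)

print(exact_match("RK Pipe ", "rk pipe"))                 # -> True
print(fuzzy_match("RK Pipe", "RK Pipe & Supply, LLC"))    # -> (False, 0.5): route to review
```

The first function is a ten-minute compliance conversation; the second is a procurement, testing, and oversight program. Same business need, very different governance.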
11 / 15
Live consult · Republic case 01

Two questions I'm working through
on the leasing deal this week.

For each, I'll walk through how the three lenses guide the decision. Then I'll ask the room how you'd decide. None of these have a "right" answer — they have defensible answers, and walking through the trade-offs out loud is the work.

01

Should we maintain our own catalog of engine part prices?

The engine maker publishes a price list every year. Three real options: keep using the prices in our template, build and maintain our own list, or buy a data feed. The "obviously automate it" answer might actually be the wrong one.

Fix the inputs
02

Should we rename sheets in the model for every new deal?

It looks productive. It would add zero analytical value, and create real maintenance work because formulas reference sheet names. Sometimes the right answer to "should we automate this?" is just no.

Understand the work
12 / 15
Live consult · Republic case 02

The questions I sent Dana.
These have to get answered first.

Notice these aren't AI questions or automation questions. They're work questions about how the job actually gets done. Get them answered and the tool choice becomes obvious. Skip them and you'll automate the wrong thing.

03

Are vendor names spelled the same way in the aging report and the payment history report?

If yes, a simple script can match them. Compliance signs off in ten minutes. If no, we need software that handles near-matches, with confidence scores and human review — and that's a much bigger procurement and oversight conversation.

Match rules to task
04

Does GP have a master vendor list with categories assigned?

The field exists. Every record reads "DEFAULT." Two paths: automate Dana's side spreadsheet, or use the field GP already has so the category lives on the vendor record forever. The second one is the answer that lasts.

Fix the inputs
13 / 15
The pattern, two industries

Same kind of spreadsheet.
Same six things going wrong.

Once you see this pattern, you'll see it everywhere in Republic. Probably in every business meeting you have for the next month. This is the most useful thing you'll take from today.

What goes wrong · Aircraft leasing · RFC Drilling AP

How the work gets done isn't written down
  • Aircraft leasing: how the analyst figures out how much money changes hands at lease end lives only in his head.
  • RFC Drilling: how Dana decides which vendors get paid this Friday has never been documented.

Data the system already has, retyped by hand
  • Aircraft leasing: monthly maintenance fees that already live as clean data in Salesforce get retyped into Excel.
  • RFC Drilling: GP's Vendor Class field reads "DEFAULT" on every record; the real categories sit in a side spreadsheet.

Fix what you control; translate what you don't
  • Aircraft leasing: sellers will keep sending photo-laden Excels. Build a one-time intake on Republic's side that catches the mess at the door.
  • RFC Drilling: RFC's GP system is ours. Fix the categorization at the source, in the field GP already has.

Judgment sitting on top of clean data
  • Aircraft leasing: pricing the deal, estimating resale value, structuring the debt, settling at lease end.
  • RFC Drilling: which vendors get paid in full, partial, or deferred to next cycle.

Setup work paid every cycle
  • Aircraft leasing: about 3 hours per deal of typing before any real analysis happens.
  • RFC Drilling: several hours every two weeks, forever, just to produce the report.

Errors and outdated parts nobody fixes
  • Aircraft leasing: broken cell references in the valuation section, and a label naming a completely different engine, left over from copying the file from another deal years ago.
  • RFC Drilling: broken cell references in the totals row of the summary tab.
14 / 15
Closing thought experiment

If a competitor launched tomorrow,
designed around AI from day one,
what would they do differently?

They'd still get the same messy Excel from sellers — they can't change that.
But they wouldn't have an analyst typing for three hours before any analysis starts. They wouldn't have a Dana rebuilding the same report by hand every two weeks.

Not because they're better at AI. Because they understood the work first, then picked the right tool for each piece — a parser at the door for the seller's mess, a clean field in GP for vendor categories, a person on purpose for the judgment calls. The tool followed the work.

The question isn't whether they exist yet. It's what's stopping you from being them.

15 / 15
Back to your top three · then we're done

The companies that win the next five years
won't be the ones with the most AI tools.

They'll be the ones who got the questions out of their senior people's heads, wrote them down, and picked the right tool for each piece — sometimes AI, sometimes a simple script, sometimes leaving a person in charge on purpose. That's the work.

Show of hands — what did you circle on the card? I'll spend our remaining time on whichever question the room is loudest on, then we'll wrap.

Davis Handler
MarginWise · prepared for Republic Financial · davis@marginwise.ai