The Backward Jump
By Chris Fenster, Founder & Executive Chairman
I ran the 400 meters in high school, and everyone on the track team had to pick a field event. I picked the high jump.
I wasn’t a natural; I’m not particularly springy, and at barely six feet I didn’t have the build of the guys who made it look effortless. But I was fascinated by the technique: the strange, counterintuitive act of turning your back to the bar and flopping over it backward. It wasn’t without incident. Junior year, I landed awkwardly in the pit and my own knee hit me in the face and knocked out my two front teeth. The foam kept me from breaking my neck, but it couldn’t protect me from myself.
My coach, Tom Farquhar, taught me the Fosbury Flop with the precision of the physics teacher he was. (He later became the school’s principal, which tells you something about the kind of man who teaches you to do things backward.)
Thirty-five years later, this idea keeps coming back to me: not because I’m nostalgic for high school track, but because I think the Fosbury Flop is a great analogy for what’s happening right now with AI in finance.
The Technique That Changed Everything
On October 20, 1968, Dick Fosbury sprinted up to the high jump bar at the Mexico City Olympics, turned his back to it, and flopped over backward. The crowd gasped. Coaches called it an aberration. The Soviet world-record-holder dismissed it as a gimmick. Commentators questioned his sanity.
He cleared 7 feet 4 inches and won Olympic gold.
Within eight years, the average height of elite high jumpers had increased by four inches. By 1980, every world record used the Fosbury Flop. By 1988, the last “straddle jumper” competed at the Olympics. A technique that looked absurd—even dangerous—had completely replaced the way humans had been doing the same task for decades.
What most people miss about the Fosbury Flop is that it wasn’t a feat of superior athleticism. Fosbury wasn’t any stronger or faster than his competitors; he wasn’t even taller. The brilliance was in the physics: by arching his back over the bar, Fosbury’s center of mass could actually pass under the bar while his body cleared over it. He didn’t need to jump higher; he needed to distribute his mass differently. Every technique before the Flop—the straddle, the scissor kick, the Western roll—required lifting your entire center of gravity above the bar through brute force. Fosbury changed the question from “how do I jump higher?” to “how do I get my body over while keeping my center of gravity low?”
Same task, fundamentally different approach.
The straddle technique that preceded the Flop was, by 1968, highly optimized. Generations of athletes had refined it, squeezing out incremental gains through better training, more explosive takeoffs, more precise timing. And … it was a dead end. Fosbury’s insight wasn’t that the old way was bad, it was that there was a ceiling on how good it could get, and the only way past that ceiling was to reimagine the approach entirely.
The Pit
The Fosbury Flop wasn’t just a new technique, it was a new technique predicated on entirely new infrastructure.
Before the early 1960s, high jumpers landed on sawdust, sand, or low stacks of mats. You could do a straddle onto that because you’d land on your feet or your side. But the Flop requires landing flat on your back. On sawdust, that would’ve meant a broken neck. Foam landing pits, introduced in the early ’60s, didn’t just make the Flop safer; they made it possible. The technique couldn’t exist without the infrastructure to support it.
Even after Fosbury won gold in 1968, the infrastructure took years to spread. Elite programs got foam pits first, and most high schools didn’t have them for over a decade. During that window, the dividing line between good and great high jumpers wasn’t talent, or coaching, or a willingness to learn; it was access to the right equipment. If you didn’t have a pit, you couldn’t do the Flop. Period.
Keep that in mind, because it matters.
A Pattern That Repeats
This pattern—same task, different technique, requires new infrastructure—shows up more often than you’d think. Henry Ford’s assembly line moved the cars to the workers instead of the workers to the cars, cutting production time from over 12 hours per vehicle to about 93 minutes. Same output, fundamentally different approach, and it required a factory built for the purpose.
Greg LeMond didn’t win the 1989 Tour de France by becoming a stronger cyclist. He showed up to the final time trial with specially engineered aerodynamic handlebars and a teardrop helmet his competitors had never seen, and made up a 50-second deficit in a 27-minute race: not through more power, but through less drag. Now every time-trialist in the world uses aero everything. But LeMond needed the equipment to exist before he could deploy the technique.
When Billy Beane’s Oakland A’s started valuing on-base percentage over batting average, scouts thought he’d lost his mind. Moneyball wasn’t about finding better players; it was about redefining what “better” meant. Same goal, different lens. But it required statistical infrastructure—databases, models, analytical frameworks—that most teams didn’t have and weren’t willing to build.
In every case, the revolution wasn’t about doing more: it was about doing it differently. And in every case, the people who resisted weren’t stupid: they were experts in a technique that, unbeknownst to them at the time, was about to become obsolete. And in every case, this new technique required access to new infrastructure.
Do Things Wrong to Do Them Right
Over the past few months, I’ve built dozens of company analyses with AI: not as a side experiment, but as a systematic effort to teach it to do the things we currently rely on human context and intuition to do. We call this framework PromptKit: a library of structured analytical routines, each refined through real client work, that codify the pattern recognition our teams have built over 18 years and over a thousand companies.
The process itself is a kind of Fosbury Flop. Every time the AI gets something wrong, we write a small patch: a specific instruction that addresses the failure. Over time, these patches compound into something remarkable: a system that learns from every mistake across every analysis. This is not how humans work. We don’t typically do things wrong in order to do them right.
That would be … backward, but that’s exactly my point.
Recently, we were reviewing a narrative the AI had drafted for a calendar-year 2025 analysis. The conclusions were technically defensible but fundamentally wrong. The company had hit an inflection point mid-year that turned what looked like an average twelve months into a two-chapter story: a rough first half and a meaningful recovery in the second. The AI had averaged the year and written a tepid summary. A simple nudge—treat this as two distinct periods—followed by a small PromptBlock, transformed the analysis from “objectively average” into the rebound story it actually was.
Any decent Finance Director would have seen this immediately, but now that insight isn’t locked in someone’s head. It’s a permanent fixture in PromptKit, triggered automatically whenever the data suggests a mid-period inflection. Not one person’s judgment call; a system-level capability.
In another analysis—this time for an early-stage company that had been underinvesting in its accounting team—the AI surfaced two observations independently. First, that the company’s gross margins were unusually strong for a firm its size; and second, that its days inventory outstanding were high. Each observation, on its own, read as a footnote. But a good CFO knows the connection: those margins are overstated because cost of goods sold is understated, which means inventory on the balance sheet is inflated by the same amount. Sure enough, there’s a $500K-plus inventory write-down coming, along with an unpleasant restatement of 2025 gross profit margins and EBITDA. The AI noticed the symptoms, and a human recognized the disease. And now PromptKit connects them automatically for the next company that fits the pattern.
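To make the mechanism concrete, here’s a minimal sketch of how pattern-triggered patches like these could compose. PromptKit’s actual internals aren’t described in detail here, so every name and threshold below (`PromptBlock`, `mid_period_inflection`, the 120-day inventory cutoff) is a hypothetical illustration under assumed data, not the real implementation:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch only: names, fields, and thresholds are illustrative
# assumptions, not PromptKit's actual code.

@dataclass
class PromptBlock:
    name: str
    trigger: Callable[[dict], bool]  # fires when company data fits a pattern
    instruction: str                 # guidance injected into the AI's prompt

def mid_period_inflection(d: dict) -> bool:
    # Rough heuristic: second-half growth diverges sharply from the first half,
    # so the year should be narrated as two chapters, not one average.
    return abs(d["h2_growth"] - d["h1_growth"]) > 0.15

def margin_inventory_mismatch(d: dict) -> bool:
    # Unusually strong gross margins plus high days inventory outstanding
    # can signal understated COGS and an inflated inventory balance.
    return d["gross_margin"] > d["peer_margin"] + 0.10 and d["dio"] > 120

LIBRARY = [
    PromptBlock("two_chapter_year", mid_period_inflection,
                "Treat this year as two distinct periods, not one average."),
    PromptBlock("inventory_overstatement", margin_inventory_mismatch,
                "Cross-check gross margin against inventory; flag a possible write-down."),
]

def applicable_instructions(data: dict) -> list[str]:
    """Return the instruction from every patch whose trigger matches this data."""
    return [b.instruction for b in LIBRARY if b.trigger(data)]
```

The point of a structure like this is that each patch is tiny and independent, but the library is evaluated against every new company automatically, which is how one Tuesday-afternoon insight becomes a system-level capability rather than a judgment call locked in someone’s head.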
This is the part that’s hard to see from the outside. Each individual patch is small, but the compound effect is a system that gets meaningfully smarter with every engagement: not over years of training humans, but over weeks of refining prompts. It’s dramatically easier to build institutional knowledge into a system than to train it into a distributed network of finance leaders spread across hundreds of companies and industries.
(Our clients recently heard from Ray about PropellerOS, and PromptKit is one piece of the infrastructure OS we’re building to make this kind of scaling possible.)
The New Technique
Most of what the finance industry is doing with AI right now is the equivalent of refining the straddle jump: taking existing processes—close, variance analysis, budget-to-actual—and running them faster. That’s valuable, but it’s not transformative.
The transformation is what happens when you turn around. I know because I’ve been living it. Over the past few years my role has moved into strategy, leadership, and the things an Executive Chairman is supposed to focus on, so I’d spent very little time personally reviewing client financials. But over the past several months, I’ve been deep in the weeds again, bantering back and forth with AI about company after company, building analyses that would have taken weeks in a fraction of the time. I didn’t do it because I missed the work (though I’ll admit it’s been fun); I did it because I needed to learn the new techniques firsthand before I could help rebuild Propeller around them.
What I’ve found has surprised me. This isn’t a faster version of the old job, it’s a profoundly different way of working. The AI is better than I am at many things: pattern detection, data processing, surfacing anomalies across enormous data sets. In those domains, its advantage is 1,000 to 1. But the combination of its capabilities and my experience is something else entirely. Thirty-plus years of context—knowing how companies actually behave at different stages, how we founders think (or don’t), and how to make sense of numbers that don’t make sense—layered on top of everything the AI can process, results in a depth and speed of insight that neither of us could produce alone.
Here’s the counterintuitive part: I’m spending more time on financial analysis now, not less. Economists call this the Jevons paradox: when something becomes dramatically cheaper to do, you end up doing dramatically more of it. Now that the cost of producing a sophisticated company analysis has collapsed, I’m doing more, going deeper, and asking questions I never would have had the bandwidth to explore. We can build complex analytical platforms in less time than it used to take to build a simple Excel model.
The nature of the work has shifted almost entirely from managing and overseeing people who produce numbers to doing research, analysis, and storytelling directly, with access to every fragment of data.
This is the Flop: not “do the same job with AI’s help,” but do the job in a fundamentally different way.
I wrote recently about the radiology paradox: that AI was supposed to eliminate radiologists, and instead made them more valuable than ever. The same dynamic is emerging in finance, but where the radiologist story is about why the role survives, the Fosbury Flop is about how the technique changes. The CFO of the near future doesn’t spend their time building models; they interrogate models by asking whether the assumptions hold given what they know about the business, the market, and the people involved. The CFOs of the future won’t generate analyses; they’ll interpret them and challenge them, adding the layer of context that requires being in the room and knowing the players. This isn’t delegation, it’s orchestration: thinking with the AI, not just directing it.
AI Will Break the CFO Role
Here’s something most people in finance won’t say out loud: CFOs are overpaid for their tactical oversight and significantly underpaid for a small amount of genuinely valuable strategic insight. The economics of the role have been sustained by the fact that both halves have historically required the same person: you couldn’t get the strategic insight without also doing (or at least overseeing) the tactical work.
AI breaks that coupling. When the tactical half of the job compresses—when AI handles the variance analysis, the reconciliation, the pattern detection across data sets—what’s left is the tip of the spear: human insight, relationships, and the judgment calls that require knowing why a founder made the counterintuitive bet, whether the board will support an aggressive move, or how a particular investor will react to a miss.
We’ll need to keep checking AI’s work, and that takes no small effort, plus the knowledge to vet its output. But think about what this really means: it’s literally the least we can do. Making sure the AI isn’t wrong is the floor, not the ceiling. The real work—the work that defines the role going forward—is orchestrating AI to deliver the best possible insights. And that requires exactly the things that can’t be automated: experience, relationships, judgment, and the hard-won knowledge of how companies actually work.
The Pit Matters
The Flop was eventually democratized and made ubiquitous. Every high school got a foam pit, and eventually every kid learned the technique. That will happen with AI in finance too. But right now, we’re in the window where access to the infrastructure is the dividing line: not talent, not willingness, but infrastructure.
Finance is currently at peak straddle jump.
We can keep optimizing the old techniques—hiring better controllers, implementing better systems, building more efficient processes—and still never get past the fundamental constraint: tactical finance work is becoming commoditized while strategic finance work is becoming priceless. The gap between the two is widening every month. And I think about the many CFOs who spend 99% of their time on the organizational machinery that produces the numbers, with no interpretation or insight attached. That is simply not sustainable.
And just as with the high jump pit, infrastructure matters here. The freelance CFO on LinkedIn who’s using ChatGPT to speed up variance analyses for three clients is doing the Flop onto a pit of sawdust. It might work a few times, but there’s no data lake underneath them, no PromptKit that gets permanently smarter from every failure, and no secure environment where sensitive financials from hundreds of companies with known outcomes can be cross-referenced. Without a systematic way to train a team to do what they figured out alone on a Tuesday afternoon, they’ve learned the technique, but they don’t have the infrastructure to practice it at scale.
Not everyone is going to learn the Flop. And not everyone will invest in a pit that facilitates a safe landing, so you can get up and do it again and again. But for those who figure out the new technique—who learn to orchestrate rather than execute, to provide context rather than crunch numbers, to work with AI rather than compete against it—the ceiling disappears.
Raising the Bar
I wasn’t a particularly impressive high jumper; I didn’t have the explosive athleticism of the kids who made it look easy. What I had was a good coach and the stubbornness and discipline to work at a technique until it clicked. My coach taught me where to plant my foot, how to drive my inside knee and arms up, and then arch my back—and I made up for a lack of spring with mechanics I worked hard to hone. The Flop got me over a bar I had no business clearing on raw ability alone.
I think about that a lot these days. Not everyone is going to learn the new technique: some people are built for the straddle, and they’ll be fine… until the bar goes up. But for those of us willing to do the work, trust a method that feels unnatural, and turn our backs on the way we’ve always done things, the results could be highlight-reel-worthy indeed.
