BCG

6 items

Bloomberg Businessweek 2026-04-17-1

Consulting Used to Be a Dream First Job. AI Changed That

McKinsey is now running its internal AI tool Lilli inside the interview itself; Bain rolls out the equivalent this summer. The case interview is not dead; it has been absorbed into a tool-use assessment where prompt quality and output verification replace framework memorization as the filter. BCG's own global people chair admits the firm found "more hesitance than we thought" using AI because of quality-control risk: the elite-firm concession that AI output needs a human slop-filter, which is precisely the judgment layer every F500 hiring manager should be testing for and almost none are.

Wall Street Journal · 2026-03-31 2026-04-03-w2

Private Credit's Exposure to Ailing Software Industry Is Bigger Than Advertised

Blue Owl's reported software exposure is 11.6%; the actual figure, built company by company, is 21% — and BMC Software is sitting inside a bucket called "business services." The classification gap matters less as an accounting curiosity and more as a structural problem: if sector labels bend this far under pressure, the risk models built on top of them are measuring something adjacent to reality rather than reality itself. The same dynamic runs through the AI detection piece — five tools, one column, a 60-point spread in outputs — and through ICONIQ's retention data, where the metric everyone optimized (new logos) turns out to be the wrong one to watch. Morgan Stanley's finding that software borrowers carry the highest leverage ratios in private credit is the number that should focus attention: concentration is the visible risk, but it's the measurement system that determines whether anyone acts on it in time.
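The classification effect is easy to make concrete. A minimal sketch, with an invented four-holding book (only the 11.6%-vs-21% gap comes from the article; the names and values below are hypothetical): the same portfolio yields very different sector concentrations depending purely on which label each borrower carries.

```python
from collections import defaultdict

def sector_exposure(holdings, label_key):
    """Percent of portfolio value per sector under a given labeling."""
    total = sum(h["value"] for h in holdings)
    buckets = defaultdict(float)
    for h in holdings:
        buckets[h[label_key]] += h["value"]
    return {s: round(100 * v / total, 1) for s, v in buckets.items()}

# Hypothetical book: a BMC-style software borrower filed under
# "business services" in the reported labels, software in the rebuilt ones.
holdings = [
    {"name": "SaaS Co A", "value": 60,  "reported": "software",          "rebuilt": "software"},
    {"name": "BMC-like",  "value": 50,  "reported": "business services", "rebuilt": "software"},
    {"name": "Logistics", "value": 390, "reported": "business services", "rebuilt": "business services"},
    {"name": "Other",     "value": 500, "reported": "other",             "rebuilt": "other"},
]

print(sector_exposure(holdings, "reported"))  # software: 6.0 — looks small
print(sector_exposure(holdings, "rebuilt"))   # software: 11.0 — nearly doubles
```

One relabeled holding is enough to roughly double the visible software concentration, which is the mechanism behind the 11.6%-to-21% jump the WSJ reconstructed.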

tisram.ai 2026-03-31-m1

The Subsidy War Has No Natural Floor

The month opened with a coding race and closed with a token leaderboard, and both stories are the same story: the labs are subsidizing consumption at a rate that no pricing model has caught up to. Week one made the mechanism visible: $200 plans delivering $1,000-plus of compute, security products given away to buy enterprise platform position, and acquisition deals slowed by partner friction at exactly the moment speed mattered. Week three confirmed where that logic terminates: a Figma user running up $70K through a $20 account, Anthropic subsidizing at roughly 5x, and leaderboards gamifying consumption volume as if volume were the point. The BCG cognitive-load data from week one adds a structural wrinkle the pricing teams aren't modeling: if heavier AI usage produces measurable fatigue and diminishing returns, the utilization-rate assumptions inside every flat-rate SaaS margin projection are quietly wrong. That connects to the moat analysis in week two. The companies holding pricing power aren't the ones offering the most compute per dollar; they're the ones where switching carries real operational cost. Every SaaS platform running flat-rate AI access is accumulating a liability the income statement won't show until a cohort churns or usage spikes arrive simultaneously.
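The flat-rate exposure described above is a tail problem, which a few lines of arithmetic make visible. A minimal sketch: the $200 price and $1,000-plus compute figure come from the piece, but the subscriber cost mix below is invented for illustration.

```python
def plan_margin(price, usage_costs):
    """Average monthly margin per subscriber on a flat-rate plan."""
    revenue = price * len(usage_costs)
    return (revenue - sum(usage_costs)) / len(usage_costs)

# Nine hypothetical $200/month subscribers whose compute costs sit
# below the price: the cohort looks comfortably profitable.
light_users = [40, 60, 80, 90, 100, 120, 150, 180, 250]
print(round(plan_margin(200, light_users), 2))   # positive, ~$81/user

# Add one subscriber drawing the $1,000-plus of compute cited in the
# text and the whole cohort flips negative.
with_tail = light_users + [1000]
print(plan_margin(200, with_tail))               # -7.0
```

That single tail user is also the roughly-5x subsidy ratio in miniature ($1,000 of compute against a $200 price), and the $70K-on-a-$20-plan case is the same distribution with a longer tail.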

tisram.ai 2026-03-31-m3

Evaluation Is the Layer Nobody Built

A $25 pipeline producing publishable economic theory and 700 experiments running in two days look like productivity stories. They're actually stress tests for organizations that still measure AI value by what gets generated rather than what gets used. The legibility piece named the terminal form of this problem: AI-for-science will produce discoveries faster than labs, regulators, and clinical infrastructure can absorb them, and the bottleneck was never generation. That dynamic was already visible in week one, where the BCG data showed cognitive load spiking as oversight demands increased. The human-in-the-loop model assumes a human with enough bandwidth to loop, and that assumption is failing in practice. The tokenmaxxing story closes the arc: when consumption volume becomes the proxy for productivity, every measurement framework in the organization is now optimized for the wrong thing. What all three weeks surface, read together, is that the generation layer is effectively solved, and that the evaluation layer (scoring architecture, provenance infrastructure, translation tooling between machine output and institutional deployment) is where the next competitive advantage will be built. The companies that treat evaluation as an engineering problem now, rather than a governance afterthought, will hold a position in 18 months that no amount of inference spend can replicate.

HBR · 2026-03-11 2026-03-13-w3

When Using AI Leads to "Brain Fry"

Three AI tools is where the productivity curve flattens. BCG's data shows intensive agent oversight produces a distinct cognitive fatigue, which runs directly counter to the "human in the loop" orthodoxy underlying most enterprise AI governance. The buried signal: autonomous agents requiring less oversight may produce better human outcomes than copilot patterns demanding constant attention, reframing the safety argument for more autonomous systems from ethical preference to operational necessity. If $1,000-plus of compute delivered monthly for $200 requires sustained human supervision to be trustworthy, the productivity math degrades faster than the pricing math improves. The causal language in a cross-sectional self-report survey deserves skepticism, and the prescription is indistinguishable from a BCG engagement scope, but the structural observation holds regardless of who funded it. Organizations deploying more AI tools without redesigning oversight models are accumulating cognitive debt, not compounding returns.

HBR 2026-03-11-3

When Using AI Leads to "Brain Fry"

BCG-authored survey (n=1,488) coins "AI brain fry" – cognitive fatigue from intensive agent oversight, distinct from burnout. The three-tool productivity ceiling and oversight-as-binding-constraint findings are genuinely useful; the causal language on cross-sectional self-report data is not. The buried signal: autonomous agents requiring less oversight may produce better human outcomes than copilot patterns requiring constant attention – running directly counter to "human in the loop" orthodoxy. The prescription (organizational change management, leadership clarity) is indistinguishable from a BCG engagement scope.