Readiness Analytics is Broken

by Aakash Indurkhya

AI: From Models to Missions

This series dives deeper into how advanced analytics technology augments human decision-making, operationalizing mission insights to drive readiness at speed and scale throughout defense organizations.

Readiness Data Exists, Decision-Ready Insight Does Not

Ask any senior leader in the Department of War whether readiness is a solvable problem, and the answer is almost always the same: yes. They believe the data exists. They believe smarter analysis is possible. And many recognize, often painfully, that persistent underperformance in readiness outcomes is at least in part self-inflicted, a consequence of having the wrong tools, the wrong approach, or both.

And yet the problems persist.

The GAO has issued more than 100 recommendations to improve military readiness across the air, sea, ground, and space domains, and most remain unimplemented. The Army’s fully mission capable rate for watercraft dropped from 75% in 2020 to less than 40% by 2024. The F-35 program has consistently fallen short of mission-capable goals, with sustainment costs alone now estimated to exceed $1.58 trillion. These aren’t edge cases. They are the signature shortcomings of a readiness enterprise that continues to underperform despite significant investment.

So why isn’t analytics solving this? Why, after years of investment in data management platforms, predictive tools, and AI pilots, are readiness challenges still compounding rather than receding?

The answer lies in three fundamental, interconnected problems that plague nearly every readiness analytics program in the Department of War. Until all three are addressed simultaneously, not sequentially and not in isolation, analytics will keep failing to influence the decisions that actually improve readiness.

Problem 1: Fragmented and Siloed Data Prevents a Trusted View of Readiness

The DoW’s data problem is widely acknowledged. Virtually every industry report, government audit, and defense technology publication cites it. What’s less acknowledged is how the attempted solutions have, in some cases, made things worse.

For years, the promise was simple: consolidate everything into a data lake, clean up the schemas, and let analytics run. Vendors like Databricks, Snowflake, and Palantir built entire product lines around this thesis. And for small, sufficiently disciplined organizations that went all-in on a single platform, it sometimes worked.

But the DoW is not that kind of organization.

Across the Department, the response to fragmented legacy systems has been to procure even more fragmented modern systems. The result, as the Defense Innovation Board put it plainly in its 2024 report, is that “the Department operates numerous legacy systems that are often incompatible with one another” and that “the current state of data access within [DoW] vendor agreements is fragmented and inconsistent.” The DoW now faces a hybrid problem: archaic systems of record that may lack APIs entirely, layered alongside modern data management platforms that are themselves disconnected from each other.

As Joe McMahon, Senior Director at GDIT, observed in a 2025 analysis, data fragmentation “creates interoperability gaps that prevent the timely movement of information from sensor to decision-maker.” The bottleneck isn’t collection. It’s connection.

The practical impact for a readiness analyst is immediate and severe. To assemble a complete picture of a unit’s operational status, they may need to pull from a dozen different systems: legacy maintenance records, logistics databases, personnel systems, financial data, and often Excel files that exist only on someone’s laptop. Each additional source is another step, another manual reconciliation, another opportunity for the analysis to be incomplete. The DoW’s own Chief Data and AI Office has described this as the “chair swivel” problem, where analysts are forced to bounce between incompatible tools just to piece together a baseline view of reality.
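
To make the reconciliation burden concrete, here is a deliberately minimal sketch of what piecing together that baseline looks like. The system names, fields, and figures are all hypothetical, and real readiness data is far messier, but the pattern of join after join, each one a chance for keys to disagree, is the point.

```python
# Illustrative only: a toy reconciliation of a unit readiness picture from
# several disconnected sources. Every system, field, and value here is
# hypothetical; real DoW systems differ, and many lack APIs entirely.
import pandas as pd

# Source 1: legacy maintenance records (often a flat-file export)
maintenance = pd.DataFrame({
    "tail_number": ["A1", "A2", "A3"],
    "open_work_orders": [2, 0, 5],
})

# Source 2: logistics / supply system (separate keys, separate update cadence)
supply = pd.DataFrame({
    "tail_number": ["A1", "A2", "A3"],
    "parts_on_backorder": [1, 0, 3],
})

# Source 3: personnel system tracked at the unit level, not the asset level
personnel = pd.DataFrame({
    "unit": ["23rd", "23rd", "23rd"],
    "tail_number": ["A1", "A2", "A3"],
    "qualified_mechanics_available": [4, 4, 4],
})

# Source 4: the spreadsheet that exists only on someone's laptop
local_notes = pd.DataFrame({
    "tail_number": ["A3"],
    "note": ["depot induction slipped 6 weeks"],
})

# Every additional source is another join, another chance for keys to
# disagree, and another manual reconciliation step before analysis can begin.
picture = (
    maintenance
    .merge(supply, on="tail_number", how="outer")
    .merge(personnel, on="tail_number", how="outer")
    .merge(local_notes, on="tail_number", how="left")
)
print(picture)
```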

No predictive model, no matter how advanced, can compensate for an incomplete picture of the world it’s trying to predict.

Problem 2: Mirrors in the Dark – Why Prediction Alone Fails in Real Operations

Once data is assembled, or partially assembled, the dominant approach in readiness analytics has been to focus on prediction. Tell me when this aircraft will need maintenance. Tell me when my pilots will complete their qualifications. Tell me what my mission capable rate will look like in six months.

These are valuable questions. But in practice, the adoption of purely predictive solutions in the readiness space has been consistently poor. The reason is captured well by a simple analogy: if you hand someone a mirror so they can look around a corner, and then you turn off the lights, the mirror becomes useless. Knowing what’s coming doesn’t help if you can’t navigate what’s in front of you right now.

The fundamental problem with point solutions focused only on prediction is that they ignore the constraints that exist today. A commander already drowning in maintenance backlogs, personnel shortfalls, and parts shortages cannot afford to shift attention to a five-year readiness forecast, no matter how accurate it might be. If that forecast doesn’t account for the current parts shortage, the three mechanics who are deployed, and the fact that the depot work order was just delayed by six weeks, then it isn’t a forecast that can be acted on. It’s wishful thinking dressed up as analysis.
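
A toy sketch makes the gap visible. The improvement rates, lead times, and staffing figures below are invented for illustration, and no real forecasting model is this simple, but the contrast between a projection computed in a vacuum and one that acknowledges today’s constraints is the heart of the problem.

```python
# Illustrative only: why a forecast that ignores today's constraints is not
# actionable. All rates, lead times, and staffing numbers are made up.

def naive_forecast(current_mc_rate: float, monthly_improvement: float,
                   months: int) -> float:
    """Project mission-capable rate assuming unconstrained maintenance throughput."""
    return min(1.0, current_mc_rate + monthly_improvement * months)

def constrained_forecast(current_mc_rate: float, monthly_improvement: float,
                         months: int, parts_lead_time_months: int,
                         mechanics_available: int, mechanics_required: int) -> float:
    """Same projection, but no improvement is credited until parts arrive,
    and throughput is scaled by the mechanics actually on hand."""
    effective_months = max(0, months - parts_lead_time_months)
    staffing_factor = min(1.0, mechanics_available / mechanics_required)
    return min(1.0, current_mc_rate + monthly_improvement * effective_months * staffing_factor)

# A six-month outlook looks very different once the current parts shortage
# and the deployed mechanics are taken into account.
print(naive_forecast(0.55, 0.04, 6))            # ~0.79
print(constrained_forecast(0.55, 0.04, 6,
                           parts_lead_time_months=2,
                           mechanics_available=5,
                           mechanics_required=8))  # ~0.65
```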

The GAO’s own body of work illustrates this vividly. Maintenance and supply issues have repeatedly limited aircraft availability, not because leaders lacked predictive models, but because the constraints that govern day-to-day sustainment operations were never fully integrated into the analytical picture. When predictive outputs don’t account for actual operational constraints, they don’t drive adoption. They sit on slides at briefings and get replaced by gut instinct and tribal knowledge, which is exactly what they were supposed to displace.

The Military.com reporting on the Pentagon’s early AI adoption mistakes put it directly: “The Pentagon’s biggest mistake was not moving too fast or spending too much. It was trying to deploy AI on top of fragmented, outdated data systems that were never built to support it.” The GAO corroborated this, finding between 2021 and 2022 that the DoD “lacked complete, accurate, and standardized data needed to support advanced analytics across logistics, maintenance, and readiness.” Without a holistic view of current constraints, predictive tools produce outputs that operators cannot consistently trust, and eventually stop using.

Problem 3: Cliffhanger Analysis – When Insight Stops Short of the Decision

Even when data is reasonably complete and the analysis is sound, there’s a third failure mode that consistently undermines impact: the analysis arrives without the last mile.

Here’s how it plays out.

A team works hard, produces a thorough analytical product, and brings it to a decision-maker. The decision-maker engages with it, which is a good sign, and then says: “This is useful, but I need you to adjust for this constraint. Show me a hybrid of these two courses of action. And account for this factor I didn’t mention earlier.”

The analysis, while correct, was incomplete relative to the decision actually being made.

This isn’t the analyst’s fault. It’s a structural problem. No static dashboard, no pre-configured report, no fixed analytical output can anticipate all the nuances that a decision-maker will bring to bear when working through a hard problem. Decision-makers need to be able to influence, interrogate, and iterate on analysis in real time. When the tools are too rigid to accommodate that last-mile customization, the analysis falls short of the decision it was meant to inform. The window closes and the decision is made elsewhere, often based on instinct instead.
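
A minimal sketch of the difference between a static product and one a decision-maker can iterate on, using entirely hypothetical courses of action, scores, and constraints: what matters is that changing an assumption and re-running takes seconds, not another week of staff work.

```python
# Illustrative only: the "last mile" problem in miniature. A static report
# freezes its assumptions; a parameterized analysis lets the decision-maker
# change a constraint and immediately re-run. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    name: str
    aircraft_recovered: int
    depot_slots_used: int
    cost_millions: float

def score(coa: CourseOfAction, depot_slots_available: int,
          budget_millions: float) -> float:
    """Return recovered aircraft, or 0 if the COA violates current constraints."""
    if coa.depot_slots_used > depot_slots_available or coa.cost_millions > budget_millions:
        return 0.0
    return float(coa.aircraft_recovered)

coas = [
    CourseOfAction("Surge organic maintenance", 6, 2, 4.0),
    CourseOfAction("Contract depot overflow", 9, 5, 7.5),
]

# First pass: the assumptions the analyst briefed.
print(max(coas, key=lambda c: score(c, depot_slots_available=5, budget_millions=8.0)).name)

# "Adjust for this constraint I didn't mention": a depot slot just fell through.
print(max(coas, key=lambda c: score(c, depot_slots_available=3, budget_millions=8.0)).name)
```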

Researchers at the Strategy Bridge have documented this pattern at the combatant command level, noting that while descriptive and predictive analytics can illuminate blind spots, the inability to rapidly recontextualize analysis around a commander’s specific constraints often renders it irrelevant to the actual decision at hand. As one defense strategy journal bluntly noted: “The best strategy too late is useless.” This applies equally to the best analysis.

The result is that analytics programs, despite genuine investment and effort, consistently fail to close the loop between insight and decision. The analysis gets done. The decision gets made anyway, elsewhere, without it.

How These Three Problems Compound

These three failures are not independent. They compound each other.

Fragmented data makes it impossible to account for real-world constraints.
An incomplete view makes predictive outputs untrustworthy.
Decision-makers can’t rely on the analysis, so they improvise instead.

Each gap in the chain compounds the others.

Historically, the defense community has largely addressed these failures one at a time: one vendor to unify data, a different vendor for predictive analytics, another tool for visualization. The result is a patchwork that still leaves commanders without the integrated, adaptive, decision-ready intelligence they actually need.

Solving readiness analytics isn’t about deploying better predictions. It’s about connecting the data, grounding the predictions in today’s operational reality, and enabling the analysis to go the full distance, all the way to the moment of decision.

Virtualitics Is Already Doing This

Virtualitics has spent nearly a decade building toward exactly this integrated approach, and has already operationalized it.

Integrated Readiness Optimization (IRO) and Iris, built on the Virtualitics AI Platform (VAIP), tackle all three of these problems simultaneously. IRO connects data across disparate sources, including not just modern cloud platforms but legacy systems of record, operational databases, and even spreadsheets living on desktops, to ensure that the analytical picture actually reflects operational reality. From that foundation, IRO provides predictive and prescriptive capabilities grounded in the constraints commanders are actually working within, not an idealized data environment.

What truly distinguishes Virtualitics’ approach is our agentic layer, Iris.

Virtualitics Iris introduces an interaction model that sits above the analytics and enables the last mile that traditional tools always leave behind. Through Iris, decision-makers can interrogate the analysis in natural language, adapt it to their specific questions, and explore courses of action in real time. The dashboard becomes a dynamic, agentic experience in which the system meets the decision-maker where they are, rather than forcing them to work within the tool’s constraints.

This isn’t a roadmap item; it is in active deployment today. And the signal from both existing and prospective customers is clear: this integrated, agent-native approach is what the readiness community has been waiting for.

The three problems that have broken readiness analytics for decades are solvable. The solution isn’t another standalone dashboard or a smarter prediction engine. It’s an integrated, AI-native platform that connects the data, grounds the analysis, and goes the full distance to the decision. Virtualitics is already there, and the work is just getting started.

Virtualitics builds AI-native readiness intelligence solutions for the Department of War. The Virtualitics IRO product suite and Iris platform are in active deployment across the U.S. military.
