
Validating Client Vision Through Discovery Visualisation Before Starting Code
The call came six weeks into development. The client — a national membership organisation — had reviewed the first working module and something was clearly wrong. Not wrong in a "the buttons are the wrong colour" sense. Wrong in a "this is not how our staff actually work" sense. The screens made logical sense from the requirements document. But the requirements document had been written by someone who understood the system in theory, but not the people who would use it for eight hours a day.
Six weeks of development had to be partially unwound. The rework cost more than the original discovery phase would have. The team was demoralised. The client was frustrated. And the irony was that everyone had agreed on the requirements in writing, months earlier, without anyone realising they were agreeing to different things.
This is not an unusual story. It is, in fact, one of the most common patterns in custom software delivery — and it is almost entirely preventable.
The Numbers Behind the Problem
The Standish Group's CHAOS Report found that only 31% of software projects succeed by their own definitions of scope, time, and budget. The Project Management Institute's research attributes 47% of unsuccessful projects to poor requirements gathering. IBM's research on the cost of requirements errors — drawing on decades of industry data — found that fixing a requirements problem post-release costs between 30 and 100 times more than fixing it during the requirements phase itself.
McKinsey Digital's analysis of large IT projects found that 45% run over budget and 56% deliver less value than predicted. One in six large IT projects, according to research published in the Harvard Business Review, experiences a cost overrun of 200% or more.
These are not statistics about bad engineering. They are statistics about misalignment — between what was understood and what was needed, between what was written and what was meant, between the software that was built and the work it was supposed to support.
"The hardest single part of building a software system is deciding precisely what to build." — Fred Brooks, The Mythical Man-Month
Why Clients Cannot Fully Articulate What They Need — And Why That Is Normal
There is a persistent assumption in software projects that if you ask a client the right questions and document the answers carefully enough, you will arrive at a complete and accurate specification. This assumption is wrong — not because clients are poor communicators, but because of how human knowledge actually works.
Domain experts operate from deep tacit knowledge. They know their processes so thoroughly that enormous amounts of critical detail have become invisible to them — they do not think to mention it because it is so obvious it barely registers as information. A hospital administrator describing a patient transfer workflow will mention the formal steps but will unconsciously omit the informal workarounds her team has used for years, the edge cases that occur every second Thursday, and the reason the system was designed the way it was rather than the more obvious way. None of this is deliberate. It is simply the nature of expertise.
Compounding this is what researchers call premature solution framing. When clients describe requirements, they often describe the solution they have imagined rather than the problem they need solved. "I need a dropdown with all the client names" is a solution. The underlying problem — that staff currently cannot quickly find the right client record during a phone call — might be better solved a dozen different ways, some of them far simpler. Requirements gathered from solution descriptions inherit the limitations of the imagined solution before anyone has validated whether it is the right one.
Then there is what might be called the "yes" problem. When stakeholders review written requirements, they tend to nod along. Words seem reasonable. Sentences parse logically. Everyone in the room agrees. The problem is that nobody can actually see the software from a requirements document. They are imagining it — and they are each imagining something slightly different. The incompatibilities between those mental models do not surface until someone opens a browser and looks at a screen.
"The single biggest problem in communication is the illusion that it has taken place." — Widely attributed to George Bernard Shaw
What Discovery Visualisation Actually Is
Discovery visualisation is the practice of converting understanding into something tangible — wireframes, interactive prototypes, workflow diagrams, screen-by-screen mockups — before a single line of production code is written. The output is not a polished design. It is a shared object: something concrete enough that stakeholders can react to it, critique it, and correct it.
The goal is not to produce a specification. The goal is to surface disagreement early, when it is cheap to address. A low-fidelity prototype takes hours to adjust. A built feature takes days or weeks. A deployed system can take months and significant cost to reorient.
In practice, discovery visualisation encompasses several layers of work:
- Workflow mapping — documenting how work actually flows through the organisation, not how it is supposed to flow on paper. This almost always reveals gaps and exceptions that written requirements miss.
- User journey mapping — tracing the experience of each persona through the system from their first interaction to their last, identifying decision points, friction areas, and the information they need at each step.
- Wireframing — low-fidelity screen layouts that establish the information architecture and the key interactions without investing in visual design. These are fast to produce and fast to revise.
- Interactive prototyping — clickable mockups that simulate the experience of moving through the system. Used to test flows with real users before committing to implementation.
- Edge case and exception mapping — identifying what happens when things go wrong, when data is missing, when users make unexpected choices. These scenarios are consistently underrepresented in written requirements and consistently expensive to address post-build.
"If a picture is worth a thousand words, a prototype is worth a thousand meetings." — Tom and David Kelley, IDEO
What Happens When Discovery Is Skipped
The case against skipping discovery is not theoretical. The largest and most documented software failures in recent history share a common characteristic: they went directly from requirements to development without subjecting either the requirements or the assumptions behind them to rigorous visual validation.
Healthcare.gov, the US federal health insurance marketplace launched in 2013, cost between $500 million and $2 billion by various estimates and failed catastrophically at launch. Investigations pointed to a fundamental absence of stakeholder alignment during design, no testing with representative users before go-live, and a procurement structure that made no provision for discovery work. The technical problems were real, but they were downstream of a more fundamental failure: nobody had visualised the end-to-end user experience and validated that it was coherent before building it.
The FBI's Virtual Case File system — designed to replace paper-based case management — consumed $170 million before being abandoned in 2005. Requirements were documented but changed repeatedly throughout development, with no mechanism to validate whether proposed changes were coherent when seen together. The system was never tested against the actual workflows of the agents who would use it. When it was finally demonstrated, it bore little resemblance to operational reality.
At a less extreme scale, a Geneca survey of business and IT executives found that 75% of respondents said projects were "doomed from the start" — meaning misalignment between stakeholders was apparent before development began, but the projects proceeded anyway. The same survey found that 80% of respondents spent at least half their time on rework. Not feature development. Rework.
Discovery Is Not a Delay — It Is an Accelerator
The most consistent objection to structured discovery is that it delays the start of real work. This reflects a misunderstanding of where the value of development time actually lies.
When a project skips discovery and goes directly to development, speed is an illusion. The team is moving quickly — building features, completing sprints, showing progress — but they are building toward a target that has not been sufficiently validated. The reckoning comes mid-project, when a stakeholder review reveals structural misalignment, or post-launch, when users interact with software that does not reflect how they actually work. At that point, the cost of correction is not the cost of discovery. It is the cost of rebuilding.
The return on discovery investment is measurable. Research published in IEEE Software found that projects that invested in prototyping and visual validation before committing to full development experienced 40% fewer requirements defects. Steve McConnell's analysis in Code Complete found that the average project spends 50% of its schedule on unplanned rework — time that structured discovery directly reduces. Nielsen Norman Group's research established that testing with five representative users identifies 85% of significant usability problems before development begins.
The financial framing is straightforward. Discovery typically represents 5–15% of total project budget. A mid-project realignment driven by structural misalignment typically costs 25–50% of budget in rework and delays. A post-launch rebuild — which is not uncommon when fundamental design assumptions prove incorrect — costs 60–100% of the original budget again. Viewed from that perspective, a two-week discovery phase that prevents a six-week rework cycle is not a delay. It is a four-week saving.
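To make the arithmetic concrete, here is a minimal sketch of that comparison. The percentages are the midpoints of the ranges quoted above; the project budget is an illustrative assumption, not a figure from any real engagement.

```python
# Illustrative cost comparison using the article's percentage ranges.
PROJECT_BUDGET = 400_000  # assumed total project budget (hypothetical)

def cost(fraction: float) -> float:
    """Cost of a phase expressed as a fraction of total budget."""
    return PROJECT_BUDGET * fraction

discovery = cost(0.10)            # midpoint of the 5-15% discovery range
mid_project_rework = cost(0.375)  # midpoint of the 25-50% rework range
post_launch_rebuild = cost(0.80)  # midpoint of the 60-100% rebuild range

# If discovery prevents even one mid-project realignment,
# the net saving is the rework avoided minus the discovery spend.
net_saving = mid_project_rework - discovery

print(f"Discovery:           ${discovery:,.0f}")
print(f"Mid-project rework:  ${mid_project_rework:,.0f}")
print(f"Post-launch rebuild: ${post_launch_rebuild:,.0f}")
print(f"Net saving:          ${net_saving:,.0f}")
```

On these assumptions, a $40,000 discovery phase that prevents a $150,000 realignment nets a $110,000 saving — and the asymmetry only grows if the alternative is a post-launch rebuild.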
Addressing the Agile Objection
Some teams resist structured discovery with an appeal to Agile principles: "We don't do big up-front design. We iterate." This is a misreading of what Agile actually prescribes.
The Agile Manifesto's principle of "responding to change over following a plan" does not mean "no planning." It means that the ability to adapt is more valuable than the ability to predict perfectly — and that the planning process should be designed to accommodate change rather than resist it. Discovery visualisation is entirely consistent with this. A low-fidelity prototype produced in week one is not a rigid specification — it is a hypothesis about what the right thing to build looks like, produced quickly enough that it can be validated, challenged, and revised before development begins.
Google Ventures' Design Sprint — arguably the most influential modern framework for rapid product discovery — runs over five days. It is explicitly about moving fast: understanding the problem, generating solutions, selecting the most promising direction, building a lightweight prototype, and testing it with real users — all within a working week. The output is not a finished product. It is validated direction. That is what effective discovery produces.
The UK Government Digital Service — which manages digital services used by millions of citizens — mandates a formal Discovery phase before any new digital service proceeds to build. This requirement followed a series of expensive failures, including elements of the NHS National Programme for IT, which consumed approximately £10 billion before being substantially abandoned. The GDS Discovery framework exists precisely because the cost of building the wrong thing at government scale is not recoverable.
How Discovery Visualisation Changes Stakeholder Dynamics
Beyond the technical benefits, discovery visualisation does something that written requirements documents cannot: it creates a shared, concrete object that all stakeholders can engage with from their own perspective.
In organisations of any complexity, different stakeholders have different — and sometimes incompatible — mental models of what a system should do. The operations manager, the finance director, the frontline staff member, and the IT lead all understand the problem through their own lens. These perspectives do not automatically reconcile in a requirements workshop. The differences between them do surface — often dramatically — when a prototype is placed in front of the room and someone says, "Wait, that's not how we actually handle that."
That moment of friction is not a failure. It is the entire point. Every disagreement surfaced during a discovery review is a disagreement that does not have to be surfaced — at far greater cost — during development or after launch. The prototype is a catalyst for the conversations that need to happen. It gives stakeholders something to react to rather than something to imagine.
Across the organisations we work with — from enterprise clients like Glencore and NSW Health through to mid-market businesses building platforms for the first time — the pattern is consistent. Discovery reviews surface requirements that were never written down, reveal edge cases that nobody had considered, and often identify entire process assumptions that turn out to be incorrect. The result is not just better-specified software. It is a project team and client that have genuinely aligned on what success looks like before the clock starts.
What Good Discovery Looks Like in Practice
Effective discovery visualisation is not a lengthy, bureaucratic phase that produces thick documents and then hands off to developers. It is a compact, focused process — typically one to three weeks, depending on scope — that produces a small number of high-value artefacts:
A validated workflow map
Not a theoretical process diagram but a validated representation of how work actually moves through the organisation — including the informal steps, the exceptions, and the edge cases that written requirements habitually omit. This becomes the foundation against which every design decision is tested.
Prioritised user scenarios
A clear view of which user types interact with the system, what they are trying to accomplish, and what the primary, secondary, and exception flows look like for each. This prevents the common failure mode of designing for the happy path while neglecting the scenarios that will consume the majority of support effort.
Tested wireframes or prototypes
Screen-level representations of the proposed solution, validated with real end users before development begins. The fidelity level is proportional to the uncertainty: low-fidelity sketches for conceptual validation, higher-fidelity interactive prototypes for complex workflows where the sequence of interactions matters.
A documented assumption log
An explicit record of the assumptions the design rests on, the decisions that were made during discovery, and the questions that remain open. This gives the development team the context they need to make good decisions when they encounter situations the discovery phase did not anticipate — which is inevitable in any project of substance.
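An assumption log does not need to be elaborate. As a sketch — every field name and the example entry below are illustrative assumptions, not a prescribed format — it can be as simple as one structured record per assumption:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Assumption:
    """One entry in a discovery assumption log (hypothetical fields)."""
    statement: str                    # the assumption the design rests on
    source: str                       # who asserted it, in which session
    status: str = "open"              # "open", "validated", or "invalidated"
    resolution: Optional[str] = None  # how and when it was settled

# Example entries (invented for illustration)
log = [
    Assumption(
        statement="Front-desk staff know the member ID before a call starts",
        source="Operations workshop, week 1",
        status="invalidated",
        resolution="Observed call handling: the ID is often not known",
    ),
    Assumption(
        statement="Renewals are always processed by the finance team",
        source="Finance director interview, week 1",
    ),
]

# Anything still open at the end of discovery is handed to the
# development team as an explicit known unknown.
open_items = [a for a in log if a.status == "open"]
```

The point is not the tooling — a spreadsheet works equally well — but that assumptions are recorded, attributed, and explicitly marked as open or settled rather than left implicit.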
Discovery Is the Opening Phase of the Project, Not What Comes Before It
One of the most important reframings for clients who are new to structured discovery is understanding that discovery is not something that happens before the project starts. It is the opening phase of the project itself. The team is active, the work is real, and the output — validated understanding of what to build — is the most valuable deliverable a project can produce.
By the end of a well-run discovery phase, the development team knows what they are building and why. Ambiguous requirements have been converted into specific, visible designs. Edge cases are documented rather than deferred. Stakeholders have seen and reacted to the proposed solution rather than imagined it independently. The scope is agreed — not as a list of features, but as a shared picture of the experience the system will deliver.
What that means for delivery is measurable. Development proceeds with far fewer interruptions for clarification. Scope changes are smaller and more surgical because the scope was interrogated more thoroughly upfront. Reviews move faster because stakeholders are reacting to refinements of something they have already validated, not encountering the design for the first time. The 50% of schedule that the average project spends on unplanned rework does not disappear entirely — but it shrinks considerably.
The iMSX Approach
After 17 years and more than 250 solutions delivered — across healthcare, government, financial services, resources, and education — the single most consistent predictor of project success we have observed is the quality of shared understanding at the start. Not the talent of the development team. Not the sophistication of the technology stack. The degree to which everyone involved — client and delivery team alike — has a concrete, validated picture of what they are building and why.
This is why discovery visualisation is not optional in how we work. It is not a premium add-on for clients with larger budgets. It is the mechanism by which we protect our clients' investment — and our own track record. Every project we deliver carries a 100% Milestone Success Rate. That record is not separable from the discipline of validating scope before a single line of production code is written.
The clients who push back on discovery — usually because they are under pressure to show progress quickly — are, without exception, the clients who most need it. The pressure to show progress is a symptom of stakeholder anxiety, and the best cure for stakeholder anxiety is a prototype review in week two: something concrete to react to, something that demonstrates the team understands the problem, something that shows the project is moving — in the right direction.
Before You Write the First Line of Code
If you are scoping a new platform, replacing an existing system, or extending a product into new territory, the most important question to ask before any development begins is: can every stakeholder who will be involved in this project look at the same screen and describe what they see in the same terms?
If the answer is not a confident yes, the discovery phase has not done its job. And if the discovery phase has not happened, the answer is almost certainly no — even if nobody knows it yet.
At iMSX, we run structured discovery on every engagement. We do it because 17 years of delivery history has made the case conclusively. If you are planning a build and want to understand what effective discovery looks like in practice — or if you are mid-project and suspect that the alignment you thought you had may not be as solid as it appeared — we are worth speaking to.
Starting a New Build?
Talk to us about our discovery process — how we validate scope, surface hidden requirements, and ensure every stakeholder is looking at the same picture before development begins.
Start a Conversation