One Day. Four Findings. $380K in Recoverable Capacity. Here’s What We Found.

Monday 4th May

The Business That Didn’t Know What It Didn’t Know

There is a particular kind of business problem that never makes it onto the agenda.

It is not urgent enough to trigger a crisis. It does not show up as a single line item on the P&L. Nobody is responsible for it, which means nobody is accountable for fixing it. And because it has been present long enough to feel normal, the team has stopped noticing it altogether.

This is the problem that quietly compounds. Year after year, it absorbs margin, erodes capacity, and ties up capital that should be working harder. The business keeps moving forward, but not as fast as it should. The founder keeps working harder, but the results do not quite reflect the effort.

That is exactly where this manufacturer was when they reached out to us.

A solid operation by any reasonable measure. Sixty-two employees. Approximately nine million dollars in annual revenue. A strong product, a loyal client base, and a team that genuinely wanted to perform. The founder had built something real over more than a decade of hard work.

But the numbers were telling a different story. Margins had been quietly declining for three years even as revenue grew. The founder was working longer hours than ever and making more decisions than ever, yet the business felt like it was running at a ceiling it could not break through. A recent attempt to increase capacity by adding headcount had not delivered the throughput improvement needed to justify it.

Nobody could point to the problem. That was the problem.

We ran a 1-Day Operational Diagnostic. Here is what we found.

Before We Start: What a Diagnostic Actually Involves

It is worth being specific about what happens on the day, because the term ‘diagnostic’ can mean very different things in a consulting context.

We arrive at the business at the start of the working day. We spend time on the production floor, in the warehouse, and in conversations with the people who actually run the operation day to day. We look at the scheduling system, the quality records, the purchasing history, and the financial reporting. We ask the kind of questions that rarely get asked internally, not because the team is not capable of asking them, but because everyone is too close to the operation to see it clearly.

We are not there to judge. We are not there to build a case for a lengthy engagement. We are there to find where the real constraints are sitting, quantify what they are costing, and identify the specific changes that would release that value.

By the end of the day, we have a clear picture. Within a few days, the client has a written findings report with four to six prioritised recommendations and a practical implementation roadmap.

On this engagement, the findings came in at four. Three of them carried a dollar figure; together, those came to $380,000 in recoverable value. The fourth was measured in the founder’s time rather than in dollars.

Finding 1: Rework and Defect Costs Were Absorbing $110,000 Per Year

When we looked at the production data, the first thing that stood out was the volume of rework.

Not catastrophic failures. Not obvious rejects that stopped the line. Ordinary, everyday rework that had been quietly normalised over years of operation. Jobs that needed a second pass. Components that did not meet spec on the first run and had to be adjusted before they could move to the next stage. Finished goods that came back from the quality check with minor defects that required remediation before dispatch.

Individually, none of these incidents felt significant. A few minutes here. Half an hour there. The team handled them as a matter of course, because handling them was simply part of the job. They had become invisible.

The problem with normalised defects is that they stop being treated as a cost and start being treated as part of the process. The time spent fixing, re-running, and re-inspecting is not tracked against a rework budget. It does not appear as a line item on the P&L. It is simply absorbed into direct labour, which means it makes the labour cost look high without anyone understanding why.

We spent time mapping the most common rework triggers across the production floor. Three recurring failure points accounted for approximately 70 percent of the total rework volume. Two of them were preventable with a simple upstream quality gate. The third was a specification ambiguity that had been causing inconsistent interpretation between two production teams for over eighteen months, with neither team aware the other was reading the spec differently.

When we quantified the total cost across the year, including direct labour, materials consumed in failed runs, and the throughput time lost to rework cycles, the number came to approximately $110,000 per annum. Not a single large incident. Thousands of small ones that had become invisible.
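
To make the arithmetic concrete, here is a minimal sketch of the calculation. The incident counts, hours, and costs are invented for illustration, not the client’s actuals:

```python
# Illustrative only: these counts, hours, and costs are invented to show
# the shape of the calculation, not the client's actual figures.

LABOUR_RATE = 45.0  # fully loaded direct-labour cost per hour, $

# (trigger, incidents per year, rework hours each, materials scrapped each $)
rework_triggers = [
    ("second-pass machining",    800, 0.75, 20.0),
    ("out-of-spec components",   500, 1.50, 55.0),
    ("pre-dispatch remediation", 400, 0.50, 12.5),
]

total = 0.0
for trigger, incidents, hours, materials in rework_triggers:
    annual_cost = incidents * (hours * LABOUR_RATE + materials)
    total += annual_cost
    print(f"{trigger:26s} ${annual_cost:10,.0f} per year")

# Throughput time lost to rework cycles would sit on top of this figure.
print(f"{'total':26s} ${total:10,.0f} per year")
```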

The fix was straightforward. A quality gate introduced at two critical production stages, before the most common failure points rather than after. A revised acceptance standard documented clearly, reviewed with both production teams in a single 45-minute session, and signed off by the production manager. A weekly defect tally made visible to the production floor, with a brief Friday review to track the trend.

No new headcount. No new software. No external quality programme. Just visibility, a clear standard, and accountability for the number.

Within six weeks, rework volume had dropped by more than 40 percent. The team did not need to be told twice. Once the cost was visible and the standard was clear, the improvement followed naturally.

If quality and rework costs are a recurring theme in your operation, this post goes deeper on why the problem persists and what the system fix looks like: The $240K Quality Problem Your Team Has Stopped Seeing

Finding 2: The Founder Was the Bottleneck on 23 Recurring Decisions

The second finding was less comfortable to deliver, because it was about the founder rather than the operation.

One of the questions we ask during a diagnostic is simple: what decisions do you need the owner involved in before you can move forward? We ask it separately of the production manager, the operations coordinator, the customer service team, and whoever handles purchasing. The answers are rarely identical, but they are almost always illuminating.

In this case, the list was longer than anyone in the room expected, including the founder.

Purchase orders above $2,000 required sign-off. Any deviation from a standard job specification needed a call. Customer complaints above a certain tier were always escalated to the founder directly. Quotes for non-standard work had to be reviewed before they went out. Any scheduling change when a machine went down needed a decision from the top.

In total, we identified 23 recurring decision types that routinely came back to the founder before the team could move. Each one was individually reasonable. The founder was not micromanaging out of a desire for control. Most of these touchpoints had been established years earlier for good reasons, and nobody had ever reviewed whether those reasons still applied.

But the cumulative effect was significant. Each decision type, on average, added between two and eight hours of delay to the relevant process. Urgent decisions got made quickly because they could not wait. Non-urgent ones sat in a queue behind everything else the founder was managing. The team had learned, over time, not to push too hard. They waited. The business waited with them.

The psychological dimension here is worth acknowledging directly, because it is present in almost every founder-led business we work with. Founders hold onto decisions not because they do not trust their teams, but because releasing control feels like losing visibility. If I am not in the loop, how do I know what is happening? The answer, paradoxically, is that staying in the loop on 23 routine decision types actually reduces visibility on the decisions that genuinely matter.

The fix was a one-page decision authority matrix. Not a lengthy governance document. A single page that defined, for each of the 23 decision types, who owned it, what the threshold was, and when escalation was genuinely required. The founder retained final say on strategy, major capital expenditure, key client relationships, and anything with a material risk profile. Everything else was formally delegated, with clear owners and clear thresholds.
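
To make the format concrete, here are a few illustrative rows. The owners and thresholds below are invented, not the client’s actual matrix:

  • Purchase orders: owned by the operations coordinator up to a set dollar limit per order; a single order above the limit goes to the founder.
  • Specification deviations: owned by the production manager for documented standard variants; escalate when a key client is affected.
  • Customer complaints: owned by the customer service lead up to a defined remediation cost; escalate for key accounts or anything with safety or legal exposure.
  • Machine-down rescheduling: owned by the production manager for routine changes; escalate when a committed delivery date is at risk.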

The matrix was introduced in a single team meeting. The founder presented it, not us. That distinction mattered. Within three weeks, the daily interruptions to the founder had dropped by more than half. The team reported feeling more capable and more trusted. The founder reported, for the first time in several years, leaving the office before six o’clock with a clear conscience.

Founder dependency shows up in revenue systems as well as operations. This post covers what it looks like when the pipeline lives in one person’s head: You’ve Just Realised Your Entire Sales Pipeline Lives in Your Head

Finding 3: Scheduling Gaps Were Leaving 12 Percent of Production Capacity Unused

The production schedule looked full. That was the first thing the production manager told us when we sat down together. “We are running at capacity. There is no slack in the system.”

It is a comment we hear often, and it is almost never entirely accurate. Not because production managers are wrong, but because ‘busy’ and ‘fully utilised’ are not the same thing. A production floor can feel relentlessly busy while consistently under-delivering against its theoretical capacity, and the gap between the two is rarely visible from the inside.

We spent time mapping the actual sequence of jobs across a four-week period against the theoretical capacity of the key production assets. The pattern that emerged was consistent. Jobs were being scheduled in an order determined primarily by customer due dates and urgency, which is a reasonable starting point, but nobody was optimising the sequence for setup efficiency. The result was a high volume of changeovers, many of which were taking significantly longer than necessary because the previous job had left the machine in a configuration that required a full reset rather than a minor adjustment.

Urgent jobs were being inserted into the schedule at short notice, which disrupted the sequence further and created pockets of idle time on other assets while they waited for upstream work to clear. The schedule was reactive rather than structured. The team was working hard to manage it, but the system itself was working against them.

When we mapped the actual utilisation data, the gap between scheduled capacity and actual productive output was running at approximately 12 percent. On a production asset base of this scale, that represented a meaningful volume of recoverable throughput: approximately $140,000 in additional annual revenue at the business’s average margin, without adding a single machine or a single person.

The fix had two components. First, a revised scheduling template that grouped jobs by setup family, so that similar configurations were run sequentially rather than interspersed with unrelated work. This reduced average changeover time significantly for the highest-volume job types. Second, a 48-hour frozen schedule window, meaning that once a two-day schedule was confirmed, inserting an urgent job required explicit sign-off from the production manager and an acknowledgement of which job would be displaced. Urgent jobs could still jump the queue. They just could not do so invisibly.
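
For readers who want the logic spelled out, here is a minimal sketch of both components, using invented job data rather than the client’s schedule:

```python
# Illustrative sketch, not the client's scheduler. Component one: group
# jobs by setup family so similar machine configurations run back to
# back, then order each family by due date.

from dataclasses import dataclass
from datetime import date
from operator import attrgetter

@dataclass
class Job:
    job_id: str
    setup_family: str  # the machine configuration this job requires
    due: date

jobs = [
    Job("J-101", "die-A", date(2024, 5, 9)),
    Job("J-102", "die-B", date(2024, 5, 8)),
    Job("J-103", "die-A", date(2024, 5, 8)),
    Job("J-104", "die-B", date(2024, 5, 10)),
]

# One changeover per setup family instead of one per job.
schedule = sorted(jobs, key=attrgetter("setup_family", "due"))

# Component two: urgent work can still jump the queue inside the frozen
# 48-hour window, but not invisibly.
def insert_urgent(schedule: list, urgent: Job, displaces: Job, approved_by: str) -> list:
    if not approved_by:
        raise ValueError("frozen window: production manager sign-off required")
    schedule.insert(schedule.index(displaces), urgent)
    return schedule
```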

The production manager was initially cautious about the frozen window. The concern was that it would slow the response to urgent customer requests. In practice, the opposite happened. Because the schedule was more predictable, the team could give customers accurate delivery commitments more consistently, which reduced the volume of urgent escalations driven by uncertainty.

Finding 4: Excess Inventory Was Tying Up $130,000 in Working Capital

The fourth finding was in the warehouse.

When we looked at the raw material stock levels for the business’s three highest-volume inputs, the on-hand quantities were running at approximately 14 weeks of cover. The target, based on actual supplier lead times and the business’s order frequency, should have been six to eight weeks.

The excess had not accumulated through a single bad decision. It had drifted there gradually, and the story behind it was entirely understandable. Two years earlier, the business had experienced a supply disruption that had caused a production stoppage and a missed customer delivery. It was a painful episode, and the lesson everyone took from it was to carry more stock. The purchasing team had quietly adjusted their reorder behaviour in response, and nobody had ever revisited the question of how much cover was actually necessary.

Nobody made a bad decision. The system just drifted.

That phrase is worth sitting with, because it describes a pattern we see in almost every business we work with. Processes and behaviours that made sense at a particular moment in time continue long after the circumstances that justified them have changed. The supply disruption was real. The caution was appropriate at the time. But two years later, with a more diversified supplier base and a better understanding of lead time variability, carrying 14 weeks of cover was no longer necessary. It was just the default.

The capital tied up in that excess inventory was approximately $130,000. That money was sitting on shelves, unavailable for anything else: not for investment in growth, not for a buffer against a slow month, not for the equipment upgrade the production team had been discussing. It was locked in raw material that would not be consumed for months.

There is also a secondary cost that rarely gets calculated. Excess inventory requires space. Space has a cost, whether it is rent, opportunity cost, or both. In this case, the warehouse was running close to capacity, and the business had been considering whether it needed to expand its storage footprint. The first question we asked was whether a reduction in inventory levels might resolve the capacity issue before any capital was committed to additional space. The answer was yes.

The fix was a par-level system for the top 20 raw materials. Each material was assigned a reorder point based on actual supplier lead time plus a reasonable buffer, and a maximum stock level that prevented over-ordering. The system was reviewed quarterly to account for seasonal variation and supplier performance. It took approximately half a day to build and implement.
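
The arithmetic behind a par level is simple enough to sketch. The usage figures, lead times, and costs below are invented, but with a two-week buffer the maximum levels land in the six-to-eight-week cover range described above:

```python
# Illustrative par-level arithmetic with invented data. Reorder point =
# expected usage over the supplier lead time plus a safety buffer; the
# maximum level caps how far over-ordering can drift.

SAFETY_WEEKS = 2  # buffer on top of the quoted supplier lead time

materials = [
    # (material, avg weekly usage in units, lead time in weeks, unit cost $)
    ("sheet steel 2mm", 120, 4, 38.00),
    ("powder coat",      40, 3, 22.00),
    ("fasteners",       900, 2,  0.40),
]

for name, weekly_usage, lead_weeks, unit_cost in materials:
    reorder_point = weekly_usage * (lead_weeks + SAFETY_WEEKS)
    max_level = weekly_usage * (lead_weeks + 2 * SAFETY_WEEKS)
    cover_weeks = max_level / weekly_usage
    print(f"{name:16s} reorder at {reorder_point:5d} units, "
          f"cap {max_level:5d} units "
          f"({cover_weeks:.0f} weeks of cover, "
          f"${max_level * unit_cost:,.0f} on the shelf at most)")
```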

The excess stock was drawn down over 90 days as production consumed it. There was no write-off, no waste, no disruption to production. The working capital was released progressively back into the business, and by the end of the quarter the warehouse had enough spare capacity that the storage expansion question was quietly shelved.

The Total Picture: Where $380,000 Came From

Four findings. One structured day of investigation. Three of them carried a dollar figure:

  • Rework and defect reduction: $110,000 per annum in recovered direct labour and materials
  • Capacity recovered through scheduling optimisation: $140,000 per annum in additional throughput potential
  • Working capital released from excess inventory: $130,000 (one-time release over 90 days)

Total: $380,000.

The fourth, the decision authority matrix, does not appear in that total. Its return was the founder’s time, with daily interruptions down by more than half within three weeks.

None of these fixes required new software. None required additional headcount. None required a lengthy implementation programme or an ongoing consulting retainer. They required clarity about what was actually happening, a structured method for finding it, and the willingness to act on what the data showed.

The founder’s comment at the end of the findings presentation has stayed with me. He said: “I knew something was wrong. I just did not know it was this specific.”

That is the value of a diagnostic. Not the report. Not the recommendations. The specificity. The difference between knowing that margins are under pressure and knowing exactly which three production failure points are absorbing $110,000 a year. The difference between feeling like the founder is too involved in the operation and having a one-page matrix that defines precisely which 23 decisions to let go of.

Vague problems do not get solved. Specific problems do.

What the 1-Day Operational Diagnostic Actually Is

The 1-Day Operational Diagnostic is a structured, on-site investigation of your business. It is not a strategy session. It is not a workshop. It is a diagnostic, in the same way that a thorough medical check-up is a diagnostic: systematic, evidence-based, and focused on finding what is actually present rather than confirming what you already suspect.

We look at four areas: your production or service delivery process, your financial visibility and reporting, your decision-making structure and where it creates bottlenecks, and your key operational metrics and whether they are measuring the right things.

We talk to your team, not just your leadership. The people closest to the operation almost always know where the friction is. They just rarely get asked.

At the end of the day, we debrief with you directly. Within a few days, you receive a written findings report with specific, prioritised recommendations and a clear implementation roadmap. Not a strategy document. Not a set of high-level observations. A plan that identifies the four to six highest-value changes available to your business right now, with enough detail to act on the following Monday.

The diagnostic is priced between $2,000 and $2,500 depending on business size and complexity, plus travel where applicable. For most businesses at the $5 million to $20 million mark, the recoverable value identified on the day is significantly higher than the cost of finding it. In the case study above, the return on the diagnostic investment was more than 150 to one.

If you have been running hard and feeling like the results do not quite match the effort, the diagnostic is where you start. Not with a strategy review. Not with a new hire. With a clear picture of what is actually happening inside the operation.

Because you cannot fix what you cannot see.

Ready to Find Out What’s Inside Your Business?

Book a 30-minute discovery call and we will talk through whether the 1-Day Operational Diagnostic is the right starting point for your business. No obligation, no hard sell. Just a direct conversation about what you are dealing with and whether we can help.

Book your diagnostic here: calendly.com/fbsconsulting-info/30min

Or visit fbsconsulting.com.au to learn more about how the diagnostic works and what other business owners have found when they finally looked closely.