Research · Social Housing

Programme-Scale Financial Simulation

What the Financial Modelling Output Looks Like at the Standard the Procurement Demands

12 March 2026 · Greg Williams · steko.co.nz/thinking

The Strategic Analysis diagnosed the landscape. This simulation demonstrates what a structured analytical response looks like when built to the standard the procurement demands. An 80-home Flexible Fund programme across three synthetic locations is modelled through a three-layer analytical stack — multi-criteria screening, deterministic financial modelling in HUD’s prescribed format, and Monte Carlo probabilistic analysis across 10,000 scenarios. Every figure is based on synthetic assumptions derived from public data and the HUD application pack. The value is not in the precision of the numbers but in the structural dynamics they reveal — dynamics that persist regardless of the specific cost inputs.

Five findings from the simulation

01. The benchmark gap is structural at these cost assumptions. Weighted average Year 1 cost per place is $62,019 — 35% above the $46,000 HUD benchmark. In 10,000 Monte Carlo scenarios, not one produced a cost below benchmark. Development cost is the binding variable.

02. The contingency level is a governance lever worth $8,700 per unit. At Medium (50%), the HUD contingency formula is self-sustaining across all scenarios. But it adds $15,000–$17,500 per unit to the Year 1 cost. Dropping to Low saves ~$8,700 per unit. A board risk-appetite decision.

03. The capital allocation is tight at central equity. The central case supports 76 of 80 modelled units at a 15% equity contribution — a $236,000 shortfall. The CFO’s confirmed capital envelope is the binding input.

04. The wraparound gap is structural and unfunded. Annual wraparound cost of $552K–$840K across 80 homes is not included in the Agreed Amount. Over 25 years: $17.7M–$26.9M in cumulative unfunded exposure.

05. Four inputs convert this illustration into submission content. Aspirational locations, capital envelope, Budget 2025 position, and existing programme cost data. The simulation engine re-runs with actual data in under an hour.
Each finding connects directly to the landscape mapped in the Strategic Analysis. Together, they define the analytical terrain any CHP must navigate before 24 April.
Five numbers that frame the conversation:

#1 · $62,019 · Weighted average Year 1 cost per place (35% above the $46K HUD benchmark)
#2 · $162.7M · Total 25-year HUD investment (Crown whole-of-life cost at programme scale)
#3 · 76 / 80 · Central equity sufficiency (4-unit shortfall at $6M / 15%)
#4 · $552K–$840K · Annual wraparound gap, unfunded (scored at assessment, excluded from contract)
#5 · $17.7M–$26.9M · 25-year cumulative wraparound exposure (board governance decision required)

Numbers 1–3 are solvable with better data. Numbers 4–5 are structural and persist regardless of data quality.

The Analytical Framework

The HUD financial model is a submission instrument. It tells HUD what you are proposing. It does not tell the organisation whether the proposal is viable. The Strategic Analysis identified this gap as the centrepiece risk: the absence of a pre-submission feasibility platform that stress-tests assumptions before commitment. This simulation engine is that platform.

Three-layer analytical stack:

1. Multi-Criteria Analysis: screen locations before any financial modelling. Output: priority ranking.
2. Deterministic HUD Model: produce the Agreed Amount per location in HUD’s prescribed format. Output: Memo Table format.
3. Monte Carlo Analysis: probabilistic risk overlay across 10,000 scenarios. Output: probability distributions.

The engine is reusable: swap the config file, re-run with actual data. Stage 1 to Stage 2 is a config change, not a rebuild.

MCA screens on non-price grounds first (because HUD scores non-price at 80%), then deterministic modelling, then probabilistic analysis

The three layers operate sequentially. Multi-Criteria Analysis screens locations before any financial modelling, preventing capital allocation to locations that cannot be defended on non-price grounds. The deterministic model produces per-location financial outputs in HUD’s prescribed format. Monte Carlo provides probability distributions across 10,000 iterations, replacing single-point estimates with confidence intervals.

Input assumptions — synthetic vs actual

Every figure in this simulation is based on synthetic assumptions derived from public data and the HUD application pack. The following table shows what is synthetic and what actual data is needed to convert the simulation to submission-ready output.

Every synthetic assumption must be replaced before submission:

Parameter | Synthetic Assumption | Actual Data Required
Programme scale | 80 homes, 3 locations | Board-confirmed scope
South Auckland dev cost | $480,000/unit (excl GST) | QS estimate or programme actuals
Hamilton dev cost | $420,000/unit (excl GST) | QS estimate for Waikato
Christchurch dev cost | $430,000/unit (excl GST) | QS estimate for Canterbury
Equity contribution | 15% central ($6M) | CFO-confirmed capital envelope
Interest rate (Yr 1–5) | 5.50% (CHFA assumption) | CHFA engagement outcome
Operating costs | $12,625/unit Year 1 | Actual programme operating data
Contingency level | Medium (50%) | CFO-justified level

Development costs are the highest-sensitivity assumptions — they drive 67–76% of the benchmark gap.

The right column tells the organisation exactly what data is needed to convert synthetic to actual

The simulation engine is designed to be reusable. The calculation core is separated from the input configuration: swapping the config file and re-running with actual data converts this synthetic illustration into submission-ready output. Stage 1 to Stage 2 is a config change, not a methodology rebuild.
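To make that separation concrete, here is a minimal sketch of the config-driven pattern. All keys and function names are hypothetical illustrations (the engine’s actual schema is not published); the values mirror the synthetic assumptions table above.

```python
import json

# Hypothetical Stage 1 configuration, mirroring the synthetic assumptions above.
# Stage 2 swaps this for a file with QS-validated costs and CFO-confirmed equity.
SYNTHETIC_CONFIG = json.loads("""
{
  "locations": [
    {"name": "South Auckland", "units": 40, "dev_cost_excl_gst": 480000},
    {"name": "Hamilton City",  "units": 25, "dev_cost_excl_gst": 420000},
    {"name": "Christchurch",   "units": 15, "dev_cost_excl_gst": 430000}
  ],
  "equity_rate": 0.15,
  "interest_rate": 0.055,
  "operating_cost_per_unit": 12625,
  "contingency_level": 0.50
}
""")

def run_engine(config: dict) -> dict:
    """Placeholder calculation core: identical code for Stage 1 and Stage 2.
    Only the configuration it is handed changes."""
    units = sum(loc["units"] for loc in config["locations"])
    return {"programme_units": units, "locations": len(config["locations"])}

print(run_engine(SYNTHETIC_CONFIG))
```

The design choice the article describes is exactly this: the calculation core never hard-codes an assumption, so replacing synthetic inputs with actuals touches only the JSON.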

Location Screening and Priority Ranking

HUD scores non-price criteria at 80% of total marks before the cost envelope is opened. An organisation that builds a competitive financial model for a location it cannot defend on non-price grounds has wasted its modelling effort. The Multi-Criteria Analysis imposes screening discipline: no location enters the financial model without passing a weighted assessment against six criteria mapped to HUD’s own assessment priorities.

Multi-Criteria Analysis — six criteria weighted to HUD priorities:

Delivery Capability: 25% (HUD Criterion 1)
Wraparound Reach: 20% (HUD Criteria 1+5)
Financial Viability: 20% (HUD Criteria 6+7)
Strategic Alignment: 15% (HUD Criterion 2)
Delivery Timeline: 10% (HUD Criterion 4)
Community Need: 10% (HUD Criterion 5)

Pass threshold: 6.0/10 weighted score. Locations below threshold are screened out before financial modelling.

The screening discipline prevents capital allocation to locations that cannot be defended on non-price grounds
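The screening computation itself is a simple weighted sum. The weights and pass threshold below are as published; the per-criterion score set is a hypothetical illustration (the report publishes only weighted totals and selected strengths), chosen to be consistent with South Auckland’s reported strengths.

```python
# MCA weights as published; sum to 1.0.
WEIGHTS = {
    "delivery_capability": 0.25,
    "wraparound_reach":    0.20,
    "financial_viability": 0.20,
    "strategic_alignment": 0.15,
    "delivery_timeline":   0.10,
    "community_need":      0.10,
}
PASS_THRESHOLD = 6.0  # out of 10

def weighted_score(scores: dict) -> float:
    """Weighted sum of 0-10 criterion scores."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

def screen(scores: dict) -> bool:
    """A location enters financial modelling only if it passes the threshold."""
    return weighted_score(scores) >= PASS_THRESHOLD

# Hypothetical score set, consistent with South Auckland's published
# strengths (Delivery 9/10, Wraparound Reach 9/10) and 8.15 total:
example = {
    "delivery_capability": 9,
    "wraparound_reach":    9,
    "financial_viability": 7,
    "strategic_alignment": 8,
    "delivery_timeline":   7,
    "community_need":      8,
}
print(round(weighted_score(example), 2), screen(example))
```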

Location priority ranking — all three pass MCA screening:

Rank 1 — South Auckland: 8.15/10 · 40 units · PASS. Anchor location. Established housing operations. Wraparound services infrastructure in place. Strengths: Delivery 9/10, Wraparound Reach 9/10.

Rank 2 — Hamilton City: 6.55/10 · 25 units · PASS. Expansion location. HUD priority investment centre. Regional operations present. Strengths: Financial Viability 8/10, Strategic Alignment 8/10.

Rank 3 — Christchurch: 6.45/10 · 15 units · PASS. Consolidation location. Existing regional presence. HUD investment location. Smaller scale. Strengths: Financial Viability 8/10, Delivery Capability 6/10.

Three different stories: anchor, expansion, and consolidation — each with distinct competitive strengths and weaknesses

South Auckland carries half the programme and is the location where capability evidence is strongest. The financial model for South Auckland must be the strongest in the submission. Hamilton is the expansion play — strong on strategic alignment, weaker on delivery capability because there is no current housing stock. Christchurch is the consolidation location — financially competitive but smaller in scale.

The Financial Model Output

The Strategic Analysis identified the $46,000 per-place benchmark as the number against which every submission will be evaluated. The simulation confirms and quantifies the challenge. At synthetic development cost assumptions, every location exceeds benchmark. The programme weighted average is $62,019 — 35% above.

HUD Memo Table format — Year 1:

Metric | South Auckland | Hamilton City | Christchurch | Programme
Units | 40 | 25 | 15 | 80
Dev cost/unit (incl GST) | $552,000 | $483,000 | $494,500 | –
Debt servicing/unit p.a. | $34,979 | $30,606 | $31,335 | –
Operating costs/unit p.a. | $12,625 | $12,625 | $12,625 | –
Contingency/unit p.a. | $17,489 | $15,303 | $15,667 | –
Year 1 Cost per place | $65,093 | $58,534 | $59,627 | $62,019
HUD benchmark | $46,000 | $46,000 | $46,000 | $46,000
Variance | +$19,093 (+41%) | +$12,534 (+27%) | +$13,627 (+30%) | +$16,019 (+35%)
25-year HUD investment | $85.2M | $48.2M | $29.4M | $162.7M

Debt servicing alone consumes 67–76% of the $46K benchmark before operating costs or contingency.

At synthetic assumptions, every location exceeds benchmark. Development cost is the primary driver.

The benchmark gap is almost entirely driven by one variable: development cost. Debt servicing alone — the annuity payment (PMT) on the debt-funded share of development cost at the assumed interest rate — consumes $30,606 to $34,979 per unit per year. That is 67–76% of the entire $46,000 benchmark before a single dollar of operating cost or contingency is added. The path to benchmark is through development cost, not through operational efficiency. Operating cost reductions, even aggressive ones, cannot close a gap driven by debt servicing. An applicant’s strongest competitive lever is existing programme cost data — actual acquisition costs that may be materially lower than the synthetic estimates used here.
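The arithmetic can be checked in a few lines. The sketch below assumes a standard 25-year annuity on the debt-funded share (85%) of GST-inclusive development cost at 5.50%, with contingency scaling off debt servicing at the Medium (50%) level — assumptions that reproduce the Memo Table figures above, though the prescribed HUD model’s exact term and compounding should be confirmed against the Excel source.

```python
def pmt(principal: float, rate: float, years: int) -> float:
    """Level annual annuity payment that amortises `principal` over `years`."""
    return principal * rate / (1 - (1 + rate) ** -years)

def year1_cost_per_place(dev_cost_incl_gst: float,
                         equity_rate: float = 0.15,
                         interest_rate: float = 0.055,
                         term_years: int = 25,          # assumed amortisation term
                         opex: float = 12_625,
                         contingency_level: float = 0.50) -> float:
    debt = dev_cost_incl_gst * (1 - equity_rate)
    debt_service = pmt(debt, interest_rate, term_years)
    contingency = contingency_level * debt_service  # scales with debt servicing
    return debt_service + opex + contingency

# GST-inclusive dev costs and units from the Memo Table.
programme = [(40, 552_000), (25, 483_000), (15, 494_500)]
weighted = sum(u * year1_cost_per_place(c) for u, c in programme) / 80
print(round(year1_cost_per_place(552_000)), round(weighted))
```

Under these assumptions South Auckland lands at ~$65,093 per place and the programme weighted average at ~$62,019, matching the table.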

The full report includes per-location financial narratives, 25-year cashflow structure analysis, DSCR and ICR trajectories, and the complete assumptions register with sensitivity classifications. Request the full paper →

Risk, Sensitivity, and Contingency

The Monte Carlo analysis runs 10,000 iterations with six variables sampled simultaneously: interest rate, construction cost, operating costs, R&M escalation, vacancy rate, and index lag drift. It is not a stress test. It is a probability map — asking not “what happens in the worst case” but “across 10,000 plausible scenarios, what is the range of outcomes and how likely is each?”
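A stripped-down version of that sampling loop, using only the standard library, is sketched below. Only the structure is faithful (six variables drawn simultaneously, fixed seed, percentiles read off the sorted results); the distribution parameters and cost formula are illustrative placeholders, not the documented parameter register.

```python
import random

random.seed(42)  # fixed seed for reproducibility, as in the published run
N = 10_000

def pmt(principal, rate, years=25):
    # Annuity payment on the debt-funded share of development cost.
    return principal * rate / (1 - (1 + rate) ** -years)

costs = []
for _ in range(N):
    # Six variables sampled simultaneously. Distributions are placeholders.
    interest = random.gauss(0.055, 0.004)
    build_factor = random.gauss(1.00, 0.05)       # construction cost factor
    opex = 12_625 * random.gauss(1.00, 0.08)      # operating costs
    rm_escalation = random.gauss(0.02, 0.005)     # R&M escalation (later years)
    vacancy = max(0.0, random.gauss(0.02, 0.01))  # vacancy rate
    index_lag = random.gauss(0.00, 0.005)         # index lag drift

    debt_service = pmt(552_000 * build_factor * 0.85, interest)
    year1 = (debt_service * 1.5 + opex) * (1 + vacancy + index_lag)
    costs.append(year1)

costs.sort()
p50 = costs[N // 2]
exceedance = sum(c > 46_000 for c in costs) / N  # share above benchmark
print(round(p50), exceedance)
```

Even with generous optimism in every sampled variable, no draw under these placeholder parameters gets near $46,000 — which is the structural point the full analysis makes.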

Monte Carlo distribution — 10,000 scenarios:

Percentile | Year 1 Cost per Place | vs Benchmark
P10 (optimistic) | $58,480 | +27%
P25 | $60,991 | +33%
P50 (median) | $63,941 | +39%
P75 | $67,171 | +46%
P90 (stressed) | $70,200 | +53%
P95 | $71,958 | +56%

Benchmark exceedance: 100%. Contingency sufficiency: 100%. Capital sufficiency: 15.5%.

No scenario produces a Year 1 cost below $46,000 at these development cost assumptions. The development cost assumption is the binding variable. Until QS-validated costs replace synthetic estimates, the gap cannot be closed.

100% benchmark exceedance is structural at these assumptions, not a tail risk

The contingency governance lever

This is the finding that could only emerge from running the simulation. At Medium (50%), the HUD contingency formula is self-sustaining across all 10,000 scenarios — zero contingency exhaustion probability. But contingency adds $15,000–$17,500 per unit to the Year 1 cost. At Low (25%), approximately $8,700 per unit per year is saved across the programme, taking the weighted average from $62,019 to approximately $53,300. Still above benchmark, but materially closer.

The trade-off is real. Lower contingency means less buffer against the exposures documented in the Strategic Analysis: index lag, legislative change, and capital replacements from Year 10. The Monte Carlo shows zero exhaustion at Medium, meaning the current setting over-provisions relative to modelled risks. Whether that over-provision is prudent insurance or unnecessary cost is a risk appetite question, not a financial modelling question.

A 25% reduction in development cost brings the Year 1 cost within striking distance of benchmark. If an existing programme reveals acquisition costs at $350K–$380K per unit — plausible for purchase of existing stock or lower-rise construction — the benchmark gap narrows substantially. The path to benchmark runs through development cost data, not operational efficiency.

Capital and Programme Scale

The equity available for the Flexible Fund is residual — what remains after Budget 2025 commitments are absorbed. The central estimate of $6 million is derived from public financial statements and accounts for prior capital commitments. The CFO’s confirmed figure replaces all modelled scenarios.

Capital allocation — equity scenarios at 15%:

Scenario | Equity Available | Achievable Units | Programme (80) Feasible?
Conservative | $3,000,000 | 38 | NO
Central | $6,000,000 | 76 | NO ($236K short)
Optimistic | $10,000,000 | 128 | YES

At 10% equity, the central case supports 115 units. A lower equity rate means more homes but a higher Agreed Amount.

The tension: lower equity rate means more homes but higher cost per home

The central case falls 4 units short. At 15% equity and $6M available, the programme supports 76 of 80 units. The levers are: reduce equity rate to 10% (supports 115 units at central), reduce programme scope to 76, increase equity, or reduce development cost. The optimal rate depends on the actual capital available and the organisation’s debt capacity — both CFO inputs.
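The sufficiency arithmetic is simple enough to verify directly. The sketch below assumes equity is required at the chosen rate on GST-inclusive development cost, using the programme’s unit-weighted average cost per unit — an assumption that reproduces the published scenario figures, though the actual model may allocate equity per location.

```python
# GST-inclusive development costs and unit counts from the Memo Table.
LOCATIONS = [("South Auckland", 40, 552_000),
             ("Hamilton City", 25, 483_000),
             ("Christchurch", 15, 494_500)]

def achievable_units(equity_available: float, equity_rate: float) -> int:
    """Units supportable at the weighted-average equity requirement per unit."""
    total_units = sum(u for _, u, _ in LOCATIONS)
    avg_dev_cost = sum(u * c for _, u, c in LOCATIONS) / total_units
    return int(equity_available // (equity_rate * avg_dev_cost))

def equity_required(equity_rate: float) -> float:
    """Equity needed to fund the full 80-unit programme."""
    return sum(u * c * equity_rate for _, u, c in LOCATIONS)

print(achievable_units(6_000_000, 0.15))          # central case -> 76 units
print(round(equity_required(0.15) - 6_000_000))   # shortfall -> ~$236K
```

The same function reproduces the other scenarios: 38 units at $3M, 128 at $10M, and 115 at the 10% equity rate with $6M.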

The Wraparound Question

The Strategic Analysis devoted an entire section to this risk. The simulation now attaches programme-scale numbers to the gap. At 80 homes, annual wraparound cost ranges from $552,000 to $840,000 depending on service intensity. At 2% CPI indexation, the cumulative unfunded exposure over 25 years is $17.7M–$26.9M.

Service | Cost/Tenancy Low | Cost/Tenancy High | 80-Home Low | 80-Home High
Tenancy Support | $3,500 | $5,000 | $280,000 | $400,000
Financial Mentoring | $800 | $1,200 | $64,000 | $96,000
Community Healthcare Liaison | $600 | $1,000 | $48,000 | $80,000
Family / Whānau Services | $1,500 | $2,500 | $120,000 | $200,000
Programme Management | $500 | $800 | $40,000 | $64,000
Total | $6,900 | $10,500 | $552,000 | $840,000
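The cumulative exposure compounds from these annual figures. A minimal sketch, assuming the Year 1 cost escalates at 2% CPI from Year 2 onward (Year 1 unindexed) — an assumption that reproduces the published $17.7M–$26.9M range:

```python
def cumulative_exposure(annual_cost: float, years: int = 25, cpi: float = 0.02) -> float:
    """Sum of the annual wraparound cost escalated at CPI over the term."""
    return sum(annual_cost * (1 + cpi) ** t for t in range(years))

low = cumulative_exposure(552_000)    # low-intensity service mix
high = cumulative_exposure(840_000)   # high-intensity service mix
print(round(low / 1e6, 1), round(high / 1e6, 1))  # -> 17.7 26.9 ($M)
```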

The board must resolve — before signing — how this annual gap will be sourced for 25 years. Whether through dedicated MSD contract negotiation, endowment funding, philanthropic partnerships, or a deliberate programme scaling strategy, this is a governance decision. Organisations that describe aspirational wraparound capability without a funding architecture will be identified as such by the evaluation panel.

This is not a reason not to apply. It is a reason to apply with a funding architecture that has been resolved at governance level, documented in the application, and stress-tested against realistic revenue scenarios.

From Simulation to Submission

The gap between this synthetic illustration and a submission-ready package is four inputs from the organisation: aspirational locations, total capital envelope, Budget 2025 capital position, and existing programme cost data. The simulation engine re-runs with actual data in under an hour.

Opportunity feasibility vs full feasibility:

Dimension | Opportunity Feasibility (this) | Full Feasibility (Stage 2)
Purpose | Can the programme be structured competitively? | Is each project financially viable at confirmed costs?
Data quality | Synthetic assumptions, public data | QS estimates, actual operating data, confirmed equity
Scope | Programme-level, three locations | Project-level, per-site
Output | Directional numbers, risk profile | Submission-ready financial models

The gap between these two columns is four inputs from the organisation

From simulation to submission — three stages:

Stage 1: Synthetic Simulation (where we are now). Directional numbers, risk profile. Methodology demonstrated. Data inputs identified. Engine ready for re-run.

Stage 2: Actual Data Analysis (with four organisational inputs). Per-location HUD financial models. Cost Response Form populated. Submission-ready output. Two-envelope alignment notes.

Stage 3: Operational (standing analytical capability). MCA as pipeline screening. Financial model re-runs. Portfolio-level risk monitoring. Reusable across future rounds.

The gap between Stage 1 and Stage 2 is four inputs from the organisation. The engine re-runs in under an hour.

The analytical infrastructure exists. What it needs is the organisation’s own data.

The simulation engine produces outputs that map directly to every section of HUD’s prescribed Cost Response Form: equity structure (Section 1.1), programme finance (Section 1.2), Year 1 cost per place (Section 2.1a), cost accuracy evidence (Section 2.1b), contingency justification (Section 2.1d), and interest rate assumptions (Section 2.1e). The engine does not produce analysis for the sake of analysis — it produces the specific outputs that populate the specific fields in the specific forms HUD requires.

The structural findings will not change when actual data replaces the synthetic inputs: the benchmark gap is driven by development cost, the contingency level is a governance lever, the capital allocation is tight at central equity, the wraparound gap is structural and unfunded, and the path from simulation to submission runs through four conversations that can happen this week.

The full report includes per-location financial narratives, 25-year cashflow structure, sensitivity matrices, the complete assumptions register, HUD Cost Response Form mapping, and the application form alignment framework. Request the full paper →

This is the summary. The full analysis goes deeper.

The complete report includes the full three-layer analytical methodology, per-location MCA scoring with sub-criterion breakdowns, 25-year cashflow projections with DSCR and ICR trajectories, Monte Carlo parameter specifications across six simultaneously-sampled variables, development cost and interest rate sensitivity matrices, capital scenario modelling at four equity rates, the complete assumptions register with 20 classified parameters, HUD Cost Response Form section mapping, application form alignment framework, and the engagement pathway with timeline. The 13-slide summary deck is also available.

Request the full paper →

Sources & Provenance

HUD (2026a). Budget 2025 Flexible Fund — Opportunity. Te Tūāpapa Kura Kāinga — Ministry of Housing and Urban Development.

HUD (2026b). Budget 2025 Flexible Fund — Application Form. Te Tūāpapa Kura Kāinga.

HUD (2026c). Budget 2025 Flexible Fund — Financial Model. Prescribed Excel model with user guidance.

HUD (2026d). Budget 2025 Flexible Fund — Cost Response Form. Te Tūāpapa Kura Kāinga.

HUD (2026e). Budget 2025 Flexible Fund — Commercial Term Sheet. Te Tūāpapa Kura Kāinga.

HUD (2026f). Budget 2025 Flexible Fund — Information Document. February 2026, 27 pages.

HUD (2026g). Budget 2025 Flexible Fund — Q&A Responses. Published periodically from March 2026.

Williams, G. (2026). HUD Budget 2025 Flexible Fund — Strategic Analysis. First Edition, 28 February 2026. Steko Consulting Limited.

Colophon

Edition: 12 March 2026 (web publication March 2026)

This simulation was originally produced on 12 March 2026 as a detailed financial and analytical report accompanying a 13-slide summary deck, prepared for a community housing provider engagement. The simulation outputs, financial analysis, risk modelling, and assumptions are identical to the original publication. Organisation-specific references have been removed under the IP reservation terms of the original publication. The simulation methodology and structural findings have enduring value: the financial dynamics demonstrated here apply to any community housing provider assessing a programme-scale Flexible Fund application.

How this article was produced

This analysis was produced under a governed production method for research articles (RPP-001 v0.1.0). The simulation engine was built and executed within Claude sessions with zero local toolchain dependency. The engine reads configuration and reference data from structured JSON, producing all financial outputs programmatically.

What the practitioner brought: The analytical framework design, Monte Carlo parameter selection and risk mapping, MCA criteria structure mapped to HUD assessment priorities, balance sheet analysis from public financial statements, engagement pathway architecture, and all strategic judgments. Editorial direction and publication approval. Independent verification of HUD financial model formula extraction.

What the production engine brought: Simulation engine construction and execution (10,000 Monte Carlo iterations), HUD financial model formula extraction and replication, deterministic Agreed Amount calculation across three locations, sensitivity analysis, capital scenario modelling, wraparound cost quantification, and structured report production at publication depth. Cost Response Form section mapping and two-envelope alignment framework.

Powered by Claude Opus 4.6 · RPP-001 v0.1.0

Source verification: All financial model formulas verified against the HUD prescribed Excel model. Agreed Amount calculation, contingency formula, and benchmark confirmed against source.

Monte Carlo validation: 10,000 iterations with a fixed seed (42) for reproducibility. Six variables sampled simultaneously. Distribution parameters documented in the assumptions register.

Register compliance: PASS — lens-not-subject, psychological register, brand consistency, ANON-001.

We have made best efforts to ensure the accuracy and integrity of this simulation. Source documents are publicly available from HUD. All assumptions are classified as synthetic, derived, or actual in the full assumptions register. If you believe any claim, citation, or finding requires correction, we welcome that feedback at [email protected] and will undertake to review and respond accordingly.

© 2026 Steko Consulting Limited · Originally produced 12 March 2026 · steko.co.nz