Most institutions do not have a governance problem. They have a gap between the governance they document and the governance they live. This framework was developed to close that gap — reliably, transferably, and without creating dependency.
Across financial institutions — banks, microfinance providers, digital payment operators, NBFIs — governance frameworks are standard. Risk committees meet. Control documents are filed. Audit responses are submitted on time. From the outside — and often from the inside — everything looks ordered.
The gap is not visible in the documentation. It becomes visible at the moment of decision. Who actually owns this call? Who will be held accountable if the control fails? What happens when a key decision-maker is unavailable, the regulator asks a question nobody anticipated, or the product is live and the risk scenario nobody planned for is unfolding in real time?
That is the governance gap: the distance between what an institution says it does and what it actually does when the pressure is on. It is almost always wider than the institution believes, invisible from the inside, and expensive — in regulatory exposure, in failed transitions, in products that launch without adequate controls, and in institutions that cannot function beyond the individuals currently holding key positions.
Risk governance fails not because institutions lack frameworks, but because those frameworks are written for auditors and lived by nobody. The value of independent advisory is not the production of another framework document — it is the judgment that distinguishes which parts of the existing framework are load-bearing and which are decorative.
This framework operates at the level of decision rights, accountability structures, and authority allocation — not operational procedures or technology systems. Operational failures (role blur, paper-driven processes, absent data infrastructure) are treated as downstream consequences of governance gaps, not as the governance problem itself. LITUp addresses the structural cause. The operational symptoms become addressable once the governance cause is corrected. An engagement that treats operational symptoms without correcting the governance cause will not hold.
Most governance advisory produces frameworks. LITUp produces four things that most governance advisory does not: a diagnostic that surfaces the gap between documented and actual decision behaviour; an identification of the specific failure mode — not the presenting symptom; a transfer of structural capability rather than a report of recommendations; and an uplift of the people who must carry it forward. The sequence is not incidental. Each phase creates the conditions the next requires. The fourth signal in the diagnostic cluster — authority misaligned with knowledge — is a pattern that standard governance reviews consistently miss because it is invisible in documentation. It requires behavioural observation to surface. That is where the gap between a compliance review and a genuine governance diagnostic becomes consequential.
Closing the governance gap requires a specific sequence of interventions. You cannot transfer a governance structure before you have accurately located the gap. You cannot uplift people before you have something durable to transfer.
The trigger for this framework is a specific diagnostic cluster. The four signals below were derived from repeated observation across institutions of different types, sizes, and regulatory contexts — not constructed deductively. They describe patterns that consistently co-occur when a governance gap has become structurally embedded rather than situationally present. Any one signal, appearing in isolation, can have other explanations. When they appear together, the combination points to a structural governance deficit that internal effort alone will not correct — because the same layer generating the problem is typically the layer that would need to authorise a solution.
The threshold is not a count. Three signals without the fourth may indicate a less acute version of the same problem. The fourth signal without the first three may indicate a different problem entirely. The practitioner reads them as a system, not a checklist.
Signal 01. When something goes wrong, it is genuinely unclear who was responsible for the decision that caused it. Committees approved. Leadership aligned. But no single person will say: that decision was mine. Accountability exists on paper; in practice it is distributed until it disappears.
Signal 02. The control framework describes what should happen. But the controls have only been tested in benign conditions — during audits, during reviews, during periods when nothing is actually going wrong. Nobody knows what happens when two things fail at once, when the key person is unavailable, or when a scenario the framework did not anticipate occurs.
Signal 03. Discussions are thorough. Everyone leaves aligned. But six weeks later, the agreed actions have not been taken — not because of disagreement, but because the meeting produced consensus on direction without assigning a single accountable owner. The governance is theatrical: it performs the appearance of decision-making without producing the substance of it.
Signal 04. The people holding formal decision authority consistently lack the domain knowledge to exercise it well on the institution's most consequential questions — while the people holding the relevant knowledge hold no formal authority to act on it. This misalignment is especially prevalent in institutions navigating digital transformation, regulatory adaptation, or leadership transitions. It produces decisions made slowly, made badly, or deferred indefinitely. It is invisible in documentation and only surfaces through behavioural observation — which is why standard governance reviews consistently miss it.
The four signals are consistent across institution types. How they manifest is not. In a licensed bank or NBFI, Signal 01 typically surfaces at the Board-to-management interface: the Board approves a risk appetite statement and ratifies a governance framework, but never stress-tests whether management is living it. The CEO concentrates operational authority in ways the Board does not see or does not challenge. The risk committee meets, reviews dashboards, and produces minutes — but has not once pushed back on a product approval or a material limit breach. In an MFI or founder-led institution, Signal 01 surfaces differently: a small executive leadership group holds approvals across operational domains at levels far below their governance role, creating decision queues that slow the entire organisation. In a fintech, the pattern often appears at the founder-CEO layer: rapid growth has outpaced the governance architecture, and the CEO is simultaneously the product decision-maker, the primary risk owner, and the institutional memory — a single point of failure that the organisation has not yet acknowledged as a governance problem. The diagnostic methodology is the same across all three. The entry question, the stakeholder map, and the authority framework review are calibrated to the institution's specific governance architecture.
Once the cluster is confirmed, the four-phase framework begins.
Each phase has a distinct purpose, a defined output, and a specific failure mode if skipped. They are designed to be completed in sequence — not because of process rigidity, but because each phase changes what is possible in the next.
The first phase, Locate, is diagnostic. Its purpose is to produce an accurate map of where real accountability lives versus where the institution believes it lives. These two maps are rarely identical.
This is not a review of governance documentation. It is a structured examination of actual decision behaviour: who decides what, who defers to whom, where information flows, and where it stops. The documentation is the starting point, not the evidence. Evidence is found in how decisions are made under time pressure, how incidents are handled, and how accountability is assigned when something goes wrong.
The output addresses two dimensions. The first is accountability mapping: which controls are genuinely operational, which exist on paper, and where the accountability structure has silent dependencies on specific individuals rather than defined roles. The second — and the one that standard governance reviews consistently miss — is the knowledge-authority misalignment map: a structured identification of cases where the person holding decision authority lacks the domain knowledge to exercise it well, while the person with the relevant domain knowledge holds no formal authority to act on it. This second dimension is invisible in documentation. It requires direct observation of decision behaviour across a representative sample of consequential decisions.
Once the gap is located, the second phase, Illuminate, names what is actually breaking — not the presenting problem, but the underlying failure mode. This is where cross-domain pattern recognition becomes the primary tool.
Governance failures in financial institutions rarely have a single cause. They are typically the intersection of a structural factor (accountability is not defined), a behavioural factor (the culture does not reward challenge), and a systemic factor (the information that would surface the problem does not reach the people who could act on it). Addressing one in isolation does not close the gap.
The illumination phase applies pattern recognition across organisational behaviour, institutional economics, cognitive science, and risk systems thinking to name the actual failure pattern — not the label that internal stakeholders have given it.
The third phase, Transfer, is the construction and handover of the governance structure itself — the accountability frameworks, decision authorities, process documentation, and control architectures that replace person-dependent governance with role-dependent governance.
The test for this phase is unambiguous: if the advisor disappeared tomorrow, could the institution continue to operate the structure that was built? If the answer is no, the transfer is incomplete. The deliverable is not a document — it is an institutional capability. Documents are the evidence of transfer, not the transfer itself.
This phase typically produces a governance documentation set, a decision authority framework, role definitions for the management layer, and — where executive transition is involved — a leadership continuity plan that maps institutional knowledge and relationship capital to named handover mechanisms.
The fourth phase, Uplift, addresses the people responsible for operating what was built. A governance structure is only as durable as the judgment of the individuals who run it. This phase assesses where the gaps are — in knowledge, in tools, in confidence — and fills them.
Uplift is not generic training. It is targeted capability development mapped to the specific governance structure that was transferred. The people who will operate the delegation framework need to understand not just what the framework says, but why it was designed that way — so they can exercise judgment in scenarios the documentation did not anticipate.
This phase converts a delivered solution into an institutional capability. It is what makes the previous three phases durable.
Every framework has a logic beneath the logic — an organising principle that determines how the practitioner makes decisions in situations the framework did not anticipate. For LITUp, that principle is trusteeship: the obligation to protect and return what is entrusted to you in better condition than you received it.
An advisor engaged on governance is, by definition, a temporary steward of something that belongs to the institution. The obligation is not to make the institution dependent on a continued advisory relationship. It is to leave the institution with a governance capability it can operate, sustain, and build on independently.
This means the framework is designed around a specific commercial ethic: a client who no longer needs you for this problem and comes back with the next one is the only successful outcome. A client who cannot function without the advisor's continued involvement is a failure of the engagement — regardless of what they paid or how satisfied they report being. Advisory dependency is a different form of the person-dependency the framework exists to correct.
Every design decision in the four phases follows from this. Phase Three's test — could the institution operate without the advisor? — is the trusteeship principle made operational. Phase Four exists because transferring a structure without building the human capability to run it is not trusteeship; it is a delayed hand-off to an inevitable reversion.
The practitioner who operates from this principle builds differently at every decision point in an engagement: scope is bounded by what the institution genuinely needs rather than what sustains the billing relationship; deliverables are designed for operational use rather than external legibility; and the measure of success is the institution's independence, not the advisor's indispensability.
The following is drawn from direct advisory experience with a founder-led financial institution, presented in anonymised, composite form. The governance patterns, structural dynamics, and outcomes described are real.
The patterns described here are not unique to this institution. They recur — in varying combinations and at varying stages — across founder-led financial institutions, MFIs preparing for external investment, and organisations navigating executive leadership transitions. The composite format protects the institution while preserving the diagnostic value of the case.
The institution. A microfinance institution incorporated under the Societies Act and licensed by the Microcredit Regulatory Authority (MRA), founded over three decades ago. No shares. No equity ownership. Authority derived entirely from executive position and the institutional capital — regulatory standing, apex funding relationships, donor trust — that the founding team had accumulated over decades of field operation. At inception, the institution served a defined geography with a clear mission and a small founding team whose responsibilities were cleanly divided: each co-founder controlling a domain, each trusted to run it without interference from the other. That architecture worked for its era. The institution grew, earned its MRA licence, built a large borrower base, and became a partner organisation of PKSF (Palli Karma-Sahayak Foundation), Bangladesh's apex development funding body.
The environment that changed around it. Three decades is a long time in Bangladeshi financial services — and the sector has not changed at a constant rate. The MRA Act 2006 introduced a formal licensing and compliance regime where none had existed. Mobile financial services, beginning with bKash in 2011 and expanding through subsequent entrants, fundamentally restructured the payment and credit landscape. Borrowers who had previously depended on MFI field agents for basic financial access now had smartphones and mobile wallets. The competitive pressure this created was not immediately fatal — MFIs serve a segment that mobile money alone does not fully serve — but it reshaped borrower expectations and compressed the differentiation that had previously required no management. PKSF progressively raised its governance, reporting, and data standards for apex-funded partner organisations. MRA compliance requirements became more detailed and more operationally demanding. The cumulative effect: an institution built and governed for the conditions of the 1990s faces a structurally different compliance and competitive environment in the 2020s — one that demands governance it was never designed to provide.
What the founding architecture could not absorb was this acceleration. The co-founders, each authoritative in their domain, had aged into the institution's executive committee and senior management layer. Their deep experience — the source of the institution's original credibility — had calcified into a different kind of risk: every significant decision, across every function, required their clearance. Not because they were obstinate, but because the governance architecture had never been updated to distribute that judgment. The queue for decisions grew long. Processes slowed to match the bottleneck. Red tape accumulated not as policy but as the emergent property of a system where nothing moved without executive clearance. The entire organisation had adjusted its operating speed to the slowest point in the decision chain.
The institution had, without anyone deciding it, converted its founding strength into a single point of failure. And below the executive layer, an organisation had learned to wait.
The operational consequences. Governance gaps do not stay at the governance layer. They produce operational failures downstream — and this institution showed all of them. Job descriptions were absent or so broadly written as to be meaningless: field officers appraising loans, disbursing funds, chasing collections, and handling borrower grievances with no clear role boundary and no clear accountability when something went wrong. Everyone was doing everything. Because accountability was diffuse, nothing was anyone's responsibility — and the institution had no mechanism to change that, because the governance layer that would have to authorise clearer role definitions was also the layer generating the ambiguity.
Record-keeping was paper-driven throughout — disbursement records, repayment schedules, borrower data, branch performance — accumulated in physical files across offices with no centralised digital layer. The institution could not generate portfolio-level analytics. It could not produce the data outputs that PKSF and development finance institutions increasingly required for ongoing monitoring. The gap between what apex funders expected to see and what the institution could actually produce had become a direct funding risk.
New talent entering the institution encountered this environment and found it resistant. Those who understood what digital systems could do for the institution's operational efficiency and data quality could not move adoption forward — not because the technology was unavailable, but because technology decisions sat with the executive layer, and the executive layer evaluated technology through the lens of experience that predated it. The knowledge required to make the institution's most important adaptation decisions was concentrated in people who held no formal authority to make them. The authority required to make those decisions was held by people who lacked the knowledge to make them well.
The presenting problem. The engagement was initiated by a specific and time-sensitive requirement: the institution needed governance documentation to meet the due diligence standards of its apex funding body ahead of a funding review. That requirement was real. But it was the surface expression of a deeper structural question — whether the institution's governance could hold through a transition in its executive committee leadership, and whether it could adapt to an operating environment that had moved significantly beyond the conditions under which it was built.
The outcome. An institution that had moved from person-dependent governance to role-dependent governance — with a documented executive committee charter, a functioning decision authority framework, role definitions that began to resolve the operational blur, and the minimum data infrastructure required to meet its apex funder's reporting standards.
What did not go cleanly is worth noting. The executive committee charter was agreed in session and partially reversed outside it — two members reverted to their established approval patterns within weeks of the Transfer phase closing, requiring a structured re-engagement to hold the boundary. The role definitions were drafted but inconsistently adopted. The digital record-keeping protocol was implemented for new disbursements but not applied retroactively to existing portfolios, leaving a data gap the institution will need to close over time. These were not failures of the framework. They were accurate reflections of how governance reform actually proceeds in institutions with deep-rooted operating cultures: the structural architecture can be built faster than the behavioural patterns that support it can be changed. An engagement that claims otherwise should be read sceptically.
The outcomes of a complete LITUp engagement are observable and verifiable. They are not abstract improvements in governance culture. They are specific institutional capabilities that did not exist before.
Every significant operational decision has a named owner, a defined escalation path, and a documented authority limit. Accountability is role-dependent, not person-dependent.
The control universe has been stress-tested against realistic failure scenarios. The institution knows which controls are load-bearing and which are decorative.
The institution can demonstrate to regulators, funders, and its own board that its governance architecture holds beyond any single individual.
The individuals responsible for the governance framework can articulate the reasoning behind the design, not just execute the process.
A complete LITUp engagement should make the next engagement unnecessary for the same problem. If the institution is back with the same governance gap twelve months later, the transfer was incomplete. The test applies to the framework itself: did it leave the institution stronger than it found it?
This framework was developed through fifteen years of applied governance practice across digital financial services and banking — at Standard Chartered Bangladesh, at bKash (Bangladesh's largest digital financial service provider, 70M+ users), and through direct advisory work with institutional clients. The LITUp methodology is attributed to Asif Ahmed Noor and published here as a working paper for professional reference and institutional use.
For questions about the framework or its application to a specific institutional context, reach out directly.