The Diamond PAU is a next-generation smart contract architecture for Sky Protocol's Parallelized Allocation Units (PAUs), replacing the legacy monolithic controller design with a modular, faceted system built on the EIP-2535 Diamond Standard [1]. Defined in the Laniakea specification as the target-state infrastructure for all capital deployment across the Sky ecosystem, the Diamond PAU addresses fundamental engineering constraints that emerged as the protocol scaled from a single Ethereum deployment to a multi-chain, multi-Star operation managing billions of dollars in allocated capital [2][13].
At its core, the Diamond PAU decomposes a PAU's controller contract — the entry point that dictates how an off-chain relayer can interact with the ALM Proxy (the contract holding custody of all funds) — into independently deployable "facets," each encapsulating a specific integration or capability [2]. This means adding support for a new DeFi protocol, upgrading bridge logic, or modifying swap mechanics no longer requires replacing the entire controller. Instead, a single new facet can be deployed, audited, and wired into the existing Diamond proxy through a governance-authorized diamondCut operation [1][2].
The architecture preserves the two contracts that matter most for data continuity and fund safety: the ALM Proxy, which holds custody of all positions and LP tokens, and the Rate Limits contract, which enforces per-action throughput constraints [2][3]. Neither changes during the upgrade. The Diamond PAU exclusively replaces the controller layer — how the relayer is allowed to interact with the ALM Proxy — while maintaining identical external interfaces throughout the transition [2].
As of February 2026, the Diamond PAU upgrade is in active development, with Stage 1 (library extraction and standardization) entering its initial audit cycle [18]. The upgrade represents a critical enabler for the broader Laniakea roadmap, which envisions standardized factory deployment of PAUs for new Stars, Halos, and Generators across the ecosystem [7].
Background and Motivation
The Diamond PAU emerged from concrete engineering challenges encountered during nearly two years of operating Sky Protocol's capital allocation infrastructure at scale. Understanding these challenges requires familiarity with the legacy PAU architecture — the system that has successfully managed over $4 billion in peak allocated capital across six blockchains and more than twenty integration types, but which has reached the limits of its original design.
Legacy PAU Architecture
Every PAU in the Sky ecosystem, regardless of which Star operates it, consists of three core smart contracts working in concert [3]:
| Component | Purpose | Upgrade Status |
|---|---|---|
| Controller | Entry point for operations; enforces action constraints on the relayer | Replaced by Diamond |
| ALM Proxy | Holds custody of all funds; executes calls via `doCall()` | Unchanged |
| Rate Limits | Linear replenishment rate limits with configurable max and slope | Unchanged |
The ALM Proxy serves as a stateless custody contract that holds all assets — USDS, USDC, LP tokens, and any DeFi positions — and routes calls to external protocols according to logic defined in the controller [16]. The proxy pattern was designed from the outset to allow future controller iterations: a new controller can be authorized on the existing ALM Proxy without migrating any funds [16]. This separation of custody from logic is the foundational safety property that makes the Diamond PAU upgrade possible without disrupting existing positions.
The controller — either a MainnetController on Ethereum or a ForeignController on destination chains — contains all the operational logic: minting and burning USDS via the AllocatorVault, swapping in the PSM, depositing into ERC-4626 vaults, and bridging via CCTP or LayerZero [15][16]. The relayer, an off-chain automated system operated by each Star's GovOps team, calls functions on the controller to execute capital allocation decisions. Critically, the relayer is assumed to be a compromisable address — it is a hot wallet controlled by an automated off-chain system [16]. The controller's job is to confine the relayer's capabilities to a narrow, rate-limited set of pre-approved actions so that even in a compromise scenario, the worst outcome is funds moving between approved protocol positions [29].
The Rate Limits contract enforces per-action throughput constraints using a linear replenishment model [16]:
```
currentRateLimit = min(slope * (block.timestamp - lastUpdated) + lastAmount, maxAmount)
```
Each action — depositing into a Morpho vault, swapping USDS to USDC in the PSM, transferring via LayerZero — has its own rate limit key with a configurable maximum amount and replenishment slope [27][29]. This means a compromised relayer can at most exhaust the current rate limit for each action type, and the limit gradually refills over time. Rate limits are set by governance through the Configurator system or directly via executive votes [4][28].
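The replenishment formula above can be sketched as a small model. The `RateLimit` class, its field names, and the dollar figures are illustrative stand-ins, not the on-chain implementation:

```python
class RateLimit:
    """Minimal model of linear-replenishment rate limiting (illustrative only)."""

    def __init__(self, max_amount: float, slope: float, now: float):
        self.max_amount = max_amount      # hard cap for this action key
        self.slope = slope                # capacity replenished per second
        self.last_amount = max_amount     # remaining capacity at last update
        self.last_updated = now

    def current(self, now: float) -> float:
        # currentRateLimit = min(slope * elapsed + lastAmount, maxAmount)
        return min(self.slope * (now - self.last_updated) + self.last_amount,
                   self.max_amount)

    def consume(self, amount: float, now: float) -> None:
        available = self.current(now)
        if amount > available:
            raise ValueError("rate limit exceeded")
        self.last_amount = available - amount
        self.last_updated = now

# A $10M cap refilling at a slope of ~$115.74/second (about $10M per day):
rl = RateLimit(max_amount=10_000_000, slope=10_000_000 / 86_400, now=0)
rl.consume(10_000_000, now=0)          # a compromised relayer drains the limit
print(round(rl.current(now=43_200)))   # half a day later: ~5,000,000 replenished
```

A compromised relayer can therefore never move more than `maxAmount` in one burst, and repeated drains are throttled to the slope.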
Deployed Instances
The Spark Liquidity Layer (SLL) — the most mature PAU deployment — operates across Ethereum mainnet, Base, Arbitrum, Optimism, Unichain, and Avalanche [29]. The Star Allocation System was initialized for Spark on October 31, 2024, establishing the AllocatorVault, AllocatorBuffer, and ALM Proxy as the canonical capital deployment pipeline [22]. The Arbitrum deployment followed in February 2025, adding eight new contracts including a ForeignController, PSM3, SSR Oracle, and Rate Limits instance [23]. As of February 2025, the Spark ALM Proxy on Ethereum mainnet is deployed at 0x1601843c5E9bC251A3272907010AFa41Fa18347E, with the MainnetController at 0x577Fa18a498e1775939b668B0224A5e5a1e56fc3 (version 1.8) [29]. Grove operates its own Liquidity Layer with ALM Proxy at 0x491EDFB0B8b608044e227225C715981a30F3A44E on Ethereum, with additional deployments on Base, Avalanche, Plasma, and Plume [29]. Keel maintains its ALM Proxy at 0xa5139956eC99aE2e51eA39d0b57C42B6D8db0758 on Ethereum [29].
Engineering Challenges
Three specific engineering problems drove the decision to adopt the Diamond architecture:
The 24KB Bytecode Limit
Solidity contracts on Ethereum are limited to approximately 24,576 bytes of deployed bytecode (the EIP-170 limit) [2]. As controller functionality grew — more vault integrations, more token standards, more bridge protocols, more deployment strategies — the legacy single-controller design repeatedly hit this ceiling [2][8]. The Spark MainnetController, in particular, accumulated logic for PSM swaps, ERC-4626 deposits, Morpho vaults, Aave integrations, LayerZero transfers, CCTP bridging, Curve interactions, SuperState integration, and numerous other protocols. The development team refactored existing logic to reduce bytecode size five or six times, extracting code into libraries and optimizing function signatures, until there was nothing left to trim [15].
Version 1.9.0 of the Spark ALM Controller, released in December 2024, explicitly "converted ERC4626 and AAVE logic into libraries to optimize bytecode consumption" [15]. Version 1.10.0 followed in February 2025 with further "LayerZero logic library refactoring" [15]. These were stopgap measures — each new integration required removing or compressing existing code, creating a zero-sum trade-off between features and contract size.
Diverging Codebases
When Grove launched in June 2025 with a $1 billion allocation to tokenized credit instruments [24], its Liquidity Layer was forked from the Spark codebase. Keel, launching in September 2025 for Solana ecosystem expansion [25], and Obex, the agent incubator Star, subsequently forked from Grove. Each fork independently evolved its controller logic to accommodate different integration requirements — Grove needed Morpho vault support and institutional credit interfaces, Keel required Solana-specific bridge handling, and Spark continued adding DeFi lending integrations.
Over time, these codebases diverged significantly. The same logical integration — such as an ERC-4626 deposit — could exist in slightly different implementations across Stars. Bug fixes in one fork did not automatically propagate to others. Auditing costs multiplied because each codebase required independent review. The Diamond PAU consolidates all integration logic into a single canonical repository with shared, standardized facets that every Star consumes [2][9].
State Migration Overhead
In the legacy architecture, upgrading a controller requires deploying an entirely new controller contract, migrating all accumulated state from the old controller to the new one, and then swapping the ALM Proxy's authorization to point to the new controller [16]. This state migration is a manual, error-prone process. The controller stores integration-specific configuration — maximum slippage parameters for Curve, exchange rate thresholds for ERC-4626 vaults, mint recipient addresses for CCTP — that must be faithfully reproduced in the new deployment [15]. Each upgrade carries risk proportional to the amount of state that must be migrated, and the migration window represents a period where configuration errors could expose funds to unexpected behavior.
Technical Architecture
The Diamond PAU applies the EIP-2535 Diamond Standard to the PAU controller layer, decomposing a monolithic contract into a Diamond proxy backed by multiple facet contracts [1][2].
EIP-2535 Diamond Standard
EIP-2535, authored by Nick Mudge and finalized as an Ethereum standard in 2020, defines a proxy contract pattern where external functions are supplied by separate contracts called facets [1]. The Diamond proxy maintains a mapping of function selectors to facet addresses. When a function call arrives at the Diamond, the proxy's fallback function looks up which facet handles that selector and delegatecalls to it [1]. Because delegatecall executes the facet's code in the proxy's storage context, all facets share a single unified storage space [1][20].
The standard mandates several components [1]:
- `diamondCut()` function — Enables atomic addition, replacement, or removal of multiple facet functions in a single transaction, emitting a `DiamondCut` event for transparency
- Loupe functions — Four mandatory view functions (`facetAddress()`, `facetAddresses()`, `facetFunctionSelectors()`, `facets()`) that provide introspection into the Diamond's current facet configuration
- `DiamondCut` event — Emitted on every structural change for off-chain indexing and verification
The Diamond pattern has been adopted across DeFi by protocols including Pendle Finance, Aavegotchi, BarnBridge, and others seeking to overcome the 24KB contract size constraint while maintaining upgradeability [20]. The pattern is not without criticism — some developers cite complexity, storage collision risks, and a steep learning curve as trade-offs — but its ability to eliminate practical size limits while enabling surgical upgrades makes it well-suited for the PAU use case [20][21].
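The selector-to-facet dispatch at the heart of the pattern can be illustrated with a toy model. The `Diamond` class, facet names, and functions below are hypothetical stand-ins for the on-chain proxy, not the EIP-2535 reference implementation:

```python
from typing import Callable, Dict, Tuple

# Hypothetical facet functions, standing in for deployed facet contracts.
def deposit_erc4626(amount: int) -> str:
    return f"deposited {amount}"

def swap_psm(amount: int) -> str:
    return f"swapped {amount}"

class Diamond:
    """Toy model of EIP-2535 selector routing (not contract code)."""

    def __init__(self):
        # selector -> (facet name, function), mirroring the on-chain mapping
        self.selectors: Dict[str, Tuple[str, Callable]] = {}

    def diamond_cut(self, facet: str, functions: Dict[str, Callable]) -> None:
        # Add or replace selectors; on-chain this is one atomic, governance-gated call.
        for selector, fn in functions.items():
            self.selectors[selector] = (facet, fn)

    def fallback(self, selector: str, *args):
        # The proxy's fallback: look up the responsible facet, then delegate.
        if selector not in self.selectors:
            raise LookupError(f"no facet for selector {selector}")
        _, fn = self.selectors[selector]
        return fn(*args)

    def facet_addresses(self):
        # Loupe-style introspection: which facets are currently wired in?
        return sorted({facet for facet, _ in self.selectors.values()})

d = Diamond()
d.diamond_cut("ERC4626Facet", {"depositERC4626": deposit_erc4626})
d.diamond_cut("SwapFacet", {"swapPSM": swap_psm})
print(d.fallback("depositERC4626", 1000))   # routed to ERC4626Facet
print(d.facet_addresses())                  # lists both registered facets
```

The real proxy performs the same lookup in its `fallback()` and then `delegatecall`s, so the facet's code runs against the proxy's storage.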
Diamond PAU Components
The Diamond PAU replaces only the controller layer while preserving the ALM Proxy and Rate Limits contracts unchanged [2]:
```mermaid
flowchart TB
    subgraph DiamondPAU["Diamond PAU"]
        DP["Diamond Proxy"]
        DP --> FA["Facet A"]
        DP --> FB["Facet B"]
        DP --> FC["Facet C"]
        DP --> FN["Facet N"]
        ALP["ALM Proxy"]
        RL["Rate Limits"]
    end
```
The Diamond Proxy routes function calls to the appropriate facet via selector mapping. The ALM Proxy holds custody of all funds and the Rate Limits contract enforces throughput constraints — both unchanged from the legacy architecture.
The Diamond proxy serves as the single entry point — replacing the monolithic controller — and delegates incoming function calls to the appropriate facet based on selector mappings [2]. Each facet is an independently deployed contract containing related functionality. The DiamondCut facet enables governance to add, replace, or remove facets. The DiamondLoupe facet provides introspection — listing available facets and their function selectors [9].
The critical architectural property is that the ALM Proxy does not need to know or care whether it is being called by a monolithic controller or a Diamond proxy. It simply checks that the caller holds the ALM_CONTROLLER_ROLE and executes the requested operation [16][29]. This makes the Diamond upgrade transparent from the ALM Proxy's perspective.
Facet Organization
The Laniakea specification defines the target facet organization [2]:
| Facet | Functions | Description |
|---|---|---|
| ERC4626Facet | `deposit`, `withdraw`, `mint`, `redeem` | Standard vault interactions for sUSDS, Morpho vaults, and other ERC-4626 compatible protocols |
| SwapFacet | Swap operations | DEX and PSM swap execution with rate limit enforcement |
| MorphoFacet | Morpho-specific deposit/withdraw | Specialized logic for Morpho vault interactions |
| NFATFacet | `claim`, `deploy`, `redeem` | NFAT facility interactions |
| AdminFacet | `setRelayer`, `setFreezer`, configuration | Administrative functions authorized via governance |
For the Phase 1 deployment specifically, the initial facet set consists of [9]:
- NfatDepositFacet — Deposit into NFAT facility queues
- NfatWithdrawFacet — Withdraw from queues before deal execution
- CoreHaloFacet — Deploy into Core Halo positions
- LegacyMigrationFacet — Wind down legacy positions during transition
The NFAT PAU — the first Diamond PAU expected in production — requires a minimal three-facet set: a USDS facet for minting and burning USDS, an ERC-4626 facet for depositing into sUSDS, and an NFAT facet for interacting with NFAT contracts [10]. The Spark PAU, by contrast, consumes the full superset of available facets to support its extensive integration portfolio [15].
This asymmetry illustrates a key benefit: a new Star launching under the Diamond PAU does not need to consume all available logic. It deploys only the minimal set of facets required for its business needs, reducing attack surface, simplifying configuration, and accelerating onboarding [5].
Storage Model
Diamond PAU uses the Diamond storage pattern (aligned with EIP-7201 namespaced storage) to prevent storage collisions between facets [2]. Each facet uses a unique storage slot derived from a deterministic hash:
```solidity
// Deterministic base slot for this facet's namespaced storage region
bytes32 constant ERC4626_STORAGE = keccak256("diamond.storage.erc4626");

struct ERC4626Storage {
    mapping(address => uint256) vaultBalances;
}

function erc4626Storage() internal pure returns (ERC4626Storage storage s) {
    bytes32 position = ERC4626_STORAGE;
    assembly { s.slot := position }
}
```
This pattern ensures that each facet's state variables occupy non-overlapping storage regions, even though all facets execute within the Diamond proxy's storage context via delegatecall [2][1]. A bug in one facet cannot corrupt another facet's storage, providing a degree of isolation that the monolithic controller lacked.
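The isolation property can be modeled in a few lines. Here `sha256` stands in for keccak256 (Python's standard library has no keccak), and `SharedStorage` is an illustrative stand-in for the proxy's unified storage, not contract code:

```python
import hashlib

def namespace_slot(namespace_id: str) -> str:
    # On-chain this would be keccak256; sha256 is a stand-in for illustration.
    return hashlib.sha256(namespace_id.encode()).hexdigest()

class SharedStorage:
    """All facets write to one store (the proxy's storage), but each under its
    own hash-derived base slot, so their regions cannot overlap."""

    def __init__(self):
        self.slots = {}

    def write(self, base_slot: str, key: str, value):
        self.slots[(base_slot, key)] = value

    def read(self, base_slot: str, key: str):
        return self.slots.get((base_slot, key))

storage = SharedStorage()
erc4626_slot = namespace_slot("diamond.storage.erc4626")
swap_slot = namespace_slot("diamond.storage.swap")

# Both facets use the same variable name, yet land in disjoint regions:
storage.write(erc4626_slot, "balance", 500)
storage.write(swap_slot, "balance", 42)
print(storage.read(erc4626_slot, "balance"))  # 500
print(storage.read(swap_slot, "balance"))     # 42
```

Because the base slots are derived from distinct namespace strings, a write by one facet can never land in another facet's region even though both execute in the same storage context.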
Upgrade Path
The transition from the legacy PAU architecture to the full Diamond PAU proceeds through three carefully sequenced stages. Each stage is designed to be independently auditable and deployable, with functional equivalence verified at each step [9].
Stage 1: Library Extraction and Standardization
The first stage addresses the diverging codebase problem by extracting all integration-specific logic from the monolithic controllers into standardized libraries [9][15]. Where Spark's controller previously contained inline Morpho integration code alongside PSM swap logic alongside Curve interactions, Stage 1 moves each integration into its own library contract. Similarly, Grove's independently evolved implementations are reconciled with Spark's.
The result is a single canonical repository — the sky-mainnet-controller — containing the union of all integration libraries across all Stars [15]. Both Spark and Grove (and by extension Keel and Obex, which forked from Grove) point to the same standardized libraries. A deposit into an ERC-4626 vault uses identical logic regardless of which Star initiates it.
This stage critically leverages the existing testing infrastructure developed over nearly two years of production operation. Because the libraries implement exactly the same functionality as the inline code they replace, the existing test suites can verify functional equivalence with high confidence [15]. Stage 1 entered its initial audit cycle with Cantina in early March 2026 [18].
Stage 2: Facet Conversion
Stage 2 converts the standardized libraries from Stage 1 into Diamond proxy facets [9]. The distinction is subtle but architecturally significant: instead of the controller calling into a library via delegatecall with the library's code executing in the controller's storage context, each facet becomes an independent contract that the controller delegatecalls to, with the facet managing its own rate limits, state, and integration-specific logic [2].
The external interface remains identical — calling depositERC4626 on the controller looks and behaves the same as before. The controller still has a manually defined set of integrations that delegate to facets. What changes is that each facet is now a persisted, independently deployed contract instance that can be reused across Stars and preserved across future upgrades [2][9].
Stage 2 introduces the Spark Parameter Registry — a dedicated contract for persisting integration-specific state (maximum slippage parameters, exchange rate thresholds, mint recipients) that survives controller upgrades [4]. In the legacy system, this state lived inside the controller itself, requiring manual migration on every upgrade. With the Parameter Registry, state persists independently, and the controller simply reads from it. The external governance interface remains the same: calling setMaxSlippage on the controller routes through to the Parameter Registry. The Configurator and rate limits systems operate identically across Stage 1 and Stage 2 [4].
A significant benefit of Stage 2 is that it provides a viable deployment substrate for new Stars. Although the controller still hard-codes its integration set (requiring a controller upgrade to add new facets), the deployed facets themselves are canonical production instances. A new Star could launch on a Stage 2 PAU with a larger function interface than strictly needed — unused integrations simply fail if their facets are not configured — while benefiting from audited, battle-tested logic [9].
Stage 3: Diamond Proxy Deployment
Stage 3 replaces the manually defined controller with a true Diamond proxy controller — a single entry point with a fallback function that dynamically routes to facets based on the Diamond's selector-to-facet mapping [2][9]. This is the transition from "controller with delegatecall to specific facets" to "Diamond proxy with dynamic selector routing."
The facet instances deployed in Stage 2 are maintained unchanged — the same contract addresses, the same bytecode, the same proven production behavior [2]. The only change is the routing mechanism. Where Stage 2's controller had explicit function definitions that delegated to specific facets, Stage 3's Diamond proxy has a single fallback() function that looks up which facet handles any given function selector and routes accordingly [1][2].
After Stage 3, adding a new integration for any Star is straightforward: deploy the new facet, then call diamondCut through governance to add its selectors to the Diamond [1][2]. No controller redeployment is needed. Each Star's Diamond is configured with only the facets it requires:
```mermaid
flowchart LR
    subgraph SparkPAU["Spark Diamond PAU"]
        SP["Diamond Proxy"]
        SP --> F1["ERC4626 Facet"]
        SP --> F2["PSM Facet"]
        SP --> F3["CCTP Facet"]
        SP --> F4["LayerZero Facet"]
        SP --> F5["Morpho Facet"]
    end
    subgraph NfatPAU["NFAT Diamond PAU"]
        NP["Diamond Proxy"]
        NP --> NF1["USDS Facet"]
        NP --> NF2["ERC4626 Facet"]
        NP --> NF3["NFAT Facet"]
    end
```
The upgrade from Stage 2 to Stage 3 is architecturally the most significant — transitioning to a fundamentally different proxy pattern — but operationally the lightest, because the actual integration logic (the facets) is already deployed and proven [9].
Migration Mechanics
A critical property of the entire upgrade path is that migrations can be performed per-PAU without affecting other PAUs in the system [2]. Spark can upgrade to Stage 2 while Grove remains on Stage 1. The NFAT PAU can deploy directly as a Stage 3 Diamond while existing Stars remain at earlier stages.
The legacy upgrade process requires:
- Deploying a new controller with all functions
- Migrating state from old controller to new
- Updating ALM Proxy authorization
- All-or-nothing replacement [2]
The Diamond PAU upgrade process:
- Deploying only the new or changed facet
- Calling `diamondCut` to add, replace, or remove selectors
- Surgical modification — only affected functions change
- Other facets remain untouched [2]
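The surgical nature of a cut can be sketched as a pure function over a selector mapping. The `FacetCutAction` enum mirrors EIP-2535's Add/Replace/Remove actions, while the facet names below are hypothetical:

```python
from enum import Enum

class FacetCutAction(Enum):
    # Mirrors EIP-2535's three cut actions
    ADD = 0
    REPLACE = 1
    REMOVE = 2

def diamond_cut(selector_map: dict, cuts: list) -> dict:
    """Apply (action, facet, selectors) cuts to a selector mapping.
    Illustrative only; the real diamondCut is an atomic, governance-gated
    on-chain call that also emits a DiamondCut event."""
    updated = dict(selector_map)
    for action, facet, selectors in cuts:
        for sel in selectors:
            if action is FacetCutAction.ADD:
                assert sel not in updated, f"{sel} already exists"
                updated[sel] = facet
            elif action is FacetCutAction.REPLACE:
                assert sel in updated, f"{sel} not found"
                updated[sel] = facet
            else:  # REMOVE
                del updated[sel]
    return updated

# Upgrade swap logic without touching any other selector:
before = {"depositERC4626": "ERC4626FacetV1", "swapPSM": "SwapFacetV1"}
after = diamond_cut(before, [
    (FacetCutAction.REPLACE, "SwapFacetV2", ["swapPSM"]),
])
print(after["swapPSM"])          # now routed to SwapFacetV2
print(after["depositERC4626"])   # ERC4626FacetV1, untouched
```

Contrast this with the legacy path, where replacing one integration meant redeploying and re-migrating everything in the mapping.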
Governance and Configuration
The Diamond PAU integrates with the same governance infrastructure as the legacy PAU. The Configurator Unit — the system through which governance manages PAU parameters — is agnostic to whether the underlying controller is monolithic or faceted [4]. As the Laniakea specification states: "The diamond is transparent to the Configurator — it just sees callable functions" [2].
Configurator Unit and BEAM Hierarchy
The Configurator Unit governs Diamond PAUs through a three-contract system: BEAMTimeLock (an OpenZeppelin TimelockController with pause capability), BEAMState (storage), and the Configurator itself (operational interface) [4]. Governance authority flows through a hierarchy of Bounded External Access Modules (BEAMs):
| BEAM Type | Role | Capabilities |
|---|---|---|
| aBEAM (admin) | Core Council | Registers PAUs, approves init configurations, grants cBEAMs (14-day timelock for additions) [4] |
| cBEAM (configurator) | GovOps team per PAU | Sets rate limits within SORL bounds, executes approved controller actions, manages relayer/freezer [4] |
| pBEAM (permission) | Relayer | Executes constrained operations on the PAU as authorized by the cBEAM holder [11] |
The Council BEAM Authority document specifies the full hierarchy [11]:
- Council Beacon (HPHA, set by SpellCore 16/24 supermajority) sits at the top
- aBEAMs grant cBEAMs to GovOps teams, subject to a 14-day timelock for additions but instant for removals
- cBEAMs set pBEAMs (relayers) that execute on PAUs
- Any single Core Council Guardian can freeze or cancel any BEAM across the entire system [11]
For the Diamond PAU specifically, adding a new facet follows the BEAM governance flow: the facet is deployed, governance approves the diamondCut operation through the BEAMTimeLock (or via an executive vote / spell), and the Diamond proxy's selector mapping is updated [2][4].
Second-Order Rate Limits (SORL)
The Configurator Unit enforces a second-order constraint on how quickly rate limits can be increased — preventing a compromised GovOps operator from instantly raising limits to drain the PAU [4][12].
| Parameter | Default Value | Description |
|---|---|---|
| Timelock delay | 14 days | Minimum wait before PAU registrations, init additions, and cBEAM grants take effect [4] |
| `hop` | 18 hours | SORL cooldown period between rate limit increases [4] |
| `maxChange` | 25% (2,500 bps) | Maximum rate limit increase per cooldown period [4] |
The SORL calibration follows the formula `IRL * (1 + SORL)^n = Target`, where `IRL` is the initial rate limit and `n` is the number of `hop` periods [12]. With adopted parameters of IRL = $100K and SORL = 25%, reaching a $100M daily rate limit takes approximately 30 days of compounding increases:
| Day | Approximate Rate Limit |
|---|---|
| 0 | $100K |
| 7 | ~$476K |
| 14 | ~$2.27M |
| 21 | ~$10.8M |
| 28 | ~$51.4M |
| 30 | ~$80M |
This bootstrap timeline applies identically to Diamond PAU deployments — a new Star's rate limits ramp gradually from conservative initial values regardless of the underlying controller architecture [12].
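The schedule above follows directly from the compounding formula. This sketch assumes one maximum-size increase per day (roughly what an 18-hour `hop` permits in practice) and reproduces the table's approximate values:

```python
def sorl_ramp(initial: float, max_change: float, increases: int) -> float:
    """Rate limit reachable after n governance increases, each taking the
    maximum allowed step: IRL * (1 + SORL)^n. Illustrative calculation only."""
    return initial * (1 + max_change) ** increases

# IRL = $100K, SORL = 25%, one increase per day:
for day in (0, 7, 14, 21, 28, 30):
    print(f"day {day:2d}: ${sorl_ramp(100_000, 0.25, day):,.0f}")
```

Day 7 computes to about $477K and day 30 to about $81M, matching the table to within rounding.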
Parameter Registry
The Spark Parameter Registry introduces persistent state storage that decouples configuration from the controller contract [4]. Integration-specific parameters — maximum slippage for Curve integrations, exchange rate thresholds for ERC-4626 vaults, CCTP mint recipients — are stored in the Parameter Registry rather than inside the controller [4].
The external interaction remains identical: governance calls setMaxSlippage on the controller, which routes through to the Parameter Registry. The difference is that when the controller is upgraded (or when facets are swapped in a Diamond), the Parameter Registry state persists automatically. This eliminates the manual state migration that plagued legacy controller upgrades [4].
Security Model
The Diamond PAU inherits and extends the security model of the legacy PAU, which is designed around the assumption that the relayer is a fully compromisable address [16].
Relayer Confinement
The relayer — an off-chain automated system operating as a hot wallet — can only call functions exposed by the Diamond proxy's facets [16]. Each function call is rate-limited by the Rate Limits contract, constraining maximum throughput per action type [29]. In a compromise scenario, the attacker can at most:
- Move funds between pre-approved protocol positions (deposits, withdrawals, swaps) at rates bounded by the rate limit configuration
- Exhaust current rate limits, which then require time to replenish based on the slope parameter
- Invoke only the functions mapped in the Diamond's selector table; unmapped selectors are unreachable
The Diamond architecture improves on this model by providing facet-level isolation. A vulnerability in the SwapFacet cannot be exploited to call functions belonging to the AdminFacet, because selector routing is deterministic and based on the Diamond's facet mapping [2]. In the monolithic controller, all functions shared the same contract scope.
Emergency Freeze
The FREEZER_ROLE allows a designated multisig (deployed at 0xB0113804960345fd0a245788b3423319c86940e5 for both Spark and Grove) to instantly revoke the relayer's authorization [29]. This capability is unchanged by the Diamond upgrade — the AdminFacet handles setRelayer and setFreezer operations, and the freezer multisig retains the ability to halt all relayer operations with a single transaction [2][16].
Beyond the freezer, the Governance Security Module enforces a 24-hour pause delay on executive vote execution, providing a window to detect and cancel malicious governance proposals before they take effect [30]. Spells — the smart contracts encoding all on-chain changes for an executive vote — are subject to this delay before execution [31]. Emergency mechanisms including Protego (which can cancel pending governance actions), OSM_MOM (which can freeze oracle prices), and LINE_MOM (which can reduce debt ceilings to zero) operate independently of the PAU controller architecture [28].
Audit History
The legacy Spark ALM Controller has undergone multiple security audits. ChainSecurity completed an audit finding "a high level of security" across the codebase, specifically examining access control, rate limiting, and CCTP integration logic [17]. Cantina has conducted a continuous series of reviews, with engagements including "Sky: Spark ALM Controller" (February 2026), "Sky: SVM ALM Controller" (January 2026), and "Sky PAS" (February 2026), among others [18]. The MakerDAO DSS Allocator — the core allocation vault system that PAUs interact with — was separately audited by ChainSecurity, which found "a good level of security across critical areas including asset solvency, access control, and functional correctness" [32].
The Diamond PAU upgrade introduces new audit surface: the Diamond proxy routing logic, the diamondCut governance mechanism, and the namespaced storage pattern all require independent verification [18]. The staged upgrade approach is explicitly designed to minimize audit scope at each step — Stage 1 can leverage existing test infrastructure to prove functional equivalence, and Stage 3's audit surface is limited to the proxy routing mechanism since the facets are already audited from Stage 2 [9].
Roadmap Integration
The Diamond PAU is a foundational component of the Laniakea multi-phase roadmap, enabling standardized deployment patterns from Phase 1 through Phase 8 and beyond [7].
Phase 1.1: First Diamond PAU Deployment
Phase 1.1 of the Laniakea roadmap targets "Diamond PAU Deployment — Diamond PAUs deployed for first-cohort Primes" [7]. This initial deployment introduces the Diamond architecture for the PAU pattern, proving the concept in production with the NFAT PAU and selected Prime deployments. The Phase 1 specification defines the initial facet set (NfatDepositFacet, NfatWithdrawFacet, CoreHaloFacet, LegacyMigrationFacet) and establishes the governance integration pattern that subsequent phases will follow [9].
Phase 5: Halo Factory
Phase 5 introduces the Halo Factory, which deploys standardized Halo PAU stacks from templates [14]. By Phase 5, the Diamond PAU architecture serves as the substrate for factory-deployed Halos, enabling repeatable deployment of Halo infrastructure with minimal per-deployment engineering [14]. The Halo Factory specification references the Diamond PAU document for governance execution and guardrails [14].
Phase 6: Generator PAU
Phase 6 introduces a unified USDS Generator PAU that consolidates the per-Prime ilk model into a single-ilk interface between MCD and the Prime layer [6]. The Generator PAU architecture uses the standard PAU components (Controller + ALM Proxy + Rate Limits) and connects to Primes via ERC-4626 vault interfaces with functioning rate limits [6]. This consolidation eliminates per-Prime ilks, routing all Prime funding through a single Generator PAU that manages capital allocation centrally.
Phase 7: Prime Factory
Phase 7 introduces the Prime Factory: automated deployment of standardized Prime PAUs using Diamond architecture with default integrations to the Generator and Halo layers [5]. The factory deploys a complete PAU stack — Diamond Proxy, facets, ALM Proxy, Rate Limits — and registers it in the Configurator system [5][7]:
- Factory deploys Prime PAU (Diamond Proxy + facets + ALM Proxy + Rate Limits)
- Factory registers PAU in Configurator via BEAMTimeLock
- Factory pre-creates init rate limits for standard operations
- Core Council grants cBEAM to GovOps (14-day timelock)
- GovOps activates rate limits, sets relayer, begins operations [7]
The Phase 7 specification notes that a conservative "facet allowlist" should exist to control which facets can be wired into factory-deployed PAUs [5]. Pre-approved facets reduce the risk that the factory becomes a rapid pathway for unreviewed risk. Explicit governance review is required for adding new external integrations via new facets [5].
Phase 8: Generator Factory
Phase 8 extends the factory pattern to Generator PAUs, completing the standardized deployment pipeline across all protocol layers [7]. At this stage, the Diamond PAU architecture powers PAUs at every layer of the system — Generator, Prime, Halo, and Foreign — with each layer consuming only the facets relevant to its operational needs [3].
Four-Layer Universality
The architecture overview establishes that every layer in the Laniakea system is composed of PAUs using the same contract patterns [3]:
| Layer | Location | Upstream | Downstream |
|---|---|---|---|
| Generator | Mainnet | ERC-20 stablecoin | Prime Layer (via ERC-4626 vault) |
| Prime | Mainnet | Generator Layer | Halo Layer, Core Halos, Foreign Primes |
| Halo | Mainnet | Prime Layer | RWA strategies, custodians |
| Foreign | Alt-chains | Mainnet Prime (via bridge) | Foreign Halo Layer |
What differs between PAU deployments at each layer is the upstream connection, approved allocation targets, and rate limit values — nothing else [3]. The Diamond architecture amplifies this universality by making it trivial for any layer's PAU to adopt new integrations via facet addition rather than controller replacement.
Criticism and Considerations
The Diamond PAU architecture represents a deliberate trade-off between modularity and complexity. While the faceted design resolves the concrete engineering problems — bytecode limits, diverging codebases, state migration overhead — that necessitated the upgrade, it introduces its own set of risks and operational challenges. The EIP-2535 standard remains somewhat controversial among Ethereum developers, and the multi-stage transition from the legacy architecture creates a prolonged period of architectural heterogeneity across Stars [20]. The following considerations inform both the design decisions and the staged rollout strategy.
Diamond Pattern Complexity
The EIP-2535 Diamond Standard introduces architectural complexity that some Ethereum developers consider a significant trade-off [20]. The Diamond proxy pattern requires careful management of selector-to-facet mappings, namespaced storage slots, and delegatecall semantics. Storage collision risks exist if facet developers do not correctly implement the namespaced storage pattern — an error that could corrupt state across facets [20]. The DSS Allocator codebase (the allocation vault system) "avoids diamond/proxy upgrade patterns" by design, preferring "immutable core infrastructure with modular, replaceable funnels" [19].
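The namespaced-storage discipline, and the collision failure mode, can be illustrated with a toy model. Note that Solidity uses keccak256 while Python's standard library ships only SHA3-256 (the same sponge with different padding), which is fine for illustration but not bit-compatible with on-chain slots:

```python
import hashlib

def storage_slot(namespace: str) -> int:
    """Derive a pseudo-unique slot for a facet's storage struct."""
    return int.from_bytes(hashlib.sha3_256(namespace.encode()).digest(), "big")

# All facets delegatecall into the same proxy storage, modeled as one dict.
proxy_storage: dict[int, dict] = {}

def facet_storage(namespace: str) -> dict:
    # Each facet anchors its struct at its own namespaced slot, so two
    # correctly written facets can never collide.
    return proxy_storage.setdefault(storage_slot(namespace), {})

lending = facet_storage("diamond.pau.lending.v1")
bridge = facet_storage("diamond.pau.bridge.v1")
lending["totalDeposits"] = 100
assert "totalDeposits" not in bridge   # namespaces keep facets isolated

# The collision risk: if a careless facet reuses another's namespace (or a
# raw slot), it silently shares, and can corrupt, that facet's state.
rogue = facet_storage("diamond.pau.lending.v1")
rogue["totalDeposits"] = 0   # overwrites the lending facet's state
```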
Upgrade Risk
The most dangerous upgrade in the Diamond PAU transition is the move from Stage 2 to Stage 3, which replaces the controller's routing mechanism with a fundamentally different architecture [9]. While the facets remain identical, the entry point changes from explicit function definitions to dynamic selector routing. A bug in the Diamond proxy's fallback function could render the entire controller inoperable, requiring emergency governance intervention to authorize an alternative controller on the ALM Proxy.
The staged approach mitigates this risk by ensuring that each stage's audit builds on the previous one. The facets are deployed and battle-tested in Stage 2 before the proxy routing layer changes in Stage 3 [9]. However, the combination of delegatecall, shared storage, and dynamic routing creates a larger attack surface than the legacy monolithic controller, where all logic existed within a single, statically-analyzed contract.
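A toy model of the Stage 3 change: the monolith's fixed entry points become a single fallback that routes 4-byte selectors through a governance-editable table. Selectors and facet names here are invented, and real routing happens via delegatecall in Solidity:

```python
from typing import Callable

class DiamondProxy:
    """Minimal sketch of EIP-2535 selector routing (no delegatecall)."""

    def __init__(self):
        self.selectors: dict[str, Callable] = {}  # selector -> facet function

    def diamond_cut(self, additions: dict[str, Callable]) -> None:
        # Governance-authorized rewiring of the routing table.
        self.selectors.update(additions)

    def fallback(self, selector: str, *args):
        # Stage 3's single dynamic entry point: a bug here disables every
        # controller function at once, unlike a monolith's static dispatch.
        facet_fn = self.selectors.get(selector)
        if facet_fn is None:
            raise LookupError(f"no facet for selector {selector}")
        return facet_fn(*args)

def deposit_facet(amount: int) -> str:
    # Stand-in for a facet function reached via delegatecall on-chain.
    return f"deposited {amount}"

diamond = DiamondProxy()
diamond.diamond_cut({"0xaaaaaaaa": deposit_facet})  # hypothetical selector
```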
Operational Overhead During Transition
During the multi-stage upgrade, different Stars may operate at different stages. Spark might be at Stage 2 while Grove is at Stage 1, and the NFAT PAU might deploy directly at Stage 3. This heterogeneity complicates operational procedures, audit coordination, and incident response. The Configurator system's compatibility across all stages mitigates some of this complexity, but GovOps teams must be trained on stage-specific deployment and upgrade procedures [4][9].
Current State and Future Developments
Disclaimer: This section contains forward-looking information based on published protocol roadmaps and development presentations as of February 2026. Actual implementation timelines, parameter values, and feature availability may differ materially from descriptions herein. The Diamond PAU specifications are under active development and subject to ongoing revision.
Current State
As of February 2026, the Diamond PAU upgrade is in active development. Stage 1 — library extraction and standardization of integration logic across Spark and Grove codebases — has entered its first audit engagement [18]. The legacy PAU architecture remains the production system across all Stars, with Spark operating version 1.8-1.10 controllers across six chains [15][29], Grove operating across five chains [29], and Keel on Ethereum mainnet [29].
The existing Spark Liquidity Layer has deployed through ten production releases, onboarded over twenty integration types, and managed over $4 billion in peak capital allocation [26]. As of mid-2025, Spark TVL reached an all-time high of approximately $8.1 billion (SparkLend $4.7 billion plus Spark Liquidity Layer $3.4 billion) [26]. Grove launched with a $1 billion allocation [24], and Keel launched with a $2.5 billion allocation roadmap [25].
Implementation Timeline
Key milestones as of February 2026:
| Milestone | Target | Status |
|---|---|---|
| Stage 1 audit (Cantina/ChainSecurity) | March 2026 | In progress [18] |
| Stage 2 audit | April 2026 | Planned |
| Stage 3 audit | April-May 2026 | Planned |
| Key decisions on Star launch stages | March 2026 | Pending |
The three audit stages are designed to proceed in parallel where possible — the production deployment of Stage 1 does not block the audit of Stage 2, and Stage 2's deployment does not block Stage 3's audit [9]. This parallelization is intended to bring the protocol to the final Diamond PAU state as quickly as audit timelines permit.
Future Integration Patterns
The Diamond PAU enables a fundamentally different approach to integration management. When CCTP upgrades to version 2 — with deprecation of v1 expected toward the end of 2026 — the upgrade path under Diamond PAU is straightforward: deploy a new CCTPv2Facet, governance adds it to the Diamond via diamondCut, and the old CCTPv1Facet can be removed or retained for backward compatibility [2]. Under the legacy architecture, such an upgrade would require replacing the entire controller on every Star that uses CCTP.
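The v1-to-v2 path sketched above maps onto the EIP-2535 diamondCut actions (Add, Replace, Remove). A toy version follows, with facet and selector names invented for illustration:

```python
from enum import Enum

class Action(Enum):
    ADD = 0
    REPLACE = 1
    REMOVE = 2

def diamond_cut(routing: dict[str, str],
                cuts: list[tuple[Action, str, list[str]]]) -> None:
    """Apply governance-authorized changes to the selector -> facet table."""
    for action, facet, selectors in cuts:
        for sel in selectors:
            if action is Action.ADD:
                assert sel not in routing, "Add must not overwrite"
                routing[sel] = facet
            elif action is Action.REPLACE:
                assert sel in routing, "Replace needs an existing mapping"
                routing[sel] = facet
            else:  # REMOVE
                routing.pop(sel, None)

routing = {"transferV1()": "CCTPv1Facet"}
# v1 -> v2 migration: one cut wires in the new facet and retires the old.
diamond_cut(routing, [
    (Action.ADD, "CCTPv2Facet", ["transferV2()"]),
    (Action.REMOVE, "", ["transferV1()"]),
])
```

Retaining the old facet for backward compatibility corresponds to simply omitting the Remove entry from the cut.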
The Phase 7 Prime Factory vision represents the end state: any new Star deploys a Diamond PAU from a standardized factory with a minimal set of pre-approved facets, inheriting battle-tested integration logic from the canonical facet repository [5]. The factory pattern, combined with the Configurator's SORL-constrained ramp-up, creates a repeatable, auditable deployment pipeline that scales to an arbitrary number of Stars without multiplicative engineering effort [5][7].
Related Topics
- Sky Protocol — The overarching protocol within which Diamond PAUs operate, including ALM Proxy and MainnetController technical details.
- Allocation System Primitive — The three-layer capital allocation architecture (Core/Funnel/Conduit) that PAUs implement.
- Spark Capital Allocation — End-to-end capital lifecycle showing how governance authorizes and manages Spark's PAU.
- Spells — The current governance execution mechanism; Diamond PAU upgrades are authorized through spells and executive votes.
- Spark — The primary Star operating the most mature PAU deployment, including the Spark Proxy Spell system.
- Grove — Star whose Liquidity Layer was forked from Spark, driving the code consolidation that motivated the Diamond upgrade.
- Laniakea — The forward-looking protocol design authored by Rune Christensen that defines the Diamond PAU target architecture.
- Architecture Overview — The four-layer PAU architecture across Generator, Prime, Halo, and Foreign layers.
- Configurator Unit — The governance system managing PAU parameters, rate limits, and BEAM hierarchy.
- BEAM Payload Safety — The spell-less parameter adjustment system that complements PAU governance.
- Governance Security Module — The 24-hour pause delay that applies to Diamond PAU governance operations.
- Executive Votes — The voting mechanism through which Diamond PAU upgrades are authorized.
- Sky Stars — Overview of Prime Agents whose capital allocation infrastructure the Diamond PAU standardizes.
- Skylink — Cross-chain bridging infrastructure that Diamond PAU facets interact with for multi-chain operations.
Sources
1. ERC-2535: Diamonds, Multi-Facet Proxy | Ethereum
2. Diamond PAU | Laniakea
3. Architecture Overview | Laniakea
4. Configurator Unit | Laniakea
5. Phase 7: Prime Factory | Laniakea
6. Phase 6: Generator PAU | Laniakea
7. Roadmap Overview | Laniakea
8. Sky Ecosystem Whitepaper | Laniakea
9. Laniakea Phase 1 | Laniakea
10. NFATs | Laniakea
11. Council BEAM Authority | Laniakea
12. Rate Limit Calibration | Laniakea
13. Laniakea | Laniakea
14. Phase 5: Halo Factory | Laniakea
15. sparkdotfi/spark-alm-controller | GitHub
16. Spark ALM Controller | Spark Docs
17. Spark ALM Controller Audit | ChainSecurity
18. Protocol Security Reviews | Cantina
19. sky-ecosystem/dss-allocator | GitHub
20. The Diamond Proxy Pattern Explained | RareSkills
21. Introduction to the Diamond Standard | Nick Mudge
22. Star Allocation System Initialization for Spark | Sky Governance
23. Arbitrum Token Bridge Initialization | Sky Governance
24. Grove Announces Launch | BusinessWire
25. Keel Debuts as Sky's Solana-Focused Star | CoinDesk
26. Spark TVL Reaches $8 Billion | The Defiant
27. ALM Framework - A.3.3 | Sky Atlas
28. Spell Operations - A.1.9 | Sky Atlas
29. Spark SLL Contracts - A.6.1.1.1.2.6.1 | Sky Atlas
30. GSM Pause Delay - A.1.9.3.1.2 | Sky Atlas
31. DSSpell Documentation | MakerDAO Docs
32. MakerDAO DSS Allocator Audit | ChainSecurity
Data Freshness
- Temporal Category — Semi-static with dynamic deployment status
- Last Updated — February 26, 2026
- Data Currency:
- Diamond PAU specification current as of Laniakea documents dated January-February 2026
- ALM Proxy and Controller addresses current as of Sky Atlas snapshot (February 2026)
- Audit status current as of Cantina portfolio accessed February 2026
- TVL figures from mid-2025 (The Defiant); current TVL may differ significantly
- Next Review Recommended — After Stage 1 audit completion (March-April 2026) or upon any production Diamond PAU deployment