
Scheduler Bindings are Coming to Agave!

Written By

Andrew Fitzgerald & Max Resnick

May 15, 2025

Today, more than 90% of the network runs a non-default scheduler implementation like Jito or Paladin to increase their rewards. Soon, we’ll be adding a suite of new features to Agave that will allow validators to customize block-packing logic without modifying the stock Agave client.

These changes will create a new modular architecture for transaction intake and block packing. This logic will be separated from the core validator binary, allowing for easier scheduler customization without the risk of disrupting core validator operations such as replay and voting. We will also provide default implementations of common block-building primitives such as “drop on revert” and “all or nothing” bundles.

Motivation

As MEV on Solana matures, custom scheduler implementations are becoming increasingly common. While custom schedulers may increase revenue, the way they are implemented today has a number of disadvantages. Modifying the scheduler today requires running a completely different version of the validator, and these schedulers are often private and poorly understood, which can make it difficult to land transactions.

Moving to a modular architecture for scheduling has a number of advantages:

  1. Transparency: The increasing prevalence of private, modified schedulers makes it difficult to understand which games are being played. A standard interface with open default implementations makes packing behavior easier to observe.

  2. Safety: The Solana validator codebase is complex, and modifications to the core binary can impact the liveness of the protocol. Keeping packing logic outside the core binary contains that risk.

  3. DevOps: Because modifying the scheduler requires running a different version of the validator binary, it can be difficult to quickly patch the network, since every change requires the scheduler teams to rebase.

Implementation Overview

  • Added CLI options to:

    • Connect the validator to an external block-building service via QUIC or another method

    • Optionally designate an address/port for the validator to publish on gossip as its TPU

    • Manage the connection to the external block-building service via admin RPC

      • Allow disconnection and reconnection on a live validator

  • The block-building service streams transactions for block-building to the validator during the leader window

  • The leader packs blocks with the given transactions in conflict-FIFO order (sketched below)

  • If the validator is disconnected from the external block-building service, it falls back to the normal block-production path, and its contact info is updated via gossip
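
To make conflict-FIFO ordering concrete, here is a minimal sketch of receive-order packing under a block cost limit. Everything in it is illustrative: `PackableTx`, its `cost` field, `pack_fifo`, and the `BLOCK_COST_LIMIT` value are hypothetical stand-ins, not Agave types.

/// Hypothetical, simplified transaction view carrying only its compute cost.
struct PackableTx {
    cost: u64,
}

/// Illustrative block cost ceiling; the real limits are tracked per block.
const BLOCK_COST_LIMIT: u64 = 48_000_000;

/// Pack transactions in receive order up to the block cost limit. Scanning
/// the queue front-to-back means any two conflicting transactions (ones that
/// write-lock a common account) are attempted in the order they were received.
fn pack_fifo(received: &[PackableTx]) -> Vec<usize> {
    let mut used: u64 = 0;
    let mut packed = Vec::new();
    for (index, tx) in received.iter().enumerate() {
        if used + tx.cost <= BLOCK_COST_LIMIT {
            used += tx.cost;
            packed.push(index);
        }
    }
    packed
}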

Communication

The validator connects to a block-building service via TCP.

The block-building service will periodically send heartbeat messages to the validator.
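
As a rough illustration of how those heartbeats might be consumed, the sketch below tracks the time of the last heartbeat and falls back to the stock scheduler once heartbeats stop arriving. The `SchedulerSource` state machine and `HEARTBEAT_TIMEOUT` value are assumptions for illustration, not part of Agave.

use std::time::{Duration, Instant};

/// Hypothetical liveness window; the real timeout is up to the implementation.
const HEARTBEAT_TIMEOUT: Duration = Duration::from_millis(1_500);

/// Where the validator sources packed transactions from.
enum SchedulerSource {
    /// Connected to the external block-building service.
    External { last_heartbeat: Instant },
    /// Stock in-process scheduler (the fallback path).
    Internal,
}

impl SchedulerSource {
    /// Record a heartbeat from the external service.
    fn on_heartbeat(&mut self) {
        *self = SchedulerSource::External { last_heartbeat: Instant::now() };
    }

    /// Fall back to the normal block-production path once heartbeats stop;
    /// the validator would also update its contact info via gossip here.
    fn check_liveness(&mut self) {
        let stale = matches!(
            self,
            SchedulerSource::External { last_heartbeat }
                if last_heartbeat.elapsed() > HEARTBEAT_TIMEOUT
        );
        if stale {
            *self = SchedulerSource::Internal;
        }
    }
}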

During the leader window, the block-building service should send batches of transactions with metadata to the validator to be packed into the block(s).

Metadata will specify special treatment for the transactions, e.g. drop on revert, all or nothing, etc.

The structure of messages from the block-building service to the validator may look similar to the following code:

use bitflags::bitflags;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
#[repr(C)]
pub struct TransactionBatch {
    /// Identifier the validator echoes back in transaction results.
    id: u32,
    /// Special-treatment flags applied to the whole batch.
    flags: BatchFlags,
    /// Serialized transactions, in the order they should be packed.
    transactions: Vec<Vec<u8>>,
}

bitflags! {
    #[derive(Serialize, Deserialize)]
    pub struct BatchFlags: u8 {
        const ALL_OR_NOTHING = 0b0000_0001;
        const DROP_ON_REVERT = 0b0000_0010;
    }
}
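
Because `BatchFlags` is a bitflags type, the flags compose. A hypothetical batch that must land atomically and be dropped on any revert could set both bits; `example_batch` and `serialized_txs` below are illustrative names, assuming the types above.

/// Hypothetical usage: build a bundle that must land atomically and be
/// dropped if any transaction reverts. `serialized_txs` stands in for
/// already-serialized transactions.
fn example_batch(serialized_txs: Vec<Vec<u8>>) -> TransactionBatch {
    TransactionBatch {
        id: 42,
        flags: BatchFlags::ALL_OR_NOTHING | BatchFlags::DROP_ON_REVERT,
        transactions: serialized_txs,
    }
}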

The validator must communicate back to the block-building service to signal the beginning and end of the leader slots, the results of sent transaction batches, and updates to block limits. These messages may take a form similar to the following code:

use std::num::NonZeroU64;

use serde::{Deserialize, Serialize};
use solana_sdk::{clock::Slot, pubkey::Pubkey};

#[derive(Serialize, Deserialize)]
#[repr(C, u8)]
pub enum ValidatorMessage {
    BeginLeaderSlot(Slot),
    SlotUpdate {
        /// Percentage of the way through the slot.
        percentage_through_slot: u64,
        /// Block cost units consumed so far.
        block_cost_units_used: u64,
    },
    EndLeaderWindow,
    TransactionResults(Vec<TransactionResult>),
    InvalidBatch,
}

#[derive(Serialize, Deserialize)]
#[repr(C)]
pub struct TransactionResult {
    /// The `TransactionBatch::id` this result refers to.
    batch_id: u32,
    /// Index of the transaction within the batch.
    index: u32,
    /// Committed compute units; `None` indicates the transaction is not included.
    committed_cus: Option<NonZeroU64>,
    write_accounts: Vec<Pubkey>,
}
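
The wire encoding is not pinned down here, but as one plausible sketch, messages could be bincode-serialized and length-prefixed so the peer can delimit them on the stream. The `frame_message` helper and the four-byte prefix are assumptions, not a specified format.

/// Hypothetical framing: serialize a message with bincode and prefix it with
/// a little-endian u32 length so the peer can split messages off the stream.
fn frame_message(message: &ValidatorMessage) -> bincode::Result<Vec<u8>> {
    let payload = bincode::serialize(message)?;
    let mut framed = (payload.len() as u32).to_le_bytes().to_vec();
    framed.extend_from_slice(&payload);
    Ok(framed)
}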

Packing Guarantees

  • If two transactions conflict with one another, the validator will attempt to pack them in receive order

  • If not all received transactions can be packed into a block due to block limits, transactions are attempted for inclusion in receive order until the limits are reached

  • If a batch is received with the DROP_ON_REVERT flag, a transaction will only be included if its result is successful

  • If a batch is received with the ALL_OR_NOTHING flag, transactions will only be included if every transaction in the batch can be included (a sketch of both flag behaviors follows)
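
To make the last two guarantees concrete, here is a minimal sketch of how execution results for a batch might be filtered before committing, honoring both flags. `ExecutionStatus` and `commit_batch` are hypothetical illustrations of the semantics above, not Agave code.

/// Hypothetical per-transaction execution outcome.
#[derive(Clone, Copy, PartialEq)]
enum ExecutionStatus {
    Succeeded,
    Reverted,
    NotExecuted, // e.g. dropped for exceeding block limits
}

/// Decide which transactions of a batch to commit, honoring the batch flags.
/// Returns the indices of transactions to include in the block.
fn commit_batch(flags: BatchFlags, results: &[ExecutionStatus]) -> Vec<usize> {
    // ALL_OR_NOTHING: if any transaction of the batch cannot be included,
    // drop the whole batch.
    let excluded = |s: ExecutionStatus| {
        s == ExecutionStatus::NotExecuted
            || (flags.contains(BatchFlags::DROP_ON_REVERT) && s == ExecutionStatus::Reverted)
    };
    if flags.contains(BatchFlags::ALL_OR_NOTHING) && results.iter().any(|s| excluded(*s)) {
        return Vec::new();
    }

    results
        .iter()
        .enumerate()
        .filter(|(_, s)| !excluded(**s))
        .map(|(index, _)| index)
        .collect()
}

Note that without DROP_ON_REVERT, a reverted transaction is still included (as failed transactions normally are on Solana); the flag is what turns a revert into an exclusion.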