Scheduled url resolver #56

Open — vimmotions wants to merge 16 commits into main from scheduled-url-resolver

Conversation

vimmotions (Contributor) commented Feb 25, 2026

Scheduled URL Resolver for #[resolve] Macro

Adds slot-scheduled, condition-gated URL resolution to the #[resolve] attribute macro. This enables entities to automatically fetch external data at a specific Solana slot, with conditional triggering and URL templating — all declaratively from the entity definition.

Motivation

The Ore mining protocol requires fetching a pre-computed seed from the Entropy API when a mining round expires. Previously, this would require manual polling or custom handler logic. With this feature, a single #[resolve] annotation handles:

  • Constructing the API URL from entity state
  • Waiting until the round's expiry slot
  • Only firing if the on-chain value hasn't arrived yet
  • Extracting and storing the result

Usage

#[resolve(
    url = "https://entropy-api.onrender.com/var/{entropy.entropy_var_address}/seed?samples={entropy.entropy_samples}",
    extract = "seed",
    schedule_at = state.expires_at,
    condition = "entropy.entropy_value == null",
    strategy = SetOnce
)]
pub resolved_seed: Option<Vec<u8>>,

New #[resolve] Parameters

| Parameter | Description |
| --- | --- |
| url = "..." | URL with {field.path} template interpolation from entity state |
| schedule_at = field.path | Solana slot number at which to execute the resolver |
| condition = "field == null" | Only schedule if this condition is true at event time |
| strategy = SetOnce | Existing parameter — only resolve once per entity |

Architecture

URL Templates (UrlTemplatePart, UrlSource)

  • New AST types to represent URL templates with embedded field references
  • {entropy.entropy_var_address} syntax is parsed into UrlTemplatePart::FieldRef segments
  • At execution time, field references are resolved against the entity's current state via build_url_from_template()
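For illustration, here is a minimal stdlib-only sketch of the template split and interpolation. The names parse_url_template and build_url mirror the description above, but the bodies are simplified stand-ins (a flat path-to-string map substitutes for nested entity state):

```rust
use std::collections::HashMap;

// Simplified stand-in for the PR's UrlTemplatePart.
#[derive(Debug, Clone, PartialEq)]
enum UrlTemplatePart {
    Literal(String),
    FieldRef(String), // dotted path, e.g. "entropy.entropy_samples"
}

// Split "https://.../{a.b}/x" into Literal and FieldRef segments.
fn parse_url_template(url: &str) -> Vec<UrlTemplatePart> {
    let mut parts = Vec::new();
    let mut rest = url;
    while let Some(open) = rest.find('{') {
        if open > 0 {
            parts.push(UrlTemplatePart::Literal(rest[..open].to_string()));
        }
        let close = open + rest[open..].find('}').expect("unclosed '{' in url template");
        parts.push(UrlTemplatePart::FieldRef(rest[open + 1..close].to_string()));
        rest = &rest[close + 1..];
    }
    if !rest.is_empty() {
        parts.push(UrlTemplatePart::Literal(rest.to_string()));
    }
    parts
}

// Resolve each FieldRef against entity state (flattened to path -> string here).
fn build_url(parts: &[UrlTemplatePart], state: &HashMap<String, String>) -> Option<String> {
    let mut url = String::new();
    for part in parts {
        match part {
            UrlTemplatePart::Literal(s) => url.push_str(s),
            UrlTemplatePart::FieldRef(path) => url.push_str(state.get(path)?), // unresolvable -> None, retried later
        }
    }
    Some(url)
}

fn main() {
    let parts = parse_url_template("https://api.example.com/var/{entropy.entropy_var_address}/seed");
    let mut state = HashMap::new();
    state.insert("entropy.entropy_var_address".to_string(), "abc123".to_string());
    assert_eq!(
        build_url(&parts, &state).as_deref(),
        Some("https://api.example.com/var/abc123/seed")
    );
}
```

Returning None when a field is missing matches the retry behavior described below: an unresolvable template is a failure, not a panic.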

Conditions (ResolverCondition)

  • Parsed from string expressions like "entropy.entropy_value == null"
  • Evaluated at event-processing time to decide whether to schedule the callback
  • Supports == and != operators with null comparisons
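A sketch of how such a condition might be parsed and evaluated, handling only the two null forms named above (stdlib-only; a flat map with Option values stands in for nested entity state, and the type/function names are illustrative):

```rust
use std::collections::HashMap;

// Entity-state stand-in: dotted path -> value, where None models an on-chain null.
type State = HashMap<String, Option<String>>;

enum CondOp { EqNull, NeNull }

struct ResolverCondition {
    field_path: String,
    op: CondOp,
}

// Accepts exactly the "path == null" / "path != null" shapes described above.
fn parse_condition(expr: &str) -> Option<ResolverCondition> {
    let (path, op) = if let Some(p) = expr.strip_suffix("== null") {
        (p, CondOp::EqNull)
    } else if let Some(p) = expr.strip_suffix("!= null") {
        (p, CondOp::NeNull)
    } else {
        return None;
    };
    Some(ResolverCondition { field_path: path.trim().to_string(), op })
}

fn evaluate_condition(cond: &ResolverCondition, state: &State) -> bool {
    // A missing field is treated the same as an explicit null here.
    let is_null = matches!(state.get(&cond.field_path), None | Some(None));
    match cond.op {
        CondOp::EqNull => is_null,
        CondOp::NeNull => !is_null,
    }
}

fn main() {
    let cond = parse_condition("entropy.entropy_value == null").unwrap();
    let mut state = State::new();
    state.insert("entropy.entropy_value".to_string(), None);
    assert!(evaluate_condition(&cond, &state)); // still null -> schedule the resolver

    state.insert("entropy.entropy_value".to_string(), Some("42".to_string()));
    assert!(!evaluate_condition(&cond, &state)); // revealed on-chain -> skip
}
```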

Slot Scheduler (SlotScheduler)

  • New component backed by BTreeMap<u64, Vec<ScheduledCallback>>
  • register(target_slot, callback) — enqueues a callback with deduplication
  • take_due(current_slot) — returns all callbacks whose target slot has passed
  • re_register(callback, next_slot) — re-enqueues a failed callback for retry
  • A background tokio::spawn task polls every 400ms, checking current slot against pending callbacks
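The scheduler described above can be sketched in a few lines. This is a simplified model — the dedup key and callback fields are illustrative, not the real ScheduledCallback:

```rust
use std::collections::{BTreeMap, HashSet};

#[derive(Debug, Clone, PartialEq)]
struct ScheduledCallback {
    entity_key: String,
    field: String,
    retry_count: u32,
}

#[derive(Default)]
struct SlotScheduler {
    callbacks: BTreeMap<u64, Vec<ScheduledCallback>>,
    registered: HashSet<(String, String)>, // dedup on (entity, field)
}

impl SlotScheduler {
    // Enqueue a callback unless one is already registered for this entity/field.
    fn register(&mut self, target_slot: u64, cb: ScheduledCallback) {
        let key = (cb.entity_key.clone(), cb.field.clone());
        if self.registered.insert(key) {
            self.callbacks.entry(target_slot).or_default().push(cb);
        }
    }

    // Drain every callback whose target slot is <= current_slot.
    fn take_due(&mut self, current_slot: u64) -> Vec<ScheduledCallback> {
        let not_due = self.callbacks.split_off(&(current_slot + 1));
        let due: Vec<_> = std::mem::replace(&mut self.callbacks, not_due)
            .into_values()
            .flatten()
            .collect();
        for cb in &due {
            self.registered.remove(&(cb.entity_key.clone(), cb.field.clone()));
        }
        due
    }
}

fn main() {
    let mut sched = SlotScheduler::default();
    let cb = ScheduledCallback { entity_key: "round-1".into(), field: "resolved_seed".into(), retry_count: 0 };
    sched.register(100, cb.clone());
    sched.register(100, cb.clone()); // duplicate -> dropped
    sched.register(105, ScheduledCallback { entity_key: "round-2".into(), field: "resolved_seed".into(), retry_count: 0 });

    assert_eq!(sched.take_due(99).len(), 0);  // nothing due yet
    assert_eq!(sched.take_due(100).len(), 1); // round-1 fires once
    assert_eq!(sched.take_due(200).len(), 1); // round-2 fires
}
```

BTreeMap::split_off makes the "everything at or before this slot" drain cheap, since slots are kept in sorted order.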

Retry Logic

  • On failure (state not found, URL template unresolvable, or fetch returned no data), the callback is re-registered for current_slot + 1 with an incremented retry_count
  • After MAX_RETRIES (100 attempts, ~40s at 400ms polling), the callback is discarded with a warning log
  • Successful resolution (non-empty mutations) completes immediately with no further retries
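Under these rules the retry decision reduces to a small pure function; a sketch with illustrative names:

```rust
const MAX_RETRIES: u32 = 100; // ~40s at the 400ms polling interval

// On failure, returns the slot to re-register at plus the bumped retry count,
// or None once the retry budget is spent (the real task logs a warning there).
fn retry_at(retry_count: u32, current_slot: u64) -> Option<(u64, u32)> {
    if retry_count >= MAX_RETRIES {
        None // discard the callback
    } else {
        Some((current_slot + 1, retry_count + 1))
    }
}

fn main() {
    assert_eq!(retry_at(0, 500), Some((501, 1)));
    assert_eq!(retry_at(100, 600), None); // budget exhausted -> dropped
}
```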

Compiler & VM

  • QueueResolver opcode extended with url_template, condition, and schedule_at fields
  • VM evaluates the condition at event time; if true, pushes a ScheduledCallback
  • Handler extracts scheduled callbacks and registers them with the SlotScheduler
  • When due, the background task builds the URL from current state, fetches via UrlResolverClient, applies mutations, and publishes to the projector

Files Changed

AST & Types

  • hyperstack-macros/src/ast/types.rs — UrlTemplatePart, UrlSource, UrlResolverConfig, ResolverCondition, ResolverSpec.schedule_at
  • hyperstack-interpreter/src/ast.rs — Mirror types for interpreter

Parsing

  • hyperstack-macros/src/parse/attributes.rs — Parse url, condition, schedule_at from #[resolve]
  • hyperstack-macros/src/stream_spec/entity.rs — parse_url_template() for {field} syntax
  • hyperstack-macros/src/stream_spec/sections.rs — Wire parsed attributes into resolver specs
  • hyperstack-macros/src/stream_spec/ast_writer.rs — Condition string parsing, resolver spec grouping
  • hyperstack-macros/src/stream_spec/proto_struct.rs — Code generation for URL template types

Interpreter

  • hyperstack-interpreter/src/compiler.rs — Compile resolver specs with new fields into QueueResolver
  • hyperstack-interpreter/src/vm.rs — ScheduledCallback struct, condition evaluation, scheduled callback extraction
  • hyperstack-interpreter/src/scheduler.rs — SlotScheduler, build_url_from_template(), evaluate_condition(), get_value_at_path()

Runtime Codegen

  • hyperstack-macros/src/codegen/vixen_runtime.rs — SlotScheduler instantiation, background polling task, URL resolution and mutation publishing

Commit Messages

  • Extend #[resolve] macro to support URL templates with field placeholders (e.g., url = "https://api.example.com/{field.path}/data") alongside the existing dotted-path syntax. Add AST types (UrlTemplatePart, UrlSource, ResolverCondition, ScheduledCallback), condition/schedule_at parameter parsing, compiler opcode extensions, and VM runtime template construction with field interpolation. (Made-with: Cursor)
  • Generate actual ResolverCondition and schedule_at values in the proto_struct code path instead of hardcoded None, completing the condition parameter support across both AST-writer and codegen paths. (Made-with: Cursor)
  • New scheduler.rs with BTreeMap-based SlotScheduler for registering, deduplicating, and dispatching slot-triggered resolver callbacks with retry support. Adds get_entity_state() to VmContext for read-only state access needed by the scheduler background task. (Made-with: Cursor)
  • Wire SlotScheduler into both single-entity and multi-entity VmHandler codegen. After each handler execution, scheduled callbacks are collected and registered. A background task polls every 400ms, evaluates conditions, retries up to 100 slots, builds URLs from templates, and fires resolver requests through the standard resolve_url_batch pipeline. (Made-with: Cursor)
  • Use the new #[resolve] syntax to pre-fetch entropy seed from the Ore API when a round expires, conditioned on entropy_value not yet being revealed on-chain. (Made-with: Cursor)
vercel bot commented Feb 25, 2026

The latest updates on your projects:

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| hyperstack-docs | Ready | Preview, Comment | Feb 26, 2026 9:19am |
adiman9 (Contributor) previously approved these changes Feb 26, 2026:

Yeah, this is banging. Few notes but otherwise looks good to merge.

}
}

pub fn register(&mut self, target_slot: u64, callback: ScheduledCallback) {
Review comment:

If the same entity gets a new expires_at value (e.g. a new round starts), register silently drops the callback because the dedup key is already in self.registered from the first call. The callback stays pinned to the original target slot.

if self.registered.contains(&dedup_key) {
    // Remove the old callback targeting the stale slot
    for cbs in self.callbacks.values_mut() {
        cbs.retain(|cb| Self::dedup_key(cb) != dedup_key);
    }
}

use serde_json::Value;
use std::collections::{BTreeMap, HashSet};

const MAX_RETRIES: u32 = 100;
Review comment:

This constant is exported as MAX_SCHEDULER_RETRIES on line 123, but the generated code in vixen_runtime.rs (line 946) re-declares its own const MAX_RETRIES: u32 = 100 instead of referencing hyperstack::runtime::hyperstack_interpreter::scheduler::MAX_SCHEDULER_RETRIES.

}
continue;
}
};
Review comment:

We probably want to re-evaluate the condition here. If resolved_seed is already populated (from a previous successful resolve), it'll fire again every slot until expires_at + MAX_RETRIES slots pass. Also, if something else in the condition has changed, we may want to bail on making the HTTP request.

Also worth checking SetOnce in case it's already been set; we should bail in that case as well.

// Re-evaluate condition against current state
if let Some(ref cond) = callback.condition {
    if !hyperstack::runtime::hyperstack_interpreter::scheduler::evaluate_condition(cond, &state) {
        continue; // condition no longer holds
    }
}

// For SetOnce, check if target fields are already populated
if callback.strategy == ResolveStrategy::SetOnce {
    let already_resolved = callback.extracts.iter().all(|ext| {
        hyperstack::runtime::hyperstack_interpreter::scheduler::get_value_at_path(&state, &ext.target_path)
            .map(|v| !v.is_null())
            .unwrap_or(false)
    });
    if already_resolved { continue; }
}

}
}
} else {
continue;
Review comment:

This silently drops requests that don't contain a url_template, but they might already have a valid url in them and just want to trigger off a slot.

pub queued_at: i64,
}

#[derive(Debug, Clone)]
Review comment:

Maybe derive PartialEq as well?

adiman9 commented Feb 26, 2026

Noticed the CI is failing. Some clippy errors to clean up, and you will need to regenerate the ore sdk via hs sdk create, although you will need to compile the hyperstack cli from your local copy (cargo install --path cli from the root of the repo). Then generate the sdk from the ore directory so it picks up the ore hyperstack.toml and places the sdk in the right directory.

Then worth just running npm install && npm run build in the stacks/sdk dir just to be sure.
