Technology May 01, 2026 · 8 min read

Signal-to-Action in Under 15 Minutes: Our Real-Time Pipeline Architecture

by SpurIQ Engineering

A signal without action is just noise with a timestamp

I want to be direct about something before we get into the architecture: most revenue teams are not losing deals because they lack data. They are losing deals because data arrives, sits unactioned for 36 hours, and by the time someone gets to it, the buying window has moved.

A prospect visits your pricing page at 11:42am on a Tuesday. Your intent data platform flags it. The flag lands in a dashboard. The SDR sees it on Thursday morning during their weekly review. They send an outreach email Friday. The prospect is already two calls deep with a competitor.

That is not a data problem. That is a signal-to-action gap problem and it is what we built SpurIQ's real-time pipeline to eliminate.

This post is the technical architecture behind how we get from signal detected to action executed in under 15 minutes. Not as a marketing claim. As an engineering reality.

What counts as a signal

Before the architecture, the definition. A signal, in our system, is any event that changes the probability that a specific deal or prospect will advance or decay, if acted on or ignored.

Signals we ingest:

  • Intent data (topic surges from providers like Bombora)
  • Engagement signals (call and email activity, e.g., from Gong)
  • Behavioral signals (your own website and product events)
  • CRM decay signals (deals going quiet, tasks going stale)
  • Relationship signals (stakeholder and champion changes)

Each signal type has a different latency profile. Intent data comes in batches (usually daily or twice daily). Behavioral signals from your own website can fire in real time. CRM decay signals require polling. The architecture has to handle all of these without a uniform assumption about when data arrives.

The pipeline architecture

Here is the full system:

[Signal Sources]
    ↓
[Ingestion Layer - per-source adapters]
    ↓
[Signal Normalization - unified schema]
    ↓
[Signal Router - priority + type classification]
    ↓
[Context Enrichment - CRM + deal history]
    ↓
[Action Decision Engine - LLM + rules]
    ↓
[Action Queue - prioritized execution]
    ↓
[Execution Layer - CRM writes, alerts, drafts, sequences]
    ↓
[Feedback Loop - outcome tracking]

Let me go through each layer.

Layer 1: Ingestion (per-source adapters, unified output)

Every signal source has its own data format, authentication model and delivery mechanism. Bombora sends batch files. Your website fires webhooks. CRM decay requires scheduled polling. Treating these differently all the way through the pipeline creates unmaintainable complexity.

We built a per-source adapter layer that translates everything into a normalized signal schema at the boundary:

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class NormalizedSignal:
    signal_id: str
    signal_type: str           # "intent", "engagement", "behavioral", "decay", "relationship"
    source: str                # "bombora", "gong", "website", "crm"
    account_id: str
    deal_id: Optional[str]     # None if pre-pipeline signal
    contact_id: Optional[str]
    raw_payload: dict
    confidence_score: float    # 0.0 - 1.0
    detected_at: datetime
    received_at: datetime      # When we got it (latency tracking)
    urgency: str               # "immediate", "high", "standard", "low"

The confidence_score and urgency fields are set by each adapter based on source-specific heuristics. A prospect visiting the pricing page for 4 minutes gets urgency: "immediate". A prospect who appeared in a weekly intent batch from three weeks ago gets urgency: "low".

class WebsiteBehaviorAdapter:
    def normalize(self, raw_event: dict) -> NormalizedSignal:
        session_duration = raw_event.get("session_duration_seconds", 0)
        pages_visited = raw_event.get("pages", [])

        is_high_intent = (
            "pricing" in pages_visited and 
            session_duration > 180
        )

        return NormalizedSignal(
            signal_type="behavioral",
            source="website",
            urgency="immediate" if is_high_intent else "standard",
            confidence_score=0.85 if is_high_intent else 0.40,
            # ... other fields
        )

Layer 2: Signal routing (not all signals are equal)

After normalization, signals go to the router. The router does two things: deduplication and priority classification.

Deduplication: The same account might fire five behavioral signals in an hour. We do not want five separate action pipelines running for the same account. We aggregate signals within a configurable time window (default: 30 minutes for behavioral, 24 hours for intent) and pass the aggregate downstream.

class SignalAggregator:
    def __init__(self, redis_client, window_seconds=1800):
        self.redis = redis_client
        self.window = window_seconds

    def aggregate(self, signal: NormalizedSignal) -> None:
        # Returns nothing by design: delivery happens when the flush fires.
        key = f"signal_agg:{signal.account_id}:{signal.signal_type}"

        # Check if an aggregate already exists in window
        existing = self.redis.get(key)

        if existing:
            agg = AggregatedSignal.from_json(existing)
            agg.add(signal)
            self.redis.setex(key, self.window, agg.to_json())
            return None  # Don't fire yet - still aggregating
        else:
            # First signal in window - start aggregate, schedule flush
            agg = AggregatedSignal.start(signal)
            self.redis.setex(key, self.window, agg.to_json())
            schedule_flush(key, delay=self.window)
            return None

When the flush fires, the aggregated signal, with all component events, moves downstream as a single enriched package.
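The post does not show the flush side, so here is a minimal sketch of what a flush handler could look like; `flush_aggregate` and `downstream_queue` are illustrative names, not part of the original pipeline code:

```python
import json

def flush_aggregate(redis_client, key: str, downstream_queue: list):
    """Pop an expired aggregate from Redis and forward it downstream.

    Sketch only: flush_aggregate and downstream_queue are assumed
    names; the real system presumably pushes to a proper queue.
    """
    raw = redis_client.get(key)
    if raw is None:
        return  # Already flushed, or the window expired empty
    redis_client.delete(key)  # Close the aggregation window
    agg = json.loads(raw)
    # All component events travel downstream as a single package.
    downstream_queue.append(agg)
```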

Priority classification: After aggregation, signals are classified into priority tiers that determine queue placement and processing SLA:

P1 (Immediate - < 5 min SLA): High-intent behavioral signals, strong engagement signals, reactivation of previously dark deals
P2 (High - < 15 min SLA): Intent surge signals, stakeholder job changes, competitor mentions in calls
P3 (Standard - < 2 hour SLA): Routine engagement signals, CRM hygiene flags
P4 (Low - next business day): Low-confidence intent, bulk enrichment updates

The signal-to-action gap is most acute at P1 and P2. These are the moments where a 15-minute response is meaningfully better than a 4-hour response.
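The tier mapping above can be sketched as a small classifier. This is our reading of the tier definitions, not the production logic; the thresholds are assumptions:

```python
def classify_priority(signal_type: str, urgency: str, confidence: float) -> str:
    """Map a normalized signal onto the P1-P4 tiers described above.

    Illustrative sketch: the cutoffs are assumptions inferred from
    the tier definitions, not the production classifier.
    """
    if urgency == "immediate":
        return "P1"  # < 5 min SLA: high-intent behavioral, strong engagement
    if urgency == "high":
        return "P2"  # < 15 min SLA: intent surges, job changes, competitor mentions
    if signal_type == "intent" and confidence < 0.5:
        return "P4"  # Next business day: low-confidence intent
    return "P3"      # < 2 hour SLA: routine engagement, CRM hygiene
```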

Layer 3: Context enrichment

Before the decision engine sees a signal, we enrich it with deal and account context. This is what separates a signal from an insight.

A pricing page visit from a prospect with no open deal, no prior engagement and no ICP match is noise. The same visit from a prospect who is 3 weeks into an active deal, with a close date in 18 days and a champion listed as "evaluating alternatives" in the CRM, is P1.

class ContextEnricher:
    def enrich(self, signal: AggregatedSignal) -> EnrichedSignal:
        account = self.crm.get_account(signal.account_id)

        open_deals = self.crm.get_open_deals(signal.account_id)
        deal_context = None

        if open_deals:
            deal = self._select_most_relevant_deal(open_deals, signal)
            deal_context = DealContext(
                stage=deal.stage,
                days_in_stage=deal.days_in_stage,
                close_date=deal.close_date,
                last_activity_days_ago=deal.last_activity_days_ago,
                open_tasks=self.crm.get_open_tasks(deal.id),
                risk_flags=deal.risk_flags
            )

        return EnrichedSignal(
            signal=signal,
            account=account,
            deal_context=deal_context,
            enrichment_at=datetime.utcnow()
        )

The enrichment layer also re-scores urgency. A signal that came in as P2 might get promoted to P1 if the deal context reveals high risk. A P1 signal might be demoted to P3 if the account turns out to be a past customer with no active evaluation.

Layer 4: The action decision engine

This is the AI revenue execution core. The decision engine takes the enriched signal and determines: what should happen, in what order, with what priority.

We run a two-pass approach:

Pass 1: Rules engine: Fast, deterministic, zero-LLM. Rule examples:

  • If deal has been idle > 14 days AND close date < 21 days → flag as at-risk, notify manager
  • If pricing page visit AND active deal AND last contact > 5 days → trigger immediate follow-up
  • If competitor mentioned in call AND no "competitive positioning" task exists → create task

Rules handle ~60% of cases. They are fast, predictable and auditable.
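The rule examples above lend themselves to a declarative shape: rules as data, not if/else chains. A sketch under assumed field names (`days_idle`, `pricing_visit`, and friends are illustrative, not the real schema):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """One deterministic rule: a name, a predicate, an action id.

    Sketch only: the Rule shape and the signal field names are
    assumptions, not SpurIQ's actual rules engine.
    """
    name: str
    condition: Callable[[dict], bool]
    action: str

RULES = [
    Rule(
        name="idle_deal_near_close",
        condition=lambda s: s["days_idle"] > 14 and s["days_to_close"] < 21,
        action="flag_at_risk_notify_manager",
    ),
    Rule(
        name="pricing_visit_stale_contact",
        condition=lambda s: s["pricing_visit"] and s["active_deal"]
        and s["days_since_contact"] > 5,
        action="trigger_immediate_followup",
    ),
]

def evaluate(signal: dict) -> list:
    """Return the actions whose conditions match the enriched signal."""
    return [r.action for r in RULES if r.condition(signal)]
```

Keeping rules as data is what makes them auditable: you can list them, diff them, and log which one fired.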

Pass 2: LLM reasoning (for complex cases):

When the rules do not produce a clear action plan (ambiguous deal stage, multiple conflicting signals, multi-stakeholder complexity), we pass the full enriched context to an LLM:

DECISION_PROMPT = """
You are a senior revenue strategist reviewing a live deal signal.

SIGNAL:
{signal_summary}

DEAL CONTEXT:
{deal_context}

ACCOUNT HISTORY:
{account_summary}

Based on this, determine:
1. Is immediate action required? (yes/no + reason)
2. What is the single most important action to take right now?
3. What is the risk level to this deal if no action is taken in 24 hours?
4. Are there any other stakeholders who should be looped in?

Be specific. Reference the actual signal data. Do not give generic advice.
Output as JSON matching the ActionPlan schema.
"""

The LLM output feeds the action queue as structured tasks, not freeform text.
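Parsing that output into a typed object is where "structured tasks, not freeform text" gets enforced. A sketch: the `ActionPlan` field names below are our guesses from the four questions in the prompt, not the actual schema:

```python
import json
from dataclasses import dataclass

@dataclass
class ActionPlan:
    """Structured plan parsed from LLM JSON output.

    Field names are assumptions inferred from the prompt's four
    questions; the real ActionPlan schema is not shown in the post.
    """
    immediate_action_required: bool
    primary_action: str
    risk_if_ignored_24h: str
    stakeholders_to_loop_in: list

def parse_action_plan(llm_output: str) -> ActionPlan:
    """Validate and coerce raw LLM JSON; raises on malformed output."""
    data = json.loads(llm_output)
    return ActionPlan(
        immediate_action_required=bool(data["immediate_action_required"]),
        primary_action=str(data["primary_action"]),
        risk_if_ignored_24h=str(data["risk_if_ignored_24h"]),
        stakeholders_to_loop_in=list(data.get("stakeholders_to_loop_in", [])),
    )
```

Malformed or schema-violating output fails loudly here instead of leaking freeform text into the action queue.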

Layer 5: Execution and feedback

Actions execute against the GTM stack via the same adapter pattern described in a previous post: CRM writes, Slack alerts, email drafts, sequence enrollments. Each execution logs an outcome event that feeds back into our signal scoring model.

If a P1 signal action consistently leads to deal progression, that signal type gets weighted higher. If a certain action type (e.g., manager alert for competitive mention) rarely results in meaningful activity, we adjust the rule. The system gets incrementally better without requiring manual model retraining.
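That incremental adjustment can be as simple as an exponential moving average per signal type. A sketch, where the learning rate and bounds are illustrative assumptions:

```python
def update_signal_weight(weight: float, led_to_progression: bool,
                         lr: float = 0.05) -> float:
    """Nudge a signal type's weight toward its observed outcome rate.

    Sketch only: the exponential-moving-average form, the 0.05
    learning rate, and the [0, 1] clamp are assumptions. Outcomes
    that lead to deal progression push the weight up; outcomes that
    do not push it down. No model retraining required.
    """
    target = 1.0 if led_to_progression else 0.0
    new_weight = weight + lr * (target - weight)
    return min(1.0, max(0.0, new_weight))
```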

The 15-minute number

P1 signal detected → action queued for rep review: average 4.2 minutes.
P1 signal detected → action executed (including auto-executes): average 11.8 minutes.
P2 signals: average 9.1 minutes to queue, 14.4 minutes to execution.

We track this in real time. When latency creeps above SLA, we get paged. The 15-minute number is a commitment, not an aspiration.
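Because every signal carries `detected_at`, the SLA check itself is trivial; what matters is that it runs continuously. A sketch (the `SLA_MINUTES` table mirrors the tiers above; `check_sla` is an illustrative name, not the production metric code):

```python
from datetime import datetime, timedelta

# Per-tier SLAs, mirroring the P1-P3 definitions above (assumed table).
SLA_MINUTES = {"P1": 5, "P2": 15, "P3": 120}

def check_sla(priority: str, detected_at: datetime,
              executed_at: datetime) -> bool:
    """Return True if this signal met its SLA; a breach pages on-call.

    Sketch only: the real system presumably emits a latency metric
    rather than a bare boolean.
    """
    latency = executed_at - detected_at
    return latency <= timedelta(minutes=SLA_MINUTES[priority])
```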

Why this architecture holds up

The design principles that made this work at scale:

  • Normalize at the boundary: Do not let source-specific formats leak into core processing.
  • Enrich before deciding: A signal without context is noise. Context before the LLM, not after.
  • Rules first, LLM second: Fast deterministic paths for common cases. Reserve expensive reasoning for genuinely complex ones.
  • Measure latency as a first-class metric: If you do not instrument the gap, you will not close it.

Revenue does not wait. The architecture should not either.

Source

This article was originally published by DEV Community and written by SpurIQ Engineering.
