Alerting on Execution Cost Spikes with LiquidView API
Build a real-time alerting system for DEX execution cost anomalies using the LiquidView API and Python. Covers Telegram, Discord, and email notifications plus anomaly detection.
Why Execution Cost Alerts Matter for Active Traders
Execution costs on DEX perpetuals are not static. They fluctuate with market conditions, time of day, exchange-specific liquidity events, and sometimes for no apparent reason at all. For a trader who executes regularly on these platforms, an unexpected spike in execution cost — from a normal 4 basis points to 15 or 20 — can meaningfully degrade performance if they keep trading at their normal cadence without adjusting. The problem is that these spikes are often invisible: unless you are actively checking costs before every trade, you may not notice that conditions have deteriorated.
An alerting system solves this. Rather than requiring you to proactively check execution costs, the system monitors conditions continuously and notifies you when something actionable has changed. Sudden liquidity drops that double your execution cost, exchange-specific anomalies where one venue's costs spike while others stay normal, or cross-exchange divergences that create arbitrage-adjacent routing opportunities — all of these are alertable events that a well-configured system can surface in seconds. This guide walks through building a complete alerting system using the LiquidView API.
The alerting system in this guide is implemented in Python and designed to run as a standalone process separate from your trading logic. It polls the LiquidView API on a configurable interval, evaluates alert conditions, and dispatches notifications to Telegram, Discord, or email. Full code is included.
What to Alert On: Three Alert Condition Categories
Effective alerting requires defining clear conditions that are worth acting on. The goal is not to generate maximum alerts — it is to surface actionable signals. Three categories of conditions cover the most important execution cost scenarios.
The first category is absolute threshold alerts. These fire when execution cost on any tracked exchange exceeds a configured absolute value, regardless of what it was before. For example: "alert me if BTC execution cost at $25K on Hyperliquid exceeds 12 basis points." This condition is simple, easy to configure, and directly actionable — you know immediately that your normal routing is now expensive and you should either wait, route elsewhere, or reduce position size.
The second category is relative change alerts. These fire when execution cost changes significantly relative to a recent baseline, regardless of the absolute level. For example: "alert me if BTC execution cost on any exchange increases by more than 50% compared to its 1-hour rolling average." This is more sensitive than absolute threshold alerts because it catches spikes on normally-cheap exchanges that have not crossed an absolute threshold but are significantly elevated relative to their own recent history.
The third category is cross-exchange divergence alerts. These fire when execution costs on different exchanges diverge significantly from each other. For example: "alert me if the execution cost difference between the cheapest and most expensive exchange exceeds 8 basis points." A large cross-exchange cost divergence can signal an exchange-specific liquidity problem, a fee structure change, or — in some cases — an opportunity to route orders away from a temporarily expensive venue.
- Absolute threshold: Fixed cost level that triggers alert regardless of history. Best for "never trade above this cost" rules.
- Relative change: Percentage deviation from rolling average. Best for detecting anomalies on normally-cheap exchanges.
- Cross-exchange divergence: Spread between cheapest and most expensive exchange. Best for detecting venue-specific events and routing opportunities.
- Normalization alert: Cost returns to normal after a spike. Useful for re-entry signals — "conditions have improved, resume normal trading."
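The first three categories reduce to simple predicate functions. A minimal sketch — function names and default thresholds here are illustrative choices, not part of the LiquidView API:

```python
from statistics import mean

def check_absolute(current_bps: float, threshold_bps: float) -> bool:
    """Absolute threshold: fire whenever cost exceeds a fixed level."""
    return current_bps > threshold_bps

def check_relative(current_bps: float, window: list, max_increase: float = 0.5) -> bool:
    """Relative change: fire when cost exceeds the rolling average by more than max_increase."""
    if not window:
        return False  # no baseline yet
    return current_bps > mean(window) * (1 + max_increase)

def check_divergence(costs_by_exchange: dict, max_spread_bps: float = 8.0) -> bool:
    """Cross-exchange divergence: spread between cheapest and most expensive venue."""
    if len(costs_by_exchange) < 2:
        return False
    costs = costs_by_exchange.values()
    return (max(costs) - min(costs)) > max_spread_bps
```

A normalization alert is simply the negation of whichever condition previously fired, evaluated once the condition has been active.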
Building the Alerting System with the LiquidView API and Python
The alerting system consists of three components: a data collector that polls the LiquidView API, an alert evaluator that checks conditions against incoming data, and a notification dispatcher that sends alerts to your configured channels. These are implemented as three Python classes in a single script that runs as a persistent process.
The data collector runs in a loop with a configurable polling interval (recommended: 5 minutes, matching LiquidView's update frequency). On each cycle, it requests current execution cost data for all configured (exchange, token, size) combinations from the LiquidView API. It stores results in a rolling window deque — a fixed-size buffer that automatically discards data older than your configured lookback period (default: 2 hours). This rolling window provides the historical baseline needed for relative change calculations.
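A minimal sketch of the collector loop. The endpoint URL and the `cost_bps` response field are assumptions for illustration — consult the LiquidView API documentation for the actual routes and response schema:

```python
import time
from collections import deque

import requests  # third-party HTTP client

POLL_SECONDS = 300  # 5-minute interval, matching LiquidView's update frequency
WINDOW_SIZE = 24    # 2 hours of history at 5-minute polls

# One rolling window per (exchange, token, size) combination.
windows: dict = {}

def collect_once(combos, api_url, api_key):
    """Fetch the latest cost for each combo and append it to that combo's rolling window."""
    for exchange, token, size in combos:
        resp = requests.get(
            api_url,  # hypothetical endpoint; see the LiquidView API docs
            params={"exchange": exchange, "token": token, "size": size},
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )
        resp.raise_for_status()
        cost_bps = resp.json()["cost_bps"]  # assumed response field name
        window = windows.setdefault((exchange, token, size), deque(maxlen=WINDOW_SIZE))
        window.append(cost_bps)  # deque(maxlen=...) discards the oldest point automatically

def run(combos, api_url, api_key):
    while True:
        collect_once(combos, api_url, api_key)
        # ... evaluate alert conditions here ...
        time.sleep(POLL_SECONDS)
```

The fixed-size `deque` is what implements the rolling window: once 24 points are stored, each new append silently evicts the oldest, so the baseline always covers the most recent two hours.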
The alert evaluator runs after each data collection cycle. It receives the latest data point for each (exchange, token, size) combination, retrieves the corresponding rolling window for baseline computation, and evaluates all configured alert conditions. When a condition triggers, it produces an alert object containing: alert type (absolute/relative/divergence), exchange and token, current cost, baseline or threshold value, severity level, and a human-readable message. The evaluator also implements cooldown logic — a condition that has already triggered an alert will not trigger again until either the condition resolves and then re-triggers, or a minimum cooldown period (default: 30 minutes) has elapsed.
The cooldown logic is essential for preventing alert fatigue. Without it, a sustained elevated cost condition would generate continuous alerts until it resolves. With it, you receive one alert when the condition first triggers, and another only when it resolves or after the cooldown period.
Use a separate cooldown per (exchange, token, alert_type) combination. This prevents an alert on Hyperliquid BTC from suppressing an alert on Paradex BTC, which might be triggered by a completely different event and require different action.
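The cooldown behavior described above can be sketched as a small tracker keyed by (exchange, token, alert_type) — class and method names are illustrative:

```python
import time
from dataclasses import dataclass, field

COOLDOWN_SECONDS = 30 * 60  # default 30-minute cooldown

@dataclass
class CooldownTracker:
    """Tracks active conditions and last-fired times per (exchange, token, alert_type) key."""
    last_fired: dict = field(default_factory=dict)
    active: set = field(default_factory=set)

    def should_fire(self, key, condition_met: bool, now=None) -> bool:
        now = time.time() if now is None else now
        if not condition_met:
            self.active.discard(key)  # condition resolved: next trigger fires immediately
            return False
        if key in self.active and now - self.last_fired[key] < COOLDOWN_SECONDS:
            return False  # sustained condition still in cooldown: suppress
        self.last_fired[key] = now
        self.active.add(key)
        return True
```

Because the key includes the exchange, token, and alert type, a suppressed Hyperliquid BTC alert never blocks a fresh Paradex BTC alert.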
Sending Alerts to Telegram, Discord, and Email
The notification dispatcher receives alert objects from the evaluator and sends them to configured notification channels. Implementing connectors for Telegram, Discord, and email covers most use cases and ensures you receive alerts through your preferred channel regardless of what you are doing.
The Telegram connector uses the Bot API. Create a bot via BotFather, obtain the bot token, get your chat ID by sending a message to your bot and querying the getUpdates endpoint. In Python, use the requests library to POST to https://api.telegram.org/bot{token}/sendMessage with your chat_id and the formatted alert message. Telegram supports basic Markdown formatting — bold the exchange name and cost figures to make alerts scannable at a glance.
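A minimal sketch of that flow — the `format_alert` helper and its message layout are illustrative, not a prescribed format:

```python
import requests

def format_alert(exchange: str, token: str, cost_bps: float, threshold_bps: float) -> str:
    """Build a Markdown message with the exchange and cost figures bolded for scannability."""
    return (
        f"*{exchange}* {token}: execution cost *{cost_bps:.1f} bps* "
        f"(threshold {threshold_bps:.1f} bps)"
    )

def send_telegram(bot_token: str, chat_id: str, text: str) -> None:
    # bot_token comes from BotFather; chat_id from the getUpdates endpoint.
    resp = requests.post(
        f"https://api.telegram.org/bot{bot_token}/sendMessage",
        json={"chat_id": chat_id, "text": text, "parse_mode": "Markdown"},
        timeout=10,
    )
    resp.raise_for_status()
```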
The Discord connector uses webhook URLs. In your Discord server, go to channel settings, Integrations, create a webhook, and copy the webhook URL. POST to this URL with a JSON body containing a content field (plain text) or embeds array (rich formatting). Embeds allow color-coding by severity — red for critical alerts, yellow for warnings, green for normalizations — which makes the alert channel easy to scan.
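A corresponding Discord sketch. The severity names and hex colors are example choices matching the scheme above:

```python
import requests

# Severity → embed color (red / yellow / green), as decimal RGB values Discord expects.
SEVERITY_COLORS = {"critical": 0xE74C3C, "warning": 0xF1C40F, "normalization": 0x2ECC71}

def build_embed(title: str, description: str, severity: str) -> dict:
    """Build a webhook payload with one embed, color-coded by severity."""
    return {
        "embeds": [{
            "title": title,
            "description": description,
            "color": SEVERITY_COLORS.get(severity, 0x95A5A6),  # grey fallback
        }]
    }

def send_discord(webhook_url: str, payload: dict) -> None:
    resp = requests.post(webhook_url, json=payload, timeout=10)
    resp.raise_for_status()
```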
The email connector uses smtplib from the Python standard library. Configure with your SMTP server, sender address, and recipient list. Email is appropriate for lower-frequency, higher-severity alerts that warrant a more permanent record. For real-time alerting where response time matters, Telegram or Discord is superior.
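A minimal email sketch using only the standard library; the SMTP settings and TLS handling will depend on your provider:

```python
import smtplib
from email.message import EmailMessage

def build_email(sender: str, recipients: list, subject: str, body: str) -> EmailMessage:
    """Assemble the alert email; kept separate from sending so it is easy to test."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_email(smtp_host: str, smtp_port: int, msg: EmailMessage) -> None:
    with smtplib.SMTP(smtp_host, smtp_port) as server:
        server.starttls()  # most providers require TLS; adjust for yours
        # server.login(user, password)  # uncomment if your server requires auth
        server.send_message(msg)
```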
- Telegram: Best for real-time personal alerts. Low latency, reliable delivery, mobile push notifications. One-time setup via BotFather.
- Discord: Best for team visibility. Multiple people can see the same alert channel. Supports rich embed formatting with color-coded severity. Webhook setup is straightforward.
- Email: Best for audit trail and lower-urgency notifications. Not suitable for real-time alerting where sub-minute response matters.
- Multiple channels simultaneously: Configure the dispatcher to send to all channels for critical alerts, and only your preferred channel for informational alerts. Severity threshold per channel keeps signal-to-noise ratio high in each channel.
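The per-channel severity routing described in the last point can be sketched as a small lookup — the channel names and severity levels here are assumed configuration, not a LiquidView feature:

```python
SEVERITY_RANK = {"info": 0, "warning": 1, "critical": 2}

# Minimum severity each channel accepts (example values: everything to Telegram,
# warnings and up to Discord, critical only to email).
CHANNEL_MIN_SEVERITY = {"telegram": "info", "discord": "warning", "email": "critical"}

def channels_for(severity: str) -> list:
    """Return the channels whose severity threshold this alert meets."""
    rank = SEVERITY_RANK[severity]
    return [ch for ch, min_sev in CHANNEL_MIN_SEVERITY.items()
            if rank >= SEVERITY_RANK[min_sev]]
```

With this table, a critical alert fans out to every channel while informational alerts stay on Telegram, which is exactly the signal-to-noise property the bullet describes.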
Test your notification connectors thoroughly before relying on them for live trading alerts. Send a test alert on startup and verify receipt. An alerting system that silently fails to deliver notifications is worse than no alerting system — it creates a false sense of security.
Example Alert Configurations for Common Trading Scenarios
Different trading styles and strategy types warrant different alert configurations. Here are three example configurations optimized for distinct use cases.
Configuration 1 — Active swing trader, $10K–$50K position sizes: Monitor BTC and ETH at $25K size tier. Absolute threshold alert at 12 bps total cost on any exchange. Relative change alert when any exchange's cost increases more than 60% above its 2-hour average. Cross-exchange divergence alert when the spread between cheapest and most expensive exchange exceeds 6 bps. Cooldown: 30 minutes. Channels: Telegram for all alerts.
Configuration 2 — Systematic strategy, $50K–$250K position sizes, running automated orders: Monitor BTC, ETH, and SOL at $100K and $250K size tiers. Critical alert (strategy pause trigger) when any tracked exchange exceeds 25 bps at $100K. Warning alert at 15 bps (flag for review but do not pause). Relative change alert at 80% deviation from 1-hour baseline. Cross-exchange divergence alert at 10 bps. Cooldown: 15 minutes for warning, 60 minutes for critical. Channels: Telegram for warnings, Telegram plus email for critical. Strategy code subscribes to the critical alert and pauses order submission until the all-clear (normalization) alert fires.
Configuration 3 — Researcher monitoring market quality trends: Monitor all available exchanges and tokens. Daily summary email listing 24-hour average execution costs per exchange per token. Alert on any exchange where the 24-hour average exceeds 150% of the 7-day average (structural liquidity deterioration signal). Alert on any new exchange whose execution cost data first appears or disappears from the LiquidView dataset (exchange coverage change). Cooldown: 24 hours for trend alerts. Channels: Email only.
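Any of these setups can be captured as a declarative structure the evaluator loads at startup. A sketch of Configuration 1 as a Python dict — every field name here is an assumption for illustration, not a LiquidView schema:

```python
# Configuration 1: active swing trader, $10K–$50K positions (field names are illustrative).
SWING_TRADER_CONFIG = {
    "pairs": [("BTC", 25_000), ("ETH", 25_000)],        # (token, size tier in USD)
    "absolute_threshold_bps": 12,                        # total cost trigger on any exchange
    "relative_change": {"max_increase": 0.60,            # 60% above rolling average
                        "baseline_hours": 2},
    "divergence_threshold_bps": 6,                       # cheapest vs. most expensive spread
    "cooldown_minutes": 30,
    "channels": {"telegram": "info"},                    # all alerts to Telegram
}
```

Keeping the configuration declarative means switching between the three profiles above is a data change, not a code change.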
Advanced Patterns: Anomaly Detection and Adaptive Thresholds
Simple threshold-based alerting works well but has a limitation: the thresholds are static. Consider an exchange that normally runs at 6 bps, with an absolute alert set at 10 bps because that is where you need to stop trading. A sudden spike to 12 bps is caught by a relative change alert, but slow drift is not: if the baseline creeps from 6 to 9 bps over several weeks without any single spike, ordinary noise around the new baseline crosses the 10 bps threshold constantly, and the threshold stops distinguishing genuine anomalies from routine fluctuation. Adaptive thresholds solve this by anchoring alert conditions to the statistical properties of recent observations rather than to fixed values.
An adaptive threshold replaces the fixed absolute value with a dynamically computed value based on the rolling mean and standard deviation of recent data. Instead of "alert when cost exceeds 12 bps," the condition becomes "alert when cost exceeds mean + 2.5 * stddev." This threshold automatically adjusts as baseline conditions change: if the exchange gets cheaper over time, the alert threshold tightens; if it gets more expensive, the threshold relaxes. The z-score multiple (2.5 in this example) controls sensitivity — higher values mean fewer but more significant alerts.
True anomaly detection takes this further by modeling the distribution of execution costs and flagging observations that are statistically improbable given the recent distribution. A simple implementation using scikit-learn's IsolationForest or a rolling z-score calculation can detect anomalies that fixed-threshold systems miss: cases where cost is not extremely high in absolute terms but is highly unusual given the exchange's specific recent behavior.
- Rolling z-score: Compute z = (current_cost - rolling_mean) / rolling_stddev for each data point. Alert when z exceeds a configured threshold (e.g., 2.5 or 3.0 standard deviations). Requires at least 20 data points (about 100 minutes of history at 5-minute collection) before the baseline is reliable.
- Adaptive absolute threshold: Set the alert threshold as rolling_mean + k * rolling_stddev. Update threshold after each new data point. Alert fires when current_cost > dynamic_threshold. Eliminates the need to manually update thresholds as baseline conditions evolve.
- Multi-signal anomaly: Alert only when multiple conditions are simultaneously elevated. For example, "alert when BTC cost z-score > 2 AND ETH cost z-score > 1.5 on the same exchange." This pattern detects exchange-level events (something systemic on the platform) while filtering out single-pair anomalies that may be noise.
- Regime change detection: Use a change-point detection algorithm (e.g., the PELT algorithm via the ruptures Python library) to detect when the cost distribution has structurally shifted. This identifies when an exchange has permanently repriced rather than temporarily spiked — a more consequential signal for long-term routing strategy.
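The first two patterns are two views of the same test — firing on z > k is exactly the adaptive threshold current_cost > mean + k * stddev — and take only a few lines. A sketch, with the 20-sample minimum noted above:

```python
from collections import deque
from statistics import mean, stdev

MIN_SAMPLES = 20  # ~100 minutes of history at 5-minute collection

def zscore_alert(window, current_bps: float, k: float = 2.5):
    """Return (z, fired): the rolling z-score and whether it exceeds k.

    Firing when z > k is equivalent to the adaptive absolute threshold
    current_bps > rolling_mean + k * rolling_stddev.
    """
    if len(window) < MIN_SAMPLES:
        return None, False  # baseline not yet reliable
    mu, sigma = mean(window), stdev(window)
    if sigma == 0:
        return None, current_bps > mu  # flat history: any increase is anomalous
    z = (current_bps - mu) / sigma
    return z, z > k
```

Feeding this the same rolling deque the collector maintains gives an alert that tightens as an exchange gets cheaper and relaxes as its baseline rises, with k controlling how many alerts you receive.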
Start with simple fixed-threshold alerts and accumulate 2–4 weeks of historical data before implementing adaptive thresholds. You need sufficient historical data to compute meaningful rolling statistics, and the process of setting up and tuning fixed thresholds first gives you an intuitive understanding of your data's typical behavior before adding statistical complexity.
See it in action
Compare execution costs across 9+ DEX perpetuals in real-time with LiquidView.
Related Articles
How to Integrate Execution Cost Data Into Your Trading Strategy
A practical guide to making execution cost data a first-class input in your trading strategy — pre-trade analysis, real-time routing, post-trade review, and full API integration.
Building a Smart Order Router in JavaScript
Step-by-step guide to building a smart order router in Node.js using the LiquidView API. Full code, error handling, caching, and deployment included.
Comparing DEX APIs: Data Quality and Coverage
A comprehensive comparison of DEX data APIs — what data is available, the limitations of direct exchange APIs, aggregator options, and why execution cost data is uniquely hard to get.
How LiquidView Collects Execution Cost Data
An inside look at LiquidView's data pipeline — how order book simulation works, what data is stored, the architecture behind the API, and the accuracy and limitations of the approach.
