Automating DEX Execution Analysis with Python
Automate your DEX execution cost analysis with Python, pandas, and the LiquidView API. Fetch data, compute averages, generate charts, and schedule reports.
Why Manual DEX Analysis Does Not Scale
Checking execution costs manually — opening LiquidView, comparing a few exchanges, noting the numbers — is perfectly fine for a one-off decision. But if you are a serious trader managing a portfolio across multiple tokens and executing trades daily, manual analysis becomes a significant time cost and an error-prone process. The exchange that was cheapest for BTC yesterday may not be cheapest today. Spreads shift with market conditions, exchange infrastructure changes, and new competitors enter the market.
Automation transforms this from a periodic manual chore into a continuous, systematic process. A properly built Python analysis script can fetch execution cost data for all your tokens and order sizes on a schedule, compute summary statistics, flag any meaningful changes, and produce charts that make the trends immediately visible. What previously took 30 minutes of manual review can be reduced to a 30-second glance at an automatically generated report.
This guide builds a complete DEX execution analysis pipeline in Python using the LiquidView API, pandas for data processing, and matplotlib for visualization. By the end, you will have a script you can run locally or deploy to a cloud scheduler to generate daily execution cost intelligence.
All code in this guide is tested with Python 3.10+. The required libraries are requests, pandas, and matplotlib — all installable via pip. If you use Jupyter Notebooks, this script translates directly to a notebook with minimal changes.
Setting Up the Environment
Before writing any analysis code, set up a clean Python environment with the required dependencies. Using a virtual environment keeps your trading tools isolated from other Python projects and makes dependency management reproducible.
```bash
# Create and activate a virtual environment
python3 -m venv dex-analysis
source dex-analysis/bin/activate   # macOS/Linux
# dex-analysis\Scripts\activate    # Windows

# Install required packages
pip install requests pandas matplotlib python-dotenv

# Create a .env file for your API key
echo "LIQUIDVIEW_API_KEY=your_key_here" > .env
```
The python-dotenv package lets you store your API key in a .env file rather than hardcoding it in your script. This is good practice both for security (the file never needs to be committed) and for portability (you can change the key without editing source code).
```python
import os

import matplotlib.pyplot as plt
import pandas as pd
import requests
from dotenv import load_dotenv

load_dotenv()
API_KEY = os.environ["LIQUIDVIEW_API_KEY"]
BASE_URL = "https://api.liquidview.io/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
```
Never commit your .env file or any file containing your API key to a public repository. Add .env to your .gitignore before making any commits. If you accidentally expose your key, regenerate it immediately from your LiquidView dashboard.
Connecting to the LiquidView API and Fetching Data
With the environment set up, the first functional component of the analysis script fetches execution cost data from LiquidView for a list of tokens and a list of order sizes. This function is the foundation of the entire analysis — everything else depends on the data it returns.
```python
def fetch_all_costs(tokens: list, sizes_usd: list) -> list:
    """Fetch execution costs for all token/size combinations."""
    records = []
    for token in tokens:
        for size in sizes_usd:
            try:
                resp = requests.get(
                    f"{BASE_URL}/token/{token}",
                    headers=HEADERS,
                    params={"size": size},
                    timeout=5,
                )
                resp.raise_for_status()
                data = resp.json()
                for ex in data["exchanges"]:
                    records.append({
                        "token": token,
                        "size_usd": size,
                        "exchange": ex["name"],
                        "fee_bps": ex["fee_bps"],
                        "spread_bps": ex["spread_bps"],
                        "impact_bps": ex["impact_bps"],
                        "total_bps": ex["total_cost_bps"],
                    })
            except (requests.RequestException, KeyError) as e:
                # One failed token/size pair should not abort the whole run
                print(f"Error fetching {token} at ${size}: {e}")
    return records
```
Call this function with your tokens of interest and the order sizes you want to analyze. A typical analysis might cover BTC, ETH, and SOL at order sizes of $5,000, $25,000, and $100,000 — capturing both small retail-sized trades and larger institutional-sized orders to reveal how exchange rankings shift across size tiers.
```python
TOKENS = ["BTC", "ETH", "SOL"]
SIZES = [5_000, 25_000, 100_000]

raw = fetch_all_costs(TOKENS, SIZES)
df = pd.DataFrame(raw)
print(f"Fetched {len(df)} data points across {df['exchange'].nunique()} exchanges")
```
Building the Analysis: Averages, Rankings, and Comparisons
With the raw data loaded into a pandas DataFrame, you can compute the analytical summaries that answer your practical trading questions. The most useful analyses are average all-in cost by exchange per token, cost rankings across size tiers, and the spread of exchange costs (how much variance there is in the market at any given moment).
```python
# Average all-in cost by exchange for BTC
btc_avg = (
    df[df["token"] == "BTC"]
    .groupby("exchange")["total_bps"]
    .mean()
    .sort_values()
    .reset_index()
)
btc_avg.columns = ["exchange", "avg_total_bps"]
print("BTC execution cost ranking (all sizes, average):")
print(btc_avg.to_string(index=False))

# How rankings change at large size ($100K)
large_order = (
    df[df["size_usd"] == 100_000]
    .groupby("exchange")["total_bps"]
    .mean()
    .sort_values()
)
print("\nRanking at $100K order size:")
print(large_order)
```
The comparison between small-order and large-order rankings often reveals surprising results. Exchanges that appear cheapest at $5,000 may rank much worse at $100,000 due to thinner order books and higher price impact. This analysis makes those crossover points visible and quantifies the cost difference at each size tier.
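One way to make those crossover points concrete is to rank exchanges within each size tier and pivot the ranks side by side. The sketch below uses a small hypothetical snapshot (exchanges "A" and "B" and the cost figures are illustrative); in the full script you would pass the `df` built from `fetch_all_costs` instead:

```python
import pandas as pd

# Hypothetical snapshot of all-in costs (bps) for one token at two size tiers.
df = pd.DataFrame([
    {"exchange": "A", "size_usd": 5_000,   "total_bps": 4.0},
    {"exchange": "B", "size_usd": 5_000,   "total_bps": 5.5},
    {"exchange": "A", "size_usd": 100_000, "total_bps": 9.0},
    {"exchange": "B", "size_usd": 100_000, "total_bps": 7.0},
])

# Rank each exchange within every size tier (1 = cheapest), then pivot so a
# crossover shows up as a rank that changes across the columns.
df["rank"] = df.groupby("size_usd")["total_bps"].rank(method="min").astype(int)
ranks = df.pivot_table(index="exchange", columns="size_usd", values="rank")
print(ranks)
```

Here exchange "A" is cheapest at $5,000 but drops to second place at $100,000 — exactly the order-book thinning effect described above.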
```python
# Savings potential: difference between your current exchange and the cheapest
def compute_savings(
    df: pd.DataFrame, current_exchange: str, token: str, size: float
) -> float | None:
    subset = df[(df["token"] == token) & (df["size_usd"] == size)]
    cheapest_bps = subset["total_bps"].min()
    current_bps = subset[subset["exchange"] == current_exchange]["total_bps"].values
    if len(current_bps) == 0:
        return None  # current exchange has no quote at this token/size
    return current_bps[0] - cheapest_bps

savings = compute_savings(df, "dydx", "BTC", 25_000)
if savings is not None:
    print(f"Potential savings by switching from dYdX to cheapest: {savings:.2f} bps")
```
Even 1 basis point of savings per trade compounds dramatically. A trader executing $500,000 notional per week saves approximately $2,600 per year per basis point saved. Run the savings calculation with your actual trading volume to quantify the annual dollar impact of routing optimization.
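The arithmetic behind that claim is easy to sanity-check in a few lines; the notional and basis-point figures below are illustrative inputs, not recommendations:

```python
# Back-of-the-envelope annual impact of routing savings, mirroring the
# $500K/week example from the text.
weekly_notional_usd = 500_000
savings_bps = 1.0  # basis points saved per trade via better routing

annual_notional = weekly_notional_usd * 52               # $26M traded per year
annual_savings = annual_notional * savings_bps / 10_000  # 1 bp = 0.01%
print(f"Annual savings at {savings_bps:.1f} bps: ${annual_savings:,.0f}")
```

Swap in your own weekly notional and the bps figure produced by `compute_savings` to get your personal annual number.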
Visualizing Results with matplotlib
Numbers in a table are informative, but charts reveal patterns that are invisible in raw data. Two visualizations are particularly valuable for DEX execution analysis: a grouped bar chart showing exchange cost rankings across size tiers, and a line chart showing how a specific exchange's cost has trended over time if you store historical snapshots.
```python
def plot_exchange_comparison(df: pd.DataFrame, token: str):
    """Bar chart: all-in cost by exchange and size tier for a given token."""
    subset = df[df["token"] == token]
    pivot = subset.pivot_table(index="exchange", columns="size_usd", values="total_bps")
    ax = pivot.plot(kind="bar", figsize=(12, 6), colormap="viridis")
    ax.set_title(f"{token} Execution Cost by Exchange and Order Size")
    ax.set_ylabel("All-in Cost (bps)")
    ax.set_xlabel("Exchange")
    # pivot_table sorts its columns, so these labels line up with the bars
    ax.legend(title="Order Size (USD)", labels=[f"${s:,}" for s in sorted(subset["size_usd"].unique())])
    plt.xticks(rotation=45, ha="right")
    plt.tight_layout()
    plt.savefig(f"{token}_exchange_comparison.png", dpi=150)
    plt.show()

plot_exchange_comparison(df, "BTC")
```
If you run this script on a daily schedule and append each day's results to a CSV file, you can also build a time-series chart showing how exchange costs evolve over weeks and months. This historical view is invaluable for detecting when an exchange has meaningfully improved its liquidity (costs fall persistently) or degraded (costs rise persistently).
```python
import datetime

# Append today's data to a historical CSV
df["date"] = datetime.date.today().isoformat()
history_file = "dex_cost_history.csv"
write_header = not os.path.exists(history_file)
df.to_csv(history_file, mode="a", header=write_header, index=False)
print(f"Appended {len(df)} records to {history_file}")
```
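Once the history file has accumulated a few days of snapshots, the trend chart is a short groupby away. The sketch below builds a small inline history so it stands alone; in practice you would load the real data with `pd.read_csv(history_file)`. The `Agg` backend keeps it working in headless cron runs:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so this works under cron, no display needed
import matplotlib.pyplot as plt
import pandas as pd

# In the real script: hist = pd.read_csv("dex_cost_history.csv").
# A tiny inline history stands in here so the sketch is self-contained.
hist = pd.DataFrame([
    {"date": "2024-01-01", "exchange": "A", "token": "BTC", "total_bps": 5.0},
    {"date": "2024-01-02", "exchange": "A", "token": "BTC", "total_bps": 4.6},
    {"date": "2024-01-01", "exchange": "B", "token": "BTC", "total_bps": 6.1},
    {"date": "2024-01-02", "exchange": "B", "token": "BTC", "total_bps": 6.4},
])

# One line per exchange: average daily all-in cost for a single token
trend = (
    hist[hist["token"] == "BTC"]
    .groupby(["date", "exchange"])["total_bps"]
    .mean()
    .unstack("exchange")
)
ax = trend.plot(figsize=(10, 5), marker="o")
ax.set_title("BTC All-in Execution Cost Over Time")
ax.set_ylabel("All-in Cost (bps)")
plt.tight_layout()
plt.savefig("btc_cost_trend.png", dpi=150)
```

A persistent downward slope for one exchange is exactly the "liquidity improved" signal described above; a single-day spike is usually just market noise.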
Scheduling with Cron for Daily Automated Reports
Running the analysis script manually is useful for one-off investigations, but the real value comes from running it automatically on a fixed schedule. On macOS and Linux, the system cron daemon handles this. On Windows, use Task Scheduler. For cloud deployment, use a simple VM with cron, a GitHub Actions scheduled workflow, or a service like Render.com with a cron job configuration.
```bash
# Open your crontab with: crontab -e

# Run the analysis every day at 08:00 UTC
0 8 * * * /path/to/dex-analysis/bin/python /path/to/dex_analysis.py >> /path/to/dex_analysis.log 2>&1

# Run hourly for more frequent monitoring
0 * * * * /path/to/dex-analysis/bin/python /path/to/dex_analysis.py >> /path/to/dex_analysis.log 2>&1
```
Always redirect both stdout and stderr to a log file (the >> ... 2>&1 part). Without this, cron jobs fail silently and you have no record of whether they ran successfully or produced errors. Check the log file periodically — daily failures are easy to miss when everything appears to be automated.
For a more production-grade setup, consider sending the analysis output via email or Slack. The smtplib library in Python's standard library can send email reports, and the Slack SDK makes it trivial to post a daily summary message to a channel. This way your analysis finds you rather than requiring you to go look for it.
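As a minimal sketch of the email route, the standard-library smtplib and email.message modules are enough. The addresses and SMTP host below are placeholders, and the actual send is left commented out until you plug in real credentials:

```python
import smtplib
from email.message import EmailMessage

def build_report_email(summary_text: str) -> EmailMessage:
    """Wrap the daily analysis summary in a plain-text email message."""
    msg = EmailMessage()
    msg["Subject"] = "Daily DEX Execution Cost Report"
    msg["From"] = "bot@example.com"   # placeholder sender
    msg["To"] = "you@example.com"     # placeholder recipient
    msg.set_content(summary_text)
    return msg

msg = build_report_email("BTC cheapest venue today: see attached ranking table")

# Sending (disabled here; enable with your provider's SMTP details):
# with smtplib.SMTP("smtp.example.com", 587) as s:
#     s.starttls()
#     s.login("user", "password")
#     s.send_message(msg)
```

The summary text would typically be the `to_string` output of the ranking DataFrames computed earlier, so the email body mirrors what the script prints to its log.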
A complete working script combining all the components in this guide — environment setup, data fetching, analysis, visualization, and logging — runs in approximately 15–30 seconds for 3 tokens and 3 size tiers. At 24 runs per day, this comfortably fits within LiquidView's standard API rate limits.
