RTB Simulation SDK

Test your bidding strategy
against your own market,
before you go live.

Seven ML models. Your DSP data. A calibrated Gymnasium environment — inside your own infrastructure.

Get in Touch · How It Works

SDK, runs in your cloud · Gymnasium environment · BYOM & BYOB · Zero data egress
How It Works

Three steps.

01 — Ingest

A 12-column DSP log

Standard columns any DSP already logs. No custom exports. Stays in your environment.

02 — Calibrate

Seven ML models establish your market's physics

CTR, CVR, LTV, floor price, conversion delay, win rate, and bid latency — fit to your data, not generic defaults.

03 — Simulate

Your bidder. Calibrated market. No live risk.

Plug in your agent and estimators. Compete against calibrated archetypes and a ghost market. Measure the impact before committing.
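
As a sketch of the simulate loop, assuming a Gymnasium-style environment: the stand-in class below mimics the `reset()`/`step()` signature so the loop runs as-is, but its constructor arguments, observation contents, and reward numbers are illustrative, not the SDK's actual API.

```python
# Minimal sketch of the simulate step against a Gymnasium-style env.
# StubAdBiddingEnv is a stand-in; the real environment is calibrated.

class StubAdBiddingEnv:
    """Stand-in with the Gymnasium reset()/step() signature."""
    def __init__(self, horizon=5):
        self.horizon = horizon
        self.t = 0

    def reset(self, seed=None):
        self.t = 0
        obs = {"budget_left": 1000.0, "hour": 0}
        return obs, {}

    def step(self, bid):
        self.t += 1
        won = bid >= 2.0                      # stand-in market clearing
        reward = 0.4 if won else 0.0          # stand-in value of a win
        obs = {"budget_left": 1000.0 - self.t * bid, "hour": self.t}
        terminated = self.t >= self.horizon
        return obs, reward, terminated, False, {"won": won}

env = StubAdBiddingEnv()
obs, info = env.reset(seed=42)
total = 0.0
done = False
while not done:
    bid = 2.5                                 # your bidder's decision goes here
    obs, reward, terminated, truncated, info = env.step(bid)
    total += reward
    done = terminated or truncated
```

The loop shape is the standard Gymnasium five-tuple, so any agent that already speaks that interface plugs in unchanged.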

Calibration Engine

Seven ML models.
One physics layer.

Every calibration run trains a full suite from your log data. These models set the hidden ground truths, so your bidder competes in a market that behaves like yours, not a generic synthetic one.

01
Win Rate
P(win) from bid, publisher, hour, ad size. Monotone on bid.
XGBoost
02
Floor Price
Quantile regression (α=0.05) on won impressions.
LightGBM
03
CTR
Binary classification on wins. Segment, publisher, hour.
LightGBM
04
CVR
Binary on clicks. Auto-fallback at <100 conversions.
LightGBM
05
LTV
Tweedie regression. Disabled automatically in CPA mode.
LightGBM
06
Conversion Delay
Survival AFT with interval censoring. 7-day lookback.
XGBoost AFT
07
Latency Twin
Calibrates bid latency per publisher and hour-of-day.
LightGBM
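
To make the "monotone on bid" constraint concrete, here is an illustrative XGBoost parameter set for the win-rate model. The feature order (bid, publisher, hour, ad_size) and the hyperparameter values are assumptions for the example, not the SDK's internal settings.

```python
# Illustrative config: win-rate model monotone in bid.
# Feature order assumed: (bid, publisher, hour, ad_size).
win_rate_params = {
    "objective": "binary:logistic",        # P(win) as a probability
    "eval_metric": "auc",
    "monotone_constraints": "(1,0,0,0)",   # +1: P(win) never decreases as bid rises
    "max_depth": 6,
    "eta": 0.1,
}
```

The `monotone_constraints` entry is what guarantees that raising a bid can never lower the predicted win probability, which keeps the simulated auction economically sane.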
Calibration Audit Output
calibration_result · rich
// Freshness & Hygiene
dataset_range      2024-10-01 → 2024-12-31
leakage_detected   none ✓
hygiene_dropped    0 rows

// Dataset
n_auctions         4,821,440
n_wins             144,612
n_conversions      8,204

// Model Quality (holdout)
ctr_auc            0.7841 ✓
cvr_auc            0.7320 ✓
ltv_mae            $12.40 · 18.3% of mean ✓
optimal_k          7 segments

// Budget Estimate
suggested_7d       $8,991.50

Calibration complete — /models
Simulation Environment

Built as a native Gymnasium environment.
Calibrated to your market.

Auction mechanics, competitor behavior, KPI enforcement, and creative testing — all parameterized from your data.

Auction Mechanics

Adaptive publisher floors

First-price with floor visibility noise. Publishers adapt floors to fill-rate feedback over time.
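
A toy version of the adaptive-floor idea: the publisher nudges its floor up when fill rate runs above a target and down when it runs below. The target and step sizes here are illustrative, not calibrated values.

```python
# Sketch: publisher floor reacting to fill-rate feedback.
def adapt_floor(floor, fill_rate, target=0.30, step=0.05):
    """Return the next floor given observed fill rate vs. a target."""
    if fill_rate > target:
        return floor * (1 + step)                 # demand is strong: raise the floor
    if fill_rate < target:
        return max(0.01, floor * (1 - step))      # too few fills: lower it
    return floor

floor = 1.00
for fill in [0.5, 0.5, 0.1]:
    floor = adapt_floor(floor, fill)              # 1.05 → 1.1025 → 1.047375
```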

Delayed Conversions

Heap-based attribution

Conversions scheduled via AFT delay model. Late attribution carries over across episode resets.
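
The heap mechanics can be sketched with `heapq`: conversions are pushed with a future due time and credited only once the simulated clock passes them. The delays and values below are illustrative; the SDK samples delays from its AFT model.

```python
import heapq

# Sketch of heap-based delayed attribution.
pending = []  # min-heap of (due_time, value)

def schedule(now, delay, value):
    heapq.heappush(pending, (now + delay, value))

def collect(now):
    """Pop every conversion whose due time has passed; return total value."""
    credited = 0.0
    while pending and pending[0][0] <= now:
        _, value = heapq.heappop(pending)
        credited += value
    return credited

schedule(now=0, delay=3, value=10.0)
schedule(now=0, delay=8, value=5.0)
early = collect(now=4)    # only the 3-step delay is due
late = collect(now=10)    # the rest arrives later, surviving "resets"
```

Because pending items simply stay on the heap, late attribution naturally carries over an episode boundary.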

Creative Testing

Hypothetical uplifts

Define a creative with "+20% CTR" and measure the downstream impact — no live impressions needed.
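
A "+20% CTR" creative reduces to a multiplier on the calibrated base CTR, capped at 1.0. The spec format here is illustrative, not the SDK's config syntax.

```python
# Sketch: hypothetical creative uplift as a capped multiplier.
def apply_uplift(base_ctr, uplift_pct):
    return min(1.0, base_ctr * (1 + uplift_pct / 100.0))

variant_ctr = apply_uplift(base_ctr=0.025, uplift_pct=20)
```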

KPI Enforcement

ROAS & CPA with kill-switch

Episode ends if KPI miss exceeds a hard threshold after minimum spend — matching real guard-rail behavior.
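
The guard-rail logic can be sketched as a two-part check: no termination before a minimum spend, then a hard ROAS threshold. The threshold values are illustrative defaults, not the SDK's.

```python
# Sketch of a ROAS kill-switch: terminate only after minimum spend.
def kill_switch(spend, revenue, min_spend=500.0, roas_floor=1.5):
    """Return True when the guard-rail should end the episode."""
    if spend < min_spend:
        return False             # not enough evidence yet
    roas = revenue / spend
    return roas < roas_floor     # hard KPI miss: terminate

early = kill_switch(spend=100.0, revenue=50.0)      # below min spend, keep running
breach = kill_switch(spend=1000.0, revenue=1200.0)  # ROAS 1.2 < 1.5, terminate
```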

Competitor Market

7 archetypes + ghost market

Bidders re-roll their personality each episode. Residual pressure is modeled by an XGBoost ghost trained on your win data.

Latency

Timeout & failure modeling

Latency sampled from your Latency Twin. Configurable timeout thresholds and failure distributions.
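
As a sketch of the latency path: sample a bid latency from a lognormal distribution and flag a timeout past a configurable threshold. The mu/sigma values are illustrative; the SDK fits them per publisher and hour-of-day.

```python
import random

# Sketch: lognormal latency draw plus a timeout check.
def sample_latency_ms(mu=4.0, sigma=0.5, rng=random):
    return rng.lognormvariate(mu, sigma)   # median ≈ e^4 ≈ 55 ms

def timed_out(latency_ms, timeout_ms=120.0):
    return latency_ms > timeout_ms

rng = random.Random(7)
latencies = [sample_latency_ms(rng=rng) for _ in range(1000)]
timeout_rate = sum(timed_out(l) for l in latencies) / len(latencies)
```

With these parameters roughly 6% of draws exceed the 120 ms threshold, which is the kind of tail behavior the simulator exposes your bidder to.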

Architecture & Privacy

SDK, not SaaS.
Your data stays put.

BidOptic runs in-process inside your cloud. Nothing is transmitted.

DSP Log Export · your infra
  Your data lake: stays there.
→ MasterCalibrator · SDK, in-process
  7-model training pipeline.
→ AdBiddingEnv (Gymnasium) · SDK, in-process
  Your agent runs against the calibrated twin.
→ CalibrationResult · internal BI
  Rich / plain / JSON output for logs or dashboards.
Data Processing Agreement

We sign a DPA confirming zero data retention or secondary use. The SDK processes everything within your perimeter.

SDK, not SaaS

You deploy the package. No upload portal, no hosted service, no external API calls.

Compliant by design

No data leaves your perimeter — fits existing governance frameworks without new data-sharing agreements.

Plugs into your stack

Output is a standard Python dict. Rich, plain, and JSON reporter modes for easy integration.

Override-aware config

Pin any calibrated parameter for scenario testing — no code changes required.
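
A minimal sketch of the override merge: calibrated values form the base, and any pinned parameter wins. The key names are illustrative, not the SDK's parameter names.

```python
# Sketch: override-aware config merge for scenario testing.
def with_overrides(calibrated, overrides):
    merged = dict(calibrated)    # copy; calibrated values stay untouched
    merged.update(overrides)     # pinned values take precedence
    return merged

calibrated = {"floor_quantile": 0.05, "timeout_ms": 118.0}
scenario = with_overrides(calibrated, {"timeout_ms": 80.0})  # pin a stricter timeout
```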

Bring Your Own

BidOptic sets the physics.
You bring the strategy.

BYOM — Bring Your Own Models

Your estimators,
plugged in directly.

BidOptic's models set the hidden ground truths. Your pCTR, pCVR, and LTV models receive the same noisy signals they would see in production.

Compare a new estimator against your baseline on the same ground truth
No live A/B test needed to measure model improvement
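
The comparison can be sketched with log loss: two pCTR estimators are scored against the same hidden ground truth, so the sharper model wins on identical conditions. The estimators and data below are illustrative.

```python
import math

# Sketch: score two pCTR estimators on the same labels with log loss.
def log_loss(y_true, y_pred, eps=1e-12):
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)          # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

clicks = [1, 0, 0, 1, 0]
baseline = [0.5, 0.5, 0.5, 0.5, 0.5]           # uninformative baseline
candidate = [0.8, 0.2, 0.3, 0.7, 0.1]          # sharper estimator
improved = log_loss(clicks, candidate) < log_loss(clicks, baseline)
```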
BYOB — Bring Your Own Bidder

Your bidding logic.
Battle-tested offline.

Connect your actual agent — rule-based, RL, or hybrid. Stable-Baselines3 compatible out of the box.

Test shading, pacing, and budget allocation before going live
Validate RL training stability without production exploration cost
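
A rule-based bidder needs nothing more than a policy that maps an observation to a bid; the `act(obs)` interface below is illustrative, not the SDK's, and the pacing rule is a toy.

```python
# Sketch: a rule-based bidder behind a simple act(obs) -> bid interface.
class RuleBasedBidder:
    def __init__(self, base_bid=2.0, night_discount=0.5):
        self.base_bid = base_bid
        self.night_discount = night_discount

    def act(self, obs):
        bid = self.base_bid
        if obs["hour"] < 6:                    # toy pacing rule: bid less overnight
            bid *= self.night_discount
        return bid

bidder = RuleBasedBidder()
day_bid = bidder.act({"hour": 14})
night_bid = bidder.act({"hour": 3})
```

An RL policy slots in the same way: anything that turns an observation into a bid, including a Stable-Baselines3 `predict` call, can sit behind this interface.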
Input Schema

Twelve columns.

Standard columns any DSP already logs. No bespoke exports.

dsp_log_schema.csv · 12 required

 1  auction_id            Unique bid request identifier
 2  timestamp             UTC bid request time
 3  user_id               Pseudonymous user identifier
 4  publisher_id          Publisher / placement identifier
 5  ad_size               Ad size / format label
 6  bid_price             CPM submitted by your DSP
 7  clearing_price        Clearing price (if won)
 8  is_won                Win / loss binary
 9  is_clicked            Click indicator (post-win)
10  is_converted          Conversion indicator
11  conversion_value      Revenue value (0 if no conversion)
12  conversion_timestamp  UTC conversion time (null if none)
Optional: bid_latency_ms trains the Latency Twin from real data. If absent, a lognormal distribution is synthesized and flagged in the audit output.
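
The fallback above amounts to fitting lognormal parameters in log space: take the mean and standard deviation of log-latencies. The sample numbers below are illustrative.

```python
import math
import statistics

# Sketch: fit lognormal (mu, sigma) from a latency sample via log space.
def fit_lognormal(latencies_ms):
    logs = [math.log(x) for x in latencies_ms]
    return statistics.fmean(logs), statistics.pstdev(logs)

mu, sigma = fit_lognormal([40.0, 55.0, 60.0, 90.0, 120.0])
median_ms = math.exp(mu)   # lognormal median = e^mu
```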

Works for smaller players

If you have a win log, a click log, and a conversion log — you have what BidOptic needs.

Quality scales with depth

Model quality is measured on a held-out time window. Coverage gaps are surfaced before training begins.

Data hygiene built in

Temporal leakage detection, zero-delay artifact removal, and staleness warnings at 30 and 90 days — handled before the first model trains.

Interested?

BidOptic is in early access. Drop your email or reach out directly.

No pitch. Just a technical conversation.