Kalshi is a CFTC-regulated event-contract exchange in the United States, and its REST + WebSocket APIs let you query thousands of binary markets and place limit orders programmatically. This guide is a faithful walkthrough of API v2 as it stands in late April 2026, verified against docs.kalshi.com after the major changes rolled out over 2026 Q1 and early Q2 (RSA-PSS-only signing, token-bucket rate limits, fixed-point pricing, retired market orders). If you've used a Kalshi tutorial older than three months, skim §0 first; several specifics changed in ways that will silently break old code.
0. What changed since most older Kalshi tutorials
If you're working from a 2024 or early-2025 guide, here's a tight diff so you don't waste a debugging cycle:
| Topic | Old (≤ early 2025) | Current (2026-04) |
|---|---|---|
| Signing algorithm | RSA-SHA256 / PKCS#1 v1.5 | RSA-PSS with MGF1(SHA256), salt = digest length |
| Rate-limit model | RPS quota, three tiers | Token bucket, 10 tokens/request default, five tiers (Basic / Advanced / Premier / Paragon / Prime), Read and Write are independent buckets |
| 429 behaviour | Retry-After header | No Retry-After, no X-RateLimit-* — pure exponential backoff |
| WebSocket auth | Query-string signed URL | HTTP headers during the WebSocket handshake (same payload as REST) |
| WebSocket heartbeat | Client-driven | Server-driven; server sends Ping every 10 s with body heartbeat, client responds Pong |
| Order types | market and limit | Limit orders only — market was removed on 2026-02-11 |
| Orderbook shape | bids + asks | Bids only on each side; YES BID @ $0.60 ≡ NO ASK @ $0.40 |
| Price / size fields | Integer cents (yes_bid: 42), integer count (count: 10) | String fixed-point (yes_bid_dollars: "0.4200", count_fp: "10.00"); old fields removed during 2026 Q1 |
| Public hostname | trading-api.readme.io documentation, trading-api.kalshi.com API | Docs at docs.kalshi.com; production API at api.elections.kalshi.com (covers all market categories, not just elections) |
If any of those rows surprise you, this guide is for you.
1. 30 seconds: your first Kalshi API response
The fastest possible Kalshi call is unauthenticated. GET /series and GET /markets are public — you don't need a key, a signature, or a clock-synced machine. Pick a series ticker, pull its metadata, done. From there you can decide whether you actually need to authenticate at all (many analytics use cases don't).
curl -sS https://api.elections.kalshi.com/trade-api/v2/series/KXHIGHNY \
| jq '.series | {ticker, title, frequency, category}'
# one-time install, then point your MCP client at the parlay server
npx parlay-mcp@latest
# in your MCP-aware client (Claude, Cursor, etc.)
>>> kalshi.get_series("KXHIGHNY")
Same query — direct HTTP vs through Parlay's prediction-market MCP server.
Two notes worth knowing before you go further:
- The production hostname is `api.elections.kalshi.com` even though the path doesn't include `elections`. This domain serves every market category (sports, weather, economic indicators, politics — all of it). `demo-api.kalshi.co` (note the `.co`) is the sandbox.
- Old `trading-api.readme.io` and `trading-api.kalshi.com` URLs are dead. Tutorials, blog posts, and SDK READMEs that still point there were last updated before the 2025-07-31 cutover. Treat them as suspect.
The rest of this guide assumes you're hitting the production hostname and want to do something more interesting than read a single series.
2. Authentication: RSA-PSS signing, step by step
Authentication is where most Kalshi integrations stall, usually for an hour the first time and again for a few minutes whenever a detail drifts later. The mechanics are not complicated, but every detail has to be right: wrong padding scheme, wrong timestamp unit, or wrong path prefix all produce the same opaque 401 with no useful body. Read all five steps before writing code.
Auth · Step 1 — How the signature is constructed
Kalshi uses RSA-PSS signing, not HMAC and not RSA PKCS#1 v1.5. PSS is the probabilistic signature scheme; it adds randomized padding (the "salt") so two signatures of the same payload don't match — that's intentional and you don't have to do anything special with it. Use SHA-256 as the hash and as the MGF1 hash, and a salt length equal to the digest length (32 bytes).
The payload you sign is plain ASCII concatenation: timestamp + METHOD + path. Three rules about that payload — this is the part that surprises people:
- `timestamp` is the current Unix time in milliseconds, not seconds. Use `int(time.time() * 1000)` in Python; `Date.now()` in JavaScript.
- `METHOD` is the HTTP verb in upper case (`GET`, `POST`, `DELETE`).
- `path` must include the `/trade-api/v2` prefix and must not include the query string. Always strip everything from `?` onward before signing, or you'll match on local tests (no query) and fail in production (with a query).
Then attach three headers to every authenticated request:
KALSHI-ACCESS-KEY: <api_key_id>
KALSHI-ACCESS-TIMESTAMP: <unix_ms>
KALSHI-ACCESS-SIGNATURE: <base64-encoded RSA-PSS signature>
Public endpoints (/series, /events, /markets, the orderbook endpoints) accept these headers but don't require them. Anything under /portfolio requires them.
Auth · Step 2 — A complete working signing implementation
Below is the canonical implementation in three languages. The Python version is the one Kalshi's official docs publish; the TypeScript and cURL/OpenSSL versions are functional ports. You can paste any of them into a fresh project and immediately authenticate against GET /portfolio/balance.
import base64
import time
from urllib.parse import urlparse
import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
KEY_ID = "your-api-key-id"
PRIVATE_KEY_PEM = open("kalshi_private_key.pem", "rb").read()
BASE_URL = "https://api.elections.kalshi.com/trade-api/v2"
private_key = serialization.load_pem_private_key(
PRIVATE_KEY_PEM, password=None
)
def sign(method: str, path: str) -> dict[str, str]:
# path must include /trade-api/v2 and exclude any query string.
path_no_query = path.split("?")[0]
timestamp_ms = str(int(time.time() * 1000))
message = (timestamp_ms + method.upper() + path_no_query).encode("utf-8")
signature = private_key.sign(
message,
padding.PSS(
mgf=padding.MGF1(hashes.SHA256()),
salt_length=padding.PSS.DIGEST_LENGTH,
),
hashes.SHA256(),
)
return {
"KALSHI-ACCESS-KEY": KEY_ID,
"KALSHI-ACCESS-TIMESTAMP": timestamp_ms,
"KALSHI-ACCESS-SIGNATURE": base64.b64encode(signature).decode("utf-8"),
}
def call(method: str, path: str, **kwargs):
# Pull the full path (including /trade-api/v2) for signing.
full_path = urlparse(BASE_URL + path).path
headers = sign(method, full_path)
return requests.request(method, BASE_URL + path, headers=headers, **kwargs)
print(call("GET", "/portfolio/balance").json())
import { createSign, constants } from 'node:crypto';
import { readFileSync } from 'node:fs';
const KEY_ID = 'your-api-key-id';
const PRIVATE_KEY_PEM = readFileSync('kalshi_private_key.pem', 'utf8');
const BASE_URL = 'https://api.elections.kalshi.com/trade-api/v2';
function sign(method: string, path: string) {
const pathNoQuery = path.split('?')[0];
const timestampMs = String(Date.now());
const message = timestampMs + method.toUpperCase() + pathNoQuery;
const signer = createSign('RSA-SHA256');
signer.update(message);
signer.end();
const signature = signer.sign({
key: PRIVATE_KEY_PEM,
padding: constants.RSA_PKCS1_PSS_PADDING,
saltLength: constants.RSA_PSS_SALTLEN_DIGEST,
});
return {
'KALSHI-ACCESS-KEY': KEY_ID,
'KALSHI-ACCESS-TIMESTAMP': timestampMs,
'KALSHI-ACCESS-SIGNATURE': signature.toString('base64'),
};
}
export async function call(method: string, path: string, init?: RequestInit) {
const fullPath = new URL(BASE_URL + path).pathname;
const headers = { ...sign(method, fullPath), ...(init?.headers as object) };
return fetch(BASE_URL + path, { ...init, method, headers });
}
const res = await call('GET', '/portfolio/balance');
console.log(await res.json());
KEY_ID="your-api-key-id"
KEY_FILE="kalshi_private_key.pem"
METHOD="GET"
PATH_PART="/trade-api/v2/portfolio/balance"
# GNU date only; on macOS use: TS_MS=$(python3 -c 'import time; print(int(time.time()*1000))')
TS_MS=$(($(date +%s%N)/1000000))
PAYLOAD="${TS_MS}${METHOD}${PATH_PART}"
SIG=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 \
-sigopt rsa_padding_mode:pss \
-sigopt rsa_pss_saltlen:digest \
-sign "$KEY_FILE" | base64 -w0)  # -w0 is GNU base64; omit it on macOS
curl -sS "https://api.elections.kalshi.com${PATH_PART}" \
-H "KALSHI-ACCESS-KEY: ${KEY_ID}" \
-H "KALSHI-ACCESS-TIMESTAMP: ${TS_MS}" \
-H "KALSHI-ACCESS-SIGNATURE: ${SIG}"
If you get a 200 from GET /portfolio/balance, signing is working. If you get a 401, jump straight to Step 3.
Auth · Step 3 — Three real traps
Most signing failures fall into one of these three buckets. The error message Kalshi returns is always the same generic 401 — the cause is what differs.
Trap A — clock drift. Kalshi rejects timestamps that are too far from server time, but the rejection comes back as the same opaque 401 you'd get for a wrong key. If your machine isn't NTP-synced (or you're inside a container with a frozen clock), you'll fail signing on otherwise-correct code. Run timedatectl on Linux or sntp -sS time.apple.com on macOS, and make sure your CI runners enable NTP.
Trap B — PEM format. When you generate a key, you'll get one of two PEM headers: -----BEGIN PRIVATE KEY----- (PKCS#8, modern OpenSSL default) or -----BEGIN RSA PRIVATE KEY----- (PKCS#1, older). Python's cryptography library handles both, and so does Node's createPrivateKey, but if you're piping the PEM through a third tool (Vault, Kubernetes Secret, copy-paste through Slack) you sometimes lose the line breaks or the header. Always verify with openssl rsa -in kalshi_private_key.pem -check -noout before debugging signing logic.
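You can also sanity-check the PEM in-process before reaching for `openssl`. The helper below is a minimal sketch (the function name is mine): it distinguishes the two header styles and catches the common transport-mangling failure where line breaks get lost:

```python
def pem_diagnosis(pem_text: str) -> str:
    """Classify a private-key PEM and flag the common copy-paste mangling."""
    if "-----BEGIN PRIVATE KEY-----" in pem_text:
        kind = "PKCS#8"
    elif "-----BEGIN RSA PRIVATE KEY-----" in pem_text:
        kind = "PKCS#1"
    else:
        return "no recognizable private-key header (wrong file, or header lost in transit)"
    if "\n" not in pem_text.strip():
        # A valid PEM body is base64 split across lines; a one-line blob was mangled.
        return f"{kind} header found but line breaks are missing (mangled copy-paste)"
    return f"{kind}, structure looks plausible"
```

This only checks structure; the `openssl rsa -check` step is still the authoritative test of the key material itself.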
Trap C — key management in production. Kalshi does not store your private key. If you lose it, you regenerate the key pair and re-upload the new public key — there is no recovery. Treat the key like an SSH key: in production, store it in a real secrets manager (AWS Secrets Manager, Google Secret Manager, HashiCorp Vault), not in a Kubernetes ConfigMap or a .env file checked into git. Rotate every 90 days as a habit, even though Kalshi doesn't force it.
Auth · Step 4 — Where this guide goes next
The signing block above is undifferentiated infrastructure work — every Kalshi consumer ends up writing it, and the official kalshi-python-sync SDK already encapsulates it. The interesting part of a Kalshi integration starts after authentication: how to navigate the endpoint catalogue without burning rate-limit budget, how to read the orderbook correctly given that there are no asks, and how to handle real-time data over WebSocket. The next four sections cover those.
Auth · Step 5 — Same call, three layers of abstraction
Before moving on, it's worth seeing how the same query — list every active market in a series — looks at three layers: raw HTTP, the official Kalshi SDK, and Parlay's MCP server. The point is not to argue Parlay against the official SDK; the point is that Parlay is the only one of the three that also covers Polymarket, Manifold, and Opinion.trade behind the same call. If your application only ever touches Kalshi, the official SDK is great. If it touches more than one venue, the case for Parlay is the cross-market unification.
import base64, time
from urllib.parse import urlparse
import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
KEY_ID = "..."
private_key = serialization.load_pem_private_key(
open("key.pem", "rb").read(), password=None
)
BASE = "https://api.elections.kalshi.com/trade-api/v2"
def sign(method, path):
ts = str(int(time.time() * 1000))
msg = (ts + method + path.split("?")[0]).encode()
sig = private_key.sign(
msg,
padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
salt_length=padding.PSS.DIGEST_LENGTH),
hashes.SHA256(),
)
return {
"KALSHI-ACCESS-KEY": KEY_ID,
"KALSHI-ACCESS-TIMESTAMP": ts,
"KALSHI-ACCESS-SIGNATURE": base64.b64encode(sig).decode(),
}
markets, cursor = [], None
while True:
path = "/markets?series_ticker=KXHIGHNY&status=open&limit=100"
if cursor:
path += f"&cursor={cursor}"
full_path = urlparse(BASE + path).path
resp = requests.get(BASE + path, headers=sign("GET", full_path))
resp.raise_for_status()
data = resp.json()
markets.extend(data["markets"])
cursor = data.get("cursor")
if not cursor:
break
print(f"{len(markets)} open markets")
from kalshi_python import KalshiClient
client = KalshiClient(
api_key_id="...",
private_key_path="key.pem",
)
markets = list(client.markets.list(
series_ticker="KXHIGHNY",
status="open",
))
print(f"{len(markets)} open markets")
from parlay import Parlay
p = Parlay()
markets = p.kalshi.list_markets(series="KXHIGHNY", status="open")
# the same shape across venues:
poly = p.polymarket.list_markets(query="weather", status="open")
≈ 40 lines → 8 lines → 2 lines, and only the third covers other venues.
3. Endpoints that actually matter
Kalshi's API surface is broad but the day-to-day surface area is narrow: a handful of public discovery endpoints, a handful of authenticated portfolio endpoints, and a recently-introduced historical-data tier for anything older than a few weeks. Here's the working subset.
3.1 Public discovery — /series, /events, /markets, and the orderbook
These four endpoint families let you walk Kalshi's catalogue from broadest (a series like "Highest temperature in NYC today") down to a single tradeable market (today's specific contract on whether the high will exceed 75°F).
- `GET /series` — paginated list of series, filterable by category.
- `GET /series/{ticker}` — metadata for one series.
- `GET /events` — paginated list of events; no longer returns multivariate events. Use `GET /events/multivariate` for those.
- `GET /events/{ticker}` — one event with its child markets.
- `GET /markets` — paginated list of markets (filter by `series_ticker`, `event_ticker`, `status`).
- `GET /markets/{ticker}` — single market metadata.
- `GET /markets/{ticker}/orderbook` — the per-market orderbook (read carefully, see below).
- `GET /markets/orderbooks?tickers=...` — batched orderbook fetch, up to 100 tickers in one call (added 2026-03).
- `GET /markets/trades` — recent fills across markets.
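The batched orderbook endpoint is the one to reach for when you watch many markets at once. A minimal sketch of chunking a ticker list into batches of at most 100 and building the request URLs; the helper names here are mine, not Kalshi's:

```python
from urllib.parse import quote

BASE = "https://api.elections.kalshi.com/trade-api/v2"
MAX_TICKERS_PER_CALL = 100  # documented cap for GET /markets/orderbooks

def chunk(tickers: list[str], size: int = MAX_TICKERS_PER_CALL) -> list[list[str]]:
    """Split a ticker list into batches no larger than the endpoint cap."""
    return [tickers[i:i + size] for i in range(0, len(tickers), size)]

def orderbooks_url(batch: list[str]) -> str:
    """Build the batched-orderbook URL for one chunk of tickers."""
    return f"{BASE}/markets/orderbooks?tickers={quote(','.join(batch))}"

# Usage (network call omitted; the endpoint is public, so no auth headers needed):
# for batch in chunk(all_tickers):
#     resp = requests.get(orderbooks_url(batch))
```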
Two non-obvious things about the orderbook response. Kalshi's binary markets exhibit a duality (YES + NO = $1.00), and the API leans on it: each side of the book is expressed as bids only. There are no asks anywhere in the response.
{
"orderbook_fp": {
"yes_dollars": [
["0.3900", "5.00"],
["0.4000", "13.00"],
["0.4200", "13.00"]
],
"no_dollars": [
["0.5400", "8.00"],
["0.5600", "17.00"]
]
}
}
The arrays are sorted ascending by price. The best bid is the last element, not the first. To compute the YES ask (which the API never returns), use the duality: YES_ask = 1.00 − best_NO_bid. So in the example above, the best YES bid is $0.4200 and the best YES ask is 1.00 − 0.5600 = $0.4400, giving a $0.02 spread. Both prices and counts are strings to support sub-penny pricing and fractional contracts.
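The duality arithmetic is easy to get backwards, so here is a small sketch that extracts best bid, implied ask, and spread from an `orderbook_fp` payload. It uses `Decimal` because the fields are strings precisely so you can avoid float rounding:

```python
from decimal import Decimal

def yes_quote(orderbook_fp: dict) -> tuple[Decimal, Decimal, Decimal]:
    """Return (best YES bid, implied YES ask, spread) from a bids-only book."""
    best_yes_bid = Decimal(orderbook_fp["yes_dollars"][-1][0])  # arrays sort ascending
    best_no_bid = Decimal(orderbook_fp["no_dollars"][-1][0])
    yes_ask = Decimal("1.00") - best_no_bid  # YES ask never appears; derive via duality
    return best_yes_bid, yes_ask, yes_ask - best_yes_bid

# The example book from above:
book = {
    "yes_dollars": [["0.3900", "5.00"], ["0.4000", "13.00"], ["0.4200", "13.00"]],
    "no_dollars": [["0.5400", "8.00"], ["0.5600", "17.00"]],
}
bid, ask, spread = yes_quote(book)
print(bid, ask, spread)  # 0.4200 0.4400 0.0200
```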
3.2 Portfolio (authenticated)
Once signing is working, the portfolio endpoints are uneventful:
- `GET /portfolio/balance` — cash balance in cents.
- `GET /portfolio/positions` — your open positions only. As of 2025-12, settled positions moved to a separate endpoint.
- `GET /portfolio/settlements` — your settled positions (the new home for what used to live in `/portfolio/positions`).
- `GET /portfolio/orders` — your order history; supports cursor pagination.
- `POST /portfolio/orders` — place an order. Limit-only. See §6 for the order-type changes.
- `POST /portfolio/orders/{id}/amend` — amend a resting order.
- `DELETE /portfolio/orders/{id}` — cancel a single order.
- `POST /portfolio/orders/batched` — place many orders in one call. Note: each batched order still costs 10 tokens; batch is for round-trip efficiency, not for rate-limit savings.
- `DELETE /portfolio/orders/batched` — bulk cancel.
- `GET /portfolio/fills` — your individual fills.
- `GET /portfolio/subaccounts/balances` — sub-account balances if you have them.
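For completeness, here is what assembling a limit order might look like before sending it through the `call()` helper from §2. The body field names below (`count_fp`, `yes_price_dollars`, `client_order_id`, and the rest) are my guess at the fixed-point request shape, extrapolated from the response fields this guide describes; verify them against the official order schema before trusting them:

```python
import uuid

def build_limit_order(ticker: str, side: str, action: str,
                      count: str, price_dollars: str) -> dict:
    """Assemble a limit-order body. Field names are illustrative assumptions,
    not confirmed schema; check the official order docs."""
    assert side in ("yes", "no") and action in ("buy", "sell")
    return {
        "ticker": ticker,
        "side": side,
        "action": action,
        "type": "limit",                       # market orders were retired 2026-02-11
        "count_fp": count,                     # string fixed-point, e.g. "10.00"
        "yes_price_dollars": price_dollars,    # string fixed-point, e.g. "0.4200"
        "client_order_id": str(uuid.uuid4()),  # idempotency key
    }

order = build_limit_order("KXHIGHNY-23DEC25-T75", "yes", "buy", "10.00", "0.4200")
# call("POST", "/portfolio/orders", json=order)  # authenticated; costs 10 write tokens
```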
3.3 Historical data (the new partition)
Starting 2026-02-19, Kalshi split data into a live tier and a historical tier. Recent data lives at the regular endpoints; once a market settles or a fill ages past the cutoff, it moves to dedicated historical endpoints:
- `GET /historical/cutoff` — returns the timestamp at which the live/historical boundary sits. Read this before deciding which family of endpoints to query.
- `GET /historical/markets` — settled markets older than the cutoff.
- `GET /historical/fills` — your fills from before the cutoff.
- `GET /historical/orders` — your cancelled or completed orders from before the cutoff.
Most third-party tutorials don't mention the historical tier yet. If you're backtesting against multi-month data, you'll need it.
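A backtest loader then becomes a two-step routine: read the cutoff once from `GET /historical/cutoff`, then route each time-ranged query to the live family, the historical family, or both. A sketch of the routing step (the helper name is mine, and the cutoff response field names aren't shown in this guide, so read them off the live response):

```python
def route_fills_paths(start_ts: int, end_ts: int, cutoff_ts: int) -> list[str]:
    """Decide which fills endpoint(s) a time-ranged query must hit.
    A range straddling the cutoff needs both families."""
    paths = []
    if start_ts < cutoff_ts:
        paths.append("/historical/fills")  # anything older than the boundary
    if end_ts >= cutoff_ts:
        paths.append("/portfolio/fills")   # recent fills stay at the live endpoint
    return paths
```

The same routing applies to markets and orders; only the endpoint names change.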
3.4 Endpoint cheat sheet
| Endpoint | Method | Auth | Notes |
|---|---|---|---|
| `/series` | GET | — | Paginated, cursor-based |
| `/events` | GET | — | Excludes multivariate; use `/events/multivariate` |
| `/markets` | GET | — | Filter by `series_ticker`, `event_ticker`, `status` |
| `/markets/{ticker}/orderbook` | GET | — | Bids only; read backwards for best price |
| `/markets/orderbooks` | GET | — | Up to 100 tickers per call, added 2026-03 |
| `/markets/trades` | GET | — | Recent cross-market fills |
| `/portfolio/balance` | GET | RSA-PSS | Cash position |
| `/portfolio/positions` | GET | RSA-PSS | Open positions only since 2025-12 |
| `/portfolio/settlements` | GET | RSA-PSS | Settled positions (new home) |
| `/portfolio/orders` | GET / POST / DELETE | RSA-PSS | Limit only; see §6 |
| `/portfolio/orders/{id}/amend` | POST | RSA-PSS | Amend a resting limit |
| `/portfolio/orders/batched` | POST / DELETE | RSA-PSS | Round-trip win, not a rate-limit win |
| `/portfolio/fills` | GET | RSA-PSS | Your fills |
| `/historical/cutoff` | GET | RSA-PSS | Live/historical boundary |
| `/historical/{markets,fills,orders}` | GET | RSA-PSS | Anything older than cutoff |
| `/account/limits` | GET | RSA-PSS | Your current rate-limit budget |
| `/account/endpoint_costs` | GET | RSA-PSS | Token cost per endpoint |
4. Pagination, rate limits, and token costs
Pagination is straightforward. Rate limits are the part everyone underestimates. Kalshi switched to a token-bucket model on 2026-04-23 with five tiers, two independent buckets per tier, and no Retry-After header. If you're writing anything that traverses thousands of markets — a backtest, a daily snapshot, an arbitrage scanner — it's worth understanding the bucket model precisely so your code doesn't get throttled into uselessness.
Rate · Step 1 — How pagination and the new bucket model work
Pagination is cursor-based. Every list endpoint that supports it (/markets, /events, /series, /markets/trades, /portfolio/history, /portfolio/fills, /portfolio/orders) accepts cursor and limit (default and max usually 100). The response includes a cursor field; when it's null, you've reached the end. There is no total count in any list response — you cannot show a deterministic progress bar, only "fetched N so far."
Rate limiting is token-bucket since 2026-04-23. Each tier has a Read budget and a Write budget, both expressed in tokens-per-second; default cost per request is 10 tokens (cheaper for cancels and single-order reads). The two buckets are independent — saturating reads doesn't slow down writes.
| Tier | Read tokens/s | Write tokens/s | How to qualify |
|---|---|---|---|
| Basic | 200 | 100 | Just complete account onboarding |
| Advanced | 300 | 300 | Fill out an application |
| Premier | 1,000 | 1,000 | TBD by Kalshi |
| Paragon | 2,000 | 2,000 | TBD |
| Prime | 4,000 | 4,000 | TBD |
So a Basic-tier user gets 20 reads/second and 10 writes/second by default, with a small headroom for bursting (the Write bucket allows about 2 seconds of accumulated tokens; Basic is 1 second on the Read side). Live numbers are at GET /account/limits; per-endpoint costs are at GET /account/endpoint_costs.
The two practical surprises:
- There is no `Retry-After` header on a 429. There are also no `X-RateLimit-*` headers. The body is `{"error": "too many requests"}`. You're flying blind and have to rely on exponential backoff with jitter.
- Batched orders don't save tokens. Submitting 25 orders in one batched call costs 25 × 10 = 250 tokens, the same as 25 individual calls. Batching saves round-trip latency, not rate-limit budget.
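Since the server gives you no feedback headers, the robust approach is to enforce the budget client-side before a request ever leaves your process. A minimal token-bucket sketch with independent read and write buckets, mirroring the Basic-tier numbers above; the clock is injectable so the refill logic is testable:

```python
import time

class TokenBucket:
    """Client-side mirror of one server bucket: refill_rate tokens/s, bounded capacity."""
    def __init__(self, refill_rate: float, capacity: float, clock=time.monotonic):
        self.rate, self.capacity, self.clock = refill_rate, capacity, clock
        self.tokens, self.last = capacity, clock()

    def try_spend(self, cost: float = 10.0) -> bool:
        """Spend tokens for one request (default cost 10); False means throttle."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Basic tier: 200 read tokens/s with ~1 s of burst; 100 write tokens/s with ~2 s of burst.
read_bucket = TokenBucket(refill_rate=200, capacity=200)
write_bucket = TokenBucket(refill_rate=100, capacity=200)
```

Keep the two buckets separate, as the traps below explain; one global throttle wastes capacity.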
Rate · Step 2 — A production-grade paginator
Below is the kind of paginator you want for any "fetch all markets in a series" job: cursor-aware, exponential-backoff for 429s, jitter to avoid thundering-herd retries, distinguishes a 429 (rate limit) from a 503 (transient server error) so you can log them differently, and persists the cursor so a crash doesn't restart from zero.
import json
import os
import random
import time
from pathlib import Path
from typing import Iterator
import requests
CURSOR_FILE = Path(".kalshi_cursor.json")
def paginate(
session: requests.Session,
base_url: str,
path: str,
headers_fn,
*,
item_key: str,
job_id: str,
) -> Iterator[dict]:
"""Yield every item in a paginated Kalshi list, surviving 429s and crashes."""
state = json.loads(CURSOR_FILE.read_text()) if CURSOR_FILE.exists() else {}
cursor = state.get(job_id)
while True:
full_path = path + (f"&cursor={cursor}" if cursor else "")
backoff = 0.5
for attempt in range(8):
resp = session.get(
base_url + full_path,
# Sign the prefixed path (/trade-api/v2/...) with the query string stripped (see §2).
headers=headers_fn("GET", "/trade-api/v2" + full_path.split("?")[0]),
timeout=15,
)
if resp.status_code == 429:
# No Retry-After. Exponential backoff + jitter.
sleep = backoff + random.uniform(0, backoff)
time.sleep(min(sleep, 30))
backoff *= 2
continue
if 500 <= resp.status_code < 600:
# Treat 5xx as transient but separate from rate-limiting.
time.sleep(backoff)
backoff = min(backoff * 2, 30)
continue
resp.raise_for_status()
break
else:
raise RuntimeError(f"giving up on {full_path} after 8 retries")
data = resp.json()
for item in data.get(item_key, []):
yield item
cursor = data.get("cursor")
state[job_id] = cursor
CURSOR_FILE.write_text(json.dumps(state))
if not cursor:
return
Two implementation notes worth carrying around. First, the state[job_id] = cursor persistence write is not atomic — for a single-process script that's fine, but if you're running multiple paginators in parallel, switch to os.replace() on a tempfile. Second, the timeout of 15 seconds is deliberate: Kalshi sometimes takes 5–10 seconds to return on cold-cache series queries, and you don't want a 5-second timeout retrying into your own rate-limit ceiling.
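The atomic variant mentioned above is only a few lines: write to a tempfile in the same directory, then `os.replace()` it over the state file. The swap is atomic on POSIX and Windows as long as source and destination share a filesystem, which the same-directory tempfile guarantees:

```python
import json
import os
import tempfile
from pathlib import Path

def save_state_atomic(state: dict, target: Path) -> None:
    """Persist cursor state without ever exposing a half-written file."""
    fd, tmp_path = tempfile.mkstemp(dir=target.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
        os.replace(tmp_path, target)  # atomic swap; readers see old or new, never partial
    except BaseException:
        os.unlink(tmp_path)  # don't leave orphaned tempfiles behind on failure
        raise
```

Swap this in for the `CURSOR_FILE.write_text(...)` line when multiple paginators share the state file.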
Rate · Step 3 — Three real traps
Trap A — assuming one bucket. Read and Write share nothing. If you saturate reads (a backfill job), your writes (live trading) are unaffected, and vice versa. Conversely, building a single global semaphore against "Kalshi rate" wastes capacity. Track the two buckets separately.
Trap B — believing batching saves tokens. Programmers reach for batched endpoints expecting cheaper rate-limit cost. They're right about latency (one TCP round trip vs 25), wrong about tokens. If the goal is more orders per second under your write budget, batching only helps because of round-trip parallelism, not bucket math.
Trap C — wanting a progress bar. Cursor responses don't carry a total count, full stop. Some integrators have hacked together a count by walking the page list in parallel with a count probe — don't bother. Surface "fetched N so far" in your logs and accept that you don't know the size of the universe until you've enumerated it.
Rate · Step 4 — Where this goes next
The paginator above is correct and reusable, but it's also another piece of generic plumbing. The differentiating value of a Kalshi integration is whatever lives downstream of "got the data" — the trading logic, the surfacing in a UI, the cross-venue comparison.
Rate · Step 5 — Same job, two abstraction levels
# See Step 2 above — ~60 lines including cursor persistence,
# exponential backoff, jitter, 429 vs 503 differentiation, and
# the loop that consumes pages until cursor=null.
from parlay import Parlay
p = Parlay()
markets = p.kalshi.list_markets(series="KXHIGHNY", all_pages=True)
# cursor handling, 429 backoff, and tier-aware throttling
# are inside the SDK; the same call shape works for Polymarket too.
60-line production paginator → one method that handles cursor, 429s, and tier-aware backoff for you.
5. Real-time data via WebSocket
Most non-trivial Kalshi integrations end up needing the WebSocket. Polling /markets/{ticker}/orderbook every second is wasteful and gives you stale views of any market that moves quickly. The WebSocket gives you incremental orderbook deltas, fills, and lifecycle events with sub-second latency.
How the WebSocket works
The WebSocket URL is wss://api.elections.kalshi.com/trade-api/ws/v2. Authentication during the handshake uses HTTP headers — not query-string-encoded credentials. Sign the path /trade-api/ws/v2 with the same RSA-PSS algorithm as for REST, and attach the same three headers (KALSHI-ACCESS-KEY, KALSHI-ACCESS-TIMESTAMP, KALSHI-ACCESS-SIGNATURE) to the upgrade request. Old blog posts that show a query-string signature are outdated.
Once connected, you subscribe by sending JSON commands:
{
"id": 1,
"cmd": "subscribe",
"params": {
"channels": ["orderbook_delta"],
"market_tickers": ["KXHIGHNY-23DEC25-T75"]
}
}
Channels split into two groups. Public channels — ticker, trade, market_lifecycle_v2, multivariate_market_lifecycle, multivariate — broadcast across all users. Private channels — orderbook_delta, fill, market_positions, communications, order_group_updates, user_orders — return data scoped to your account; these require connection-level authentication (which you already did during the handshake). Note that orderbook_delta is technically a private channel because the response includes your own client_order_id, but the underlying market data is public — the privacy is in the per-user enrichment.
Heartbeats are server-driven: every 10 seconds, Kalshi sends a Ping with body heartbeat, and your client must respond with a Pong. Python's websockets library does this automatically; in other ecosystems (Go, raw browser WebSocket) you may need to wire it explicitly. If you stop responding to Pings, the server closes the connection.
A note on ticker_v2: this channel was retired on 2026-02-12. Use ticker. Also new in April 2026: orderbook_delta now supports a get_snapshot action — useful when you've reconnected and want to resync without dropping the existing subscription. Several timestamp fields gained *_ts_ms (millisecond) variants in 2026 Q2, while the older ts (second-resolution) variants are deprecated.
Minimal Python WebSocket client
import asyncio
import json
import time
from urllib.parse import urlparse
import websockets
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
import base64
KEY_ID = "..."
private_key = serialization.load_pem_private_key(
open("key.pem", "rb").read(), password=None
)
WS_URL = "wss://api.elections.kalshi.com/trade-api/ws/v2"
def auth_headers():
ts = str(int(time.time() * 1000))
path = urlparse(WS_URL).path
msg = (ts + "GET" + path).encode()
sig = private_key.sign(
msg,
padding.PSS(
mgf=padding.MGF1(hashes.SHA256()),
salt_length=padding.PSS.DIGEST_LENGTH,
),
hashes.SHA256(),
)
return [
("KALSHI-ACCESS-KEY", KEY_ID),
("KALSHI-ACCESS-TIMESTAMP", ts),
("KALSHI-ACCESS-SIGNATURE", base64.b64encode(sig).decode()),
]
async def main():
# websockets >= 14 renamed extra_headers to additional_headers; use whichever your version expects.
async with websockets.connect(WS_URL, extra_headers=auth_headers()) as ws:
await ws.send(json.dumps({
"id": 1,
"cmd": "subscribe",
"params": {
"channels": ["orderbook_delta"],
"market_tickers": ["KXHIGHNY-23DEC25-T75"],
}
}))
async for raw in ws:
msg = json.loads(raw)
if msg.get("type") == "orderbook_snapshot":
print("snapshot:", msg["msg"]["yes_dollars"][-1]) # best YES bid
elif msg.get("type") == "orderbook_delta":
print("delta:", msg["msg"])
asyncio.run(main())
The library handles Pong responses for you. The first message after subscribe is an orderbook_snapshot with the full book; every subsequent message is an orderbook_delta describing a single change.
WebSocket · Three real traps
Trap A — confusing connection auth with channel auth. A private channel like orderbook_delta requires the connection to be authenticated, but you authenticate the connection during the handshake — not by re-signing the subscribe message. This catches people who saw a WS error code 9: Authentication required after subscribing and started looking for a way to attach auth to the JSON payload. There isn't one; fix the handshake.
Trap B — sequence gaps after a disconnect. The orderbook_delta stream uses sequence numbers. If you reconnect after even a brief network blip, you may have missed deltas. The new get_snapshot action (added 2026-04-20) is the clean fix — request a fresh snapshot and resume processing deltas from there. Don't try to manually replay missed deltas; they aren't replayable.
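A reconnect routine built on `get_snapshot` might look like the sketch below. The command payload shape is my assumption, modeled by analogy on the `subscribe` command shown earlier; confirm the exact field names against the WebSocket docs before relying on it:

```python
import json

def has_gap(last_seq: int, incoming_seq: int) -> bool:
    """True when a delta was missed and a fresh snapshot is required."""
    return incoming_seq != last_seq + 1

def build_resync_cmd(cmd_id: int, tickers: list[str]) -> str:
    """Build a get_snapshot command for resyncing after a sequence gap.
    Payload shape is assumed, not confirmed against the official schema."""
    return json.dumps({
        "id": cmd_id,
        "cmd": "get_snapshot",
        "params": {
            "channels": ["orderbook_delta"],
            "market_tickers": tickers,
        },
    })

# In the receive loop: on has_gap(...), send build_resync_cmd(...) over the socket,
# discard buffered deltas, and resume from the fresh snapshot's sequence number.
```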
Trap C — the connection cap. Maximum WebSocket connections per account is tier-based; default is 200. That sounds generous, but if you're running an arbitrage scanner that opens a connection per market, 200 is small. Plan for multiplexing many subscriptions onto fewer connections — subscribe accepts arrays of market_tickers for a reason.
Same stream, fewer lines
# See the Python example above — ~50 lines for a single
# subscription, no reconnect, no gap recovery, no multiplexing.
from parlay import Parlay
p = Parlay()
async for msg in p.kalshi.subscribe_orderbook(["KXHIGHNY-23DEC25-T75"]):
print(msg)
50-line raw client → 3 lines, with reconnect and gap-recovery built in.
6. Common errors and how to fix them
These are the failures every Kalshi integration runs into eventually. Grouped by category, each entry is the symptom you'll see, the root cause, and the smallest fix.
Authentication failures
Rate-limit failures
Order-placement failures (new error codes, 2026-01-26)
WebSocket failures
Data-migration failures (the 2026-Q1 trap)
7. Beyond Kalshi: querying across markets
If your application only ever talks to Kalshi, the official kalshi-python-sync and kalshi-typescript SDKs already do everything in this guide and they're well-maintained. There's no point reimplementing them, and there's no real differentiation in writing your own RSA-PSS wrapper.
The case for a different approach starts when your application needs to span multiple prediction-market venues. Kalshi is one of four serious sources of binary-event prices, alongside Polymarket (on-chain, EIP-712 signing, Polygon-based), Manifold (play-money, REST-only, OAuth), and Opinion.trade (UK regulator, REST + GraphQL). Each one has its own auth scheme, its own pagination semantics, its own rate-limit model, its own market-identifier format. Writing four integrations and a unifying layer on top is real work — easily two engineering quarters once you account for testing, error handling, and the long tail of edge cases.
That's the layer Parlay's MCP server fills. The same list_markets, get_orderbook, and subscribe_orderbook calls work across Kalshi, Polymarket, Manifold, and Opinion.trade, with normalized market identifiers and a single authentication surface. The Kalshi-specific affordances — RSA-PSS signing, fixed-point fields, the bids-only orderbook quirk — are still there for you to reach into, but you don't have to write them four times.
If you're building a single-venue Kalshi tool, you don't need Parlay; the official SDK is the right answer. If you're building anything that asks "what does this market price across venues?" or "where is the arbitrage opportunity right now?", Parlay is the layer you'd otherwise have to build yourself.
8. Frequently asked questions
9. Next steps
If you're building on Kalshi specifically, the next thing worth your time is a careful read of the official rate-limits page and the fixed-point migration page — both contain edge cases this guide didn't cover. If you're building a multi-venue tool, the Polymarket guide on this site walks through the equivalent pieces for Polymarket's CLOB API.