API Documentation

Integrate financial sentiment analysis into your applications with the Discourses API. Powered by academic methodology from Paolucci et al. 2024.

Base URL
https://discourses.io/api/v1
Python SDK
pip install discourses

Introduction

The Discourses API provides institutional-grade financial sentiment analysis through a simple REST interface. Our proprietary model analyzes text from news articles and social media to deliver actionable sentiment signals.

Academic Foundation

Discourses implements the methodology of Paolucci et al. (2024). Its scoring engine, aurelius, produces a time-variant measure of sentiment that contributes to return predictability in the cross-section of stock returns.

Key Features

  • Era-calibrated lexicons — Time-variant sentiment scoring across 4 distinct market eras
  • Sub-50ms latency — Real-time analysis for time-sensitive applications
  • Compare Eras — Analyze semantic drift by comparing text across different time periods
  • Batch processing — Analyze up to 100 texts with single or multi-era comparison

Authentication

All API requests require authentication using an API key. Generate keys from your Dashboard.

Header Authentication

Include your API key in the Authorization header using Bearer token format:

Authorization: Bearer ec_live_your_api_key_here
Keep your API keys secure

Never expose API keys in client-side code or public repositories. Use environment variables and server-side requests only.
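One way to follow this advice is to resolve the key from an environment variable at process startup. A minimal sketch (the variable name DISCOURSES_API_KEY is an illustrative choice, not one the API mandates):

```python
import os

def load_api_key(env_var="DISCOURSES_API_KEY"):
    """Read the API key from the environment so it never appears in source control."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before starting your server process")
    return key

# Demonstration only -- in production the variable comes from your deployment secrets.
os.environ["DISCOURSES_API_KEY"] = "ec_live_example_key"
print(load_api_key())
```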

Quick Start

Make your first era-calibrated sentiment analysis request in seconds:

Python

import discourses

client = discourses.Discourses(api_key="YOUR_API_KEY")
result = client.analyze(
    "Diamond hands! HODL to the moon! 🚀🚀🚀",
    era="meme"
)

print(result.label)       # very_bullish
print(result.confidence)  # 0.78

JavaScript

const response = await fetch('https://discourses.io/api/v1/analyze/era', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    text: 'Diamond hands! HODL to the moon! 🚀🚀🚀',
    era: 'meme'
  })
});

const result = await response.json();
console.log(`Sentiment: ${result.classification.label}`);

cURL

curl -X POST "https://discourses.io/api/v1/analyze/era" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Diamond hands! HODL to the moon! 🚀🚀🚀",
    "era": "meme"
  }'

Response

200 OK
{
  "scores": {
    "outlook": 0.9999,
    "bullish": 0.6394,
    "bearish": 0.3108,
    "neutral": 0.0497,
    "confusion": 0.00
  },
  "classification": {
    "label": "very_bullish",
    "confidence": 0.7768
  },
  "analysis": {
    "word_count": 6,
    "matched_count": 3,
    "negation_count": 0
  },
  "era": {
    "name": "meme",
    "description": "WSB/Reddit meme stock era (2019-2023)",
    "lexicon_size": 9822
  },
  "meta": {
    "model": "aurelius-meme",
    "processing_time_ms": 3
  }
}

Rate Limits

Rate limits vary by subscription tier. Monitor your usage via response headers.

Tier          Requests/Day   Requests/Min   Max Text    Batch Size
Free          100            10             5K chars    1
Builder       5,000          60             10K chars   10
Professional  50,000         300            50K chars   100
Enterprise    Unlimited      Custom         Unlimited   1,000

Rate Limit Headers

Monitor these response headers to track your usage:

  • X-RateLimit-Limit — Maximum requests per minute
  • X-RateLimit-Remaining — Requests remaining in current window
  • X-RateLimit-Reset — Unix timestamp when the limit resets
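
A small helper that turns these headers into a wait time before retrying (a sketch, not part of the SDK; it assumes the headers arrive as a plain string-to-string mapping):

```python
import time

def seconds_until_reset(headers, now=None):
    """Return how long to pause before retrying, based on the rate-limit headers."""
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    if remaining > 0:
        return 0.0  # budget left in the current window; no need to wait
    reset = int(headers.get("X-RateLimit-Reset", "0"))
    now = time.time() if now is None else now
    return max(0.0, float(reset) - now)

# A response that has exhausted the window, resetting 30 s from "now":
wait = seconds_until_reset(
    {"X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "1700000030"},
    now=1700000000,
)
print(wait)  # 30.0
```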

Era Analysis

POST /v1/analyze/era All Tiers

Analyze sentiment using aurelius era-specific lexicons. Each era captures the distinct financial vocabulary and sentiment patterns of its time period, from early social media through the meme stock revolution.

aurelius scoring engine

Named after Marcus Aurelius, embodying measured, rational analysis of market sentiment. Features era-calibrated lexicons, intelligent negation handling, n-gram pattern matching, intensity modifiers, and a unique confusion score that measures sentiment whiplash from negation.

Request Body
  • text (string, required) — The text to analyze. Max length varies by tier.
  • era (string, optional) — Era model (primitive <2016, ramp 2016-2019, meme 2019-2023, present >2023). Default: present.
  • options.include_tokens (boolean, optional) — Include matched token breakdown. Default: false.

Python

import discourses

client = discourses.Discourses(api_key="YOUR_API_KEY")
result = client.analyze(
    "Diamond hands! HODL to the moon! 🚀🚀🚀",
    era="meme"
)

print(result.label)       # very_bullish
print(result.confidence)  # 0.78
print(result.outlook)     # 0.9999

JavaScript

const response = await fetch('https://discourses.io/api/v1/analyze/era', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    text: 'Diamond hands! HODL to the moon! 🚀🚀🚀',
    era: 'meme',
    options: { include_tokens: true }
  })
});

const { scores, classification } = await response.json();
console.log(`Outlook: ${scores.outlook}, Label: ${classification.label}`);

cURL

curl -X POST "https://discourses.io/api/v1/analyze/era" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Diamond hands! HODL to the moon! 🚀🚀🚀",
    "era": "meme",
    "options": {"include_tokens": true}
  }'

Response

200 OK
{
  "analysis": {
    "matched_count": 3,
    "negation_count": 0,
    "word_count": 6
  },
  "classification": {
    "confidence": 0.7768,      // |outlook| × coverage × (1 - confusion×0.5)
    "label": "very_bullish"  // Based on raw.outlook: >1 = very_bullish
  },
  "era": {
    "date_range": "2019-2023",
    "description": "WSB/Reddit meme stock era (2019-2023)",
    "lexicon_size": 9822,
    "name": "meme"
  },
  "meta": {
    "model": "aurelius-meme",
    "processing_time_ms": 0,
    "text_length": 36
  },
  "raw": {
    "bearish": -1.3531,       // Sum of bearish token scores
    "bullish": 5.3201,        // Sum of bullish token scores
    "negation_flips": 0.0,     // Magnitude of sentiment flipped
    "outlook": 4.7605         // Raw sum (determines label thresholds)
  },
  "scores": {
    "bearish": 0.3108,        // Bearish proportion (0 to 1)
    "bullish": 0.6394,        // Bullish proportion (0 to 1)
    "confusion": 0.0,         // Negation whiplash (0 to 1)
    "neutral": 0.0497,        // Neutral proportion (0 to 1)
    "outlook": 0.9999         // tanh(raw.outlook) → sentiment direction
  },
  "tokens": [
    {"negated": false, "position": 1, "score": -1.3531, "token": "hands", "type": "bearish"},
    {"negated": false, "position": 2, "score": 3.2242, "token": "HODL", "type": "bullish"},
    {"negated": false, "position": 5, "score": 2.096, "token": "moon", "type": "bullish"}
  ]
}

Score Types

Normalized Scores
  • outlook (-1 to +1) — Overall sentiment direction, normalized via hyperbolic tangent: tanh(raw.outlook)
  • bullish (0 to 1) — Proportion of text conveying bullish/positive sentiment
  • bearish (0 to 1) — Proportion of text conveying bearish/negative sentiment
  • neutral (0 to 1) — Proportion of text that is sentiment-neutral
  • confusion (0 to 1) — Negation whiplash, i.e. how much sentiment was flipped by negation. High confusion reduces confidence.

Raw Component Scores

  • outlook (-∞ to +∞) — Raw sum of all sentiment (bullish + bearish). Determines classification label thresholds.
  • bullish (0 to ∞) — Sum of all bullish/positive token and n-gram scores
  • bearish (-∞ to 0) — Sum of all bearish/negative token and n-gram scores
  • negation_flips (0 to ∞) — Magnitude of sentiment reversed by negation (used to calculate confusion)

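The era-analysis response annotates confidence as |outlook| × coverage × (1 - confusion×0.5). A sketch of that arithmetic (the exact definition of coverage is not documented here, so treating it as a supplied 0-1 fraction is an assumption):

```python
import math

def classification_confidence(raw_outlook, coverage, confusion):
    """confidence = |tanh(raw.outlook)| * coverage * (1 - 0.5 * confusion).

    `coverage` is assumed to be the fraction of the text matched by the lexicon.
    """
    outlook = math.tanh(raw_outlook)  # the normalized scores.outlook value
    return abs(outlook) * coverage * (1 - 0.5 * confusion)

# Fully covered, unambiguous text keeps almost all of |outlook|:
print(round(classification_confidence(4.76, 1.0, 0.0), 3))  # 1.0
```

High confusion halves the contribution at most, so even heavily negated text retains some confidence when its direction is clear.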
Era Selection Guide

Match the era to your text's origin: use meme for r/wallstreetbets content, primitive for pre-2016 filings, and present for current analysis with the most comprehensive lexicon (11,195 tokens).

Available Eras

Query GET /v1/eras to retrieve detailed era metadata:

Era        Period        Lexicon         Use Case
Primitive  < 2016        5,557 tokens    Historical filings, early Twitter, pre-social sentiment
Ramp       2016 — 2019   7,751 tokens    Fintech emergence, crypto adoption, algorithmic trading era
Meme       2019 — 2023   9,822 tokens    WSB, Reddit, meme stocks, retail revolution vernacular
Present    > 2023        11,195 tokens   Current analysis with aggregate of all eras
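
For historical pipelines you may want to pick the era from a document's timestamp rather than hard-coding it. A minimal mapping from the period boundaries above (how boundary years are assigned is not specified by the table, so rolling them into the later era is an assumption):

```python
from datetime import date

def era_for_date(d: date) -> str:
    """Map a document's origin date to the era covering that period."""
    if d.year < 2016:
        return "primitive"
    if d.year < 2019:
        return "ramp"
    if d.year < 2023:
        return "meme"
    return "present"

print(era_for_date(date(2021, 1, 28)))  # meme
```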

Compare Eras

POST /v1/analyze/compare-eras All Tiers

Analyze the same text across multiple eras to understand how financial language and sentiment evolved over time. Perfect for backtesting, historical analysis, and understanding semantic drift in financial terminology.

Semantic Drift Detection

See how the same phrase would be interpreted across different market regimes. Ideal for understanding how terms like "disruption", "moon", or "volatile" changed meaning over time.

Request Body
  • text (string, required) — The text to analyze across eras.
  • eras (array, optional) — Array of eras to compare: ["primitive", "ramp", "meme", "present"]. Default: all eras.
  • options.include_tokens (boolean, optional) — Include matched token breakdown for each era. Default: false.

Python

import discourses

client = discourses.Discourses(api_key="YOUR_API_KEY")
result = client.compare_eras(
    text="This stock is going to the moon! Diamond hands! 🚀",
    eras=["primitive", "meme", "present"]
)

# View per-era results
for era, data in result.results.items():
    print(f"{era}: {data['classification']['label']}")

# Check semantic drift
print(result.drift['direction'])   # positive_shift
print(result.drift['magnitude'])   # 0.3065

JavaScript

const response = await fetch('https://discourses.io/api/v1/analyze/compare-eras', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    text: 'This stock is going to the moon! Diamond hands! 🚀',
    eras: ['primitive', 'meme', 'present']
  })
});

const { results, drift } = await response.json();
console.log(`Drift magnitude: ${drift.magnitude}`);

cURL

curl -X POST "https://discourses.io/api/v1/analyze/compare-eras" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "This stock is going to the moon! Diamond hands! 🚀",
    "eras": ["primitive", "meme", "present"]
  }'

Response

200 OK
{
  "drift": {
    "direction": "positive_shift",
    "magnitude": 0.3065,
    "min_era": "primitive",
    "peak_era": "present"
  },
  "meta": {
    "eras_compared": 3,
    "model": "aurelius-multi",
    "processing_time_ms": 4
  },
  "results": {
    "meme": {
      "analysis": {"matched_count": 3, "negation_count": 0, "word_count": 9},
      "classification": {"confidence": 0.7716, "label": "very_bullish"},
      "scores": {"bearish": 0.2312, "bullish": 0.4528, "confusion": 0.0, "neutral": 0.316, "outlook": 0.9933}
    },
    "present": {
      "analysis": {"matched_count": 3, "negation_count": 0, "word_count": 9},
      "classification": {"confidence": 0.7751, "label": "very_bullish"},
      "scores": {"bearish": 0.3041, "bullish": 0.5152, "confusion": 0.0, "neutral": 0.1807, "outlook": 0.9977}
    },
    "primitive": {
      "analysis": {"matched_count": 2, "negation_count": 0, "word_count": 9},
      "classification": {"confidence": 0.4369, "label": "bullish"},
      "scores": {"bearish": 0.3374, "bullish": 0.4, "confusion": 0.0, "neutral": 0.2626, "outlook": 0.6912}
    }
  }
}

Drift Analysis

Drift Object Fields
  • direction (string) — Overall drift direction, one of positive_shift, negative_shift, or stable
  • magnitude (number) — Absolute difference between peak and minimum era scores (0 to 2)
  • peak_era (string) — Era with highest sentiment score
  • min_era (string) — Era with lowest sentiment score
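
The drift object can be reproduced from the per-era outlook scores. A sketch (the stable threshold and the chronological-ordering rule for direction are assumptions the API does not document):

```python
ERA_ORDER = ["primitive", "ramp", "meme", "present"]

def compute_drift(era_outlooks, stable_threshold=0.1):
    """Build a drift object from a mapping of era name -> scores.outlook."""
    peak_era = max(era_outlooks, key=era_outlooks.get)
    min_era = min(era_outlooks, key=era_outlooks.get)
    magnitude = era_outlooks[peak_era] - era_outlooks[min_era]
    if magnitude < stable_threshold:
        direction = "stable"
    elif ERA_ORDER.index(peak_era) > ERA_ORDER.index(min_era):
        direction = "positive_shift"  # sentiment rises in later eras
    else:
        direction = "negative_shift"
    return {
        "direction": direction,
        "magnitude": round(magnitude, 4),
        "peak_era": peak_era,
        "min_era": min_era,
    }

# The per-era outlooks from the compare-eras response reproduce its drift object:
print(compute_drift({"primitive": 0.6912, "meme": 0.9933, "present": 0.9977}))
```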

Batch Analysis

POST /v1/analyze/batch All Tiers

Analyze multiple texts in a single request using era-calibrated sentiment analysis. Supports both single-era and multi-era comparison for each text. Ideal for backtesting, news feed processing, and historical analysis.

Request Body
  • texts (array, required) — Array of text objects with id and text fields. Max size varies by tier.
  • era (string, optional) — Single era for all texts: primitive, ramp, meme, present. Default: present.
  • compare_eras (boolean, optional) — If true, analyze each text across all eras (like Compare Eras). Default: false.
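
When a corpus exceeds your tier's batch size, split it client-side before sending. A small helper (the batch sizes come from the limits table above; the helper itself is illustrative, not part of the SDK):

```python
def chunk_texts(texts, batch_size):
    """Split a list of {"id", "text"} objects into batches the tier will accept."""
    return [texts[i : i + batch_size] for i in range(0, len(texts), batch_size)]

posts = [{"id": f"post_{i}", "text": "sample text"} for i in range(25)]
batches = chunk_texts(posts, batch_size=10)  # Builder tier allows 10 texts per call
print([len(b) for b in batches])  # [10, 10, 5]
```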

Single Era Batch

Analyze multiple texts using a single era model:

Python

import discourses

client = discourses.Discourses(api_key="YOUR_API_KEY")

texts = [
    {"id": "post_1", "text": "Diamond hands! This is going to the moon 🚀"},
    {"id": "post_2", "text": "Bearish on this one, expecting a pullback"},
    {"id": "post_3", "text": "HODL gang! We're not selling!"}
]

result = client.batch(texts=texts, era="meme")

for post_id, data in result.results.items():
    label = data['classification']['label']
    print(f"{post_id}: {label}")

JavaScript

const response = await fetch('https://discourses.io/api/v1/analyze/batch', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    texts: [
      { id: 'post_1', text: 'Diamond hands! This is going to the moon 🚀' },
      { id: 'post_2', text: 'Bearish on this one, expecting a pullback' },
      { id: 'post_3', text: 'HODL gang! We\'re not selling!' }
    ],
    era: 'meme'
  })
});

const { results, meta } = await response.json();
console.log(`Processed ${meta.texts_processed} texts in ${meta.processing_time_ms}ms`);

cURL

curl -X POST "https://discourses.io/api/v1/analyze/batch" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "texts": [
      {"id": "post_1", "text": "Diamond hands! This is going to the moon 🚀"},
      {"id": "post_2", "text": "Bearish on this one, expecting a pullback"},
      {"id": "post_3", "text": "HODL gang! We'\''re not selling!"}
    ],
    "era": "meme"
  }'

Response
200 OK
{
  "meta": {
    "era": "meme",
    "processing_time_ms": 2,
    "texts_failed": 0,
    "texts_processed": 3
  },
  "results": {
    "post_1": {
      "classification": {"confidence": 0.4258, "label": "bullish"},
      "scores": {"bearish": 0.2528, "bullish": 0.3438, "confusion": 0.0, "neutral": 0.4034, "outlook": 0.6735}
    },
    "post_2": {
      "classification": {"confidence": 0.7266, "label": "very_bearish"},
      "scores": {"bearish": 0.3497, "bullish": 0.0497, "confusion": 0.0, "neutral": 0.6006, "outlook": -0.9353}
    },
    "post_3": {
      "classification": {"confidence": 0.5894, "label": "very_bullish"},
      "scores": {"bearish": 0.2636, "bullish": 0.5956, "confusion": 0.4788, "neutral": 0.1408, "outlook": 0.9975}
    }
  }
}

Multi-Era Batch Comparison

Set compare_eras: true to analyze each text across all eras with drift detection:

Python

import requests

response = requests.post(
    "https://discourses.io/api/v1/analyze/batch",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "texts": [
            {"id": "headline_2015", "text": "Markets volatile amid uncertainty"},
            {"id": "headline_2021", "text": "Retail investors embrace volatility"}
        ],
        "compare_eras": True
    }
)

result = response.json()
for text_id, eras in result['results'].items():
    drift = eras['drift']
    print(f"{text_id}: drift {drift['direction']} (magnitude: {drift['magnitude']:.2f})")
    for era in ['primitive', 'meme', 'present']:
        print(f"  {era}: {eras[era]['label']} ({eras[era]['outlook']:.2f})")

JavaScript

const response = await fetch('https://discourses.io/api/v1/analyze/batch', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    texts: [
      { id: 'headline_2015', text: 'Markets volatile amid uncertainty' },
      { id: 'headline_2021', text: 'Retail investors embrace volatility' }
    ],
    compare_eras: true
  })
});

const { results, meta } = await response.json();

// Check semantic drift for each text
Object.entries(results).forEach(([id, eras]) => {
  console.log(`${id}: ${eras.drift.direction} (${eras.drift.magnitude})`);
});

cURL

curl -X POST "https://discourses.io/api/v1/analyze/batch" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "texts": [
      {"id": "headline_2015", "text": "Markets volatile amid uncertainty"},
      {"id": "headline_2021", "text": "Retail investors embrace volatility"}
    ],
    "compare_eras": true
  }'

Response
200 OK
{
  "results": {
    "headline_2015": {
      "primitive": {"outlook": -0.62, "label": "bearish"},
      "meme": {"outlook": -0.18, "label": "neutral"},
      "present": {"outlook": -0.35, "label": "bearish"},
      "drift": {"magnitude": 0.44, "direction": "positive_shift"}
    },
    "headline_2021": {
      "primitive": {"outlook": -0.45, "label": "bearish"},
      "meme": {"outlook": 0.52, "label": "bullish"},
      "present": {"outlook": 0.28, "label": "bullish"},
      "drift": {"magnitude": 0.97, "direction": "positive_shift"}
    }
  },
  "meta": {"texts_processed": 2, "eras_per_text": 4, "processing_time_ms": 28}
}
Batch Size Limits

Free: 1 text | Builder: 10 texts | Professional: 100 texts | Enterprise: 1,000 texts. Multi-era comparison counts as 4× against your rate limit (one per era).

Methodology

aurelius employs a time-variant sentiment modeling approach that fundamentally differs from static dictionary methods and large language models. Our methodology is grounded in academic research and calibrated across distinct market eras.

Academic Foundation

This methodology implements the approach from Paolucci et al. 2024, which demonstrates that time-variant sentiment measures contribute significantly to return predictability in the cross-section of stocks.

Core Process & Mathematical Framework

Unlike LLMs that rely on static embeddings trained on fixed corpora, aurelius measures each token's sentiment contribution at each point in time. The process follows three key steps, each grounded in rigorous statistical methodology:

1

Aggregate Token Sentiment

For each word w in the financial lexicon at time period t, we compute the average sentiment across all documents containing that token. This creates a time-indexed sentiment profile for every word in our vocabulary.

\[ \bar{S}_{w,t} = \frac{1}{n_{w,t}} \sum_{d} S_d \cdot \mathbf{1}_{\{w \in d\}} \]
\(\bar{S}_{w,t}\) Mean sentiment for word w at time t
\(n_{w,t}\) Number of documents containing word w
\(S_d\) Sentiment score of document d
\(\mathbf{1}_{\{w \in d\}}\) Equals 1 if word appears in document, 0 otherwise

The indicator function ensures we only aggregate sentiment from documents where the token actually appears, preventing dilution from irrelevant documents. The time subscript t represents the era-specific calibration window, capturing how the token's sentiment charge evolves across market regimes.
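
As a toy illustration of this aggregation (three hand-made documents standing in for the real corpus of millions):

```python
def mean_token_sentiment(corpus, word):
    """Average document sentiment over documents that contain `word`.

    Implements S_bar_{w,t} = (1/n_{w,t}) * sum_d S_d * 1{w in d}
    for a single calibration window t.
    """
    scores = [s_d for tokens, s_d in corpus if word in tokens]
    if not scores:
        return None  # token absent from this era's corpus
    return sum(scores) / len(scores)

corpus = [
    ({"moon", "hodl"}, 0.8),  # (token set, document sentiment S_d)
    ({"pullback"}, -0.6),
    ({"moon"}, 0.4),
]
print(round(mean_token_sentiment(corpus, "moon"), 2))  # 0.6
```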

2

Apply Central Limit Theorem

With sufficient observations (our corpus contains millions of documents), the Central Limit Theorem guarantees that the sample mean converges to a normal distribution, regardless of the underlying sentiment distribution.

\[ \bar{S}_{w,t} \;\xrightarrow{\;d\;}\; \mathcal{N}\!\left(\mu_{w,t},\; \frac{\sigma^2_{w,t}}{n_{w,t}}\right) \]
\(\mathcal{N}\) Normal (Gaussian) distribution
\(\mu_{w,t}\) True average sentiment for word w in population
\(\sigma^2_{w,t}\) Variance of sentiment scores for word w
\(\xrightarrow{d}\) Converges in distribution as sample size grows

This convergence is critical because it allows us to make probabilistic statements about a token's sentiment without assuming any particular distribution of document-level sentiments. The variance term \(\sigma^2_{w,t}/n_{w,t}\) decreases with more observations, yielding tighter confidence bounds for high-frequency tokens.

3

Derive Signal Probabilities

The final step transforms the normalized sample mean into a probability that the token carries positive (or negative) sentiment. This is the core innovation that produces interpretable, statistically grounded sentiment signals.

\[ P(w^{+} | t) = \Phi\!\left(\frac{\bar{S}_{w,t}}{\sigma_{w,t}\, /\, \sqrt{n_{w,t}}}\right) \]
\(P(w^{+} | t)\) Probability that word w is positive at time t
\(\Phi\) Standard normal cumulative distribution function
\(\sigma / \sqrt{n}\) Standard error of the mean (uncertainty measure)

The argument to \(\Phi\) is the familiar z-score (or t-statistic), measuring how many standard errors the sample mean is from zero. By passing this through the normal CDF, we obtain a probability between 0 and 1:

  • P ≈ 1.0 — Token is strongly bullish in this era
  • P ≈ 0.5 — Token is sentiment-neutral
  • P ≈ 0.0 — Token is strongly bearish in this era
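
This transformation can be transcribed directly using the standard-library error function to evaluate Φ (a sketch of the formula, not SDK functionality):

```python
import math

def positive_probability(mean_sentiment, std_dev, n_docs):
    """P(w+ | t) = Phi(mean / (std / sqrt(n))): the z-score through the normal CDF."""
    z = mean_sentiment / (std_dev / math.sqrt(n_docs))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(positive_probability(0.0, 1.0, 100))            # 0.5 -> sentiment-neutral
print(round(positive_probability(0.3, 1.0, 100), 4))  # 0.9987 -> strongly bullish
```

Note how the same mean sentiment becomes a stronger signal as n grows: the standard error shrinks, the z-score rises, and P moves toward 0 or 1.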

Document-Level Aggregation

Once we have probability scores for each token, the document-level sentiment is computed by aggregating across all tokens in the text:

\[ S_{\text{doc}} = \frac{1}{|W_d|} \sum_{w \in W_d} \bigl( 2 \cdot P(w^{+} | t) - 1 \bigr) \]
\(S_{\text{doc}}\) Final document sentiment score (ranges from −1 to +1)
\(W_d\) Set of all scored words in the document
\(2P - 1\) Transforms probability to sentiment scale

The transformation \(2P - 1\) maps probability scores to the standard sentiment scale: a token with P = 1 contributes +1 (bullish), P = 0.5 contributes 0 (neutral), and P = 0 contributes −1 (bearish). The average across all scored tokens yields the final document sentiment returned by the API.
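
Combining the probability and aggregation steps for a whole document (a direct transcription of the formulas above, not the production scorer, which additionally handles negation, n-grams, and intensity modifiers):

```python
def document_sentiment(token_probs):
    """S_doc = mean of (2*P(w+|t) - 1) over scored tokens; result lies in [-1, 1]."""
    if not token_probs:
        return 0.0  # nothing matched the lexicon
    return sum(2.0 * p - 1.0 for p in token_probs) / len(token_probs)

print(document_sentiment([1.0, 0.5, 0.0]))        # 0.0 -> perfectly mixed
print(round(document_sentiment([0.9, 0.8]), 2))   # 0.7 -> bullish
```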

Era-Based Calibration

Financial language undergoes significant regime changes over time. A word like "disruption" conveyed crisis in 2008 but signaled innovation by 2020. Our proprietary research identifies four distinct eras, each representing a statistically significant regime change warranting model re-calibration:

Era Period Characteristics Calibration Focus
Primitive < 2016 Foundational financial sentiment patterns. Early corpus establishing baseline lexicon popularity and term frequency distributions. Traditional financial terminology, pre-social media discourse, institutional language patterns
Ramp 2016 — 2019 Rapid growth in corpus containing financial market information. Expansion of digital financial discourse. Fintech emergence, crypto terminology adoption, algorithmic trading language
Meme 2019 — 2023 The golden era of r/wallstreetbets and the GME short squeeze. Unprecedented social attention and retail flow into markets. Retail investor vernacular, social media sentiment markers, meme stock terminology
Present > 2023 Post-meme era dilution of social tokens. Normalization and institutional adaptation to new paradigms. AI/LLM discourse integration, institutional social adoption, sentiment dilution patterns

Regime Change Detection

Era boundaries are not arbitrary dates—they are determined through rigorous statistical analysis of our proprietary corpus. We identify regime changes when:

  • Token sentiment drift — The rolling sentiment coefficient for high-frequency tokens diverges significantly from historical norms (p < 0.01)
  • Vocabulary expansion — New terminology achieves critical mass (>10% of daily document frequency) requiring model incorporation
  • Cross-sectional predictability shift — Out-of-sample return predictability degrades by >15% under the existing model
  • Structural break tests — Chow tests and CUSUM procedures indicate parameter instability in the sentiment-return relationship
Era Selection

When calling the API, you can specify an era parameter to analyze text as if it were written in that time period. If omitted, aurelius automatically selects the appropriate era based on the current date. For historical analysis, always specify the era matching your document's origin date.

Why Time-Variance Matters

Static sentiment models assume language meaning is constant—a fundamentally flawed assumption in financial markets. Consider these examples of semantic drift:

"volatile"
Pre-2016 Strong Negative

Associated with risk, uncertainty, portfolio danger

2019-2023 Neutral/Positive

Trading opportunity, retail enthusiasm, "volatility is the game"

"moon"
Pre-2016 Not Financial

Rarely appeared in financial corpus

2019-2023 Strong Positive

"To the moon" — extreme bullish sentiment signal

By calibrating to each era, aurelius captures these semantic shifts and produces sentiment scores that accurately reflect market participant psychology at the time of document creation.

Scoring Guide

aurelius returns a normalized outlook score from -1.0 (very bearish) to +1.0 (very bullish). Classification labels are assigned from the raw outlook sum (raw.outlook):

  • raw.outlook < -1 — Very Bearish
  • -1 ≤ raw.outlook < 0 — Bearish
  • raw.outlook = 0 — Neutral
  • 0 < raw.outlook ≤ 1 — Bullish
  • raw.outlook > 1 — Very Bullish

Additional Metrics

  • magnitude (0-1) — Strength of the sentiment signal regardless of direction
  • confidence (0-1) — Model's confidence in the prediction

Error Codes

The API uses standard HTTP status codes. Here are the most common errors:

Status Code Description
400 invalid_request Missing or invalid request parameters
400 text_too_long Text exceeds your tier's character limit
401 unauthorized Invalid or missing API key
403 upgrade_required Feature requires a higher subscription tier
429 rate_limit_exceeded Too many requests. Retry after X-RateLimit-Reset
500 internal_error Server error. Retry with exponential backoff
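
For 429 and 500 responses, retry with exponential backoff. A minimal schedule generator (the base delay and cap are illustrative choices, not values the API prescribes):

```python
def backoff_delays(max_retries=5, base=0.5, cap=30.0):
    """Delays (in seconds) before each retry: base * 2^attempt, clamped at `cap`."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]

print(backoff_delays(6))  # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
```

For 429 specifically, prefer waiting until the X-RateLimit-Reset timestamp when it is later than the next scheduled delay.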