

Why this section exists

Podonos looks simple from the outside: upload audio, get a report. Behind that interface, every evaluation runs through layers of sourcing, screening, quality control, bias minimization, and statistical analysis that we have engineered over years. This section documents what we actually do — so you can trust the numbers and design better experiments on top of them.

The pillars

Evaluators

How we source, qualify, and filter the humans who rate your audio.

In-Session Quality Control

Acoustic environment, fatigue, attention, reliability, and automatic audio sanity checks.

Bias Minimization

Order shuffling, anchoring, and loudness normalization to keep results clean.

Evaluation Design & Review

Science-backed templates and human review of your custom evaluations.

Recommendations

Best practices for audio length, instructions, and anchors.

How a Podonos evaluation flows

1. Sourcing & qualification: Evaluators come from vetted partner pools. Each candidate is pre-screened for hearing capability, language proficiency, and instruction-following ability.
2. Per-session checks: Every time an evaluator joins a session, we measure their acoustic environment and re-verify their setup. No headphones? No quiet room? They are rejected before they rate a single file.
3. Smart assignment: Our algorithm splits your evaluation into subsessions sized to fit within the 45–60 minute fatigue limit, then assigns evaluators per subsession to hit your requested votes-per-query.
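The subsession sizing described above can be sketched roughly as follows. This is a minimal illustration, not Podonos's actual algorithm: the function name, parameters, and the assumption that every evaluator in a subsession rates each of its queries once are all hypothetical.

```python
import math

def plan_subsessions(num_queries, seconds_per_query, votes_per_query,
                     fatigue_limit_s=45 * 60):
    """Split an evaluation into subsessions that fit a fatigue limit,
    then compute evaluator slots per subsession (illustrative sketch)."""
    # How many queries one evaluator can rate before hitting the limit.
    queries_per_subsession = max(1, fatigue_limit_s // seconds_per_query)
    num_subsessions = math.ceil(num_queries / queries_per_subsession)
    # Each subsession needs votes_per_query evaluators, assuming every
    # evaluator in a subsession rates all of its queries once.
    slots = [votes_per_query] * num_subsessions
    return num_subsessions, slots

# 300 queries at 30 s each with 5 votes per query and a 45-minute limit:
# 90 queries fit per subsession, so 4 subsessions with 5 slots each.
n, slots = plan_subsessions(300, 30, 5)
```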
4. During the session: Attention tests are embedded throughout. Audio order is shuffled. Anchors are pinned next to the rating scale. A mid-session break is mandatory.
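Order shuffling and embedded attention tests can be illustrated with a minimal sketch. The `build_playlist` helper, the fixed insertion interval, and the attention-item pool are hypothetical, shown only to make the mechanism concrete.

```python
import random

def build_playlist(stimuli, attention_items, every_n=10, seed=None):
    """Shuffle stimulus order per evaluator and embed attention trials
    at regular intervals (illustrative sketch)."""
    rng = random.Random(seed)
    order = list(stimuli)
    rng.shuffle(order)  # each evaluator gets an independent order
    playlist = []
    for i, item in enumerate(order, start=1):
        playlist.append(item)
        if i % every_n == 0 and attention_items:
            # Hidden attention check with a known correct answer.
            playlist.append(rng.choice(attention_items))
    return playlist

playlist = build_playlist([f"q{i}" for i in range(30)],
                          ["attn_a", "attn_b"], every_n=10, seed=7)
```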
5. Post-session reliability: We score each evaluator's reliability against the cohort and automatically drop unreliable evaluators. New evaluators are recruited to backfill until your votes-per-query target is hit.
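One common way to score a rater against a cohort is correlation with the leave-one-out cohort mean. The sketch below assumes that metric purely for illustration; Podonos's actual reliability formula is not specified here, and all names are hypothetical.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def reliability_scores(ratings):
    """Score each evaluator by agreement with the rest of the cohort:
    correlation against the leave-one-out mean (illustrative sketch)."""
    evaluators = list(ratings)
    num_items = len(next(iter(ratings.values())))
    scores = {}
    for ev in evaluators:
        # Mean rating per item, excluding this evaluator.
        cohort = [mean(ratings[o][i] for o in evaluators if o != ev)
                  for i in range(num_items)]
        scores[ev] = pearson(ratings[ev], cohort)
    return scores

ratings = {
    "a": [1, 2, 3, 4, 5],
    "b": [2, 2, 3, 4, 4],
    "c": [1, 3, 3, 4, 5],
    "d": [5, 4, 3, 2, 1],  # rates against the cohort consensus
}
scores = reliability_scores(ratings)
```

Evaluator "d" correlates negatively with the cohort and would be dropped under any reasonable threshold, with new evaluators recruited to backfill the lost votes.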
6. Statistical analysis: Scores are aggregated, anchored, and normalized, ready to read in your Workspace.

The customer-facing knobs are simple: language, evaluator count, votes per query, and evaluation type. Everything else on this page is automatic.
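As an illustration of the aggregation step, here is a minimal sketch that turns per-query votes into a mean opinion score with a normal-approximation 95% confidence interval. Anchoring and normalization are omitted, and the function name is hypothetical; this is not the actual Podonos analysis pipeline.

```python
from statistics import mean, stdev

def mos_summary(votes):
    """Aggregate votes for one query into a mean opinion score plus a
    normal-approximation 95% confidence interval (illustrative sketch)."""
    n = len(votes)
    m = mean(votes)
    # 1.96 standard errors on each side approximates a 95% interval.
    half_width = 1.96 * stdev(votes) / n ** 0.5 if n > 1 else 0.0
    return m, (m - half_width, m + half_width)

m, (lo, hi) = mos_summary([4, 5, 4, 3, 4, 5, 4, 4])
```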