Overview
Tokens are consumed when you generate transcripts and reports. The system calculates costs based on interview length, file type, and the number of report types you select. You can see estimated costs before generating, and actual usage may vary slightly.
Transparent pricing
See estimated token costs before generating transcripts or reports.
Pay per use
Tokens are only charged when you actually generate transcripts or reports.
No hidden fees
The estimated cost shown is what you’ll be charged. Actual usage may vary slightly based on complexity.
Batch calculations
For batch operations, see the total cost upfront before confirming.
How token costs are calculated
Transcription costs
Tokens are charged for generating transcripts from audio or video files. The cost is calculated based on the following factors (a rough estimate sketch follows the list):
- Interview length: Longer interviews require more tokens
- File duration: The system processes the entire audio/video file
- File type: Different file formats may have varying processing requirements
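To make the duration-based calculation concrete, here is a minimal TypeScript sketch of how such an estimate could look. The per-minute rate and the format multipliers are invented placeholders for illustration; they are not the product's actual rates.

```typescript
// Hypothetical duration-based transcription estimate.
// TOKENS_PER_MINUTE and the format multipliers are illustrative placeholders.
const TOKENS_PER_MINUTE = 1;

const FORMAT_MULTIPLIER: Record<string, number> = {
  audio: 1.0, // e.g. mp3, wav
  video: 1.2, // assumed: video may need extra processing
  text: 0,    // uploaded text transcripts skip transcription entirely
};

function estimateTranscriptionTokens(durationMinutes: number, fileType: string): number {
  const multiplier = FORMAT_MULTIPLIER[fileType] ?? 1.0;
  return Math.ceil(durationMinutes * TOKENS_PER_MINUTE * multiplier);
}

// Example: a 45-minute video interview
console.log(estimateTranscriptionTokens(45, "video")); // 54 tokens (illustrative only)
```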
Report generation costs
Tokens are charged for generating analysis reports. The cost is calculated based on:
- Base report cost: Each report type has a base cost
- Number of report types: The base cost is multiplied by the number of report types you select
- Interview complexity: More complex interviews with longer transcripts may require additional processing
If you select multiple report types (e.g., Q&A, General, and Persona), the cost is calculated as: base cost × number of report types.
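A minimal sketch of that rule in TypeScript. `BASE_REPORT_COST` is a placeholder value, not the actual price of a report.

```typescript
// Report cost = base cost × number of report types selected.
// BASE_REPORT_COST is a placeholder, not the real rate.
const BASE_REPORT_COST = 10;

type ReportType = "qa" | "general" | "persona";

function estimateReportTokens(selected: ReportType[]): number {
  return BASE_REPORT_COST * selected.length;
}

console.log(estimateReportTokens(["qa", "general", "persona"])); // 30 (illustrative only)
```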
AI Agent costs
Tokens are charged for each assistant message when using the AI Agent to generate Research Context Briefs or Interview Guides. The cost depends on the reasoning mode you choose (see the sketch after this list):
- Fast mode: 2 tokens per assistant message
  - Use for quick responses and standard guide/brief generation
  - Suitable for most use cases
- Thinking mode: 5 tokens per assistant message
  - Use for more complex research scenarios requiring deeper analysis
  - Provides more thorough responses and better handles nuanced requirements
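The per-message rates above translate into a very small calculation. The sketch below uses the 2-token and 5-token figures from this article; the function name and session shape are illustrative.

```typescript
// Per-message AI Agent rates from this article: Fast = 2, Thinking = 5.
type ReasoningMode = "fast" | "thinking";

const TOKENS_PER_ASSISTANT_MESSAGE: Record<ReasoningMode, number> = {
  fast: 2,
  thinking: 5,
};

function estimateAgentTokens(mode: ReasoningMode, assistantMessages: number): number {
  return TOKENS_PER_ASSISTANT_MESSAGE[mode] * assistantMessages;
}

// Example: a guide-generation session with 8 assistant messages in Thinking mode
console.log(estimateAgentTokens("thinking", 8)); // 40 tokens
```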
Example calculations
Single record with one report type:
- Transcription: Based on interview length
- Q&A Report: Base report cost
- Total: Transcription cost + Report cost

Single record with multiple report types:
- Transcription: Based on interview length
- Q&A Report: Base report cost
- General Report: Base report cost
- Persona Report: Base report cost
- Total: Transcription cost + (Base report cost × 3)

Batch operation across multiple records:
- Each record that needs transcription: Individual transcription cost
- Each report type for each record: Base report cost × number of types
- Total: Sum of all transcription and report costs
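Putting the transcription and report formulas together, here is a worked batch example with placeholder token figures. The numbers are illustrative only; your actual estimate is shown in the app before you confirm.

```typescript
// Illustrative batch estimate: transcription + (base report cost × report types) per record.
// All figures are placeholders, not real prices.
interface RecordEstimate {
  name: string;
  transcriptionTokens: number; // 0 if a text transcript was uploaded
  reportTypes: number;
}

const BASE_REPORT_COST = 10; // placeholder

const records: RecordEstimate[] = [
  { name: "Interview A", transcriptionTokens: 45, reportTypes: 3 }, // audio + Q&A, General, Persona
  { name: "Interview B", transcriptionTokens: 0, reportTypes: 1 },  // text transcript + Q&A only
];

const total = records.reduce(
  (sum, r) => sum + r.transcriptionTokens + BASE_REPORT_COST * r.reportTypes,
  0
);

console.log(total); // (45 + 30) + (0 + 10) = 85 tokens (illustrative only)
```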
Viewing estimated costs
Before generation
Before generating transcripts or reports, you can see the estimated token cost:
- Single record: The token summary shows estimated costs when you select transcript or report options
- Batch operations: The batch generation modal shows total estimated costs for all selected records


Insufficient tokens
If you don’t have enough tokens, the system will:
- Show required amount: Display how many tokens you need
- Offer to purchase: Provide a button to buy tokens
- Block generation: Prevent generation until you have enough tokens
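Conceptually, this behaves like a pre-flight balance check. The sketch below is not the product's actual implementation; the function and field names are invented for illustration.

```typescript
// Conceptual pre-flight check before generation (names are illustrative).
interface TokenCheck {
  allowed: boolean;   // generation is blocked until the balance covers the estimate
  required: number;   // the required amount shown to the user
  shortfall: number;  // how many tokens are still needed (prompts a purchase when > 0)
}

function checkTokenBalance(balance: number, estimatedCost: number): TokenCheck {
  const shortfall = Math.max(0, estimatedCost - balance);
  return { allowed: shortfall === 0, required: estimatedCost, shortfall };
}

console.log(checkTokenBalance(20, 85)); // { allowed: false, required: 85, shortfall: 65 }
```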


When tokens are charged
Tokens are deducted from your balance:
- After successful generation: Tokens are charged only after transcripts or reports are successfully generated
- Not for failed operations: If generation fails, tokens are not charged
- Not for viewing: Viewing existing transcripts or reports doesn’t consume tokens
- Not for editing: Editing transcripts or reports doesn’t consume additional tokens
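In other words, deduction happens only after a generation completes. The sketch below shows that order of operations conceptually; `generateReport` and `deductTokens` are invented stand-ins, not real API calls.

```typescript
// Conceptual flow: generate first, deduct only on success.
// generateReport and deductTokens are illustrative stand-ins, not real API calls.
async function generateAndCharge(
  generateReport: () => Promise<string>,
  deductTokens: (amount: number) => Promise<void>,
  estimatedCost: number
): Promise<string | null> {
  try {
    const report = await generateReport(); // may throw on failure
    await deductTokens(estimatedCost);     // charged only after successful generation
    return report;
  } catch {
    // Failed generations are not charged; the balance is untouched.
    return null;
  }
}
```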
Regeneration costs
If you regenerate a transcript or report:
- Full cost applies: You’re charged the same amount as generating it for the first time
- Warning shown: The system warns you if you’re about to regenerate something that already exists
- Consider carefully: Regenerating typically produces the same result, so consider if it’s necessary
Regenerating transcripts or reports usually produces identical results. Only regenerate if you’ve made significant changes to the source material or settings.
Batch generation costs
When generating reports for multiple records:
- Total shown upfront: The batch generation modal shows the total estimated cost
- Per-record breakdown: You can see which records will be processed and their individual costs
- Charged on success: Tokens are charged for all successfully generated items


Factors affecting token usage
Several factors can influence how many tokens are used:

| Factor | Impact | Notes |
|---|---|---|
| Interview length | Higher for longer interviews | Longer audio/video files require more processing |
| File type | Varies by format | Different formats may have different processing requirements |
| Number of report types | Linear increase | Each additional report type adds to the cost |
| Interview complexity | Slight variation | More complex content may require additional processing |
| Text transcripts | No transcription cost | Uploaded text files skip transcription, saving tokens |
Best practices
Check costs before generating
Always review the estimated token cost before generating transcripts or reports. This helps you plan your token usage and avoid unexpected charges.
Generate only what you need
Select only the report types you actually need. Each additional report type increases the cost, so be selective.
Use text transcripts when possible
If you have text transcripts, upload them directly instead of audio/video files. This saves transcription tokens.
Avoid unnecessary regeneration
Regenerating transcripts or reports typically produces the same result. Only regenerate if you’ve made significant changes or need to update settings.
Plan batch operations
For batch generation, review the total cost before confirming. This helps ensure you have enough tokens for all operations.
Related topics
- Account overview
- Purchase tokens
- Referral program
- Create interview guide - Learn about AI Agent token costs for guide generation
- Add research context - Learn about AI Agent token costs for Research Context Brief generation