Highlights
- Stable windows matter more than perfect totals
- Model share is easier to read when grouped directly
- Source context helps interpret model shifts
Guide
A practical guide to comparing AI model usage in vibestats across reporting windows and source scopes.
Relevant commands
npx vibestats --model
npx vibestats --since 2026-01-01 --until 2026-01-31 --model
npx vibestats all --model
Step 1
Use the model report shape so the output groups by model instead of by day or month.
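The effect of grouping by model instead of by day can be sketched with a small awk pipeline over made-up per-day records. The field layout here (date, model, tokens) is an illustrative assumption, not vibestats' actual output format:

```shell
# Hypothetical per-day usage lines: date, model, tokens.
# Grouping by model collapses the day dimension, the way a
# model-shaped report does.
printf '%s\n' \
  '2026-01-05 claude-sonnet 1200' \
  '2026-01-06 claude-sonnet 800' \
  '2026-01-06 gpt-codex 500' |
awk '{ tokens[$2] += $3 } END { for (m in tokens) print m, tokens[m] }' |
sort
# → claude-sonnet 2000
# → gpt-codex 500
```

The same totals are present in the daily view; the model grouping just makes the per-model comparison direct.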
Step 2
Use a clear date range so the comparison reflects model behavior and not a moving reporting window.
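One way to see why a fixed window matters: explicit `--since`/`--until` dates produce the same report every time you run them, while a relative window moves with the calendar. A minimal sketch (the rolling-window computation is shown only for contrast and uses GNU/BSD `date` fallbacks):

```shell
# A fixed window is reproducible across reruns.
FIXED_SINCE=2026-01-01
FIXED_UNTIL=2026-01-31
echo "fixed:   --since $FIXED_SINCE --until $FIXED_UNTIL"

# A rolling window shifts every day, so week-over-week comparisons
# silently change what they measure. (GNU date, with a BSD fallback.)
ROLLING_SINCE=$(date -d '30 days ago' +%F 2>/dev/null || date -v-30d +%F)
echo "rolling: --since $ROLLING_SINCE"
```

Rerun the fixed variant in a month and the numbers can only change if the underlying data changed, which is exactly what a model comparison needs.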
Step 3
Check whether a shift in model share also changes the cost shape, and whether the shift differs between Claude-compatible and Codex usage.
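Cost shape can be sanity-checked the same way as token share. A minimal sketch that turns hypothetical per-model cost lines into percentage shares (the input format is an assumption for illustration):

```shell
# Hypothetical per-model cost lines: model, cost in dollars.
# Compute each model's share of total cost.
printf '%s\n' \
  'claude-sonnet 30.00' \
  'gpt-codex 10.00' |
awk '{ cost[$1] += $2; total += $2 }
     END { for (m in cost) printf "%s %.0f%%\n", m, 100 * cost[m] / total }' |
sort
# → claude-sonnet 75%
# → gpt-codex 25%
```

If a model's token share and cost share diverge sharply, per-token pricing differences are usually the explanation worth checking first.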
Most AI coding reporting problems are not about one missing command. They are about choosing the right surface: daily usage, wrapped summaries, activity heatmaps, cost views, or shareable aggregate pages.
This guide stays focused on vibestats workflows and the pages already documented in the public command reference, so you can move from local data to a readable result quickly.
FAQ
How do I pick the right report surface?
Start with the relevant command, verify the output locally, then decide whether you need a share page, a wrapped summary, or a heatmap for communication.
Does vibestats upload my usage data?
No. vibestats works from local usage artifacts and only turns aggregate results into hosted pages when you explicitly publish them.