Output & Reports
ViberTest produces two output formats: a colorized terminal report for humans and a structured JSON report for automation. This page explains both in detail.
Terminal Output#
The default output format: ViberTest prints a branded banner, scans your project with a progress indicator, then displays results grouped by file.
Sections#
1. Banner: ASCII art logo with the version number and scan target path.
2. Progress: Files scanned, rules executed, and issues found, updated in real time.
3. Issues by Severity: Issues grouped by severity level (critical → low), each showing a severity badge, rule name, file path, and message.
4. Summary: Total issues by severity, the top 5 most problematic files, and the final score with letter grade and ASCII art.
JSON Report#
Generate a JSON report with:
```bash
$ vibertest scan . --format json --output report.json
```

Report Structure#
The JSON report is a single nested object with four main sections (score, summary, issues, and metadata) plus top-level fields such as version, timestamp, and scanDuration.
```json
{
  "version": "0.1.0",
  "timestamp": "2026-02-18T19:48:09.597Z",
  "projectName": "my-app",
  "projectPath": "/Users/dev/my-app",
  "scanDuration": 4.231,
  "score": {
    "score": 82,
    "grade": "B",
    "label": "Good",
    "breakdown": {
      "critical": 0,
      "high": 3,
      "medium": 8,
      "low": 4,
      "info": 0
    },
    "penalties": [
      {
        "ruleId": "dead-code",
        "count": 5,
        "penalty": 2.6
      }
    ]
  },
  "summary": {
    "totalFiles": 47,
    "totalIssues": 15,
    "issuesBySeverity": {
      "critical": 0,
      "high": 3,
      "medium": 8,
      "low": 4,
      "info": 0
    },
    "issuesByRule": {
      "oversized-files": 4,
      "dead-code": 5,
      "missing-error-handling": 6
    },
    "skippedFiles": []
  },
  "issues": [
    {
      "ruleId": "oversized-files",
      "ruleName": "Oversized Files",
      "severity": "low",
      "message": "File has 623 lines (max: 500)",
      "filePath": "src/components/Dashboard.tsx",
      "suggestion": "Consider splitting into smaller modules.",
      "learnMoreUrl": "https://refactoring.guru/smells/large-class"
    }
  ],
  "metadata": {
    "nodeVersion": "v22.19.0",
    "platform": "darwin",
    "configUsed": false
  }
}
```

Top-Level Fields#
| Field | Type | Description |
|---|---|---|
| version | string | ViberTest CLI version that generated the report |
| timestamp | string | ISO 8601 timestamp of when the scan ran |
| projectName | string | Name of the scanned project (from package.json or directory name) |
| projectPath | string | Absolute path to the scanned project |
| scanDuration | number | Scan duration in seconds |
| score | object | Score result with grade, breakdown, and penalties (see below) |
| summary | object | Aggregate counts: total files, issues by severity and rule |
| issues | array | Array of all issues found (see Issue Object below) |
| metadata | object | Environment info: Node version, platform, config used |
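
If you consume the report from TypeScript, it can help to pin this shape down as a type. The declaration below is an unofficial sketch assembled from the field tables on this page; the ViberTestReport name and the string element type for skippedFiles are assumptions, and ViberTest does not necessarily ship these types itself.

```ts
// Unofficial sketch of the report shape, assembled from the field
// tables on this page. Interface name and skippedFiles element type
// are assumptions, not something ViberTest exports.
type Severity = 'critical' | 'high' | 'medium' | 'low' | 'info';

interface ViberTestReport {
  version: string;        // CLI version that generated the report
  timestamp: string;      // ISO 8601 scan time
  projectName: string;
  projectPath: string;
  scanDuration: number;   // seconds
  score: {
    score: number;        // 10-100
    grade: 'A' | 'B' | 'C' | 'D' | 'F';
    label: string;        // "Excellent", "Good", "Needs Work", "Poor", "Critical"
    breakdown: Record<Severity, number>;
    penalties: { ruleId: string; count: number; penalty: number }[];
  };
  summary: {
    totalFiles: number;
    totalIssues: number;
    issuesBySeverity: Record<Severity, number>;
    issuesByRule: Record<string, number>;
    skippedFiles: string[]; // element shape assumed; the table only says "files that were skipped"
  };
  issues: {
    ruleId: string;
    ruleName: string;
    severity: Severity;
    message: string;
    filePath: string;
    line?: number;
    column?: number;
    codeSnippet?: string;
    suggestion: string;
    learnMoreUrl?: string;
  }[];
  metadata: {
    nodeVersion: string;
    platform: string;     // e.g. "darwin", "linux", "win32"
    configUsed: boolean;
  };
}
```

With a declaration like this you can cast the parsed file (JSON.parse(...) as ViberTestReport) and get editor completion on the fields documented in the sections below.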
Score Object#
The score field contains the full scoring result:
| Field | Type | Description |
|---|---|---|
| score.score | number | Health score, from 10 to 100 |
| score.grade | string | Letter grade: "A", "B", "C", "D", or "F" |
| score.label | string | Human-readable label: "Excellent", "Good", "Needs Work", "Poor", "Critical" |
| score.breakdown | object | Issue count per severity level (critical, high, medium, low, info) |
| score.penalties | array | Per-rule penalty breakdown: { ruleId, count, penalty } |
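
To see where points went, you can total score.penalties and list the non-zero severity counts from score.breakdown. A minimal TypeScript sketch, assuming the report was saved as report.json (how ViberTest converts penalties into the final score is internal to the CLI, so the total is shown for context only):

```ts
import { readFile } from 'node:fs/promises';

const report = JSON.parse(await readFile('report.json', 'utf8'));
const { score, grade, label, breakdown, penalties } = report.score;

console.log(`Score: ${score}/100 (${grade}, ${label})`);

// Severity counts from the breakdown, skipping empty levels.
for (const [severity, count] of Object.entries(breakdown)) {
  if (count) console.log(`  ${severity}: ${count}`);
}

// Total penalty and the single most expensive rule.
const total = penalties.reduce((sum: number, p: any) => sum + p.penalty, 0);
const worst = [...penalties].sort((a: any, b: any) => b.penalty - a.penalty)[0];
console.log(`Total penalty: ${total.toFixed(1)} pts`);
if (worst) console.log(`Biggest offender: ${worst.ruleId} (-${worst.penalty} pts)`);
```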
Summary Object#
| Field | Type | Description |
|---|---|---|
| summary.totalFiles | number | Total files analyzed (after ignore filters) |
| summary.totalIssues | number | Total number of issues found |
| summary.issuesBySeverity | object | Issue count per severity level |
| summary.issuesByRule | object | Issue count per rule ID (e.g. { "dead-code": 12 }) |
| summary.skippedFiles | array | Files that were skipped during scanning |
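
issuesByRule makes it easy to see which rule is generating the most noise before you dig into individual issues. A small sketch, again assuming a saved report.json:

```ts
import { readFile } from 'node:fs/promises';

const report = JSON.parse(await readFile('report.json', 'utf8'));
const { totalFiles, totalIssues, issuesByRule, skippedFiles } = report.summary;

console.log(`${totalIssues} issues across ${totalFiles} files`);

// Rank rules by how many issues they produced.
const ranked = Object.entries(issuesByRule).sort(([, a], [, b]) => b - a);
for (const [ruleId, count] of ranked) {
  console.log(`  ${ruleId}: ${count}`);
}

if (skippedFiles.length > 0) {
  console.warn(`Note: ${skippedFiles.length} file(s) were skipped during the scan`);
}
```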
Issue Object#
Each item in the issues array has this shape:
| Field | Type | Description |
|---|---|---|
| ruleId | string | The rule that flagged this issue (e.g. oversized-files) |
| ruleName | string | Human-readable rule name (e.g. Oversized Files) |
| severity | string | One of: "critical", "high", "medium", "low", "info" |
| message | string | Human-readable description of the issue |
| filePath | string | Relative file path from the scan target |
| line | number? | Line number where the issue was detected (optional, not all rules provide this) |
| column | number? | Column number (optional, not all rules provide this) |
| codeSnippet | string? | Code snippet around the issue (optional, not all rules provide this) |
| suggestion | string | Actionable fix suggestion |
| learnMoreUrl | string? | URL to learn more about the issue pattern (optional) |
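
Because every issue carries filePath and severity, you can rebuild views like the terminal report's "top 5 most problematic files" directly from the issues array. The sketch below also shows how to handle the optional line and column fields; the output formatting is illustrative, not ViberTest's own:

```ts
import { readFile } from 'node:fs/promises';

const report = JSON.parse(await readFile('report.json', 'utf8'));

// Count issues per file to see which files need attention first.
const byFile = new Map<string, number>();
for (const issue of report.issues) {
  byFile.set(issue.filePath, (byFile.get(issue.filePath) ?? 0) + 1);
}

const topFiles = [...byFile.entries()]
  .sort(([, a], [, b]) => b - a)
  .slice(0, 5);

for (const [filePath, count] of topFiles) {
  console.log(`${filePath}: ${count} issue(s)`);
}

// Print one issue, including line/column only when the rule provides them.
const [first] = report.issues;
if (first) {
  const location = first.line != null
    ? `${first.filePath}:${first.line}${first.column != null ? `:${first.column}` : ''}`
    : first.filePath;
  console.log(`[${first.severity}] ${first.ruleName} at ${location}`);
  console.log(`  ${first.message}`);
  console.log(`  Fix: ${first.suggestion}`);
}
```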
Metadata Object#
| Field | Type | Description |
|---|---|---|
| metadata.nodeVersion | string | Node.js version used during the scan |
| metadata.platform | string | Operating system platform (e.g. "darwin", "linux", "win32") |
| metadata.configUsed | boolean | Whether a config file was found and used |
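
The metadata block is mostly useful for debugging differences between environments, for example a CI job that scored differently because it never picked up your config file. A small sketch:

```ts
import { readFile } from 'node:fs/promises';

const report = JSON.parse(await readFile('report.json', 'utf8'));
const { nodeVersion, platform, configUsed } = report.metadata;

console.log(`Scanned with Node ${nodeVersion} on ${platform}`);
if (!configUsed) {
  // No config file was found, so this scan ran without project-specific settings.
  console.warn('No ViberTest config was picked up for this scan.');
}
```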
Parsing Reports Programmatically#
The JSON report is designed for easy parsing. Here are common use cases:
Node.js#
```js
import { readFile } from 'node:fs/promises';

const report = JSON.parse(await readFile('report.json', 'utf8'));

console.log(`Score: ${report.score.score}/100 (Grade ${report.score.grade})`);
console.log(`Issues: ${report.summary.totalIssues}`);
console.log(`Scan took ${report.scanDuration}s`);

// Filter critical issues
const critical = report.issues.filter(i => i.severity === 'critical');
if (critical.length > 0) {
  console.error(`${critical.length} critical issues found!`);
  process.exit(1);
}

// Show penalty breakdown
for (const p of report.score.penalties) {
  console.log(`  ${p.ruleId}: ${p.count} issues, -${p.penalty} pts`);
}
```

Bash (with jq)#
```bash
# Get score and grade
$ jq '.score.score' report.json
$ jq '.score.grade' report.json

# List critical issues
$ jq '.issues[] | select(.severity == "critical") | .message' report.json

# Count issues by severity
$ jq '.summary.issuesBySeverity' report.json

# Show penalty breakdown
$ jq '.score.penalties[] | "\(.ruleId): \(.count) issues, -\(.penalty) pts"' report.json
```

Python#
```python
import json

with open('report.json') as f:
    report = json.load(f)

print(f"Score: {report['score']['score']}/100 (Grade {report['score']['grade']})")
print(f"Issues: {report['summary']['totalIssues']}")

for issue in report['issues']:
    if issue['severity'] == 'critical':
        print(f"  CRITICAL: {issue['filePath']} — {issue['message']}")
```