Beta: ViberTest is in active development; expect breaking changes.

Output & Reports

ViberTest produces two output formats: a colorized terminal report for humans and a structured JSON report for automation. This page explains both in detail.

Terminal Output#

The default output format. ViberTest prints a branded banner, scans your project with a progress indicator, then displays results grouped by file.

Sections#

  1. Banner

     ASCII art logo with the version number and the scan target path.

  2. Progress

     Shows files scanned, rules executed, and issues found in real time.

  3. Issues by Severity

     Issues grouped by severity level (critical → low), each showing a severity badge, rule name, file path, and message.

  4. Summary

     Total issues by severity, the top 5 most problematic files, and the final score with letter grade and ASCII art.

JSON Report#

Generate a JSON report with:

Terminal (bash)
$ vibertest scan . --format json --output report.json

Report Structure#

The JSON report is a single object with a handful of top-level fields (version, timestamp, project info, scan duration) and four main sections: score, summary, issues, and metadata.

report.json (json)
{
  "version": "0.1.0",
  "timestamp": "2026-02-18T19:48:09.597Z",
  "projectName": "my-app",
  "projectPath": "/Users/dev/my-app",
  "scanDuration": 4.231,
  "score": {
    "score": 82,
    "grade": "B",
    "label": "Good",
    "breakdown": {
      "critical": 0,
      "high": 3,
      "medium": 8,
      "low": 4,
      "info": 0
    },
    "penalties": [
      {
        "ruleId": "dead-code",
        "count": 5,
        "penalty": 2.6
      }
    ]
  },
  "summary": {
    "totalFiles": 47,
    "totalIssues": 15,
    "issuesBySeverity": {
      "critical": 0,
      "high": 3,
      "medium": 8,
      "low": 4,
      "info": 0
    },
    "issuesByRule": {
      "oversized-files": 4,
      "dead-code": 5,
      "missing-error-handling": 6
    },
    "skippedFiles": []
  },
  "issues": [
    {
      "ruleId": "oversized-files",
      "ruleName": "Oversized Files",
      "severity": "low",
      "message": "File has 623 lines (max: 500)",
      "filePath": "src/components/Dashboard.tsx",
      "suggestion": "Consider splitting into smaller modules.",
      "learnMoreUrl": "https://refactoring.guru/smells/large-class"
    }
  ],
  "metadata": {
    "nodeVersion": "v22.19.0",
    "platform": "darwin",
    "configUsed": false
  }
}

Top-Level Fields#

Field | Type | Description
--- | --- | ---
version | string | ViberTest CLI version that generated the report
timestamp | string | ISO 8601 timestamp of when the scan ran
projectName | string | Name of the scanned project (from package.json or the directory name)
projectPath | string | Absolute path to the scanned project
scanDuration | number | Scan duration in seconds
score | object | Score result with grade, breakdown, and penalties (see below)
summary | object | Aggregate counts: total files, issues by severity and rule
issues | array | Array of all issues found (see Issue Object below)
metadata | object | Environment info: Node version, platform, config used
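
Before wiring the report into automation, it can be worth checking that the fields you rely on are actually present, since a different CLI version may produce a slightly different shape. A minimal sketch, assuming only the top-level fields documented above (the filename is just an example):

validate-report.mjs (javascript)
import { readFile } from 'node:fs/promises';

const report = JSON.parse(await readFile('report.json', 'utf8'));

// Top-level fields documented on this page; bail out early if any are missing.
const required = [
  'version', 'timestamp', 'projectName', 'projectPath',
  'scanDuration', 'score', 'summary', 'issues', 'metadata',
];
const missing = required.filter((key) => !(key in report));

if (missing.length > 0) {
  console.error(`Report is missing expected fields: ${missing.join(', ')}`);
  process.exit(1);
}

console.log(`Report from ViberTest ${report.version}, scanned at ${report.timestamp}`);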

Score Object#

The score field contains the full scoring result:

Field | Type | Description
--- | --- | ---
score.score | number | Health score from 10-100
score.grade | string | Letter grade: "A", "B", "C", "D", or "F"
score.label | string | Human-readable label: "Excellent", "Good", "Needs Work", "Poor", "Critical"
score.breakdown | object | Issue count per severity level (critical, high, medium, low, info)
score.penalties | array | Per-rule penalty breakdown: { ruleId, count, penalty }
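
The penalties array makes it easy to see which rules cost the most points. A short sketch that ranks rules by penalty; it only reads the report and assumes nothing about how ViberTest computes the score internally:

rank-penalties.mjs (javascript)
import { readFile } from 'node:fs/promises';

const { score } = JSON.parse(await readFile('report.json', 'utf8'));

// Sort rules by how many points they cost, largest first.
const ranked = [...score.penalties].sort((a, b) => b.penalty - a.penalty);

console.log(`Score: ${score.score}/100 (${score.grade}, ${score.label})`);
for (const { ruleId, count, penalty } of ranked) {
  console.log(`  ${ruleId}: ${count} issue(s), -${penalty} pts`);
}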

Summary Object#

Field | Type | Description
--- | --- | ---
summary.totalFiles | number | Total files analyzed (after ignore filters)
summary.totalIssues | number | Total number of issues found
summary.issuesBySeverity | object | Issue count per severity level
summary.issuesByRule | object | Issue count per rule ID (e.g. { "dead-code": 12 })
summary.skippedFiles | array | Files that were skipped during scanning
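
issuesByRule is handy for spotting which rules fire most often without walking the full issues array. A minimal sketch that prints the noisiest rules first:

noisiest-rules.mjs (javascript)
import { readFile } from 'node:fs/promises';

const { summary } = JSON.parse(await readFile('report.json', 'utf8'));

// Turn { "dead-code": 5, ... } into [["dead-code", 5], ...], sorted by count.
const byCount = Object.entries(summary.issuesByRule)
  .sort(([, a], [, b]) => b - a);

for (const [ruleId, count] of byCount) {
  console.log(`${ruleId}: ${count}`);
}

console.log(`Total: ${summary.totalIssues} issues across ${summary.totalFiles} files`);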

Issue Object#

Each item in the issues array has this shape:

Field | Type | Description
--- | --- | ---
ruleId | string | The rule that flagged this issue (e.g. oversized-files)
ruleName | string | Human-readable rule name (e.g. Oversized Files)
severity | string | One of: "critical", "high", "medium", "low", "info"
message | string | Human-readable description of the issue
filePath | string | File path relative to the scan target
line | number? | Line number where the issue was detected (optional; not all rules provide this)
column | number? | Column number (optional; not all rules provide this)
codeSnippet | string? | Code snippet around the issue (optional; not all rules provide this)
suggestion | string | Actionable fix suggestion
learnMoreUrl | string? | URL to learn more about the issue pattern (optional)
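
Because each issue carries a file path, severity, message, and (when available) a line number, it maps cleanly onto CI annotations. Here is a sketch that emits GitHub Actions workflow commands from the issues array; the ::error / ::warning / ::notice syntax is GitHub's, not ViberTest's, and the severity mapping is just one reasonable choice:

annotate.mjs (javascript)
import { readFile } from 'node:fs/promises';

const { issues } = JSON.parse(await readFile('report.json', 'utf8'));

for (const issue of issues) {
  // Map ViberTest severities onto GitHub's three annotation levels.
  const level =
    issue.severity === 'critical' || issue.severity === 'high' ? 'error'
    : issue.severity === 'info' ? 'notice'
    : 'warning';

  // line is optional, so only include it when present.
  const location = issue.line
    ? `file=${issue.filePath},line=${issue.line}`
    : `file=${issue.filePath}`;

  console.log(`::${level} ${location}::[${issue.ruleId}] ${issue.message}`);
}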

Metadata Object#

Field | Type | Description
--- | --- | ---
metadata.nodeVersion | string | Node.js version used during the scan
metadata.platform | string | Operating system platform (e.g. "darwin", "linux", "win32")
metadata.configUsed | boolean | Whether a config file was found and used

Parsing Reports Programmatically#

The JSON report is designed for easy parsing. Here are common use cases:

Node.js#

parse-report.mjs (javascript)
import { readFile } from 'node:fs/promises';

const report = JSON.parse(await readFile('report.json', 'utf8'));

console.log(`Score: ${report.score.score}/100 (Grade ${report.score.grade})`);
console.log(`Issues: ${report.summary.totalIssues}`);
console.log(`Scan took ${report.scanDuration}s`);

// Filter critical issues
const critical = report.issues.filter(i => i.severity === 'critical');
if (critical.length > 0) {
  console.error(`${critical.length} critical issues found!`);
  process.exit(1);
}

// Show penalty breakdown
for (const p of report.score.penalties) {
  console.log(`  ${p.ruleId}: ${p.count} issues, -${p.penalty} pts`);
}

Bash (with jq)#

Terminal (bash)
# Get score and grade
$ jq '.score.score' report.json
$ jq '.score.grade' report.json

# List critical issues
$ jq '.issues[] | select(.severity == "critical") | .message' report.json

# Count issues by severity
$ jq '.summary.issuesBySeverity' report.json

# Show penalty breakdown
$ jq '.score.penalties[] | "\(.ruleId): \(.count) issues, -\(.penalty) pts"' report.json

Python#

parse_report.py (python)
import json

with open('report.json') as f:
    report = json.load(f)

print(f"Score: {report['score']['score']}/100 (Grade {report['score']['grade']})")
print(f"Issues: {report['summary']['totalIssues']}")

for issue in report['issues']:
    if issue['severity'] == 'critical':
        print(f"  CRITICAL: {issue['filePath']} — {issue['message']}")