This is an early beta of our scoring engine, and there are still kinks to work out. The number below is an early approximation and will likely rise as the engine matures.
0.90
Codebase Health Score
Est. >95th percentile
among TypeScript codebases
How well-built is this codebase? How sustainable is it to build on?
Score Composition
Maintainability
Testability
Test Coverage
Debuggability
Adaptability
Simplicity
Coupling
Program-to-Contract
S — Single Responsibility
Security
Cyclomatic Complexity
Error Handling
Resilience
Documentation
Hidden Coupling
Brittleness
Code Hygiene
Language Breakdown
JSON 44.7%
Markdown 25.5%
TypeScript 14.9%
CSS 6.7%
EJS/HTML 3.5%
YAML 1.4%
Ruby (ERB) 1.2%
JavaScript 0.6%
SQL 0.4%
Ruby 0.3%
Other 0.8%
v0.0.107 · scoring engine 0.27.45 · Based on 230 facts, 50 scores across 185 runs · Report generated Mar 20, 2026 at 10:34 AM EDT
SOLID Principles
▶S — Single Responsibility
0.86
Measures how focused each contract is (average methods per interface, constructor dependencies, and lines per implementation) and how consistently consumer code imports interfaces rather than concrete implementations.
Component
Value
Score
Weight
DI bindings with interface
308 / 367
0.84
50%
0–100% scale (higher is better)
Interface imports (excl. DI)
1,463 / 1,489
0.98
50%
0–100% scale (higher is better)
Concrete imports (excl. DI)
26 count (info)
—
—
▶Composition over Inheritance
0.99
Counts `extends` usage on classes (excluding Error subclasses). Fewer `extends` means more composition and a higher score.
Component
Value
Score
Weight
`implements` count
459 classes
0.99
100%
0–100% scale (higher is better)
`extends` count (non-Error)
1 class (info)
—
—
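The composition-over-inheritance distinction the score rewards can be sketched in TypeScript. The `PaymentProcessor` names below are hypothetical illustrations, not types from this codebase:

```typescript
// A contract that classes implement rather than a base class they extend.
interface PaymentProcessor {
  process(amountCents: number): string;
}

class StripeProcessor implements PaymentProcessor {
  process(amountCents: number): string {
    return `charged ${amountCents}`;
  }
}

// Composition: behavior is added by wrapping a collaborator received in the
// constructor, not by inheriting from StripeProcessor.
class LoggingPaymentProcessor implements PaymentProcessor {
  constructor(private readonly inner: PaymentProcessor) {}
  process(amountCents: number): string {
    const result = this.inner.process(amountCents);
    return `logged:${result}`;
  }
}

const processor: PaymentProcessor = new LoggingPaymentProcessor(new StripeProcessor());
```

Because the wrapper depends only on the interface, either class can be swapped or stacked without touching an inheritance chain.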
▶Vertical-Agnostic
0.99
Measures what percentage of source files are vertical-agnostic (useful to any startup regardless of business domain) vs domain-specific. Only 9 modules are truly domain-specific; the rest replace SaaS subscriptions.
The Tests ARE the Documentation (behavioral spec). The README IS the Documentation (quickstart, size). The CHANGELOG IS the Documentation (release history, freshness, version match). Plus: PHILOSOPHY.md, TECHNICAL_GUIDE.md, JSDoc on interfaces, conversation-starter docs.
▶NACHA/ACH Readiness
0.99
NACHA/ACH readiness: ACH origination service, return handling via audit trail, risk management through evidence collection, and NACHA-compliant record retention policy.
Component
Value
Score
Weight
ACH origination service
1 exists
0.99
25%
0–100% scale (higher is better)
Return handling (audit trail)
1 exists
0.99
25%
0–100% scale (higher is better)
Risk management (evidence)
1 exists
0.99
25%
0–100% scale (higher is better)
NACHA record retention
1 exists
0.99
25%
0–100% scale (higher is better)
▶PSD2/SCA Readiness
0.99
PSD2/SCA readiness: Strong Customer Authentication service, 3D Secure protocol integration, trusted beneficiary consent management, and regulatory reporting via evidence collection.
Component
Value
Score
Weight
SCA service
1 exists
0.99
30%
0–100% scale (higher is better)
3DS integration
1 exists
0.99
30%
0–100% scale (higher is better)
Trusted beneficiaries (consent)
1 exists
0.99
20%
0–100% scale (higher is better)
Regulatory reporting (evidence)
1 exists
0.99
20%
0–100% scale (higher is better)
▶FFIEC Readiness
0.99
FFIEC readiness: examination control mapping, risk assessment evidence, business continuity planning, and information security program (audit, encryption, MFA).
Component
Value
Score
Weight
Control mapping
1 exists
0.99
25%
0–100% scale (higher is better)
Risk assessment (evidence)
1 exists
0.99
25%
0–100% scale (higher is better)
Business continuity planning
1 exists
0.99
25%
0–100% scale (higher is better)
Information security program
3 of 3 controls (audit, encryption, MFA)
0.99
25%
0–100% scale (higher is better)
▶SOX Readiness
0.99
SOX readiness: Segregation of Duties enforcement, approval workflows, internal control testing, and automated evidence collection for audit.
▶Source File Count: 1,256 count
Total number of TypeScript source files (excluding tests, scripts, and generated code).
▶Total Source Lines: 165,239 lines
Total lines of TypeScript source code.
▶Avg Lines Per File: 132 lines
Average lines per source file. Lower means more focused, single-purpose files.
▶Avg Lines Per Impl: 212 lines
Average lines per implementation class. Measures how bloated implementations get.
▶Files Over 500 Lines: 40 count
Source files exceeding 500 lines — candidates for splitting.
40 out of 1,256 source files
▶Files Over 1000 Lines: 4 count
Source files exceeding 1,000 lines — strong candidates for refactoring.
4 out of 1,256 source files
Module Structure
▶Module Count: 159 count
Number of distinct modules (top-level directories under src/).
▶Avg Files Per Module: 7.90 count
Average source files per module. Very high counts suggest a module is doing too much.
▶Service Interface Count: 256 count
Number of service interface files (contracts without implementation).
▶Service Impl Count: 223 count
Number of service implementation files.
▶Repository Interface Count: 141 count
Number of repository interface files (data-access contracts).
▶Repository Impl Count: 109 count
Number of repository implementation files.
▶Controller Count: 87 count
Number of controller files (HTTP endpoint handlers).
▶Avg Methods Per Interface: 4.20 count
Average methods per service interface. Lower means leaner, more focused contracts (Interface Segregation).
▶Avg Constructor Deps: 2.30 count
Average constructor dependencies per implementation. High counts in orchestrator services are a sign of good composition.
▶DI Binding Count: 367 count
Total const bindings in diContainer.ts (excluding controllers, LOG, container, and non-service values).
▶DI Bindings With Interface: 308 count
DI bindings with an explicit interface type annotation. These are programmed to a contract.
308 out of 367 bindings
▶DI Contract Percent: 83.90 percent
Percentage of DI bindings that are typed to an interface rather than inferred as concrete. The DI container is the source of truth: if a class is injected as a dependency, it should be behind a contract.
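The contract-vs-concrete distinction this metric counts can be sketched as a pair of bindings. A minimal sketch with hypothetical names (`UserService`, `UserServiceImpl`), not the actual diContainer.ts contents:

```typescript
// A service contract and its implementation, as a DI container would bind them.
interface UserService {
  getName(id: number): string;
}

class UserServiceImpl implements UserService {
  getName(id: number): string {
    return `user-${id}`;
  }
}

// Typed to the interface: consumers of this binding see only the contract,
// so the implementation can be swapped or mocked freely. Counted as a
// "binding with interface".
const userService: UserService = new UserServiceImpl();

// Type inferred as the concrete class: consumers can reach implementation
// details. Counted against the contract percentage.
const userServiceConcrete = new UserServiceImpl();
```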
ISP Consumer Utilization
▶ISP Consumer Utilization: 65.20 percent
Average percentage of interface methods actually used by each consumer. Higher means consumers depend on what they use (good ISP).
▶ISP Consumer Pairs: 1,389 count
Total consumer-interface pairs analyzed for ISP utilization.
▶ISP Interfaces With Zero Consumers: 3 count
Interfaces with no consumers found — skipped in the utilization calculation.
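The utilization figure can be illustrated with a single consumer-interface pair. This is a hypothetical sketch of the calculation as described above, not the scoring engine's code:

```typescript
// A hypothetical four-method interface; a consumer that calls only one of
// its methods scores 25% utilization for that consumer-interface pair.
interface ReportService {
  generate(): string;
  schedule(): void;
  archive(): void;
  email(): void;
}

// Utilization for one pair: methods used / methods declared, as a percentage.
function ispUtilization(methodCount: number, methodsUsed: number): number {
  return (methodsUsed / methodCount) * 100;
}

// A consumer calling only generate() uses 1 of ReportService's 4 methods.
const pairUtilization = ispUtilization(4, 1);
```

The reported 65.20% is the average of this figure across all 1,389 pairs; a low pair score suggests the interface should be split (Interface Segregation).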
Testing
▶Test File Count317 count
Total number of test files.
▶Unit Test File Count301 count
Test files that run without a database.
▶Integration Test File Count16 count
Test files requiring a live database.
▶E2e Test Count1 count
End-to-end test specs (Playwright).
▶Source To Test Ratio25.20 percent
Ratio of test files to source files, expressed as a percentage. Analogous to code coverage breadth.
▶Lines Per Test521 lines
Average source lines per test file. Lower means more granular test coverage.
1 test file for every 521 source lines
▶Modules With Zero Tests4 count
Modules with no test files at all — blind spots in the test suite.
4 out of 159 modules
▶Assertion Count6,390 count
Total number of assertions (expect/assert calls) across all tests.
▶Test Natural Order Passing3,294 count
▶Test Natural Order Failing23 count
▶Test Random Order Passing3,294 count
▶Test Random Order Failing23 count
▶Test State Leakage Detected0 count
Security
▶Raw SQL Concatenations: 40 count
SQL queries built with string concatenation — SQL injection risk.
▶Parameterized Query Count: 954 count
SQL queries using parameterized placeholders ($1, $2). Safe by design.
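A minimal sketch of the two patterns these counters distinguish. The `$1` placeholder style is the pg convention named above; the function names and query are hypothetical:

```typescript
// Unsafe: the value is spliced into the SQL text, so a crafted input
// changes the query's structure. Counted as a raw SQL concatenation.
function findUserUnsafe(email: string): string {
  return "SELECT * FROM users WHERE email = '" + email + "'";
}

// Safe: the SQL text is fixed and the value travels separately; the driver
// binds it as data, never as SQL. Counted as a parameterized query.
function findUserSafe(email: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE email = $1", values: [email] };
}

const attack = "a' OR '1'='1";
const injected = findUserUnsafe(attack); // structure altered by the input
const safe = findUserSafe(attack);       // input stays inert in values[]
```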
▶Hardcoded Secret Count: 0 count
Potential hardcoded secrets (API keys, passwords) found in source code.
▶Dangerous Eval Count: 0 count
Dynamic code execution (code injection risk).
▶XSS Risk Count: 1 count
Pattern-matched XSS risk indicators (direct DOM writes and unescaped request data in responses). Review individually — many are legitimate client-side rendering.
1 pattern match across 1,256 source files
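The unescaped-output pattern this indicator matches, and its fix, can be sketched as follows. The `escapeHtml` helper is an illustrative minimal implementation, not a function from this codebase:

```typescript
// Minimal HTML escaping: neutralizes the five characters that can break
// out of text content or attribute values. Ampersand must go first.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const userInput = "<script>alert(1)</script>";

// Flagged pattern: request data written straight into a response.
const unsafeHtml = `<p>${userInput}</p>`;

// Escaped: the markup arrives as inert text.
const safeHtml = `<p>${escapeHtml(userInput)}</p>`;
```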
Complexity & Code Hygiene
▶Max Cyclomatic Complexity: 41 complexity
Highest cyclomatic complexity in any single function. Lower means simpler branching.
▶Avg Cyclomatic Complexity: 3.60 complexity
Average cyclomatic complexity across all functions.
▶Functions Over 10 Complexity: 225 count
Functions with cyclomatic complexity above 10 — worth reviewing for simplification.
225 out of 3,981 functions
▶Functions Over 20 Complexity: 36 count
Functions with cyclomatic complexity above 20 — strong refactoring candidates.
36 out of 3,981 functions
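One common way to bring a high-complexity function down is to replace a branch chain with a lookup table. A hypothetical sketch (the tier names and rates are illustrative):

```typescript
// Branchy version: each added tier adds another decision point, so
// cyclomatic complexity grows linearly with the number of cases.
function feeRateBranchy(tier: string): number {
  if (tier === "free") return 0;
  else if (tier === "starter") return 0.029;
  else if (tier === "pro") return 0.019;
  else return 0.039;
}

// Table-driven version: adding a tier adds a data entry, not a branch,
// so complexity stays flat no matter how many tiers exist.
const FEE_RATES: Record<string, number> = { free: 0, starter: 0.029, pro: 0.019 };

function feeRateTable(tier: string): number {
  return FEE_RATES[tier] ?? 0.039; // fallback plays the role of the else
}
```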
▶Total Function Count: 3,981 count
Total number of functions and methods in the codebase.
▶Avg Method Lines: 22.20 lines
Average lines per method across all implementation files (raw, including comments and logging).
▶Avg Method Lines Logic Only: 14.70 lines
Average lines per method after stripping comments and LOG statements. Measures pure logic density without penalizing good practices.
▶Functions Over 100 Lines: 47 count
Functions exceeding 100 lines — hard to test and reason about.
47 out of 3,981 functions
▶Functions Over 50 Lines: 330 count
Functions exceeding 50 lines — worth reviewing for extraction.
330 out of 3,981 functions
▶Any-Typed Parameters: 210 count
Total `: any` occurrences (boundary + structural). Legacy aggregate — see breakdown below.
210 total any-typed occurrences
▶Any-Typed Structural: 93 count
Structural `: any` — the developer chose `any` over a real type. These are the occurrences that matter for code hygiene.
93 structural any-typed occurrences
▶Any-Typed Boundary: 117 count
Boundary `: any` — pg driver rows, catch blocks, AsyncLocalStorage, external API responses. Pragmatic and not penalized.
117 boundary any-typed occurrences (no penalty)
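The boundary/structural split can be sketched as follows. The row shape and function names are hypothetical illustrations of the categories described above:

```typescript
interface UserRow { id: number; email: string; }

// Boundary `any`: a driver (e.g. pg) hands back untyped rows; the `any`
// sits at the edge and is narrowed to a real type immediately. Not penalized.
function mapRow(row: any): UserRow {
  return { id: Number(row.id), email: String(row.email) };
}

// Structural `any`: a real type was available, but `any` was written
// instead, silencing the compiler for every caller. Penalized.
function totalStructural(items: any): number {
  return items.reduce((sum: number, n: number) => sum + n, 0);
}

// The same function with the type it should have had.
function totalTyped(items: number[]): number {
  return items.reduce((sum, n) => sum + n, 0);
}
```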
Error Handling
▶Total Catch Blocks: 1,087 count
Total try/catch blocks in the codebase.
▶Empty Catch Blocks: 51 count
Catch blocks with no body — silently swallowing errors.
51 empty out of 1,087 catch blocks
▶Empty Catch Percent: 4.70 percent
Percentage of catch blocks that are empty.
▶Catch And Ignore Blocks: 7 count
Catch blocks that catch an error but never log, rethrow, or use it.
7 ignored out of 1,087 catch blocks
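A minimal sketch of the difference between a swallowed error and a handled one (recording into an array stands in for logging or rethrowing; the function names are illustrative):

```typescript
const events: string[] = [];

// Empty catch: the error vanishes with no trace. Counted above.
function swallow(fn: () => void): void {
  try { fn(); } catch {}
}

// Handled catch: the error is recorded (in real code: logged or rethrown)
// before execution continues.
function handle(fn: () => void): void {
  try {
    fn();
  } catch (err) {
    events.push(`caught: ${(err as Error).message}`);
  }
}

swallow(() => { throw new Error("lost"); }); // leaves no evidence
handle(() => { throw new Error("seen"); });  // leaves a diagnostic trail
```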
▶Log Before Throw Blocks: 460 count
Throw statements preceded by a LOG call — good diagnostic discipline.
▶Log Before Throw Percent: 100 percent
Percentage of throw statements that are preceded by logging.
Logging
▶Files With Logger: 406 count
Source files that instantiate a named logger via getLogger().
406 out of 1,256 source files
▶Total Log Statements: 5,640 count
Total LOG.* calls across the codebase.
▶Log Statements With Method Prefix: 5,602 count
Log statements that include a method-name prefix (e.g. "processRequest(): ...").
5,602 out of 5,640 log statements
▶Log Method Prefix Percent: 99.30 percent
Percentage of log statements with a method-name prefix. Per GLPR, this should be high.
▶Log fmt() Call Count: 6,513 count
fmt() calls for safe object serialization inside log statements. A single statement may call fmt() more than once, which is why this exceeds the statement count.
6,513 fmt() calls across 5,640 log statements
▶Avg Log Static Prefix Length: 45.10 chars
Average characters of static text before the first interpolation in LOG statements. Longer prefixes are easier to grep.
▶Median Log Static Prefix Length: 44 chars
Median characters of static text before the first interpolation in LOG statements.
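The static-prefix measurement can be sketched directly: the characters before the first interpolation are exactly what `grep` can match verbatim. The log messages below are hypothetical examples, not from this codebase:

```typescript
// Length of the literal text before the first ${...} interpolation.
function staticPrefixLength(template: string): number {
  const i = template.indexOf("${");
  return i === -1 ? template.length : i;
}

// Method-name prefix plus static text: a long, unique, greppable anchor.
const good = "processRequest(): validated payment intent for user ${userId}";

// Interpolation first: almost nothing for grep to latch onto.
const bad = "${userId} payment ok";

const goodLen = staticPrefixLength(good);
const badLen = staticPrefixLength(bad);
```

Searching production logs for `processRequest(): validated payment intent` finds every occurrence of the first message; the second can only be found by guessing a concrete user id.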
▶Log Trace Count: 760 count
Number of LOG.trace() calls.
▶Log Debug Count: 1,953 count
Number of LOG.debug() calls.
▶Log Info Count: 1,409 count
Number of LOG.info() calls.
▶Log Notice Count: 192 count
Number of LOG.notice() calls.
▶Log Warn Count: 177 count
Number of LOG.warn() calls.
▶Log Error Count: 1,019 count
Number of LOG.error() calls.
▶Log Fatal Count: 127 count
Number of LOG.fatal() calls.
Duplication & Dead Code
▶Duplicate Block Count: 130 count
Number of detected duplicate code blocks.
▶Duplicate Line Count: 1,577 lines
Total lines of duplicated code.
1,577 duplicated out of 165,239 source lines
▶Duplication Percent: 1 percent
Percentage of source code that is duplicated.
▶Total Export Count: 2,470 count
Total number of exported symbols (functions, classes, constants).
▶Unused Export Count: 148 count
Exported symbols not imported anywhere — potential dead code.
148 unused out of 2,470 exports
▶Unused Export Percent: 6 percent
Percentage of exports that are unused.
Dependencies
▶Runtime Dependency Count: 41 count
Number of production npm dependencies. Fewer means less attack surface and smaller bundles.
▶Dev Dependency Count: 38 count
Number of dev-only npm dependencies.
▶Types In Prod Deps: 0 count
@types packages listed in dependencies instead of devDependencies.
Supply Chain Security (OSV Scanner + npm audit)
▶OSV Vulnerability Count: 1 count
Total known vulnerabilities across all npm dependencies (via osv-scanner).
▶OSV Critical Count: 0 count
Vulnerabilities with CVSS score >= 9.0. Require immediate attention.
▶OSV High Count: 0 count
Vulnerabilities with CVSS score 7.0–8.9.
▶OSV Medium Count: 1 count
Vulnerabilities with CVSS score 4.0–6.9.
▶OSV Low Count: 0 count
Vulnerabilities with CVSS score < 4.0.
▶OSV Affected Package Count: 1 count
Distinct npm packages with at least one known vulnerability.
▶OSV Affected Runtime Package Count: 1 count
Vulnerable packages in production dependencies (not devDependencies). These ship to users.
▶OSV Fix Available Count: 1 count
Vulnerabilities where an upstream fix version exists.
▶OSV Fix Available Percent: 100 percent
Percentage of vulnerabilities with a fix available. Higher means easier to remediate.
▶npm Audit Total: 27 count
Total vulnerabilities reported by npm audit.
▶npm Audit High: 0 count
High severity vulnerabilities from npm audit.
▶npm Audit Critical: 0 count
Critical severity vulnerabilities from npm audit.
▶npm Override Count: 5 count
Total npm version overrides defined in package.json to force-fix transitive vulnerabilities.
▶npm Override Stale Count: 0 count
Overrides pinning a version older than what is currently published on npm. Stale overrides indicate deferred maintenance.
▶npm Override Freshness Percent: 100 percent
Percentage of npm overrides that are current (installed version matches or exceeds the latest). 100% means all overrides are up to date.
CodeRabbit PR Review Quality
▶CodeRabbit PRs Reviewed: 10 count
Number of recent PRs reviewed by the CodeRabbit AI reviewer.
▶CodeRabbit Total Files Reviewed: 971 count
Total files examined by CodeRabbit across recent PRs.
▶CodeRabbit Actionable Comments: 179 count
Total actionable findings posted by CodeRabbit. Lower means cleaner PRs at submission time.
▶CodeRabbit Outside Diff Comments: 16 count
Suggestions on code outside the PR diff — proactive code quality catches.
▶CodeRabbit Critical Comments: 18 count
Critical-severity findings (security, data corruption, logic errors). Target: 0.
▶CodeRabbit Clean Review Percent: 81.60 percent
Percentage of reviewed files with no actionable findings. Higher means cleaner code at PR time.
▶CodeRabbit Resolved Comments: 185 count
▶CodeRabbit Unresolved Comments: 0 count
▶CodeRabbit Unresolved Critical Comments: 0 count
▶CodeRabbit Baseline PR: 45 count
▶CodeRabbit All-Time PRs Reviewed: 14 count
▶CodeRabbit All-Time Total Files Reviewed: 1,052 count
▶CodeRabbit All-Time Actionable Comments: 293 count
▶CodeRabbit All-Time Outside Diff Comments: 34 count
▶CodeRabbit All-Time Critical Comments: 49 count
▶CodeRabbit All-Time Clean Review Percent: 72.10 percent
▶CodeRabbit All-Time Resolved Comments: 243 count
▶CodeRabbit All-Time Unresolved Comments: 55 count
▶CodeRabbit All-Time Unresolved Critical Comments: 11 count
Coupling
▶Type Hub Files: 2 count
Files that re-export many types, creating implicit coupling across modules.
▶Type Hub Max Fan-Out: 483 count
Highest number of imports from a single type-hub file.
▶Hardcoded Column Names: 0 count
SQL column names hardcoded as string literals in non-repository files.
▶Type Escape Hatches: 445 count
Uses of "as any", "as unknown", or @ts-ignore — bypassing the type system.
445 escape hatches across 165,239 source lines
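The difference between an escape hatch and keeping the compiler in the loop can be sketched as follows. The `Config` shape and the type-guard approach are hypothetical illustrations, not this codebase's pattern:

```typescript
interface Config { port: number; }

// A user-defined type guard: the compiler trusts the boolean result and
// narrows `unknown` to `Config` on the true branch.
function isConfig(value: unknown): value is Config {
  return typeof value === "object" && value !== null
    && typeof (value as { port?: unknown }).port === "number";
}

const raw: unknown = JSON.parse('{"port": 8080}');

// Escape hatch (counted by this metric): `as any` disables all checking,
// so a typo like (raw as any).prot would compile silently.
const viaAny = (raw as any).port;

// Guarded narrowing: every property access after the guard is verified.
const viaGuard = isConfig(raw) ? raw.port : -1;
```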
Liskov & Substitutability
▶instanceof On Concrete Types: 0 count
Uses of instanceof on concrete types — violates the Liskov Substitution Principle.
▶Concrete Downcasts: 1 count
Downcasts to Impl classes (e.g. "as FooImpl") — tight coupling to implementations.
▶Consumer Impl Imports: 5 count
Consumer files importing implementation classes directly instead of interfaces.
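The coupling this metric flags can be sketched in a single file (in a real codebase the consumer would import these names from separate modules; `NotificationService` and its Impl are hypothetical):

```typescript
interface NotificationService {
  send(message: string): string;
}

class NotificationServiceImpl implements NotificationService {
  send(message: string): string {
    return `sent: ${message}`;
  }
}

// Coupled consumer: its signature names the concrete class, so it can only
// ever work with that one implementation. This is the flagged pattern.
function notifyCoupled(svc: NotificationServiceImpl, msg: string): string {
  return svc.send(msg);
}

// Decoupled consumer: depends only on the contract, so any implementation
// (including a test double) can be substituted without changes here.
function notifyDecoupled(svc: NotificationService, msg: string): string {
  return svc.send(msg);
}

const result = notifyDecoupled(new NotificationServiceImpl(), "hello");
```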
Documentation
▶PHILOSOPHY.md Lines: 1,216 lines
Lines in PHILOSOPHY.md — the project's design rationale document.
▶TECHNICAL_GUIDE.md Lines: 785 lines
Lines in TECHNICAL_GUIDE.md — the project's technical reference.
▶Conversation Starter Count: 454 count
Files in doc/conversation-starter/ — design discussions and research preserved as artifacts.
▶Interfaces With JSDoc: 255 count
Service interfaces with JSDoc comments on the class or its methods.
255 out of 256 service interfaces
▶Behavioral Spec Lines: 3,629 lines
Lines in BEHAVIORAL_SPEC.txt — auto-generated from test suite output via make bdd. The tests ARE the documentation.
▶Behavioral Spec Age: 7 days
Days since BEHAVIORAL_SPEC.txt was last regenerated. Fresher = more trustworthy.
▶README Lines: 316 lines
Lines in README.md — the front door to the project.
▶README Has Quickstart: 1 count
Whether README.md contains a Quick Start section. New developers need this.
▶README Quickstart Steps: 5 count
Number of steps in the quickstart section. Fewer steps = lower barrier to entry.
▶Changelog Matches Version: 1 count
Whether the CHANGELOG.md header version matches the VERSION file. Stale changelog = stale documentation.
▶Changelog Lines: 2,279 lines
Total lines in CHANGELOG.md. A substantial changelog shows disciplined release documentation.
▶Changelog Version Sections: 45 count
Number of ## Version headers in CHANGELOG.md. Each section = one documented release.
▶Changelog Age: 0 days
Days since CHANGELOG.md was last modified. A fresh changelog = active release documentation.
AI Adoption
▶AI-Assisted Commits: 2,023 count
Git commits with a Co-Authored-By: Claude header.
2,023 out of 2,663 total commits
▶Total Commits: 2,663 count
Total git commits in the repository.
▶AI Commit Percent: 76 percent
Percentage of commits that were AI-assisted.
Git Discipline
▶Git Fix Commit Count: 231 count
Commits whose message starts with "Fix" — corrective work rather than new development.
▶Git Fix Ratio: 8.70 percent
Percentage of commits whose message starts with "Fix". Info only — commit messages are not atomic, so this is context, not a quality signal.
▶Git Rename Commit Count: 75 count
Commits that rename code. Active renaming shows naming discipline — zero renames is a smell.
▶Git Refactor Commit Count: 33 count
Commits that refactor, extract, or decompose code. Proactive maintenance activity.
▶Git Refactor Ratio: 1.20 percent
Percentage of commits that are refactoring/extraction work.
Git commit atomicity is not scored. Commits in this codebase are stream-of-consciousness snapshots — not atomic units of work. The PR is the atomic unit. Penalizing multi-file commits or attributing commit-message keywords (like "fix") to all files in a commit produces misleading results. Commit atomicity and fix ratio are tracked as context but carry zero weight.
Grayed-out scores are listed for completeness but are either not yet measurable via static analysis, are cross-referenced from another section, or are pending external tooling (e.g. CodeRabbit subscription).
All scores are deterministic. No AI, no heuristics, no sampling. Same codebase = same scores, every time. If a score seems wrong, the formula is documented and can be audited.