ASP Project — Code Analysis Report

Auto Service Planning domain engine · Generated 2026-04-29 · 801 commits · main branch · Excludes fixtures, docs, migrations, data files, markdown, config (TOML/YAML/JSON), Dockerfiles

- 105,753 lines of code
- 572 source files
- 18,893 comment lines
- 99.1% of functions ≤10 complexity
- 7 languages
- 5.3 MB source size

Charts (not reproduced here): Lines of Code by Language · Code Distribution by Module · Code vs Comments vs Blanks · Function Complexity Distribution
Language Breakdown

Language   | Files | Code   | Comments | Blanks | Total   | Complexity | Share
Python     | 358   | 74,979 | 16,925   | 9,109  | 101,013 | 3,549      | 70.9%
TypeScript | 162   | 24,810 | 1,433    | 2,820  | 29,063  | 2,916      | 23.5%
HTML       | 38    | 4,080  | 123      | 423    | 4,626   | 0          | 3.9%
JavaScript | 9     | 1,052  | 115      | 97     | 1,264   | 167        | 1.0%
CSS        | 2     | 515    | 67       | 90     | 672     | 0          | 0.5%
Makefile   | 2     | 158    | 184      | 53     | 395     | 81         | 0.1%
BASH       | 1     | 159    | 46       | 38     | 243     | 49         | 0.2%

Module Breakdown

Module                  | Files | Code   | Comments | Complexity | Comment Ratio | Share
Domain Engine (asp/)    | 107   | 13,792 | 7,953    | 1,255      | 36.6%         | 13.0%
Test Suite (tests/)     | 113   | 30,899 | 3,140    | 717        | 9.2%          | 29.2%
Django Server (server/) | 167   | 32,454 | 5,688    | 1,614      | 14.9%         | 30.7%
Workbench Frontend      | 142   | 22,699 | 1,111    | 2,731      | 4.7%          | 21.5%
Scripts & Audits        | 7     | 1,229  | 305      | 130        | 19.9%         | 1.2%

Complexity Analysis (per function)

Cyclomatic complexity measures the number of independent paths through a function — each if, for, except, or case adds one path. Lower is simpler. This project enforces a per-function limit of 10 (McCabe / NIST standard) via radon. Measured across 2,005 production Python functions (tests and seeders excluded — assertion-heavy and linear field population by design).
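As a rough illustration of how the count works, a simplified McCabe estimate can be computed with Python's standard-library ast module. This is a sketch, not radon's actual implementation (which handles more node types and scores each function separately): start at 1, then add one for each decision point.

```python
import ast

def approx_cyclomatic_complexity(source: str) -> int:
    """Rough McCabe estimate: 1 plus one per decision point."""
    complexity = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.IfExp, ast.For, ast.While,
                             ast.ExceptHandler, ast.comprehension)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # `a and b and c` adds two paths, one per extra operand
            complexity += len(node.values) - 1
    return complexity

SAMPLE = """
def classify(x):
    if x < 0:              # +1
        return "negative"
    for i in range(x):     # +1
        if i % 2:          # +1
            pass
    return "done"
"""
print(approx_cyclomatic_complexity(SAMPLE))  # -> 4
```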

- 99.1% of functions ≤10 (McCabe/NIST threshold)
- 2.8 average complexity (across 2,005 functions)
- 2 median complexity (50th percentile)
- 8 at the 95th percentile (hotspot indicator)
- 20 highest single function (_assemble_reservation_intents)
- ≤10 per-function limit (enforced by radon)
Grade | Complexity Range | Description     | Functions | Distribution
A     | 1-5              | Simple          | 1,732     | 86.4%
B     | 6-10             | Well-structured | 255       | 12.7%
C     | 11-20            | Moderate        | 18        | 0.9%
D     | 21-30            | Complex         | 0         | 0.0%
E     | 31-40            | High risk       | 0         | 0.0%
F     | 41+              | Unmaintainable  | 0         | 0.0%
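The grading can be expressed as a small lookup. This helper is illustrative, not part of the project; the thresholds follow radon's documented rank scale, and the function counts come from the table above.

```python
def cc_grade(complexity: int) -> str:
    """Letter grade for a cyclomatic complexity score.

    Thresholds per radon's documented rank scale:
    A 1-5, B 6-10, C 11-20, D 21-30, E 31-40, F 41+.
    """
    for grade, upper in (("A", 5), ("B", 10), ("C", 20), ("D", 30), ("E", 40)):
        if complexity <= upper:
            return grade
    return "F"

# Function counts per grade, taken from the distribution table above.
functions = {"A": 1_732, "B": 255, "C": 18}
within_limit = functions["A"] + functions["B"]  # grades A and B are <= 10
print(f"{within_limit / 2_005:.1%}")            # -> 99.1%
```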

How to Read This Section

What this measures:
Each number is the cyclomatic complexity of a single function or method, measured by radon (the industry-standard McCabe implementation for Python — same algorithm used by SonarQube, Code Climate, and NDepend). The McCabe/NIST recommended threshold is ≤10 per function — this project enforces it via radon on every commit (pre-commit hook). Grades follow radon’s standard scale: A (simple) through F (unmaintainable).
Why per-function, not aggregate:
Some tools report a project-wide total (e.g. “5,053”) — this is misleading because it grows with codebase size regardless of quality. The industry standard (SonarQube, Code Climate) is to report the distribution: what percentage of functions fall under the threshold, plus median and 95th percentile to identify hotspots.

Key Ratios

- 2.2:1 test-to-domain ratio
- 18% comment density
- 184 avg LOC per file
- 99.1% of functions ≤10 complexity
- 30% / 70% domain/server split
- 2.0:1 backend : frontend
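These cards are plain arithmetic over the breakdown tables; the dict below just restates the module figures from earlier in this report.

```python
# Reproducing the "Key Ratios" cards from the breakdown tables above.
code = {
    "domain": 13_792,    # Domain Engine (asp/)
    "tests": 30_899,     # Test Suite (tests/)
    "server": 32_454,    # Django Server (server/)
    "frontend": 22_699,  # Workbench Frontend
}
total_code, total_comments, total_files = 105_753, 18_893, 572
backend = code["domain"] + code["server"]

print(f"{code['tests'] / code['domain']:.1f}:1")  # -> 2.2:1 test-to-domain
print(f"{total_comments / total_code:.0%}")       # -> 18% comment density
print(total_code // total_files)                  # -> 184 avg LOC per file
print(f"{code['domain'] / backend:.0%} / {code['server'] / backend:.0%}")  # -> 30% / 70%
print(f"{backend / code['frontend']:.1f}:1")      # -> 2.0:1 backend:frontend
```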

Development Cost & Effort Estimation

Based on the Basic COCOMO model (organic mode, Boehm 1981). COCOMO is an industry-standard formula that estimates the effort, time, and cost to build software from scratch based on its size in lines of code. Background reading: "What is COCOMO?" (Wikipedia) and "COCOMO Explained" (GeeksforGeeks).

- $3.61M estimated total cost. Fully-loaded cost: salary ($56,286/yr) multiplied by 2.4× to cover the real cost of employment (benefits, office space, equipment, management, HR, training, and admin overhead). Industry standard: total cost is typically 2–3× base salary.
- 22.4 months schedule estimate (elapsed calendar time)
- 320.4 person-months of effort (~26.7 person-years)
- 14.3 average team size required

Charts (not reproduced here): Estimated Cost by Module · Effort & Team Size by Module

Module             | SLOC    | Est. Cost  | Schedule | Person-Mo | Team Size | $/SLOC | Share
Domain Engine      | 13,792  | $424,819   | 9.93     | 37.74     | 3.80      | $30.80 | 13.0%
Test Suite         | 30,899  | $990,916   | 13.71    | 88.03     | 6.42      | $32.07 | 29.2%
Django Server      | 32,454  | $1,043,343 | 13.98    | 92.69     | 6.63      | $32.15 | 30.7%
Workbench Frontend | 22,699  | $716,807   | 12.12    | 63.68     | 5.25      | $31.58 | 21.5%
Scripts & Audits   | 1,229   | $33,545    | 3.79     | 2.98      | 0.79      | $27.29 | 1.2%
Total              | 105,753 | $3,606,639 | 22.39    | 320.42    | 14.31     |        | 100%

Column notes:
- SLOC: source lines of code (actual code lines, excluding blanks and comments). This is the COCOMO input.
- Est. Cost: estimated cost to build from scratch, using average salary ($56,286/yr) multiplied by 2.4× overhead (benefits, facilities, management, equipment, HR).
- Schedule: calendar months of elapsed wall-clock time. Not additive, since modules are built in parallel by a team.
- Person-Mo: person-months of total human effort (3 developers working for 4 months equals 12 person-months). This IS additive across modules.
- Team Size: average number of developers needed simultaneously, calculated as Person-Months ÷ Schedule.
- $/SLOC: cost per source line of code. Roughly consistent (~$30) because COCOMO scales nearly linearly at this codebase size.

Understanding These Estimates

What is COCOMO?
COCOMO (Constructive Cost Model) is the most widely-used software cost estimation model, created by Barry Boehm in 1981. It takes the number of lines of code and applies empirically-derived formulas to estimate how many person-months of effort, calendar months, and developers it would take to build the software from scratch. The “organic” mode used here assumes a small-to-medium team working with familiar technology.
The formulas (COCOMO Basic, organic mode):
Effort = 2.4 × (KSLOC)^1.05 person-months
Schedule = 2.5 × (Effort)^0.38 months
Team = Effort ÷ Schedule
Cost = Effort × ($56,286/yr salary × 2.4× overhead)
KSLOC = thousands of source lines of code
Why Schedule = 2.5 × Effort^0.38?
Development time grows with effort, but not in a straight line. Why not just Time = Effort ÷ People? Because that assumes work is perfectly divisible — which software work is not. Just as 10 chefs cannot cook a meal in 1/10th the time (they’d be tripping over each other in the kitchen), adding developers speeds things up but also adds communication overhead, coordination delays, and dependencies between tasks. The 0.38 exponent captures this: as effort goes up, time goes up too — but slower, because some work runs in parallel. The 2.5 constant is a calibration factor derived by Barry Boehm from 63 real projects, anchoring the formula to match observed durations.
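Under the stated assumptions (organic-mode coefficients, $56,286 salary, 2.4× overhead), the report's headline figures can be reproduced in a few lines. The helper below is an illustrative sketch, not part of the project.

```python
def cocomo_organic(sloc: int, salary: float = 56_286.0, overhead: float = 2.4):
    """Basic COCOMO, organic mode (Boehm 1981)."""
    ksloc = sloc / 1000
    effort = 2.4 * ksloc ** 1.05               # person-months
    schedule = 2.5 * effort ** 0.38            # elapsed calendar months
    team = effort / schedule                   # average concurrent developers
    cost = effort * (salary / 12) * overhead   # fully-loaded monthly rate
    return effort, schedule, team, cost

effort, schedule, team, cost = cocomo_organic(105_753)
print(f"{effort:.1f} PM, {schedule:.1f} mo, {team:.1f} devs, ${cost / 1e6:.2f}M")
# -> 320.4 PM, 22.4 mo, 14.3 devs, $3.61M
```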
What is the 2.4× overhead?
The COCOMO model multiplies raw salary by 2.4× to reflect the fully-loaded cost of employment. This covers: office space and facilities, health insurance and benefits, management and HR, equipment and software licenses, training, recruitment, and administrative overhead. The 2.4× multiplier is a standard industry figure — in practice, the total cost of a developer to a company is typically 2–3× their base salary.
What it doesn't capture:
AI-assisted development, domain expertise ramp-up, data fixture creation, deployment infrastructure, or the 801 commits of iterative refinement and domain expert consultation across the project lifetime.
Actual vs estimated:
This 105,753 SLOC codebase was built by 1 developer + AI in ~35 days (801 commits since 2026-03-25). COCOMO estimates 14.3 developers over 22.4 months — demonstrating significant productivity leverage from AI-assisted domain-driven development.

Productivity Metrics

- 801 total commits (since 2026-03-25)
- ~35 days actual dev time (vs 22.4 months with 14.3 devs)
- 3,021 LOC per day (code lines only)
- 22.9 commits per day (atomic commits)
- 132 avg LOC per commit (code lines only)
- ~274x productivity multiplier (vs COCOMO baseline)
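These cards follow from simple division. The sketch below assumes 30-day months and integer truncation for the multiplier, which matches the report's rounding.

```python
# Arithmetic behind the productivity cards: 105,753 LOC, 801 commits, ~35 days.
loc, commits, days = 105_753, 801, 35
cocomo_person_months = 320.42                    # from the COCOMO section above

print(loc // days)                               # -> 3021 LOC per day
print(round(commits / days, 1))                  # -> 22.9 commits per day
print(loc // commits)                            # -> 132 avg LOC per commit
print(int(cocomo_person_months / (days / 30)))   # -> 274x vs COCOMO baseline
```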

Top 25 Largest Files

#  | File                                                                     | Lang       | Code  | Comments | Cmplx
1  | server/core/admin.py                                                     | Python     | 3,163 | 98       | 64
2  | server/network/sinks.py                                                  | Python     | 2,559 | 536      | 91
3  | server/sandbox/views.py                                                  | Python     | 2,286 | 154      | 138
4  | tests/django/network/test_network_seeder.py                              | Python     | 2,197 | 408      | 111
5  | tests/domain/p2p_planning/test_regen_services.py                         | Python     | 1,831 | 222      | 31
6  | tests/django/django_models/test_django_repositories.py                   | Python     | 1,550 | 192      | 21
7  | tests/domain/architecture_classification/test_services.py                | Python     | 1,404 | 99       | 40
8  | tests/django/django_models/test_dual_repo_parity.py                      | Python     | 1,320 | 36       | 5
9  | server/engine/test_contract.py                                           | Python     | 1,190 | 55       | 23
10 | tests/domain/equipment/test_services.py                                  | Python     | 1,180 | 29       | 3
11 | asp/p2p_planning/infrastructure/django_repositories.py                   | Python     | 1,078 | 102      | 38
12 | workbench-v2/frontend/src/components/PlanDetailPage.tsx                  | TypeScript | 1,038 | 21       | 39
13 | tests/django/django_models/test_core_models.py                           | Python     | 902   | 24       | 18
14 | workbench-v2/frontend/src/__tests__/fixtures.ts                          | TypeScript | 893   | 17       | 0
15 | server/network/baseline_ort.py                                           | Python     | 845   | 57       | 154
16 | asp/equipment/infrastructure/django_repositories.py                      | Python     | 833   | 221      | 77
17 | workbench-v2/frontend/src/components/Graph.tsx                           | TypeScript | 805   | 64       | 164
18 | tests/domain/p2p_planning/test_services.py                               | Python     | 804   | 36       | 10
19 | server/engine/django_seeder.py                                           | Python     | 796   | 94       | 20
20 | server/engine/sandbox_response_shapes.py                                 | Python     | 771   | 55       | 0
21 | workbench-v2/frontend/src/components/delivery-view/deliveryPageTransform.ts | TypeScript | 767 | 23       | 255
22 | tests/domain/p2p_planning/test_intra_routing_walker.py                   | Python     | 732   | 23       | 3
23 | asp/p2p_planning/domain/regen_slice_walker.py                            | Python     | 728   | 194      | 55
24 | server/network/baseline_builder.py                                       | Python     | 728   | 194      | 19
25 | workbench-v2/frontend/src/types.ts                                       | TypeScript | 728   | 25       | 0