Motor Truck Cargo · Data Intelligence Platform

The data to price cargo risk correctly has always existed. It was just never in one place.

Motor truck cargo is one of the most complex property & casualty lines to price. Losses happen through crashes, yes — but also through heavy braking, spoilage, seal failures, and fictitious pickups. Fleetidy compiles everything that matters — public safety data, application intelligence, and claims outcomes — into a single platform where actuarial teams can build a pricing model that sharpens over time.

800K+ FMCSA Carriers · Every Loss Type · Three Data Layers · Iterative Pricing Model

Layer One · Public Data

800,000 carriers. Seven federal datasets. A decade of safety history.

The FMCSA publishes complete operating history on every interstate motor carrier in the country. Census records, inspection reports, roadside violations, crash incidents, BASIC scores, authority history, and operating status. Fleetidy compiles all of it, normalizes it against fleet size and mileage exposure, and applies empirical weights validated against actual loss correlations. This is the baseline every carrier gets scored against before a single application is filed.

Scored

Crash-per-Power Unit

Normalized crash frequency adjusted for fleet size and mileage exposure

Scored

Behavioral Risk Index

Weighted violation portfolio across 30+ violation types with empirical relative-risk multipliers

Composite

FRED Peer Index

Percentile rank within fleet-size peer group — a single number from 0 to 100
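The three Layer 1 metrics above can be sketched in code. This is an illustrative outline only: the function names, the per-million-miles scaling, the default violation weight of 1.0, and the percentile convention are all assumptions, not Fleetidy's actual formulas.

```python
from bisect import bisect_left

def crash_per_power_unit(crashes: int, power_units: int, miles: float) -> float:
    """Crash frequency normalized for fleet size and mileage exposure.
    Expressed here per power unit per million miles (illustrative scaling)."""
    if power_units == 0 or miles == 0:
        return 0.0
    return crashes / power_units / (miles / 1_000_000)

def behavioral_risk_index(violations: dict[str, int],
                          weights: dict[str, float]) -> float:
    """Weighted violation portfolio: each violation type's count multiplied
    by an empirical relative-risk weight, then summed."""
    return sum(count * weights.get(vtype, 1.0)
               for vtype, count in violations.items())

def peer_percentile(score: float, peer_scores: list[float]) -> int:
    """Percentile rank (0-100) of a carrier's composite score within its
    fleet-size peer group."""
    ranked = sorted(peer_scores)
    return round(100 * bisect_left(ranked, score) / len(ranked))
```

The peer-group percentile is what makes the composite a "single number from 0 to 100": each carrier is ranked only against fleets of comparable size, so a 20-truck operator is never compared to a 2,000-truck national fleet.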

Beyond Crashes

Cargo is lost in many ways. You need to see all of them before you know which ones are predictable.

Crashes are the most visible failure mode — and FMCSA data captures them well. But cargo is also lost through heavy braking, spoilage, broken seals, fictitious pickups, fire, and dozens of other causes. The only way to know which loss types have predictive triggers is to collect claims data across all of them. Once you have the full picture, the data itself reveals which causes can be managed through underwriting decisions — terms, exclusions, pricing, declination — and which are essentially random perils. For the loss causes that turn out not to be predictable, you still need to know their loss cost per unit of insurance to confirm that the rate is adequate.

Loss causes Fleetidy captures

Crashes

Collisions, rollovers, jackknives. Already well-covered by Layer 1 FMCSA data — crash frequency, violation history, and safety scores.

Inertial

Hard braking, rapid acceleration, sharp turns — cargo shifts or is damaged without any collision. Requires loss-type data from applications.

Adulteration

Spoilage, rusting, contamination, broken seals, refrigeration failure. Highly commodity-specific.

Theft & Fraud

Cargo theft, fictitious pickups, hijacking. A rising loss category correlated with lane, commodity value, and operational profile.

Fire & Weather

Vehicle fires, severe weather events, natural disasters. May or may not have carrier-specific predictors — the data will tell.

Other Perils

Mechanical failure, warehouse incidents, and any other cause. Captured so the full loss cost per unit of insurance is known.

The principle: collect claims data across every loss type. The data reveals which causes have predictive triggers — those get built into underwriting rules. The causes that turn out to be unpredictable still matter: their loss cost per unit of insurance sets the floor for rate adequacy. Both outcomes require Layers 2 and 3.
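The rate-adequacy side of that principle reduces to a pure-premium calculation. A minimal sketch, assuming an exposure base of $100 of insured cargo value; the function name and the dollar figures are hypothetical, not drawn from Fleetidy:

```python
def loss_cost_per_unit(incurred_losses: float, exposure_units: float) -> float:
    """Pure premium: total incurred losses divided by exposure units
    (here, units of $100 of insured cargo value)."""
    return incurred_losses / exposure_units

# Even a "random" peril sets a floor on the rate. Suppose fire and weather
# produce $180,000 of losses against 1,200,000 exposure units:
floor = loss_cost_per_unit(180_000, 1_200_000)  # 0.15 per unit
```

Predictable causes get underwriting rules; unpredictable causes get a loss-cost floor like this one. Both require the claims data Layers 2 and 3 collect.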

The Architecture

Three data layers. Each one builds on the last.

Predictive pricing requires more than a snapshot. It requires a model that accumulates institutional knowledge over time — what risks were submitted, what was written, what was declined, what claims resulted, and how those outcomes compare to the initial risk assessment. Fleetidy is structured as three compounding data layers.

Layer 1 · Public Data

Active

FMCSA census, inspections, crashes, violations, BASIC scores, and authority history for 800K+ interstate carriers.

Normalized, scored, and available at point-of-quote. No application required.

DOT Census · Crash Records · Inspections · BASIC Scores · Violations · Authority

Layer 2 · Application Data

Grows With Every Submission

Structured data from every application — written and not-written. Both the risks you bound and the risks you declined contain pricing signal.

Commodities, lanes, coverage limits, loss run data, premium history, and reasons for decline.

Commodity Profile · Coverage Structure · Loss Run History · Geographic Ops · Decline Reasons

Layer 3 · Claims Outcomes

Closes The Loop

One year after binding, what actually happened? Claims frequency, severity, and loss type are fed back into the model to validate and sharpen Layer 1 and Layer 2 signals.

This is the feedback loop that makes predictive pricing possible.

Loss Type · Claim Severity · Frequency · L1 Correlation · L2 Correlation

Institutional Knowledge Accumulates

Every submission adds a data point. Over time, the platform holds a complete picture of what you have seen, what you have written, and why. That picture is searchable, analyzable, and exportable.

Actuarial Analysis Without Manual Extraction

Instead of pulling spreadsheets from three different systems, the data is already normalized and joined. Actuaries and data scientists work directly on clean, structured output — with or without ML tooling.

Each Cycle Sharpens The Next

As claims outcomes arrive, the model's predictive correlations are validated or corrected. Variables that show actuarial lift get elevated. Variables that show no signal get deprioritized. The model improves with every underwriting cycle.

Submission Management · Layer 2 Collection

Every application is a data asset. Treat it like one.

The submission pipeline is the primary interface for collecting Layer 2 data. Every application — whether quoted, bound, referred, or declined — contributes commodity profiles, coverage structures, loss run histories, and operational context that feed the pricing model. Dual-client workflows ensure both broker intent and underwriter reasoning are captured, not just the final outcome.

Broker Dashboard

Create applications with structured data capture — commodities, lanes, coverage needs, and loss history. Submit to the underwriting pool when ready. Track every submission through quoting, binding, and issuance.

Underwriter Dashboard

Claim submissions from the incoming pool. Access the carrier's FRED Score alongside application data at point-of-quote. Rate, quote, refer, or decline — every decision is recorded with structured reasoning.

Shared Messaging

Threaded conversations between broker and underwriter, directly on the submission record. No context lost. No forwarded chains. Every message tied to the account it belongs to.

File Uploads & Auto-Classification

Upload or email documents into any submission. Files are automatically sorted by type — loss runs, certificates, binders, photos — so the submission file stays organized without manual tagging.

Tiered roles ensure brokers submit, underwriters quote, managers oversee their teams, and admins control the platform. Referrals are structured with reason codes and tracked. Every action is auditable.

Security · Protecting Institutional Data

The value is in the data. The security model reflects that.

Fleetidy accumulates proprietary pricing intelligence — loss histories, decline reasons, commodity correlations, and carrier assessments that do not exist in any public dataset. The security architecture is designed to protect that institutional knowledge with enforced access controls, full audit trails, and insurance-grade session management.

Two-Factor Authentication

TOTP-based 2FA with QR provisioning and backup codes. Enforced at login and available for all user roles.

Role-Based Access Control

API-level enforcement of role permissions. Brokers cannot access underwriter actions. Managers see only their team. Admins have a complete audit view.

Full Audit Trail

Every login, submission, status change, quote, and document upload is logged with timestamps, user IDs, and IP addresses. Searchable and filterable by category and date range.

Secure Session Management

Bcrypt password hashing, CSRF protection, secure cookie configuration, and session expiration policies. XSS-safe output escaping throughout all templates.

Underwriting Workspace · Where Data Becomes Decisions

Everything an underwriter needs. One record.

When an underwriter claims a submission, they get a tabbed workspace with the full picture: application details, the carrier's FRED Score and safety profile, messaging with the broker, private notes, referral tools, a rating calculator, and a quote builder. Every decision — quote, refer, decline — is recorded with structured reasoning that feeds Layer 2.

Integrated Rating

Enter trucking revenue, storage revenue, and rate percentage. Premium calculates automatically. Push directly into a formatted quote.
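The rating step described above reduces to a one-line calculation. A minimal sketch, assuming the rate percentage applies to combined trucking and storage revenue; the function name and figures are illustrative, not the platform's actual rating logic:

```python
def calculate_premium(trucking_revenue: float,
                      storage_revenue: float,
                      rate_pct: float) -> float:
    """Premium as a flat percentage of combined revenue."""
    return (trucking_revenue + storage_revenue) * rate_pct / 100

# e.g. $800K trucking + $200K storage at a 0.5% rate:
premium = calculate_premium(800_000, 200_000, 0.5)  # 5000.0
```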

Structured Referrals

Refer by capacity, authority concern, risk complexity, or specialty. Set priority, designate the recipient, and add context notes. The referral trail stays on the record.

Carrier Intelligence Map

Every submission links to the carrier's full FMCSA profile — geospatial HQ mapping, inspection time series, safety metrics, and the FRED Score — right on the Insured Info tab.

Decline with Transparency

When a risk does not fit, decline with a documented reason that is sent directly to the broker. No ambiguity. No lost follow-ups. The decline reason becomes a data point in Layer 2.

[Underwriter Workspace mockup: tabs for Application, Insured Info, Messages, Notes, Rate, and Quote; FRED Score 38; status Working; effective 03/01/2026; documents auto-classified as Loss Runs (loss_runs_2024.pdf), Certificate (cert_of_insurance.pdf), and Photos (warehouse_photos.zip); quick actions Send Quote · Refer · Decline.]

Start building your pricing model

Your first data layer is ready. Start compiling the rest.

The FMCSA foundation is already scored and waiting. Sign up to begin adding your application data, your underwriting decisions, and your claims outcomes to a system designed to turn institutional knowledge into predictive pricing.