Scientific foundation

The evidence case for PulseGuard.

This page summarizes what peer-reviewed mouthguard validation, concussion consensus work, and on-field data-screening studies actually support. The goal is not to overclaim. It is to show why a mouthguard-first, multi-signal workflow is a strong product direction for real teams.

Primary sources only · Peer-reviewed · Decision support, not diagnosis


Measurement fidelity

A mouthguard-first system builds on the best-supported sensor location in the literature.

Instrumented mouthguards can capture linear and rotational head kinematics close to the skull. Validation studies show that strong agreement with reference measurements is achievable, while follow-up work shows that dentition coupling and fit are decisive for accuracy.
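As a concrete illustration of what "linear and rotational kinematics" means at the data level, here is a minimal sketch of reducing a mouthguard IMU trace to peak resultant values. The sample format and function name are hypothetical, not part of any PulseGuard API.

```python
import math

# Hypothetical sample format: each reading holds linear acceleration (g)
# and angular velocity (rad/s) along three axes from the mouthguard IMU.
def peak_kinematics(samples):
    """Return peak resultant linear acceleration (g) and angular velocity (rad/s)."""
    peak_lin = 0.0
    peak_ang = 0.0
    for ax, ay, az, wx, wy, wz in samples:
        lin = math.sqrt(ax**2 + ay**2 + az**2)  # resultant linear acceleration
        ang = math.sqrt(wx**2 + wy**2 + wz**2)  # resultant angular velocity
        peak_lin = max(peak_lin, lin)
        peak_ang = max(peak_ang, ang)
    return peak_lin, peak_ang

# Example: a short synthetic impact trace
trace = [(2.0, 1.0, 2.0, 5.0, 0.0, 0.0), (30.0, 40.0, 0.0, 10.0, 10.0, 5.0)]
print(peak_kinematics(trace))  # (50.0, 15.0)
```

These peaks are only the starting point; the validation literature is precisely about how faithfully such numbers track skull motion, which depends on coupling and fit.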

What this means for PulseGuard

PulseGuard is designed around a mouthguard-centered measurement path instead of a loosely coupled external mount, because the literature keeps pointing back to skull coupling as the critical foundation.

Clinical reality

Concussion decisions are strongest when they are multimodal, not single-signal.

The consensus literature does not support diagnosing concussion from one number alone. Acute assessment works best when impact context is combined with cognition, balance, vestibular or oculomotor findings, symptoms, and trained review.
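The multimodal principle can be sketched as a simple combination rule: no single input triggers escalation on its own, but several concurrent signals do. All thresholds and names below are illustrative assumptions, not clinical cutoffs or PulseGuard logic.

```python
# Hypothetical decision-support rule: combine impact mechanics with
# sideline findings; two or more concurrent signals escalate to review.
def escalation_flag(peak_accel_g, symptom_count, balance_abnormal):
    score = 0
    if peak_accel_g >= 40.0:   # illustrative impact threshold, not a clinical cutoff
        score += 1
    if symptom_count >= 2:     # athlete-reported symptoms
        score += 1
    if balance_abnormal:       # sideline balance/vestibular screen
        score += 1
    return score >= 2          # flag for trained review, never a diagnosis
```

The point of the structure is that the output is a review flag, not a diagnosis: a large impact with no other findings does not escalate by itself, mirroring the consensus position.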

What this means for PulseGuard

PulseGuard is positioned as a decision-support layer: impact mechanics plus physiologic context plus review workflow, built to support earlier escalation rather than replace clinical judgment.

Field deployment

Raw on-field impact data is not enough; screening and verification matter.

Recent on-field studies show that raw mouthguard event streams contain spurious recordings and coupling problems. Data becomes meaningfully decision-grade only after screening, verification, and context-aware review.
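A screening pass of this kind can be sketched as a filter over raw events: discard sub-threshold noise outright and route implausible waveforms to manual verification. The field names and thresholds are assumptions for illustration only.

```python
# Hypothetical screening pass over raw mouthguard events: drop sub-threshold
# noise, flag implausibly short pulses (a common coupling artifact) for review.
def screen_events(events, min_peak_g=8.0, min_duration_ms=4.0):
    accepted, flagged = [], []
    for e in events:  # e: dict with "peak_g" and "duration_ms"
        if e["peak_g"] < min_peak_g:
            continue                  # below recording threshold: discard as noise
        if e["duration_ms"] < min_duration_ms:
            flagged.append(e)         # implausibly short pulse: likely poor coupling
        else:
            accepted.append(e)
    return accepted, flagged
```

Only the accepted stream would feed downstream metrics; the flagged stream goes to human verification, which is the step that makes the data decision-grade.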

What this means for PulseGuard

PulseGuard is framed as an operational workflow, not just a sensor output. That makes the product direction stronger for real teams, because evidence quality depends on process as much as hardware.


Validation boundary

What this page does not claim.

Strong evidence does not mean unlimited claims. The most credible position for PulseGuard is to help teams detect, contextualize, and escalate faster while keeping medical judgment in the loop.

Not a diagnosis by itself

No sensor stream alone should claim to diagnose concussion. The evidence supports better screening and faster review, not replacing trained clinical assessment.

Fit and coupling stay critical

Mouthguard data quality depends on how securely the device couples to the dentition during live sport. Poor coupling reduces accuracy and can inflate false-event counts.

Operational workflow decides value

A good product needs more than a sensor. Review rules, escalation logic, and human follow-up are what turn raw signals into trustworthy decisions.