Neurospeech Middleware

Speak naturally. Be understood clearly.

Helping you be understood, not just heard

DysVoxa Neurospeech Platform converts dysarthric speech into intelligible, identity-preserving output in real time and routes it to your virtual microphone for Zoom, Teams, Meet, and more.

Core signal path

Mic input -> Voice Activity Detection (VAD) -> Automatic Speech Recognition (ASR) -> Dysarthria correction -> Text-to-Speech (TTS) -> Virtual microphone
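
The chain above can be sketched as a simple sequence of stages. Everything below is an illustrative placeholder, not the actual DysVoxa API: the function names, the toy correction table, and the energy threshold are all assumptions made for the sketch.

```python
# Illustrative sketch of the signal path. All names are placeholders,
# not the real DysVoxa implementation.

def vad(frame):
    """Voice activity detection: pass frames that contain speech."""
    return frame if any(abs(s) > 0.01 for s in frame) else None

def asr(frame):
    """ASR stub: audio frame -> raw (possibly dysarthric) transcription."""
    return "hllo wrld"  # stand-in output

def correct(text):
    """Dysarthria-correction stub: normalize the raw transcription."""
    fixes = {"hllo": "hello", "wrld": "world"}  # toy lookup table
    return " ".join(fixes.get(w, w) for w in text.split())

def tts(text):
    """TTS stub: text -> synthesized audio frame."""
    return [0.0] * len(text)

def pipeline(frame):
    """Mic input -> VAD -> ASR -> correction -> TTS -> virtual mic."""
    speech = vad(frame)
    if speech is None:
        return None  # silence: nothing is routed to the virtual mic
    text = correct(asr(speech))
    return tts(text)

print(pipeline([0.2, -0.3, 0.1]))  # synthesized frame for "hello world"
print(pipeline([0.0, 0.0, 0.0]))  # None: VAD gates out silence
```

In a real deployment each stub would be a streaming model, but the gating role of VAD and the text-level correction step sit in the same places.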

Why DysVoxa matters

Dysarthria affects millions globally, from stroke survivors to people living with Parkinson's disease, ALS, cerebral palsy, and cerebellar ataxia. The problem is real and urgent:

The Communication Gap

Standard speech AI systems fail on dysarthric patterns. Existing solutions either don't work or erase speaker identity, leaving people isolated in work and healthcare settings.

The Real Impact

Difficulty being understood triggers communication anxiety, social withdrawal, and barriers to employment and medical care access. The psychological toll compounds the neurological one.

Recovery Window

After stroke or brain injury, neuroplasticity creates a recovery window. Real-time communication support during that period can reduce isolation and support rehabilitation outcomes.

Identity-Preserving Tech

Most assistive speech tech depersonalizes users. DysVoxa works differently: it improves clarity while protecting personal communication style and voice identity.

This is not about "fixing" dysarthria. It's about restoring access—to work, to healthcare, to relationships—during a critical window when both technology and clinical support matter most.

Investors

Market and growth thesis

Explore the unmet assistive communication segment, business model, milestone roadmap, and funding ask.

Open investor page

Speech Therapists

Clinical collaboration

Review the pilot protocol and intelligibility metrics. At this stage, SLP involvement is limited to app testing.

Open SLP page

Patients

Daily communication support

Understand setup steps, supported dysarthria types, privacy options, and call-app compatibility.

Open patient page

Target latency profile

Near-conversational real-time

Modeling coverage

6 dysarthria phenotypes

Deployment mode

Local-first with secure cloud fallback
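
A local-first policy with cloud fallback is often just a try-local-then-escalate dispatcher. The sketch below is a hedged illustration under that assumption: the function names, the capacity check, and the `allow_cloud` flag are all hypothetical, not the shipped DysVoxa code.

```python
# Hypothetical local-first dispatcher with secure cloud fallback.
# Names and the toy capacity limit are illustrative assumptions.

def process_locally(audio):
    """Stand-in for the on-device model; fails past a capacity limit."""
    if len(audio) > 4:  # pretend longer clips exceed local capacity
        raise RuntimeError("local capacity exceeded")
    return f"local:{len(audio)}"

def process_in_cloud(audio):
    """Stand-in for the encrypted cloud path."""
    return f"cloud:{len(audio)}"

def route(audio, allow_cloud=True):
    """Prefer the local model; use the cloud only when permitted."""
    try:
        return process_locally(audio)
    except RuntimeError:
        if not allow_cloud:
            raise  # privacy-strict mode: never leave the device
        return process_in_cloud(audio)

print(route([1, 2, 3]))        # handled locally
print(route([1, 2, 3, 4, 5]))  # falls back to the cloud
```

The `allow_cloud` flag models the privacy option: users who opt out of cloud processing stay fully local, at the cost of failing on inputs the device cannot handle.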

Current project state

  • Core codebase is active, with ongoing real-time pipeline improvements.
  • Personal voice-card profile creation is in progress.
  • The system is being tested locally against practical speech scenarios.
  • Progress is currently constrained by available hardware capacity.
  • Funding is being sought to scale development and validation.