How it works
Voice in, structured care out. Seven steps inside SAHAI — and the architecture behind them.
The elder taps a big button and says what they feel. "My head hurts." "Remind me to take my BP medicine at 8." Hindi, English, or both.
Speech becomes text. AI extracts intent, symptom, medicine, time. If it's a critical input, SAHAI confirms — once, gently — before saving.
The event is timestamped and stored. Raw input + structured output, side by side. Nothing is overwritten. Ever.
Reminders go into the reliability service — a deterministic queue separate from AI. They will fire on time even if everything else is slow.
At the right moment, SAHAI speaks: "It's time for your BP medicine." Three big buttons appear: TAKEN, SKIP, REMIND IN 10.
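A deterministic queue like the one described above can be as simple as a time-ordered heap, with REMIND IN 10 re-enqueuing the same reminder ten minutes out. This is a minimal sketch under assumed names, not SAHAI's reliability service:

```python
import heapq
from datetime import datetime, timedelta

class ReminderQueue:
    """Minimal deterministic scheduler: a time-ordered heap, no AI in the loop.
    Class and method names are illustrative assumptions."""
    def __init__(self):
        self._heap = []   # entries: (fire_at, seq, message)
        self._seq = 0     # tie-breaker so equal times pop in FIFO order

    def schedule(self, fire_at: datetime, message: str):
        heapq.heappush(self._heap, (fire_at, self._seq, message))
        self._seq += 1

    def due(self, now: datetime):
        """Pop every reminder whose time has come."""
        fired = []
        while self._heap and self._heap[0][0] <= now:
            _, _, message = heapq.heappop(self._heap)
            fired.append(message)
        return fired

    def handle_response(self, reminder: str, response: str, now: datetime):
        # TAKEN and SKIP end the cycle; REMIND IN 10 simply re-enqueues.
        if response == "REMIND_IN_10":
            self.schedule(now + timedelta(minutes=10), reminder)

q = ReminderQueue()
q.schedule(datetime(2025, 1, 1, 8, 0), "It's time for your BP medicine.")
fired = q.due(datetime(2025, 1, 1, 8, 0))            # fires exactly at 08:00
q.handle_response(fired[0], "REMIND_IN_10", datetime(2025, 1, 1, 8, 0))
```

Because the heap is ordered purely by timestamp, firing never waits on an AI call: slow models delay interpretation, never reminders.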
If a critical medicine is missed, family gets a push. Then a phone call. Then an SMS. Until someone confirms that the elder is okay.
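The escalation ladder above (push, then call, then SMS, until someone confirms) can be sketched as a loop over the channels. The channel functions, the `max_rounds` safety cap, and all names are assumptions for illustration:

```python
import itertools

ESCALATION_LADDER = ["push", "voice_call", "sms"]  # order taken from the text

def escalate(send, confirmed, max_rounds=3):
    """`send(channel)` delivers an alert; `confirmed()` reports acknowledgement.
    Walks the ladder in order, cycling until someone confirms or the
    (assumed) round cap runs out."""
    attempts = []
    for channel in itertools.islice(itertools.cycle(ESCALATION_LADDER),
                                    max_rounds * len(ESCALATION_LADDER)):
        if confirmed():
            break
        send(channel)
        attempts.append(channel)
    return attempts

# Example: family confirms right after the phone call, the second channel.
log = []
acks = iter([False, False, True])
attempts = escalate(lambda ch: log.append(ch), lambda: next(acks))
# attempts stops at ["push", "voice_call"]
```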
Confirmed inputs and repeated patterns flow into long-term structured memory — visible in the family dashboard, exportable for the doctor.
Channels
Voice
Primary. Tap-to-talk in Hindi or English.
Text + voice messages, same processing.
App UI
Big-button fallback when voice fails.
IVR call
Reminder calls with keypad input.
Architecture
AI services interpret. They never directly write to the database. The Data Service is the single source of truth — and the Reliability Service runs reminders without ever asking the AI for permission.
Service 01
The entry point. Authenticates, routes, rate-limits.
Service 02
AI layer only — no database access. Speech, intent, response.
Service 03
The single owner of the database. Append-only, audited.
Service 04
Reminders, escalation, retries. Deterministic by design.
Service 05
Push, voice call, SMS. The chain that must always work.
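The boundary between Services 02 and 03 can be illustrated with two small classes: the AI layer only returns structured output, and the Data Service is the only code that touches storage, exposing append but no update or delete. Class names, the event schema, and the stub interpretation are assumptions, not SAHAI's implementation:

```python
import datetime

class DataService:
    """Single owner of storage (Service 03 above). Append-only:
    events are added, never updated or deleted."""
    def __init__(self):
        self._events = []

    def append(self, raw: str, structured: dict) -> dict:
        event = {
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "raw": raw,                # what the elder actually said
            "structured": structured,  # what the AI layer extracted
        }
        self._events.append(event)     # no update/delete methods exist, by design
        return event

class AIService:
    """AI layer (Service 02): interprets, but never writes to the database."""
    def interpret(self, raw: str) -> dict:
        # Stand-in for speech-to-text plus intent extraction.
        return {"intent": "set_reminder", "medicine": "BP medicine"}

# The gateway (Service 01) wires them together: AI output is handed to the
# Data Service, which records raw and structured side by side.
data, ai = DataService(), AIService()
utterance = "Remind me to take my BP medicine at 8"
event = data.append(utterance, ai.interpret(utterance))
```

Keeping writes behind one owner is what makes "nothing is overwritten, ever" enforceable: a correction is just another appended event.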
Tech stack
Offline support