Optional detection

Optional human
Silence after a short greeting before settling as HUMAN.

Optional human
Speech longer than this is treated as machine-like (skips the short-greeting heuristic).
Optional interrupt

Optional llm
LLM used to classify call greetings. Resolution order:
- LLM instance: used as-is (caller-owned; AMD will not close it).
- string: treated as a Cloud Inference model id (e.g. 'openai/gpt-4o-mini') and an inference LLM is constructed (AMD-owned).
- null: explicitly opt out of the AMD default and reuse session.llm (mirrors python NOT_GIVEN). Throws if the session has no compatible LLM.
- undefined (default): use the bundled default model string.

Optional machine
Silence after machine-like speech before opening the silence gate.
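The llm resolution order described above can be sketched as a small helper. All names here (resolveLlm, LLMLike, DEFAULT_AMD_MODEL) are hypothetical stand-ins, not the library's actual API; only the four branches and the ownership rules come from the documentation.

```typescript
// Hypothetical sketch of the documented llm resolution order.
// LLMLike stands in for the real LLM interface; DEFAULT_AMD_MODEL
// for the bundled default model string. Neither name is from the library.
interface LLMLike { label: string }

const DEFAULT_AMD_MODEL = "openai/gpt-4o-mini"; // assumed placeholder

type LlmOption = LLMLike | string | null | undefined;

interface ResolvedLlm {
  llm: LLMLike;
  ownedByAmd: boolean; // AMD closes LLMs it constructed, not caller-owned ones
}

function resolveLlm(option: LlmOption, sessionLlm?: LLMLike): ResolvedLlm {
  if (option === null) {
    // Explicit opt-out: reuse the session LLM (mirrors python NOT_GIVEN).
    if (!sessionLlm) throw new Error("session has no compatible LLM");
    return { llm: sessionLlm, ownedByAmd: false };
  }
  if (option === undefined) {
    // Default: construct an inference LLM from the bundled model string.
    return { llm: { label: DEFAULT_AMD_MODEL }, ownedByAmd: true };
  }
  if (typeof option === "string") {
    // Cloud Inference model id: construct an AMD-owned inference LLM.
    return { llm: { label: option }, ownedByAmd: true };
  }
  // LLM instance: used as-is, caller-owned; AMD will not close it.
  return { llm: option, ownedByAmd: false };
}
```

The ownership flag is the key design point: AMD only tears down LLMs it created itself, so a caller-supplied instance can safely outlive the detection.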
Optional no
If no final transcript arrives within this window, settle as MACHINE_UNAVAILABLE.
Optional participant
Restrict AMD to a specific participant. Used to filter the waitForTrackPublication gate (see python detector.py) and span attribution. When unset, AMD binds to whichever participant the session is linked to.
Optional prompt
Override the AMD classification system prompt.
Optional stt
Dedicated STT used to transcribe call audio for AMD. Resolution order mirrors llm above. When set (or left as the default), AMD subscribes to a private audio branch from AudioRecognition and never depends on the pipeline STT, which is useful when the session is using a realtime model that does not surface user transcripts. Pass null to disable the dedicated STT and listen to session-level UserInputTranscribed events instead.
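The stt option therefore picks between two transcript sources. A minimal sketch of that decision, assuming hypothetical names (SttLike, resolveTranscriptSource, and the placeholder default id are not the library's API; only the null-vs-set behavior is documented):

```typescript
// Hypothetical sketch of the documented stt wiring. SttLike and the
// variant names stand in for the real interfaces.
interface SttLike { name: string }

type SttOption = SttLike | string | null | undefined;

type TranscriptSource =
  | { kind: "dedicated-branch"; stt: string } // private audio branch from AudioRecognition
  | { kind: "session-events" };               // session-level UserInputTranscribed events

function resolveTranscriptSource(option: SttOption): TranscriptSource {
  if (option === null) {
    // Opt out of the dedicated STT; rely on the pipeline's transcripts.
    return { kind: "session-events" };
  }
  const stt =
    option === undefined ? "bundled-default-stt"   // assumed placeholder id
    : typeof option === "string" ? option          // model id, AMD-owned
    : option.name;                                 // caller-owned instance
  return { kind: "dedicated-branch", stt };
}
```

Note that the session-events fallback only works if the pipeline itself produces user transcripts; with a realtime model that does not, the dedicated branch is the safer default.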
Optional suppress
If true, do not log a warning when the resolved LLM is not among the bundled AMD-tested model strings. Has no effect on classification behavior.
Hard ceiling for the entire detection. After this, settle with whatever evidence exists.
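Taken together, the timing options gate how a detection settles. The sketch below is an assumption-laden illustration of that interaction, not the library's implementation: the names (maybeSettle, Timeline, the Verdict values beyond HUMAN and MACHINE_UNAVAILABLE) are hypothetical, and the fallback when the hard ceiling is hit with no evidence at all is a guess.

```typescript
// Hypothetical sketch of how the documented timeouts interact.
type Verdict = "HUMAN" | "MACHINE" | "MACHINE_UNAVAILABLE" | "UNDECIDED";

interface Timeline {
  elapsedMs: number;             // time since detection started
  detectionTimeoutMs: number;    // hard ceiling for the entire detection
  noTranscriptTimeoutMs: number; // window for the first final transcript
  sawFinalTranscript: boolean;
  leaning: "HUMAN" | "MACHINE" | null; // current evidence, if any
}

function maybeSettle(t: Timeline): Verdict {
  if (!t.sawFinalTranscript && t.elapsedMs >= t.noTranscriptTimeoutMs) {
    // No final transcript arrived within the window.
    return "MACHINE_UNAVAILABLE";
  }
  if (t.elapsedMs >= t.detectionTimeoutMs) {
    // Hard ceiling: settle with whatever evidence exists.
    // Falling back to MACHINE_UNAVAILABLE with no evidence is an assumption.
    return t.leaning ?? "MACHINE_UNAVAILABLE";
  }
  return "UNDECIDED"; // keep listening
}
```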