Show HN: AlgoMommy — Organize video clips by talking while recording (macOS)
If you record videos regularly and end up with a messy clips folder, AlgoMommy copies your raw clips into an existing folder hierarchy based on short spoken instructions embedded in the recording. You speak natural-language instructions anywhere in the clip, prefixed by the wake phrase “Hey Cleo”.
After recording, drop the clips into AlgoMommy; it finds the instructions you spoke and routes each file into the appropriate sub-folder.
AlgoMommy extracts and transcribes the audio entirely on-device.
It sends only short text snippets (typically up to ~30 seconds of speech) for segments that appear to address Cleo, plus a list of destination folder paths under the root you choose.
The service decides where the clip should be copied and what tags/metadata to apply.
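To make the flow concrete, here is a rough sketch of what that exchange could look like. The function and field names are my guesses for illustration, not AlgoMommy's actual wire format:

```python
import json

# Hypothetical routing request: only short transcript snippets and the
# candidate folder paths leave the machine; no audio or video is uploaded.
def build_routing_request(snippets, folders):
    return json.dumps({
        "snippets": snippets,   # Cleo-addressed speech, transcribed locally
        "folders": folders,     # destination paths under the chosen root
    }, indent=2)

# Hypothetical response shape: where to copy the clip, plus tags to apply.
def parse_routing_response(body):
    data = json.loads(body)
    return data["destination"], data.get("tags", [])

request = build_routing_request(
    snippets=["Hey Cleo, file this under the kitchen remodel project"],
    folders=["Projects/Kitchen Remodel", "Projects/Garden", "Family/Trips"],
)
print(request)
```

The point of this shape is the privacy boundary: the service sees a few sentences of text and a list of paths, nothing else.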
Why “Hey Cleo”
WhisperKit is accurate but too slow to transcribe an entire long clip end-to-end. Apple’s SpeechAnalyzer is much faster but less accurate. The wake phrase lets AlgoMommy get the best of both: SpeechAnalyzer quickly locates likely Cleo-addressed segments, then WhisperKit re-transcribes just those short segments for accuracy.
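A minimal sketch of that two-pass segment selection, assuming the fast pass yields (word, start-time) pairs; the function name, window length, and tuple shape below are assumptions for illustration:

```python
def cleo_windows(timed_words, wake=("hey", "cleo"), length=30.0):
    """Find wake-phrase hits in a rough transcript and expand each hit
    into a fixed-length window for the slow, accurate model to redo."""
    # Normalize case and trailing punctuation from the rough transcript.
    words = [(w.lower().strip(",.!?"), t) for w, t in timed_words]
    hits = [
        words[i][1]
        for i in range(len(words) - len(wake) + 1)
        if tuple(w for w, _ in words[i:i + len(wake)]) == tuple(wake)
    ]
    # Merge overlapping windows so each stretch of audio is handled once.
    merged = []
    for start in sorted(hits):
        if merged and merged[-1][1] >= start:
            merged[-1][1] = start + length
        else:
            merged.append([start, start + length])
    return [tuple(w) for w in merged]
```

Only the audio inside the returned ranges needs the expensive accurate pass, which is why the hybrid is fast on long clips.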
This page is intentionally plain for HN (no signup gates here; the app itself requires auth due to paid LLM calls).
If you’re coming from Hacker News, comments + tough questions are welcome.