Role:
- We are building the world’s first real-time collaborative music creation platform, where thousands of people can co-produce music LIVE with AI and human producers.
- Your mission is to engineer the system that lets audiences influence music production in real time (tempo, kicks, melodies, lyrics, AI-generated stems) using natural language, voice inputs, and interactive voting.
Responsibilities:
- Build real-time collaborative music interaction tools
- Integrate with DAWs (Ableton/FL Studio/Logic) or AI music tools (e.g., Suno, Udio, Sudo Studio)
- Implement live word-cloud feedback from crowd voice + text inputs
- Build voting systems, sliders, and real-time UI responders
- Create systems to map audience input → music production actions (see the sketch after this list)
- Handle live session sync, audio routing, and low-latency communication
- Build an interface for producers to receive and use crowd feedback
- Build a speech-to-text pipeline for live collective voice input
- Provide AI-assisted suggestions for BPM, chord progressions, stems, etc.
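As a rough illustration of the input-to-action mapping above, here is a minimal sketch under stated assumptions: the `ProductionAction` structure and `aggregate_tempo_votes` function are hypothetical names, not part of any existing API, and the aggregation policy (median vote, bounded step, clamped BPM range) is just one reasonable choice.

```python
# Minimal sketch: turn a burst of audience tempo votes into one clamped
# production action a producer-facing bridge could apply. All names here
# (ProductionAction, aggregate_tempo_votes) are illustrative assumptions.
from dataclasses import dataclass
from statistics import median

@dataclass
class ProductionAction:
    kind: str      # e.g. "set_tempo"
    value: float   # e.g. BPM

def aggregate_tempo_votes(votes: list[float],
                          current_bpm: float,
                          max_step: float = 8.0,
                          lo: float = 60.0,
                          hi: float = 180.0) -> ProductionAction:
    """Collapse raw crowd votes into a single, bounded tempo change.

    The median resists outliers and trolling; max_step keeps the live set
    from lurching; lo/hi keep the result in a musically sane BPM range.
    """
    if not votes:
        return ProductionAction("set_tempo", current_bpm)  # no-op
    target = median(votes)
    # Move at most max_step BPM per voting window toward the crowd's target.
    step = max(-max_step, min(max_step, target - current_bpm))
    new_bpm = max(lo, min(hi, current_bpm + step))
    return ProductionAction("set_tempo", round(new_bpm, 1))

# Example: one voting window during a live session.
if __name__ == "__main__":
    window_votes = [128, 132, 130, 90, 140, 131]   # 90 is an outlier
    print(aggregate_tempo_votes(window_votes, current_bpm=124.0))
    # ProductionAction(kind='set_tempo', value=130.5)
```

In a real system this would run once per voting window on the server, with the resulting action forwarded to the producer interface or a DAW bridge (e.g. over OSC or MIDI); the aggregation policy itself is the part worth iterating on.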
Requirements:
- WebSockets, real-time systems, low-latency architecture
- React / Next.js for the real-time interface
- Node.js / Python for backend orchestration
- Experience with music APIs or DAW integrations is a strong plus
- Knowledge of AI/ML pipelines (Whisper, LLMs, audio models)
- Strong understanding of audio processing concepts
- Experience with clustering / ranking algorithms (a minimal word-cloud ranking sketch follows this list)
- Ability to turn experimental ideas into functional prototypes fast
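To make the voice-to-word-cloud path concrete, here is a minimal sketch, assuming the open-source `openai-whisper` package for transcription and simple keyword counting in place of a full clustering pipeline; the stop-word list and clip path are placeholders.

```python
# Minimal sketch of the live voice -> word-cloud path: transcribe a short
# audience clip with Whisper, then rank keywords for the producer-facing
# word cloud. Assumes the open-source `openai-whisper` package; the
# stop-word list and clip path are placeholders.
import re
from collections import Counter

import whisper  # pip install openai-whisper

STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "it", "is", "we", "i"}

def transcribe_clip(path: str, model_name: str = "base") -> str:
    """Run Whisper on one short audio clip and return the raw text."""
    model = whisper.load_model(model_name)
    result = model.transcribe(path)
    return result["text"]

def rank_keywords(texts: list[str], top_n: int = 15) -> list[tuple[str, int]]:
    """Count non-stop-word tokens across all inputs; the counts drive
    word sizes in the live word cloud."""
    counts: Counter[str] = Counter()
    for text in texts:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token not in STOP_WORDS:
                counts[token] += 1
    return counts.most_common(top_n)

if __name__ == "__main__":
    # In production these clips would stream in continuously from the crowd;
    # here a single local file stands in for one voice snippet.
    texts = [transcribe_clip("crowd_clip.wav"), "more bass and a faster drop"]
    print(rank_keywords(texts))
```

In a live deployment the transcription would run on a GPU worker fed by short rolling clips, and the keyword counter could be swapped for embedding-based clustering to group synonyms, which is where the clustering / ranking experience above comes in.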