Building C++/JUCE audio plugins that implement psychoacoustic principles
I'm developing audio plugins that process sound based on how humans actually perceive it—using critical bands, masking models, and perceptual processing rather than just raw frequency manipulation.
A psychoacoustic delay plugin testing critical-band filtering and frequency masking. Built with C++ and JUCE as a VST3/AU plugin. Seeking feedback from producers to refine the algorithms.
Exploring masking-aware processing and perceptual effects within Ableton Live's ecosystem. Testing concepts before implementing in C++.
Real-time frequency analysis plugin for practicing JUCE fundamentals—DSP implementation, UI design, and plugin architecture.
Starting with perception research—critical bands, masking curves, loudness models—and figuring out how to implement them efficiently.
Writing C++ code that runs in real time without dropouts. Optimizing filter banks, managing memory, and keeping latency minimal.
Building cross-platform plugins (VST3/AU) with clean UI design and proper parameter handling using the JUCE framework.
Getting feedback from producers, refining algorithms based on real-world use, and improving usability.
Core language for audio processing
Cross-platform audio plugin development
Industry-standard plugin formats
Rapid prototyping within Ableton
I'm looking for producers and mix engineers to test EchoPsychFX and provide feedback on the perception algorithms and workflow. Early testers get direct input on feature development.
Whether you want to beta test, discuss psychoacoustic processing, or explore collaboration—reach out.
Get in Touch →