
Work Highlights
In Harmony: An Interactive Media System
In Harmony was an interactive media installation housed inside a geodesic dome. At the center of the dome stood a table with a clear surface and a webcam at its base. Museum visitors moved "game pieces" across the table to trigger sound through a 2nd-order ambisonic system (9 speakers) and to change the lighting behavior of hundreds of LEDs. They could control the sound's spatialization, filters, and faders, as well as trigger hidden sounds.
For this installation, I was project lead, programmer, composer, sound designer, and installation co-designer. As project lead, I was responsible for keeping the project on time and within budget, and for coordinating the programming, creative, and fabrication teams.
We used JavaScript, bash, Max/MSP, and reacTIVision to create the stand-alone software application. The app is owned by Baltu Technologies, my employer at the time. That deliverable was licensed to the i.d.e.a. Museum of Mesa, which commissioned Baltu to create the exhibit.
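For illustration, the tracking layer of a system like this can be sketched in a few lines. reacTIVision broadcasts TUIO messages over OSC (by default on UDP port 3333), where each tracked fiducial marker reports its id and normalized table position. The sketch below uses Python with the python-osc library rather than the JavaScript/Max/MSP stack the actual exhibit used, and the parameter mapping is purely hypothetical:

```python
def piece_to_params(x: float, y: float) -> dict:
    # Hypothetical mapping from a piece's normalized position (0..1)
    # to spatialization and filter parameters -- illustrative only.
    azimuth = (x - 0.5) * 180.0        # left/right -> -90..+90 degrees
    cutoff = 200.0 * (2 ** (y * 5))    # front/back -> 200 Hz..6.4 kHz
    return {"azimuth_deg": azimuth, "cutoff_hz": cutoff}

def on_tuio(address, *args):
    # reacTIVision's /tuio/2Dobj "set" messages carry, among other fields,
    # a session id, the fiducial id, and the marker's normalized x/y.
    if args and args[0] == "set":
        fiducial_id, x, y = args[2], args[3], args[4]
        print(fiducial_id, piece_to_params(x, y))

def serve(port: int = 3333):
    # Run the OSC listener (requires python-osc; not invoked in this sketch).
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer
    d = Dispatcher()
    d.map("/tuio/2Dobj", on_tuio)  # reacTIVision's default object profile
    BlockingOSCUDPServer(("0.0.0.0", port), d).serve_forever()
```

In the real exhibit, the equivalent of `piece_to_params` fed the ambisonic spatializer and LED controller rather than printing to a console.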
In Harmony was open to the public from June 2018 through November 2019. Our contract included a debugging-and-repair clause, and our team checked in on the installation via site visits, but neither was ever needed: there was not a single technical problem during its entire lifespan.
Face- & Eye-Controlled Interactive AI Music System for Analog Synths
This video is an excerpt from a lecture-recital I gave at Western Carolina University. The system uses Python, OpenAI's GPT-3 API (text-davinci-003), MIDI, OSC, Max for Live, EyeHarp, ZigSimPro, and Moog synthesizers.
This system was designed to facilitate collaborative human-machine interaction in which no composer's work was used as training material. Instead, the prompt asks the AI for "three harmonious frequencies". The user continuously retriggers the prompt, creating a chord progression. In-depth details are given in the video.
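The step that turns a model's reply into playable notes can be sketched independently of any particular API. The reply string below is hypothetical and the helper names are my own; the conversion itself is the standard formula mapping a frequency to its nearest MIDI note (A4 = 440 Hz = note 69):

```python
import math
import re

def freq_to_midi(freq_hz: float) -> int:
    # Standard equal-temperament conversion: A4 = 440 Hz = MIDI note 69.
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def parse_frequencies(reply: str) -> list:
    # Pull numeric Hz values out of a free-text model reply.
    return [float(m) for m in re.findall(r"\d+(?:\.\d+)?", reply)]

# A hypothetical model reply to "give me three harmonious frequencies":
reply = "220 Hz, 277.18 Hz, and 329.63 Hz"
chord = [freq_to_midi(f) for f in parse_frequencies(reply)]
# chord -> [57, 61, 64] (A3, C#4, E4), ready to send as MIDI note-ons
```

Retriggering the prompt and repeating this conversion yields one chord per reply, which is what produces the progression described above.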
Virtual Reality, 360 Videos, & Spatial Audio


I have worked on multiple virtual reality software applications made with the Unity game engine. I wrote C# code, did audio post-production, storyboarded game designs, created 3D spatial sound design, and composed music. Above are two clips of such endeavors.
The first video is a trailer for a VR game made for the Phoenix Natural Discovery Center. For that game, I did sound design, composed music, and wrote C# code. I also storyboarded and directed the trailer.
The second clip is a 360 video of an in-studio string quartet performance, for which I did the audio post-production. It uses the 1st-order ambisonic format, so users with VR headsets can look around the environment and hear the spatialization change as they do. If you're viewing on a computer, you can click the navigational compass in the top-left to "look around" and hear the changes in sound.
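For readers curious how "the spatialization changes as you look around" works under the hood: first-order ambisonics encodes a mono source into four channels (W, X, Y, Z) from its azimuth and elevation, and the player re-decodes them against the current head orientation. This is a minimal sketch of the textbook B-format encoding equations, not the actual tooling used in post-production:

```python
import math

def encode_foa(sample: float, azimuth_deg: float, elevation_deg: float):
    """Encode a mono sample into first-order ambisonic B-format (W, X, Y, Z).

    Uses the traditional 1/sqrt(2) weighting on W. Illustrative only.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2)                  # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)   # front/back
    y = sample * math.sin(az) * math.cos(el)   # left/right
    z = sample * math.sin(el)                  # up/down
    return w, x, y, z

# A source dead ahead lands entirely in the X (front/back) channel:
# encode_foa(1.0, 0.0, 0.0) -> (0.707..., 1.0, 0.0, 0.0)
```

Because direction lives in the X/Y/Z ratios rather than in fixed speaker feeds, rotating the decode (when the listener turns their head) is a cheap matrix operation.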
Live Sound & Livestreaming

Over my 20+ year career in music, I have served as audio engineer for hundreds of concerts. I have also run live sound for numerous speaking events, such as fundraisers, presentations, and open houses.
During my tenure at Western Carolina University, I revived the school's auditorium sound system and expanded the livestream infrastructure, coordinating a student engineer team responsible for 33,000 minutes of online watch time.
The video above is a broadcast for which I was one of three lead engineers; I also served as the host and ensemble director. The livestreamed event was a live in-studio concert broadcast from three separate university studio spaces and simulcast on 95.3 WWCU in Dillsboro.
Music Production & Composition
I have extensive experience as a music producer and composer in the field. I feel at home whether sequencing virtual instruments in a DAW, arranging for and running studio recording sessions, or orchestrating for the concert hall. I can write and produce a wide range of commercial styles, having done client work in cyberpunk, new age, R&B, and beyond. I'm also classically trained in chamber and orchestral composition.
I've been performing my original electronic music internationally for 15 years. I specialize in launchpad and analog synth performance. Many of my performances use wavetable softsynths and orchestral sample libraries. In addition to composing and performing my electronic music, I often serve as its mixing/mastering engineer as well.
Biography
Justin Kennedy is a software engineer and system designer specializing in music software and accessible music technology. He holds a Doctor of Musical Arts degree from Arizona State University, for which he wrote a dissertation on computer music and its intersection with ancient Mongolian vocal music. He also holds a C++ certificate from Skillsoft and is an accredited EyeHarp specialist.
He is currently developing a software synthesizer in collaboration with a company that cannot be named at this time. This project is being developed using CMake, C++, JUCE, OpenFrameworks, and Boost.
