There are a few different ways to sync visuals to music; here we focus on analysing the audio signal in realtime.
The demos below work in Chrome using Three.js and the Web Audio API, but the same principles apply if you are using Processing, openFrameworks or some other framework.

Audio Analysis

To sync to an audio input, we need to analyse the audio stream in realtime. There are four main pieces of data we can extract:

- Volume – the thicker bar on the right-hand side.
- Waveform – the jagged white line.
- Levels – the bar chart of frequency amplitudes, from bass on the left to treble on the right.
- Beat Detection – the volume bar flashes white when a beat is detected. The white line above the volume bar indicates the beat threshold.

To see what these look like, view the Audio Analysis Demo. Drag and drop an MP3 file to play it, or switch to the mic input with the control panel at right.

Volume

The volume is the current global amplitude, or loudness, of the track. Volume data can be used raw or eased over time to give a smoother value:

    smoothedVolume += (volume - smoothedVolume) * 0.1;

Simple volume tracking can be enough to give a nicely synced feel. In the Paradolia demo, the volume determines the brightness of the lights in the scene, and beat detection triggers the switching of the material textures.

Waveform

The waveform is the shape of the sound wave as it travels through the air and hits your ear. With the Web Audio API, use this call to get the waveform as an array of numbers between 0 and 255, where 128 indicates silence:

    analyser.getByteTimeDomainData(timeByteData);

The Loop Waveform Visualizer draws the waveform data into circles that expand from the middle of the screen. The volume is also used to give a little bounce to the height of the waveform.

Levels

The levels are an array of amplitudes, one per frequency band. They can be visualized as a bar chart or a 1980s graphic equalizer. With the Web Audio API, the analogous call fills an array with numbers between 0 and 255, where 0 indicates silence:

    analyser.getByteFrequencyData(freqByteData);
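The volume, smoothing, and beat-detection ideas above can be sketched in plain JavaScript, independent of the browser. This is a minimal sketch, assuming the time-domain bytes come from a call like `analyser.getByteTimeDomainData()` (where 128 means silence); the function names and the tuning constants (the 1.1 threshold multiplier and 0.02 decay) are illustrative, not taken from the original demos.

```javascript
// Estimate volume from time-domain bytes (as filled by
// analyser.getByteTimeDomainData), where 128 represents silence.
function computeVolume(timeByteData) {
  let sum = 0;
  for (let i = 0; i < timeByteData.length; i++) {
    sum += Math.abs(timeByteData[i] - 128); // deviation from silence
  }
  return sum / timeByteData.length / 128;   // normalize to roughly 0..1
}

// Ease the raw volume toward a smoothed value, as in the snippet above.
function smoothVolume(smoothedVolume, volume) {
  return smoothedVolume + (volume - smoothedVolume) * 0.1;
}

// Simple beat detection: flag a beat when the current volume jumps
// above a threshold that otherwise decays slowly between beats.
function makeBeatDetector() {
  let threshold = 0;
  return function detect(volume) {
    const isBeat = volume > threshold;
    // On a beat, raise the threshold above the current volume;
    // otherwise let it decay so quieter beats can register later.
    threshold = isBeat ? volume * 1.1 : Math.max(threshold - 0.02, 0);
    return isBeat;
  };
}

// Example: a loud frame (bytes far from 128) vs a silent frame.
const loud = new Uint8Array(1024).fill(200);
const quiet = new Uint8Array(1024).fill(128);
console.log(computeVolume(loud));  // 0.5625 (|200 - 128| / 128)
console.log(computeVolume(quiet)); // 0

const detect = makeBeatDetector();
console.log(detect(0.6)); // true  – first loud frame exceeds threshold
console.log(detect(0.6)); // false – threshold now sits above 0.6
```

In a real visualizer you would call `computeVolume` once per animation frame on the freshly filled byte array, feed the result through `smoothVolume` for things like light brightness, and use the beat detector's boolean to trigger discrete events such as texture swaps.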