Audio Reactive Webcam + Free Download

Just a quick blog post on a Max for Live patch I have been working on, which is available as a free download. The video of the patch is below; the music is by the very talented Ilkae (track titled “KK”), so be sure to check out his music.

https://ilkae.bandcamp.com/

This patch was inspired by Masato Tsutsui, who is one of my favourite programmers/digital artists. By following his two tutorials, which are linked at the end of the post, I was able to use the feed from my webcam as a texture, which was then made audio reactive in Ableton Live 9.6.

I advise you to watch both of his tutorials because he explains how to use a webcam feed as a texture better than I can, even though the tutorials are in Japanese!

After completing his two tutorials, I decided to continue on and make the patch audio reactive and able to work inside Ableton Live as a Max for Live device.

I have covered how to create a basic audio reactive Max for Live patch here and here; the same method is used to make the relevant shape audio reactive.
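To illustrate the underlying idea, here is a minimal conceptual sketch in Python (not Max code; the names and numbers are illustrative): the level of each incoming block of audio is measured and mapped onto a visual parameter, such as the scale of the shape.

```python
import numpy as np

def level_to_scale(block, base=1.0, amount=2.0):
    """Map the loudness of one block of audio samples to a shape scale."""
    rms = np.sqrt(np.mean(np.square(block)))  # RMS level of this block
    return base + amount * rms                # louder audio -> larger shape

# Stand-in for one block of audio arriving from Ableton Live
block = np.random.uniform(-0.5, 0.5, 1024)
print(level_to_scale(block))
```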

I have included the Max for Live patch as a free download.

The patch will open in Presentation Mode; to view Patching Mode, press the yellow button shown in the image below. Pressing Cmd+E will unlock the patch.

[Screenshot: the Presentation/Patching Mode toggle button]

I have commented the patch to try to help explain some parts of it. If you have any questions, feel free to ask. Thank you for reading this post.

To install the patch:

  • Open up Ableton Live
  • Select the Master Channel
  • Drag and drop the .amxd file into the effects chain
  • Drag a track into any audio channel
  • Press the toggle to start the patch and click the open message to start the webcam
  • Play an audio file and view the screen (Esc will put the window into fullscreen mode)

http://audiovisualacademy.com/avin/en/software/maxmspjitter-masato-tsutsui-camera-part-1/

http://audiovisualacademy.com/avin/en/software/maxmspjitter-masato-tsutsui-camera-part-2/

https://vimeo.com/masato221


Very Basic Max for Live Jitter + Download

Max for Live Introduction 

When the developers at Cycling '74 met with Robert Henke and Gerhard Behles of Ableton, Max for Live was born. It allowed devices such as synthesisers, drum machines, audio effects and MIDI effects to be developed, and live video to be brought into the music creation and performance software, using Max MSP. Max for Live allows Max MSP to be used to create MIDI effects (which process MIDI data), audio effects (which process audio) and instruments (which take MIDI performance data and transform it into audio).

These devices can be created in Ableton Live with real-time processing (you can hear an instrument as you develop it). With the series of dedicated Max for Live objects available (which all begin with live.), the parameters of a created device can be MIDI-mapped to a hardware MIDI controller.

Some of the objects include:

live.dial: a circular slider or knob that outputs numbers according to its degree of rotation.

live.gain~: a decibel volume slider and level monitor.

live.slider: outputs numbers as an on-screen slider is moved.

live.toggle: a toggle switch that outputs 0 when turned off and 1 when turned on.

By pressing Cmd+M, these live.* objects can be mapped to any recognised MIDI controller. A GUI (graphical user interface) can be designed within the Max for Live vertical device limit for both ease of use and accessibility.

Max for Live works through access to the Live Object Model, a map of everything that is accessible within Ableton Live. Not all parameters are exposed in the software's Application Programming Interface (API); the Live Object Model shows exactly what Max for Live has access to.

[Image: the Live Object Model]

Creating a Max for Live Jitter Patch: 

Below I am going to briefly demonstrate how to create a basic Max for Live Jitter patch. I recommend you right-click on each object used and select “Reference” to read more about it; in my opinion it is one of the best ways to learn Max. The download link for the finished patch is below. The idea of this post is to show people who are new to Jitter how to create a basic audio reactive patch that works within Ableton Live, and I have added comments in the patch to try to explain how it works.

To create a Max for Live device, we first open Ableton Live, select Max for Live and drag an empty Max for Live audio effect onto the Master channel.

[Screenshot: an empty Max for Live audio effect on the Master channel]

This creates an empty audio device patch with just the plugin~ and plugout~ objects. These represent the audio coming from Ableton Live and the audio being sent to the audio output device.

When creating an audio effect, the signal from Ableton (indicated by the green and white patch cords) is routed through the created effect and then sent to the left and right outputs.

For a Jitter patch, a copy of the audio signal is taken for audio analysis while leaving the plugin~ and plugout~ objects intact. This means that the audio will play as normal while also being sent to the relevant Jitter object for audio analysis.
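As a rough conceptual sketch (plain Python, not Max code; all names are illustrative), the routing behaves like a pass-through with a side tap, so playback is unaffected by the analysis:

```python
def analyse(block):
    """Stand-in for the analysis side of the patch (jit.catch~ and onwards)."""
    pass

def process_block(left, right):
    """Stand-in for the device: audio passes straight through, a copy is analysed."""
    analyse(left)        # a copy of the signal feeds the Jitter side of the patch
    return left, right   # the untouched audio continues from plugin~ to plugout~

# Example: one block of stereo audio goes in and comes out unchanged
left_out, right_out = process_block([0.1, -0.2, 0.3], [0.05, 0.0, -0.1])
```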

Drag in a track of your choice to the same channel; it will be used for the audio analysis in the Jitter patch.

The qmetro object bangs out frames at a set rate; it is activated using the toggle, and once the toggle is selected qmetro starts the video. The qmetro object runs at low priority, unlike the metro object, which triggers a bang at the set interval at all times. This means qmetro will slow down its bangs depending on the current CPU usage, resulting in a lower frame rate. To view the frame rate, the jit.fpsgui object is used, attached to the jit.gl.render object.

To allow for the drawing and rendering of OpenGL, the jit.gl.render object is needed; it renders OpenGL objects to the destination window.

For the video to render successfully, a message is required to handle the erasing and drawing of frames. A trigger bang erase (t b erase) object is used: for each bang received from qmetro it first sends an erase message to clear the existing frame, then sends a bang to draw the next frame, and this process is repeated.

Leaving this message out results in the image being drawn over and over on top of the same frame.
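The per-frame cycle can be summarised with a small conceptual sketch (Python, not Max code; the function names are illustrative): each qmetro bang erases the previous frame before the new one is drawn.

```python
import time

def erase():
    """Stand-in for the 'erase' message sent to jit.gl.render."""
    pass

def draw():
    """Stand-in for banging jit.gl.render so it draws the scene."""
    pass

frame_interval = 1 / 30          # qmetro interval, roughly 30 fps
for _ in range(90):              # qmetro keeps banging while the toggle is on
    erase()                      # the trigger fires 'erase' first...
    draw()                       # ...then the bang draws the new frame
    time.sleep(frame_interval)   # under heavy CPU load qmetro slows down instead of queueing
```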

To analyse the audio, the jit.catch~ object is used; it transforms the audio signal into matrices. These matrices can be viewed by connecting a jit.pwindow to the outlet of the object.

[Image: matrices from jit.catch~ viewed in a jit.pwindow]
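Conceptually (a numpy sketch, not how jit.catch~ is actually implemented), the object gathers a block of samples and hands it to Jitter as a matrix of floats:

```python
import numpy as np

# Stand-in for one block of 1024 audio samples coming from Ableton Live
samples = np.random.uniform(-1.0, 1.0, 1024).astype(np.float32)

# Expose the block as a one-plane, one-row matrix of floats, roughly what a
# jit.pwindow would display when connected to jit.catch~'s outlet
audio_matrix = samples.reshape(1, -1)
print(audio_matrix.shape)  # (1, 1024)
```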

The next stage is to create a shape; to do this, add the jit.gl.gridshape object. This creates predefined shapes such as a sphere, torus, cube, plane, circle and others.

The jit.op object is added; it is used to add the matrices from the jit.catch~ object to the gridshape. The @ symbol introduces an attribute of an object; in the case of jit.gl.gridshape, @shape sphere is added, which will automatically draw a sphere once the main toggle switch is pressed.

To add a menu for an attribute, click on the left-hand side of the object (in this case the jit.gl.gridshape object) and select shape; this adds a scrollable menu allowing you to switch between the different predefined shapes. This object is then attached to the jit.gl.mesh object, whose attributes give you different draw modes such as polygon, line, point, triangle, quads and line loop; these determine how the shape will be drawn. The @auto_colors attribute is added to apply an array of colours to the gridshape.
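A rough numpy analogy of this stage (illustrative only; a real gridshape geometry matrix carries more planes, for normals and texture coordinates): the audio matrix is scaled and summed with the vertex positions before the result is drawn by the mesh.

```python
import numpy as np

grid = 32
# Stand-in for the x, y, z planes of a gridshape geometry matrix
vertices = np.zeros((3, grid, grid), dtype=np.float32)

# Stand-in for the jit.catch~ audio matrix, resampled to the grid size
audio = np.random.uniform(-1.0, 1.0, (1, grid, grid)).astype(np.float32)

gain = 0.2                           # how strongly the audio deforms the shape
displaced = vertices + gain * audio  # what a "jit.op @op +" stage does
# 'displaced' would then feed jit.gl.mesh, which draws it using the chosen draw mode
```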

The jit.window object is added; this creates a floating window where your visuals will be rendered.

The jit.gl.handle object is added; this allows you to move the gridshape with your mouse: it can be rotated, zoomed in and out (hold Alt and click), or repositioned on screen (hold Cmd and click).

[Screenshot: the finished Max for Live patch]

In Max there is both a Patching Mode and a Presentation Mode, the latter being used to create the graphical user interface shown in Ableton Live; an example can be seen below.

[Screenshot: an example GUI in Presentation Mode]

To add an object to Presentation Mode, just right-click it and select “Add to Presentation”. When all the relevant objects have been added, press Alt+Cmd+E or press the yellow highlighted button shown in the screenshot below; this switches between Patching Mode and Presentation Mode.

[Screenshot: the Presentation Mode toggle button]

When in Presentation Mode, all the relevant objects can be positioned within the vertical device limit. The device can be downloaded here. Press Cmd+E to unlock the patch and switch to Patching Mode to view this basic patch.

If you have any questions, feel free to leave a comment. Thank you for reading.

I should have another post up soon enough; please follow if interested.


Max for Live Jitter Patch

I've been working on a few Max for Live patches over the last month or so. I'm still relatively new to Max and Jitter and am constantly learning more each week.

This patch was inspired by Masato Tsutsui, who is one of my favourite programmers/digital artists. By following his two tutorials, which are linked at the end of the post, I was able to use the feed from my webcam as a texture, which was then made audio reactive in Ableton Live 9.6.

The idea of the patch was to show the movement of whoever is in front of the webcam while also reacting to the music playing in Ableton Live. I still have more that I would like to do with the project.

The video of the patch is below; the music is by the very talented Ilkae (track titled “KK”), so be sure to check out his music.

https://ilkae.bandcamp.com/

Max for Live allows Max MSP to be used to create MIDI effects (which process MIDI data), audio effects (which process audio) and instruments (which take MIDI performance data and transform it into audio).

These devices can be created in Ableton Live with real-time processing (you can hear an instrument as you develop it). With the series of dedicated Max for Live objects available (which all begin with live.), the parameters of a created device can be MIDI-mapped to a hardware MIDI controller.

When creating an audio effect, the signal from Ableton (indicated by the green and white patch cords) is routed through the created effect and then sent to the left and right outputs. For a Jitter patch, a copy of the audio signal is taken for analysis while leaving the plugin~ and plugout~ objects intact. These objects represent the audio coming from Ableton Live and the audio being sent to the audio output device, so the audio plays as normal while also being sent to the relevant Jitter objects for analysis.

To analyse the audio, the jit.catch~ object was used. It transforms signal data, which is essentially the audio, into matrices; this can be seen in the image below.

[Image: audio transformed into matrices by jit.catch~]

For my next blog post, I intend to write a tutorial on creating a basic Max for Live Jitter patch. The links to the tutorials used and Masato Tsutsui's Vimeo page are below. Thank you for reading.

http://audiovisualacademy.com/avin/en/software/maxmspjitter-masato-tsutsui-camera-part-1/

http://audiovisualacademy.com/avin/en/software/maxmspjitter-masato-tsutsui-camera-part-2/

https://vimeo.com/masato221

Brief History of Visual Music

The history of visual music dates back as far as the 16th century. Through his study of the Pythagorean harmonic proportions of tones and semitones, Giuseppe Arcimboldi displayed the relationship between the musical scale and the brightness of colours. Starting with white and gradually adding more black, he managed to render an octave in twelve semitones, with the colours ranging from white to black; this greyscale painting would gradually darken the white, using black to indicate a rise in semitones.

The Italian painter divided a tone into two equal parts; gently and softly, he would turn white into black, with white representing a deep note and black representing the very high ones.

In 1704, while analysing the spectrum of light, Isaac Newton suggested a close link between the seven colours of the rainbow and the seven notes of the musical scale. He stated that an increase in the frequency of light across the colour spectrum from red to violet corresponded to an increase in the frequency of sound across the diatonic major scale.

Since Newton's idea, others have responded in different ways to the scientist's proposed link between colour and sound.


In 1743, a French mathematician by the name of Louis Bertrand Castel introduced a relationship between colour and notes. This led him to invent the ocular harpsichord, a musical instrument that could transform sound into colour, with each note in the scale representing a different colour: for example, when the C note was pressed, a small panel indicating the colour violet would appear above the instrument. The mathematician later refined his system, proposing a range of twelve colours corresponding to the semitones.

A number of instruments and responses have since been based on Castel's work, each with its own ideas about the relationship between colour and sound.

While many had studied the relationship between colour and sound over the years, the physicist Ernst Chladni took a different approach and looked at the relationship between sound and form. In 1787 he investigated the patterns produced when certain frequencies vibrate flat plates.

This was achieved by scattering fine sand evenly over a glass or metal plate and gliding a violin bow against the plate to create patterns through vibration. The vibratory movement caused the powder to move from the antinodes to the nodal lines; black lines represented the parts of the plate that vibrated the most. Chladni was able to give sound a dynamic image, and he discovered that the same sound would produce the same pattern each time.

The Swiss doctor Hans Jenny was influenced by Chladni's work and founded cymatics, the study of sound and vibration made visible. In 1967 he published the first volume of Cymatics: The Study of Wave Phenomena, which documented several experiments performed by Jenny using sound frequencies on various materials including water, sand, liquid plastic and iron filings.

Many crystals are distorted by electrical impulses and produce electrical potentials when distorted. When a series of electrical impulses is applied to such a crystal, the resulting distortions take on the character of real vibrations. These crystals opened up a whole range of experimental possibilities, with control over both frequency and amplitude. The oscillator is attached to the underside of the plate, and when a frequency is output, the material on the plate forms a pattern.

Jenny then proceeded to invent the tonoscope, which was constructed to make the human voice visible. When someone sings into its pipe, the air passes through and vibrates the black diaphragm, which has quartz sand evenly spread across it.

Hans Jenny stated that if you had the same frequency and the same tension, you would get the same form, with low tones generating simple patterns and high tones resulting in more complex designs.

The pattern is characteristic not only of the sound but also of the pitch of the speech. Hans Jenny also used this device to visualise music, namely orchestral works by composers such as Bach and Mozart.

Thomas Wilfred was born in Denmark in 1889. Upon moving to New York in 1919, he co-founded the Prometheans, a group dedicated to exploring spiritual matters through artistic expression.

Wilfred spoke of light as an art form, and in 1922 he unveiled the Clavilux, considered to be the first device designed for audio-visual shows.

The Clavilux had six projectors, controlled by a keyboard consisting of banks of sliders resembling a modern lighting desk. An arrangement of prisms was placed in front of each light source, and Wilfred mixed the intensity of colour with a selection of geometric patterns.

Although most of Wilfred's performances with the Clavilux were presented in complete silence, in 1926 he collaborated with the Philadelphia Orchestra in a presentation of Rimsky-Korsakov's Scheherazade.

Thomas Wilfred produced roughly forty works before his death in 1968, but only eighteen pieces have survived. The Clavilux was capable of creating complex light forms that mix together to create a depth of light, which could be seen as resembling the northern lights seen over Iceland.

Influenced by Thomas Wilfred's colour organ and Leon Theremin's music, Mary Ellen Bute began to develop a kinetic visual art form. She produced several abstract animations set to classical music by Bach and Shostakovich.

This was achieved by submerging tiny mirrors in tubs of oil and connecting them to an oscillator. With the production of these animations, Mary Ellen Bute said that she sought to “Bring to the eyes a combination of visual forms unfolding along with the thematic development and rhythmic cadences of music”.

She referred to some of her films as “seeing sound”, and a few of Bute's abstract films were shown at Radio City Music Hall, often screened before Hollywood feature films (Center for Visual Music, 2014).

In 1921, the German painter and filmmaker Walter Ruttmann created Opus 1. He assembled each projection print of the film with an old college friend, who wrote the score. A string quintet performed live at each screening of Opus 1, which was shown in several cities across Germany. The abstract shapes moved across the screen in time with the music; Ruttmann achieved this by drawing colour pictures into the musical score so the musicians could synchronise their playing with the film.

After attending a rehearsal of Opus 1 in Frankfurt, Oskar Fischinger decided to make visual music. He started to experiment with sliced wax and clay images, using silhouettes combined with drawn animations.

Fischinger made some of his earlier films using a colour organ controlled by several slide projectors and stage spotlights with changing colour filters and fading capabilities.

In 1925 he designed a new colour organ with five projectors, which added a more complex layering of colour. Fischinger created wooden cubes and cylinders, painted and covered with fabric, which were projected on screen to create his films.

After moving to America, Fischinger created great works such as “An Optical Poem”, set to the music of “Hungarian Rhapsody No. 2”, and “Motion Painting No. 1”, set to J.S. Bach's “Brandenburg Concerto No. 3”.

While attending the “Art in Cinema” festival in San Francisco in 1947, Fischinger met two painters who had been inspired by his work. Harry Smith painted directly onto the filmstrip, and the resulting film was accompanied by a jazz performance.

The second, Jordan Belson, began in 1957 to choreograph visual accompaniments to new electronic music. Composer Henry Jacobs wrote the electronic music while Belson created the visuals using multiple projection devices.

In 1961 he began to create live visuals through the manipulation of pure light. Taking on the role of a modern VJ, he used a custom-built optical bench with rotary tables, variable-speed motors and lights of varying intensity to create live visuals to accompany electronic music.

Belson did not want any of his material uploaded online; therefore not many of his works are available.

Norman McLaren was born in Scotland in 1914. While studying art and interior design at the Glasgow School of Art in 1933, he began to make short experimental films.

McLaren wrote that while listening to music he would see abstract images in his mind, and after watching his first abstract film in 1934 he discovered a way to make these images visible to others through film.

By painting directly onto film, he was able to display a visual representation of music.

He incorporated a variety of musical styles into his films, including Indian music by Ravi Shankar, a Trinidadian string band and a jazz piano soundtrack by Oscar Peterson.

McLaren also used a technique he called “animated sound”: by scratching directly onto the soundtrack area of the film, he would create unusual electronic sounds, which can be heard in his 1955 film “Blinkity Blank”.

While an undergraduate student in electronic engineering and electronic music at the University of Illinois, the American video artist Stephen Beck first began to experiment with using video and electronic waveforms to create images. In 1969 he designed the Beck Direct Video Synthesizer; this device constructed an image using the basic visual elements of form, shape, colour, texture and motion. Using no camera, Beck's invention would generate video from sound.

In his essay titled “Image Processing and Video Synthesis”, the video artist identified four distinct categories of electronic video instruments:

  • Camera Image Processing
  • Direct Video Synthesis
  • Scan Modulation/Rescan
  • Non-VTR Recordable

Camera image processing was used to modify the signal from a black-and-white television camera by adding colour to it.

Direct video synthesisers were designed to operate without a camera, containing circuitry to generate a complete video signal: colour generators to produce colour, form-generator circuitry designed to create shapes, and motion modulation to move the shapes using electronic waveforms such as curves, sine waves and other frequency patterns.

Scan modulation/rescan was used to manipulate images by means of deflection and electronic modulation; images on the screen could be rotated, stretched and reflected.

Non-VTR recordable devices required a TV to display their output (Stephen Beck, 1975).

In 1973, a series of live performances took place titled “Illuminated Music”, with Stephen Beck controlling the visuals and electronic musician Warner Jepson performing the accompanying music on a Buchla 100 analogue modular synthesiser.

Beck and Jepson, both members of the National Center for Experiments in Television, performed Illuminated Music in front of audiences in Dallas, Boston and Washington DC.

These performances demonstrated the integration of electronic music and video synthesis, an art form that is still practised to this day.

The majority of electronic music concerts have a visual element; this is performed either by the artists themselves or, more frequently, by video programmers who tour and work with the artist in question, developing and performing the visual side of the show.

Commonly used software for this includes Resolume, VDMX and MadMapper.

With graphics processing units (GPUs) and processors becoming more powerful over the years, many modern methods for developing and programming video have become available. Quartz Composer, Jitter and VVVV are all video synthesis tools used to create original visuals.

Thank you for reading. I hope you gained an insight into how music has been perceived visually over the years; the next post will be about modern digital artists and electronic musicians.

An Introduction

I'm an electronic musician who has recently graduated from Limerick Institute of Technology. Over the years I've developed an interest in digital art and in the relationship between electronic music and video.
I'm an avid user of Ableton Live, Cubase 8, Max MSP/Jitter, Max for Live and Processing. I intend to post screenshots, videos and audio clips of my work and blog about music production, mixing and mastering techniques, digital art and the odd tutorial.

Feel free to follow and comment on posts. Thank you for reading.
