The point of this post is to go through a cheap way of getting music mastered, for those on a low budget who want a demo mastered before sending it to labels, or for net labels looking for a way to master artists' music at a reasonable cost.
The term mastering is given to the process of taking an already mixed-down audio file and preparing it for distribution across multiple formats such as CD, vinyl and streaming. During this process, tools such as limiting, compression and equalisation are used to ensure consistency between the tracks on an album, so that your music sounds good on all playback systems, from headphones and studio monitors to a sound system in a venue.
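As a very rough illustration of what that processing does, here is a minimal Python sketch of a gain-plus-limiter stage (a hypothetical example using numpy; real mastering limiters are far more refined than this hard clip):

```python
# Minimal sketch of a mastering-style gain + limiter stage.
# Hypothetical illustration only; real limiters use lookahead and smoothing.
import numpy as np

def master(mix: np.ndarray, gain_db: float = 3.0, ceiling: float = 0.95) -> np.ndarray:
    """Apply make-up gain, then clamp peaks so tracks sit at a consistent level."""
    gained = mix * 10 ** (gain_db / 20)        # raise the overall level
    return np.clip(gained, -ceiling, ceiling)  # limiter: no peak above the ceiling

mix = np.random.uniform(-0.5, 0.5, 44100)  # one second of stand-in audio
final = master(mix)
```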
A thing to remember is that mastering won't fix a bad mix; it's essential to work on your mixdown and create a good stereo image first. I have a post on some mixing tips, which can be read here.
Mastering on a Budget
There are many ways to get your music mastered, with a mastering engineer being the best option available. It's generally expensive (around $50 a track), but being able to communicate with an engineer during the process is invaluable, as are the results achieved.
A cheaper method is to use an online mastering service, where a website lets you drag and drop your music and have your audio file ready for distribution within minutes. CloudBounce is one of these services: it uses a mastering engine to analyse your music and apply several processing tools, such as a compressor, limiter and stereo imaging, and it lets you tweak the results several times before finalising.
A 24-bit WAV and a 320kbps MP3 will then be available to you for distribution, although I personally don't see any point in mastering down to MP3 as it is not a lossless format.
With this service, your first track is free; after that you can get 5 tracks mastered for under $10, or an unlimited number for under $30 a month or $199 for the year (these options would be ideal for net labels on a budget).
Online services won't replace an actual mastering engineer, but CloudBounce gives great results on a budget. It's worth signing up and getting your first track mastered for free; if you are happy with the results, you can get an additional five tracks done for under $10.
Free Mastering Plug-in
A simple master can be achieved in Ableton using an effects chain containing tools used by mastering engineers. This Ableton mastering tool is available as a free download; it's ideal for mastering stems in preparation for a live set, or for getting your tracks as close to mastered as possible. To master in Ableton, simply drag and drop the effect onto your master channel.
I can send this effects rack to anyone for free if they want, just leave a comment below.
My next post will be a tutorial and free download on another Audio Visual device for Ableton Live.
I've decided to write a quick post on some basics and tips that have helped me with mixing down music, aimed at people just beginning to write their own. I intend to cover this topic more extensively soon, along with some tutorials on electronic music production, mostly concentrating on specifics such as side-chaining, reverb, programming electronica/IDM beats and many more. For now I'm just going to go through some tips I have found useful over the years, both things I learnt in college and things I have done in the studio for years.
1. Trust your ears
When using music recording software, it is easy to rely on our eyes rather than our ears when mixing down. What I usually do is start the project file from the 4th bar, giving me a few seconds of silence before playback and letting me quickly check that new elements or changes happen every 4th, 8th, 16th, 32nd or 64th bar. Closing the laptop screen slightly, I sit in the sweet spot and listen to the mix. I find that errors are quickly found this way: it could be a timing issue, or something not sitting well in the mix due to a clash of frequencies, which leads me to the next topic, EQ.
2. EQ
The correct use of EQ can be the difference between a muddy-sounding mix and a great-sounding one. There are several charts available displaying the ideal frequency ranges of instruments, but generally I use my ears for this, while sometimes referring to the chart below.
The point of the chart is to show the frequency range each instrument occupies and where a sound can be enhanced, depending on what is required. For example, with a hi-hat/cymbal, to make the sound brighter you would boost the frequencies between 8-12kHz. Below I have included two screenshots of how I EQ a kick drum and hats; generally I have two kick drums in my mix.
The reason the kick drum is cut off at around 2kHz is that the range from there up to 20kHz is generally unused by this instrument, and cutting it leaves room for other instruments that sit in that frequency range. Put simply, the kick drum is predominantly in the bass range and the hi-hat is in the treble; EQ-ing both accordingly creates a nice clean mix, with each instrument given space to breathe.
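To make those two moves concrete, here is a rough Python/SciPy sketch; the cutoffs, filter orders and function names are my own illustration, not the exact curves in the screenshots:

```python
# Hypothetical sketch of the kick/hat EQ described above, using SciPy.
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100  # sample rate in Hz

def lowpass_kick(audio: np.ndarray, cutoff_hz: float = 2000.0) -> np.ndarray:
    """Cut everything above ~2 kHz on the kick, freeing that range for other parts."""
    sos = butter(4, cutoff_hz, btype="lowpass", fs=SR, output="sos")
    return sosfilt(sos, audio)

def brighten_hats(audio: np.ndarray, corner_hz: float = 8000.0, gain_db: float = 3.0) -> np.ndarray:
    """Boost the 8 kHz+ region by mixing a high-passed copy back in."""
    sos = butter(2, corner_hz, btype="highpass", fs=SR, output="sos")
    highs = sosfilt(sos, audio)
    extra = 10 ** (gain_db / 20) - 1  # level to add on top of the dry signal
    return audio + extra * highs

drums = np.random.randn(SR)  # one second of noise standing in for drums
kick = lowpass_kick(drums)
hats = brighten_hats(drums)
```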
3. Structure
This is something that some musicians have an issue with, and I was one of them. To fix it, listen carefully to the genre you are trying to create (house, electronica, techno) and break a track down; then, by placing a track from an artist in your chosen genre into your session file, you can copy their exact structure while you are mixing down. I found this really helpful, and I intend on doing a full post on structure within the next week or so.
I hope you enjoyed reading this post and learnt something you can take with you to the studio. Thank you for reading, and feel free to comment and follow the blog.
The next post will be a tutorial and another free download of a Max for Live Jitter device for Ableton Live.
Just a quick blog post on a Max for Live patch I have been working on, which I have provided as a free download. The video of the patch is below; the music is by the very talented Ilkae (track titled "KK"), be sure to check his music out.
This patch was inspired by Masato Tsutsui, who is one of my favourite programmers/digital artists. By following his two tutorials, which are linked at the end of the post, I was able to use the feed from my webcam as a texture, which was then made audio reactive in Ableton Live 9.6.
I advise you to watch both of his tutorials, because he explains how to use a webcam feed as a texture better than I can, even though his tutorials are in Japanese!
After completing his two tutorials, I decided to continue on and make the patch audio reactive and able to work in Ableton Live as a Max for Live patch.
I have covered how to create a basic audio reactive Max for Live patch here and here; the same method is used to make the relevant shape audio reactive.
I have included the Max for Live patch as a free download.
The patch will be in presentation mode; to view patching mode, press the yellow button displayed in the image below. Pressing Cmd+E will unlock the patch.
I have commented the patch to try to help explain some parts of it. If you have any questions, feel free to ask. Thank you for reading this post.
To install the patch
Open up Ableton Live
Select the Master Channel
Drag and drop the .amxd file into the effects
Drag a track into any audio channel
Press the toggle to start the patch and select open to start the webcam
Play an audio file and view the screen (esc will put the screen into fullscreen mode)
For my last post, I wrote a tutorial on creating a fairly basic Max for Live Jitter device and provided a free download (it can be read here).
For this post, I am going to talk through the Max for Live device I developed, with help from Robin Price, for my final year project in college. The link to download it is at the bottom of the page.
The device is similar to the one in the last blog post: the jit.catch~ object is used for audio analysis, and both the jit.gl.gridshape and jit.gl.mesh objects are used.
The gridshape is added to the matrix by giving the object the @matrixoutput attribute. The gridshape sends out X, Y and Z co-ordinates, and in the case of this patch, audio matrices are added to the Z plane for animation. This is achieved with a mathematical operation using the jit.op object: "jit.op @op pass pass +" means the signal from jit.catch~ passes the X plane, passes the Y plane and is added to the Z plane.
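As a rough illustration of what "@op pass pass +" does, here is a hypothetical numpy sketch of the same per-plane operation (the real work happens inside Jitter):

```python
# Hypothetical sketch: "pass pass +" applied to a 3-plane gridshape matrix.
import numpy as np

grid = np.zeros((3, 16, 16))                # planes 0-2 are the X, Y, Z co-ordinates
audio = np.random.uniform(-1, 1, (16, 16))  # stand-in for a jit.catch~ frame

grid[0] = grid[0]          # X plane: pass (untouched)
grid[1] = grid[1]          # Y plane: pass (untouched)
grid[2] = grid[2] + audio  # Z plane: +  (vertices pushed in and out by the audio)
```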
Creating a Texture
An initial texture is taken from jit.catch~: the matrices from the object are sent to jit.op, where a mathematical operation is performed, in this case increasing the amplitude of the jit.catch~ signal, which makes the quieter sounds more visible.
From here the signal is sent to the matrix and then to jit.gl.texture, where a texture is created.
The texture is given the name texture1; by adding the @texture attribute to the jit.gl.mesh object, the newly created texture can be applied directly to the shape. The attribute used here is @texture texture1.
jit.gl.shader is used and the file name is referred to. In Jitter there are three different types of texture mapping: object linear, eye linear and sphere mapping.
Object linear applies the texture in a fixed manner relative to the object's co-ordinate system; as the object is rotated and positioned, the texture stays the same.
With eye linear, as the object rotates, the texture changes.
Sphere mapping is environment mapping: the shape is rendered as though it is reflecting the surrounding environment, and the texture changes as the model moves.
Using the @tex_map 1 attribute on the jit.gl.mesh object sets the texture mapping to object linear. The @poly_mode attribute is set to 0 1, meaning the front of the rendered shape will be solid while the back will be wireframe.
To allow the patch to switch in and out of full screen, the key and sel objects are used. Using ASCII, where each key on the keyboard is given a number, any key can be made to trigger a message in Max MSP. In this project the escape key, ASCII number 27, is set to switch between full screen and windowed: once the key is pressed, it turns the toggle on, which activates the fullscreen message being sent to the jit.window object.
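In plain code, the key/sel/toggle chain behaves something like this hypothetical Python sketch:

```python
# Hypothetical sketch of the key -> sel 27 -> toggle -> fullscreen chain.
fullscreen = False

def on_key(ascii_code: int) -> None:
    """sel 27: only the escape key (ASCII 27) flips the fullscreen toggle."""
    global fullscreen
    if ascii_code == 27:
        fullscreen = not fullscreen
        print("fullscreen", int(fullscreen))  # message sent on to jit.window

on_key(27)  # escape pressed -> "fullscreen 1"
on_key(65)  # any other key (here "A") is ignored by sel 27
```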
With the shape otherwise stationary on the screen, it was decided to animate it and allow the viewpoint to be changed. To change the viewpoint, the jit.gl.camera object is used.
The ability to change camera position, lens angle (or zoom) and camera rotation adds variation to the video output.
To animate the OpenGL shape, the jit.anim.drive object is used. With this object, OpenGL shapes can be rotated, moved to a specified location and scaled to a specified size.
Using the "turn 1 1 1" message enables the audio analysis output to rotate 360 degrees, with each number representing the X, Y and Z axis respectively.
A message and a dial to adjust the speed of rotation are added; this is the first of the project's live UI objects.
The next stage of the patch is to implement MIDI-mappable parameters, meaning the user can map live UI objects to their hardware MIDI controller and change how the video is displayed.
A live UI object is one which is recognised by Ableton Live and available to map to a MIDI controller.
In Max for Live, any attribute which takes an integer or flonum can be controlled using an Ableton Live specific dial, fader, toggle or button. Creating a message with the attribute name followed by $1 allows the value to be changed by a live object. This can be seen in figure 1 above.
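As a loose analogy (not Max syntax), the $1 substitution works something like this hypothetical Python sketch:

```python
# Hypothetical analogy of a "rotate $1" message box fed by a live.dial.
def make_message(attribute: str):
    """Return a function that drops an incoming value into the $1 slot."""
    return lambda value: f"{attribute} {value}"

rotate = make_message("rotate")  # message box: "rotate $1"
print(rotate(0.25))              # dial at 0.25 -> sends "rotate 0.25"
```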
Due to the customisable parameters in the patch, the next step was to implement a preset system, so that if the user finds an interesting output, he/she can save it as a preset to be recalled during a live performance.
Three objects are required to store and recall presets: pattrstorage, autopattr and preset. To open the client window with all live UI objects and their current values, double-click on the pattrstorage object. To avoid confusion, each live UI object is given a unique scripting name, which will be viewable in the client window. To give an object a scripting name, select the live UI object in question and press Cmd+I; the scripting name can then be changed to something unique. See figure 2.
The preset graphical user interface is used to save and recall presets. By holding shift and clicking on an empty slot, you store all current values; once the values are stored, the empty slot changes to yellow (this colour can be changed in the inspector menu). The pattrstorage object takes a snapshot of all values and stores it in the selected slot.
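Conceptually the snapshot system behaves like this hypothetical Python sketch of store and recall (the real objects are pattrstorage, autopattr and preset, and the parameter names here are made up):

```python
# Hypothetical sketch of preset storage: snapshot and recall named UI values.
presets: dict[int, dict[str, float]] = {}

# Current values of the scripting-named live UI objects (names are made up).
ui_state = {"rotation_speed": 0.4, "zoom": 1.5, "camera_angle": 45.0}

def store(slot: int) -> None:
    """Shift-click on an empty slot: snapshot every named value."""
    presets[slot] = dict(ui_state)

def recall(slot: int) -> None:
    """Click a stored slot: restore the snapshot mid-performance."""
    ui_state.update(presets[slot])

store(1)
ui_state["zoom"] = 3.0
recall(1)  # zoom is back to 1.5
```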
The Max for Live patch can be downloaded here. If you have any questions, feel free to leave a comment or follow. Thank you for reading.
When the developers at Cycling '74 met with Robert Henke and Gerhard Behles from Ableton, Max for Live was soon formed. This programme allowed for the development of devices such as synthesisers, drum machines, audio effects and MIDI effects, and the implementation of live video, inside the music creation and performance software using Max MSP. Max for Live allows Max MSP to be used to create MIDI effects (which process MIDI data), audio effects (which process audio) and instruments (which take MIDI performance data and transform it into audio).
These devices can be created in Ableton Live with real-time processing (the ability to hear instruments as you develop them). With the series of specific Max for Live objects available (which all begin with live.), MIDI-mapping the parameters of a created device to a hardware MIDI controller is achievable.
Some of the objects include:
live.dial: a circular slider or knob which outputs numbers according to its degree of rotation.
live.gain: a decibel volume slider and monitor.
live.slider: outputs numbers as a slider is moved on screen.
live.toggle: creates a toggle switch that outputs 0 when turned off and 1 when turned on.
By pressing Cmd+M, these live objects can be mapped to any recognised MIDI controller. A GUI (graphical user interface) can be designed within the Max for Live vertical limit to create both ease of use and accessibility for the user.
Max for Live works through access to the Live Object Model; this map is a guide to everything accessible within Ableton Live. Not all parameters are available in the music software's Application Programming Interface (API), and the Live Object Model shows what Max for Live has access to.
Creating a Max for Live Jitter Patch:
Below I am going to briefly demonstrate how to create a basic Max for Live Jitter patch. I recommend you right-click on each object used and select "reference" to read up on it; in my opinion that is one of the best ways to learn Max. The download link for the created patch is below. The idea of this post is to show people new to Jitter how to create a basic audio reactive patch that works within Ableton Live; I have added comments in the patch to try to explain how it works.
To create a Max for Live device, we first open Ableton Live, select Max for Live and drag an empty Max for Live audio effect into the master channel.
This creates an empty audio device patch with just the plugin~ and plugout~ objects. These represent the audio coming from Ableton Live and the audio being sent to the audio output device.
When creating an audio effect, the signal from Ableton (indicated by the green and white patch cords) is routed through the created effect and then sent to the left and right outputs.
For a Jitter patch, a copy of the audio signal is taken for audio analysis while leaving the plugin~ and plugout~ objects intact. This means that the audio will play as normal while also being sent to the relevant Jitter object for audio analysis.
Drag a track of your choice into the same channel; it will be used for audio analysis in the Jitter patch.
The qmetro object bangs out frames at a set rate; it is activated using the toggle, and once switched on, qmetro starts the video. Unlike the metro object, which triggers a bang at the set interval at all times, qmetro has low-priority properties, meaning it will slow down its bangs depending on current CPU usage, resulting in a lower frame rate. To view the frame rate, the jit.fpsgui object is used, attached to the jit.gl.render object.
To allow for the drawing and rendering of OpenGL, the jit.gl.render object is needed. This renders OpenGL objects to the destination window.
For a video to be rendered successfully, a message is required to allow for the erasing and drawing of frames. A trigger bang erase ("t b erase") message is used: on each bang from qmetro it first sends "erase" to clear the existing frame, then sends a bang to draw the next frame; this process is then repeated.
Leaving out this message will result in the image being drawn over itself again and again on the same frame.
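Purely as an illustration of the ordering, the cycle can be sketched like this (hypothetical Python; in the patch a qmetro bangs a trigger object feeding jit.gl.render):

```python
# Hypothetical sketch of the erase -> draw cycle driven by qmetro bangs.
import time

def render_loop(fps: float = 30.0, frames: int = 3) -> None:
    for n in range(frames):
        print("erase")            # clear the previous frame first
        print(f"draw frame {n}")  # then draw the new frame
        time.sleep(1.0 / fps)     # wait for the next (low-priority) bang

render_loop()
```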
To analyse the audio, the jit.catch~ object is used, which transforms audio into matrices. These matrices can be seen by connecting a jit.pwindow to the outlet of the object.
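Conceptually, jit.catch~ is doing something like this hypothetical numpy sketch, turning a one-dimensional block of samples into a two-dimensional matrix:

```python
# Hypothetical sketch of audio-to-matrix conversion, as jit.catch~ does conceptually.
import numpy as np

block = np.random.uniform(-1, 1, 256)  # one block of incoming audio samples
matrix = block.reshape(16, 16)         # the same data viewed as a 16x16 matrix
```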
The next stage is to create a shape; to do this, add the jit.gl.gridshape object. This creates predefined shapes such as a sphere, torus, cube, plane, circle and others.
The jit.op object is added; this object is used to add the matrices from the jit.catch~ object to the gridshape. The @ symbol introduces attributes for an object; in the case of jit.gl.gridshape, @shape sphere is added, which will automatically draw a sphere once the main toggle switch is pressed.
To add a menu for an attribute, click on the left hand side of the object (in this case the jit.gl.gridshape object) and select shape; this will add a scrollable menu, allowing you to change between different predetermined shapes. This object is then attached to the jit.gl.mesh object, whose attributes give you the ability to use different draw modes such as polygon, line, point, triangle, quads and line loop, which determine how the shape is drawn. The @auto_colors attribute is added to give an array of colour to the gridshape.
The jit.window object is added; this will create a floating window where your visuals will be rendered.
The jit.gl.handle object is added; this allows you to move the gridshape with your mouse: it can be rotated, zoomed in and out (hold Alt and click), or positioned on screen (hold Cmd and click).
Finished Max for Live Patch
In Max there is both a patching mode and a presentation mode, the latter being used to create the graphical user interface seen in Ableton Live, an example of which can be seen below.
To add an object to presentation mode, just right-click it and select "Add to Presentation". When all relevant objects have been added, press Alt+Cmd+E or the yellow highlighted button in the screenshot below; this will switch between patching mode and presentation mode.
When in presentation mode, all relevant objects can be positioned above the vertical device limit. The device can be downloaded here. Press Cmd+E to unlock the patch and switch to patching mode to view this basic patch.
If you have any questions, feel free to leave a comment. Thank you for reading.
I should have another post up soon enough; please follow if interested.
I've been working on a few Max for Live patches over the last month or so. I'm still relatively new to Max and Jitter and constantly learning more each week.
This patch was inspired by Masato Tsutsui, who is one of my favourite programmers/digital artists. By following his two tutorials, which are linked at the end of the post, I was able to use the feed from my webcam as a texture, which was then made audio reactive in Ableton Live 9.6.
The idea of the patch was to see the movement of whoever is in front of the webcam while also reacting to the music playing in Ableton Live. I still have more that I would like to do with the project.
The video of the patch is below; the music is by the very talented Ilkae (track titled "KK"), be sure to check his music out.
Max for Live allows Max MSP to be used to create MIDI effects (which process MIDI data), audio effects (which process audio) and instruments (which take MIDI performance data and transform it into audio).
These devices can be created in Ableton Live with real-time processing (the ability to hear instruments as you develop them). With the series of specific Max for Live objects available (which all begin with live.), MIDI-mapping the parameters of a created device to a hardware MIDI controller is achievable.
When creating an audio effect, the signal from Ableton (indicated by the green and white patch cords) is routed through the created effect and then sent to the left and right outputs. For a Jitter patch, a copy of the audio signal is taken for audio analysis while leaving the plugin~ and plugout~ objects intact; these objects represent the audio coming from Ableton Live and the audio being sent to the audio output device. This means the audio plays as normal while also being sent to the relevant Jitter object for audio analysis.
To analyse the audio, the jit.catch~ object was used. The jit.catch~ object transforms signal data, which is essentially audio, into matrices; this can be seen in the image below.
For my next blog post, I intend to have a tutorial on creating a basic Max for Live Jitter patch. The links to the tutorials used and the Vimeo link to Masato Tsutsui are below. Thank you for reading.
The history of visual music dates back as far as the 16th century. Through his study of the Pythagorean harmonic proportions of tones and semitones, Giuseppe Arcimboldi displayed the relationship between the musical scale and the brightness of colours. Starting with white and gradually adding more black, he managed to render an octave in twelve semitones, with the colours ranging from white to black: the greyscale painting would gradually darken from white, with black indicating a rise in semitones.
The Italian painter divided a tone into two equal parts; gently and softly, he would turn white into black, with white representing a deep note and black representing the very high ones.
In 1704, while analysing the spectrum of light, Isaac Newton suggested a close link between the seven colours of the rainbow and the seven notes of the musical scale. The scientist stated that an increase in the frequency of light across the colour spectrum from red to violet corresponded to an increase in the frequency of sound in the diatonic major scale.
Since Isaac Newton's idea, other people have had different responses to the scientist's link between colour and sound.
In 1743, a French mathematician by the name of Louis Bertrand Castel explored the relationship between colours and notes. This led him to invent the ocular harpsichord, a musical instrument that could transform sound into colour. Each note in the scale represented a different colour; for example, when the C note was pressed, a small panel showing the colour violet would appear above the instrument. The mathematician later refined his system, proposing a range of twelve colours corresponding to the semitones.
A number of instruments and responses have since been based on Castel's work, each with its own ideas on the relationship between colour and sound.
While there have been many studies of the relationship between colour and sound over the years, the physicist Ernst Chladni took a different approach and looked at the relationship between sound and form. In 1787 he investigated the patterns produced by certain frequencies through vibration on flat plates.
This was achieved by scattering fine sand evenly over a glass or metal plate and gliding a violin bow against the plate to produce patterns through vibration. The vibratory movement caused the powder to move from the antinodes to the nodal lines; the resulting lines marked the parts of the plate which vibrated the least. Chladni was able to give sound a dynamic, visible image, and he discovered that the same sound would produce the same pattern each time.
The Swiss doctor Hans Jenny was influenced by Chladni's work in cymatics, the study of sound and vibration made visible. In 1967 he published the first volume of Cymatics: The Study of Wave Phenomena, which documented several experiments performed by Jenny using sound frequencies on various materials including water, sand, liquid plastic and iron filings.
Many crystals are distorted by electric impulses and produce electric potentials when distorted. When a series of electric impulses is applied to such a crystal, the resulting distortions have the character of real vibrations. These crystals allowed for a whole range of experimental possibilities, with the ability to control both frequency and amplitude. The oscillator is attached to the underside of the plate, and when a frequency is output, the material on the plate forms a pattern.
Jenny then proceeded to invent the tonoscope, which was constructed to make the human voice visible. Singing into a pipe makes the air passing through it cause vibrations on a black diaphragm, which has quartz sand spread evenly across it.
Hans Jenny stated that with the same frequency and the same tension you would get the same form, with low tones generating simple patterns and high tones resulting in more complex designs.
The pattern is characteristic not only of the sound but also of the pitch of the speech. Hans Jenny also used this device to visualise music, namely orchestral music such as Bach and Mozart.
Thomas Wilfred was born in Denmark in 1889. Upon moving to New York in 1919, he co-founded the Prometheans, a group dedicated to exploring spiritual matters through artistic expression.
Seeing light as an art form, Wilfred invented the Clavilux in 1922; it is considered to be the first device designed for audio-visual shows.
The Clavilux had six projectors, controlled from a keyboard of banks of sliders resembling a modern lighting desk. An arrangement of prisms was placed in front of each light source, and Wilfred mixed the intensity of colour with a selection of geometric patterns.
Most of Wilfred's performances with the Clavilux were presented in complete silence, and it was not until 1926 that he collaborated with the Philadelphia Orchestra in a presentation of Rimsky-Korsakov's Scheherazade.
Thomas Wilfred produced roughly forty works before his death in 1968, but only eighteen pieces have survived. The Clavilux was capable of creating complex light forms which mix together to create a depth of light, resembling the northern lights seen in Iceland.
Influenced by Thomas Wilfred's colour organ and Leon Theremin's music, Mary Ellen Bute began to develop a kinetic visual art form. She produced several abstract animations set to classical music by Bach and Shostakovich.
This was achieved by submerging tiny mirrors in tubs of oil and connecting them to an oscillator. Of these animations, Mary Ellen Bute said that she sought to "bring to the eyes a combination of visual forms unfolding along with the thematic development and rhythmic cadences of music".
She referred to some of her films as "seeing sound", and a few of Bute's abstract films were shown at Radio City Music Hall, often screened before Hollywood feature films (Center for Visual Music, 2014).
In 1921, the German painter and filmmaker Walter Ruttmann created Opus 1. He assembled each projection print of the film with an old college friend, who wrote the score. A string quintet performed live at each screening of Opus 1, which was shown in several cities across Germany. The abstract shapes moved across the screen in time with the music; Ruttmann achieved this by drawing colour pictures in the musical score so the musicians could synchronise their playing with the film.
After attending a rehearsal of Opus 1 in Frankfurt, Oskar Fischinger decided to make visual music. He started to experiment with slicing wax and clay images, and with silhouettes combined with drawn animations.
Fischinger made some of his earlier films using a colour organ controlled by several slide projectors and stage spotlights with changing colour filters and fading capabilities.
In 1925 he designed a new colour organ with five projectors, which added a more complex layering of colour. Fischinger created wooden cubes and cylinders, painted and coloured with fabric, which were projected on screen to create his films.
After moving to America, Fischinger created great works such as "An Optical Poem", set to the music of "Hungarian Rhapsody No. 2", and "Motion Painting No. 1", set to the music of J.S. Bach's "Brandenburg Concerto No. 3".
While attending the Art in Cinema Festival in San Francisco in 1947, Fischinger met two painters who had been inspired by his work. The first, Harry Smith, painted directly on the filmstrip, and the resulting film was accompanied by a jazz performance.
The second, Jordan Belson, began in 1957 to choreograph visual accompaniments to new electronic music. The composer Henry Jacobs wrote the electronic music while Belson created the visuals using multiple projection devices.
In 1961 he began to create live visuals through the manipulation of pure light. Taking the role of a modern VJ, with his custom-built optical bench of rotary tables, variable speed motors and lights of varied intensity, he would create live visuals to accompany electronic music.
Belson did not want any of his material uploaded online, so not many of his works are available.
Norman McLaren was born in Scotland in 1914. While studying art and interior design at the Glasgow School of Art in 1933, he began to make short experimental films.
McLaren wrote that while listening to music he would see abstract images in his mind, and after watching his first abstract film in 1934, he discovered a way to make these images visible to others through film.
By painting onto film cells, he had the ability to display a visual representation of music.
He incorporated a variety of musical styles into his films, including Indian music by Ravi Shankar, a Trinidadian string band and a jazz piano soundtrack by Oscar Peterson.
McLaren also used a technique he called "animated sound": by scratching directly onto the soundtrack of the film, he would create unusual electronic sounds, which can be heard in his 1955 film "Blinkity Blank".
While an undergraduate student in electronic engineering and electronic music at the University of Illinois, the American video artist Stephen Beck first began to experiment with video and electronic waveforms to create images. In 1969 the Beck Direct Video Synthesizer was designed; this device would construct an image using the basic visual elements of form, shape, colour, texture and motion. Using no camera, Beck's invention would generate video from sound.
In his essay titled "Image Processing and Video Synthesis", the video artist described four distinct categories of electronic video instruments:
Camera Image Processing
Direct Video Synthesis
Scan Modulation/Rescan
Non-VTR Recordable
Camera image processing was used to modify the signal from a black and white television camera by adding colour to it.
Direct video synthesisers were designed to operate without a camera, containing circuitry to generate a complete video signal: colour generators to produce colour, form generator circuitry designed to create shapes, and motion modulation to move the shapes using electronic waveforms such as curves, sines and other frequency wave patterns.
Scan modulation/rescan was used to manipulate images by means of deflection and electronic modulation; images on the screen could be rotated, stretched and reflected.
Non-VTR recordable instruments used a TV to display the output directly (Stephen Beck, 1975).
In 1973, a series of live performances took place titled "Illuminated Music", with Stephen Beck controlling the visuals and the electronic musician Warner Jepson performing the accompanying music on the Buchla 100 analogue modular synthesiser.
Beck and Jepson, both members of the National Center for Experiments in Television, worked together performing Illuminated Music in front of audiences in Dallas, Boston and Washington DC.
These performances demonstrated the integration of electronic music and video synthesis, an art form which is still practised to this day.
The majority of electronic music concerts have a visual element present; this is either performed by the artist themselves or, more frequently, by video programmers who tour with the artist, developing and performing the visual element of the show.
Commonly used software for this includes Resolume, VDMX and MadMapper.
With graphics processing units (GPUs) and processors becoming more powerful over the years, many modern methods for developing and programming video have become available. Quartz Composer, Jitter and VVVV are all video synthesis tools used to create original videos.
Thank you for reading. I hope you gained an insight into how music has been perceived visually over the years; the next post will be about modern digital artists and electronic musicians.
Over the last two years my interest in digital art and creative coding has increased. I initially started using VVVV, which I used in my Level 7 degree final year project, where an Ableton Live user could trigger both video and audio simultaneously using a MIDI device I developed on the iPad using Lemur. By assigning the same MIDI control change message to both audio and video, both could be played at the same time using just the one laptop.
With VVVV supporting only Windows DirectX, upon purchasing a MacBook Pro I began to learn Max MSP and Jitter, which I used to develop my Level 8 degree final year college project in music production. Jitter was a steep learning curve for me, and attending a 4-day workshop on creative coding at the Digital Arts Studio in Belfast was extremely beneficial. For my final year project I created a Max for Live patch which enables Ableton Live users to add a simple visual element to their performances. Using audio analysis, the audio from Ableton Live was used to animate the selected gridshape in Max for Live. The patch was MIDI controllable using any recognised hardware MIDI controller.
Over the last month I began to work on several pieces, which can be seen in the collage above. Computer-generated art, I suppose you could call it.
Thanks for reading this post; the next one will be on electronica music production.
The image above is a screenshot of a project I am currently working on. The feed from my webcam is being used as a texture, which is then applied to the mesh. The mesh allows for different draw modes, changing how the OpenGL shape is drawn. The gridshape object is also used to allow for changing the shape and colour (which in this case is pink). The shape is rotated onto its side by preference.
The patch is made audio reactive using the jit.catch~ object, which transforms signal data (essentially audio) into the matrices that animate the gridshape. Increasing the amplitude allows quieter sounds to be represented on screen.
The idea of the patch is that the video content will change when someone moves in front of the webcam, creating variation of some sort.
I'm an electronic musician who has recently graduated from Limerick Institute of Technology. Over the years I've developed an interest in digital art and in the relationship between electronic music and video.
I'm an avid user of Ableton Live, Cubase 8, Max MSP/Jitter, Max for Live and Processing. I intend to post screenshots, videos and audio clips of my work, and to blog about music production, mixing and mastering techniques, digital art and the odd tutorial.
Feel free to follow and comment on posts. Thank you for reading.