Mastering on a Budget and for Free

The point of this post is to go through a cheap way of mastering, for those on a low budget who want to get a demo mastered before sending it to labels, or for net labels looking for a way to master artists' music at a reasonable cost.
Mastering is the process of taking an already mixed-down audio file and preparing it for distribution across multiple formats such as CD, vinyl and streaming. During this process, tools such as limiting, compression and equalisation are used to ensure consistency between the tracks on an album and to make sure your music sounds good on every playback system, whether headphones, monitors or a sound system in a venue.
One thing to remember is that mastering won't fix a bad mix; it's essential to work on your mixdown and create a good stereo image. I have a post with some mixing tips, which can be read here.

Mastering on a Budget

There are many ways to get your music mastered, with a mastering engineer being the best option available. It's generally expensive (around $50 a track), but being able to communicate with an engineer during the process is invaluable, as are the results achieved.
One of these methods is the use of an online mastering service, where a website lets you drag and drop your music and have your audio file ready for distribution within minutes.
CloudBounce is one such service. It uses a mastering engine to analyse your music and apply several processing tools, such as a compressor, limiter and stereo imaging, and it allows you to tweak the result several times before finalising.
A 24-bit WAV and a 320kbps MP3 will then be available to you for distribution, although I personally don't see any point in mastering down to MP3 as it is not a lossless format.
With this service, your first track is free; after that you can get five tracks mastered for under $10, or an unlimited amount mastered for under $30 a month or $199 for the year (these options would be ideal for net labels on a budget).
Online services won't replace an actual mastering engineer, but CloudBounce gives great results on a budget. It's worth signing up and getting your first track mastered for free; if you are happy with the results, you can get an additional five tracks done for under $10.

Free Mastering Plug-in

A simple master can be achieved in Ableton using an effects chain containing tools used by mastering engineers. This Ableton mastering tool is available as a free download; it's ideal for mastering stems in preparation for a live set or for getting your tracks as close to mastered as possible. To master in Ableton, simply drag and drop the effect onto your master channel.

I can send this effects rack to anyone for free; just leave a comment below.
My next post will be a tutorial and free download on another Audio Visual device for Ableton Live.

Mixing Basics & Tips

I've decided to write a quick post on some basics and tips, aimed at people just beginning to write music, that have helped me with mixing down my own tracks. I intend to cover this topic more extensively soon, along with some tutorials on electronic music production, mostly concentrating on specifics such as side-chaining, reverb, programming electronica/IDM beats and more. For now I'm just going to go through some tips which I have found useful over the years; these were both learnt in college and things I have done in the studio for years.

1. Trust your ears

When using music recording software, it is so easy to rely on our eyes rather than our ears when mixing down. What I usually do is start the project file from the 4th bar, giving me a few seconds of silence before playback, and also quickly check that new elements or changes happen every 4th, 8th, 16th, 32nd or 64th bar. Closing the laptop screen slightly, I sit in the sweet spot and listen to the mix. I find that errors are quickly found this way: it could be a timing issue, or something not sitting well in the mix due to a clash of frequencies, which leads me to the next topic, EQ.

2. Equalisation

The correct use of EQ can be the difference between a muddy-sounding mix and a great-sounding one. There are several charts available displaying the ideal frequency ranges of instruments, but generally I would use my ears for this, while sometimes referring to the chart below.

[Chart: instrument frequency ranges]

The point of the chart is to show what frequency range each instrument lies in and where a sound can be enhanced depending on what is required. For example, with a hi-hat/cymbal, to make the sound brighter you would boost the frequencies between 8-12kHz. Below I have included two screenshots of how I EQ a kick drum and hats; generally I would have two kick drums in my mix.

[Screenshot: Kick Drum EQ]

[Screenshot: Hi Hat EQ]

The reason the kick drum is cut off at around 2kHz is that the range from there up to 20kHz is generally unused by this instrument; cutting it also leaves room for other instruments that sit in that frequency range. Put simply, the kick drum sits predominantly in the bass range and the hi-hat in the treble; EQing both accordingly creates a nice clean mix, with each instrument given space to breathe.
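The same carving can be sketched in Python with scipy, assuming the cutoffs described above; the noise buffers are just stand-ins for real drum recordings, and the filter orders and boost gain are illustrative choices, not a recipe.

```python
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
rng = np.random.default_rng(0)
kick = rng.standard_normal(sr)  # stand-in for a real kick recording
hat = rng.standard_normal(sr)   # stand-in for a real hi-hat recording

def low_pass(x, cutoff_hz):
    # roll off everything above the cutoff, as in the kick screenshot
    sos = butter(4, cutoff_hz, btype="lowpass", fs=sr, output="sos")
    return sosfilt(sos, x)

def band_boost(x, lo_hz, hi_hz, gain=0.5):
    # boost a band by mixing a band-passed copy back into the signal
    sos = butter(2, [lo_hz, hi_hz], btype="bandpass", fs=sr, output="sos")
    return x + gain * sosfilt(sos, x)

kick_eq = low_pass(kick, 2000)         # keep the kick in the bass range
hat_eq = band_boost(hat, 8000, 12000)  # add brightness to the hat
```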

3. Structure

This is something that some musicians have an issue with, and I was one of them. To fix this, listen carefully to the genre you are trying to create (house, electronica, techno) and break a track down. Then, by placing a track from an artist in your chosen genre into your session file, you can copy their exact structure while you are mixing down. I found this really helpful, and I intend to do a full post on structure within the next week or so.
I hope you enjoyed reading this post and learnt something that you can take with you to the studio. Thank you for reading, and feel free to comment and follow the blog.
The next post will be a tutorial and another free download of a Max for live Jitter device for Ableton Live.

Audio reactive webcam free download

Just a quick blog post on a Max for Live patch I have been working on, which I have provided as a free download. The video of the patch is below; the music is by the very talented Ilkae (track titled "KK"), be sure to check his music out.

https://ilkae.bandcamp.com/

This patch was inspired by Masato Tsutsui, who is one of my favourite programmers/digital artists. By following his two tutorials, which are linked at the end of the post, I was able to use the feed from my webcam as a texture, which was then made audio reactive in Ableton Live 9.6.

I advise you to watch both of his tutorials, because he explains how to use a webcam feed as a texture better than I can, even though his tutorials are in Japanese!

After completing his two tutorials, I decided to continue on and make the patch audio reactive and able to work in Ableton Live as a Max for Live patch.

I have covered how to create a basic audio reactive Max for Live patch here and here; the same method is used to make the relevant shape audio reactive.

I have included the Max for Live patch as a free download.

The patch will open in presentation mode; to view patching mode, press the yellow button displayed in the image below. Pressing Cmd+E will unlock the patch.

[Screenshot: the yellow presentation/patching mode button]

I have commented the patch to try to help explain some parts of it. If you have any questions, feel free to ask. Thank you for reading this post.

To install the patch:

  • Open up Ableton Live
  • Select the Master Channel
  • Drag and drop the .amxd file into the effect chain
  • Drag a track into any audio channel
  • Press the toggle to start the patch and select "open" to start the webcam
  • Play an audio file and watch the window (Esc will put it into fullscreen mode)

http://audiovisualacademy.com/avin/en/software/maxmspjitter-masato-tsutsui-camera-part-1/

http://audiovisualacademy.com/avin/en/software/maxmspjitter-masato-tsutsui-camera-part-2/

https://vimeo.com/masato221

Very Basic Max for Live Jitter + Download

Max for Live Introduction 

When the developers at Cycling '74 met with Robert Henke and Gerhard Behles from Ableton, Max for Live was born. The programme allowed for the development of devices such as synthesisers, drum machines, audio effects and MIDI effects, and the implementation of live video, inside the music creation and performance software using Max MSP. Max for Live allows Max MSP to be used to create MIDI effects (which process MIDI data), audio effects (which process audio) and instruments (which take MIDI performance data and transform it into audio).

These devices can be created in Ableton Live with real-time processing (the ability to hear instruments as you develop them). With the series of Max for Live-specific objects available (which all begin with live.), MIDI-mapping the parameters of a created device to a hardware MIDI controller is achievable.

Some of the objects include:

live.dial: A circular slider or knob which outputs numbers according to its degree of rotation.

live.gain~: A decibel volume slider with level metering.

live.slider: Outputs numbers as a slider is moved on screen.

live.toggle: Creates a toggle switch that outputs 0 when turned off and 1 when turned on.

By pressing Cmd+M, these live objects can be mapped to any recognised MIDI controller. A GUI (Graphical User Interface) can be designed within the Max for Live vertical device limit to provide both ease of use and accessibility for the user.

Max for Live works through access to the Live Object Model; this map is a guide to everything accessible within Ableton Live. Not all parameters are available in the music software's Application Programming Interface (API); the Live Object Model shows what Max for Live has access to.

[Diagram: the Live Object Model]

Creating a Max for Live Jitter Patch: 

Below I am going to briefly demonstrate how to create a basic Max for Live Jitter patch. I recommend you right-click each object used and select "reference" to read up on it; in my opinion that is one of the best ways to learn Max. The download link for the created patch is below. The idea of this post is to show people new to Jitter how to create a basic audio reactive patch that works within Ableton Live; I have added comments in the patch to try to explain how it works.

To create a Max for Live device, we first open up Ableton Live. Select Max for Live and drag an empty Max for Live audio effect into the master channel.

[Screenshot: an empty Max for Live audio effect on the master channel]

This creates an empty audio device patch containing just the plugin~ and plugout~ objects. These represent the audio coming from Ableton Live and the audio being sent to the audio output device.

When creating an audio effect, the signal from Ableton (indicated by the green and white patch cords) is routed through the created effect and then sent to the left and right outputs.

For a Jitter patch, a copy of the audio signal is taken for analysis while the plugin~ and plugout~ connection is left intact. This means that the audio will play as normal while also being sent to the relevant Jitter object for audio analysis.
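Outside Max, the same routing idea looks like this minimal Python sketch, assuming the third-party sounddevice library: the audio passes through untouched while an RMS level is read from a copy of each block, which is the job jit.catch~ does downstream in the patch.

```python
import numpy as np
import sounddevice as sd  # third-party: pip install sounddevice

def callback(indata, outdata, frames, time, status):
    outdata[:] = indata                           # pass-through, like plugin~ -> plugout~
    level = float(np.sqrt(np.mean(indata ** 2)))  # analyse a copy of the signal
    # hand `level` to the drawing code here

# run a full-duplex stream for five seconds
with sd.Stream(channels=2, callback=callback):
    sd.sleep(5000)
```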

Drag a track of your choice into the same channel; this will be used for the audio analysis in the Jitter patch.

The qmetro object bangs out the frames per second; it is activated with the toggle, and once that is switched on, qmetro starts the video. Unlike the metro object, which triggers a bang at the set interval at all times, qmetro has low-priority properties: it will slow its bangs down depending on current CPU usage, resulting in a lower frame rate. To view the frame rate, the jit.fpsgui object is used, attached to the jit.gl.render object.

To allow for the drawing and rendering of OpenGL, the jit.gl.render object is needed. This renders OpenGL objects to the destination window.

For a video to be rendered successfully, a message is required to handle the erasing and drawing of frames. A trigger bang erase object is used: on each bang from qmetro it first sends "erase" to clear the existing frame, then sends the bang that draws the next one, and this process repeats.

Leaving out this message will result in the image being drawn over itself again and again on the same frame.
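The erase-then-draw cycle is easiest to picture as a loop. The Canvas class below is purely hypothetical, there only to show the ordering that the trigger object enforces in the patch.

```python
import time

class Canvas:
    """Hypothetical stand-in for the render destination."""
    def erase(self):
        print("erase old frame")
    def draw(self):
        print("draw new frame")

canvas = Canvas()
for _ in range(90):     # the role of qmetro's bangs
    canvas.erase()      # the "erase" half of the trigger message
    canvas.draw()       # then the bang that draws the next frame
    time.sleep(1 / 30)  # ~30 fps; qmetro would drop frames under CPU load
```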

To analyse the audio, the jit.catch~ object is used, which transforms audio into matrices. These matrices can be seen by connecting a jit.pwindow to the outlet of the object.

[Screenshot: audio matrices shown in a jit.pwindow]
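Conceptually, jit.catch~ is folding a one-dimensional stream of samples into a matrix. A rough numpy equivalent (the buffer and matrix sizes here are arbitrary examples, not the patch's actual settings):

```python
import numpy as np

sr = 44100
t = np.arange(4096) / sr
buffer = np.sin(2 * np.pi * 110 * t)  # 4096 samples standing in for the live signal

matrix = buffer.reshape(64, 64)       # fold the audio into a 64x64 matrix
print(matrix.shape)                   # (64, 64), ready to feed a shape or pwindow
```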

The next stage is to create a shape. To do this, add the jit.gl.gridshape object, which creates predefined shapes such as a sphere, torus, cube, plane, circle and others.

The jit.op object is added; it is used to add the matrices from the jit.catch~ object to the gridshape. The @ symbol marks an attribute of an object; in the case of jit.gl.gridshape, @shape sphere is added, which will automatically draw a sphere once the main toggle switch is pressed.

To add a menu for an attribute, click on the left-hand side of the object (in this case the jit.gl.gridshape object) and select shape; this adds a scrollable menu, allowing you to switch between the predetermined shapes. The gridshape is then attached to the jit.gl.mesh object, whose attributes give you different draw modes such as polygon, line, point, triangle, quads and line loop; these determine how the shape will be drawn. The @auto_colors attribute is added to apply an array of colour to the gridshape.
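What jit.op is doing here can also be sketched with numpy: take a grid of sphere vertices (what jit.gl.gridshape provides) and add the audio matrix to it, so the signal pushes each vertex in or out. The grid size and scaling are arbitrary illustration values.

```python
import numpy as np

# a sphere as a 32x32 grid of xyz vertices, like jit.gl.gridshape outputs
u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 32), np.linspace(0, np.pi, 32))
verts = np.stack([np.cos(u) * np.sin(v),
                  np.sin(u) * np.sin(v),
                  np.cos(v)], axis=-1)  # shape (32, 32, 3)

# an "audio matrix" the size of the grid, standing in for jit.catch~ output
audio = 0.1 * np.random.default_rng(1).standard_normal((32, 32, 1))

# the jit.op "+" idea: add the audio values to the vertex positions
displaced = verts + audio  # broadcasts across x, y and z
```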

The jit.window object is added; this creates a floating window where your visuals will be rendered.
The jit.gl.handle object is added; this allows you to move the gridshape with your mouse. The shape can be rotated, zoomed in and out (hold Alt and click), or positioned on screen (hold Cmd and click).

[Screenshot: the finished Max for Live patch]

Max has both a patching mode and a presentation mode, the latter being used to create the graphical user interface shown in Ableton Live. An example of this can be seen below.

[Screenshot: the device GUI in Ableton Live]

To add an object to presentation mode, just right-click it and select "Add To Presentation Mode". When all the relevant objects have been added, press Alt+Cmd+E, or press the yellow highlighted button in the screenshot below, to switch between patching mode and presentation mode.

[Screenshot: the presentation/patching mode button]

When in presentation mode, all the relevant objects can be positioned within the vertical device limit. The device can be downloaded here. Press Cmd+E to unlock the patch and switch to patching mode to view this basic patch.

If you have any questions, feel free to leave a comment. Thank you for reading.

I should have another post up soon enough; please follow if interested.

Max for Live Jitter Patch

I've been working on a few Max for Live patches over the last month or so. I'm still relatively new to Max and Jitter and constantly learning more each week.

This patch was inspired by Masato Tsutsui, who is one of my favourite programmers/digital artists. By following his two tutorials, which are linked at the end of the post, I was able to use the feed from my webcam as a texture, which was then made audio reactive in Ableton Live 9.6.

The idea of the patch was to see the movement of whoever is in front of the webcam while also reacting to the music playing in Ableton Live. I still have more that I would like to do with the project.

The video of the patch is below; the music is by the very talented Ilkae (track titled "KK"), be sure to check his music out.

https://ilkae.bandcamp.com/

Max for Live allows Max MSP to be used to create MIDI effects (which process MIDI data), audio effects (which process audio) and instruments (which take MIDI performance data and transform it into audio).

These devices can be created in Ableton Live with real-time processing (the ability to hear instruments as you develop them). With the series of Max for Live-specific objects available (which all begin with live.), MIDI-mapping the parameters of a created device to a hardware MIDI controller is achievable.

When creating an audio effect, the signal from Ableton (indicated by the green and white patch cords) is routed through the created effect and then sent to the left and right outputs. For a Jitter patch, a copy of the audio signal is taken for analysis while the plugin~ and plugout~ objects are left intact. These objects represent the audio coming from Ableton Live and the audio being sent to the audio output device, so the audio plays as normal while also being sent to the relevant Jitter object for analysis.

To analyse the audio, the jit.catch~ object was used. jit.catch~ transforms signal data, which is essentially the audio, into matrices; this can be seen in the image below.

[Screenshot: audio matrices]

For my next blog post, I intend to have a tutorial on creating a basic Max for Live Jitter patch. The links to the tutorials used and the Vimeo link to Masato Tsutsui are below. Thank you for reading.

http://audiovisualacademy.com/avin/en/software/maxmspjitter-masato-tsutsui-camera-part-1/

http://audiovisualacademy.com/avin/en/software/maxmspjitter-masato-tsutsui-camera-part-2/

https://vimeo.com/masato221

Brief History of Visual Music

The history of visual music dates back as far as the 16th century. Through his study of the Pythagorean harmonic proportions of tones and semitones, Giuseppe Arcimboldi displayed a relationship between the musical scale and the brightness of colours. Starting with white and gradually adding more black, he managed to render an octave in twelve semitones, with the colours ranging from white to black; this greyscale painting gradually darkened from white, with black indicating a rise in semitones.

The Italian painter divided a tone into two equal parts; gently and softly, he would turn white into black, with white representing the deep notes and black representing the very high ones.

In 1704, while analysing the spectrum of light, Isaac Newton suggested a close link between the seven colours of the rainbow and the seven notes of the musical scale. The scientist stated that an increase in the frequency of light in the colour spectrum from red to violet corresponded to an increase in the frequency of sound in the diatonic major scale.

Since Isaac Newton's idea, other people have had different responses to the scientist's link between colour and sound.

In 1743, a French mathematician by the name of Louis Bertrand Castel introduced a relationship between colours and notes. This led him to invent the ocular harpsichord, a musical instrument that could transform sound into colour. Each note in the scale represented a different colour; for example, when the C note was pressed, a small panel indicating the colour violet would appear above the instrument. The mathematician later perfected his system, proposing a range of twelve colours corresponding to the semitones.

A number of instruments and responses have since been based on Castel's work, all with their own ideas on the relationship between colour and sound.

With many studies over the years focusing on the relationship between colour and sound, the physicist Ernst Chladni took a different approach and looked at the relationship between sound and form. In 1787 he investigated the patterns produced by certain frequencies through vibration on flat plates.

This was achieved by scattering fine sand evenly over a glass or metal plate and gliding a violin bow against the plate to produce patterns through vibration. The vibratory movement caused the powder to move from the antinodes to the nodal lines, so the lines of sand marked the parts of the plate that vibrated the least. Chladni was able to give sound a dynamic image, and he discovered that the same sound would produce the same pattern each time.

The Swiss doctor Hans Jenny built on Chladni's work with cymatics, the study of sound and vibration made visible. In 1967 he published the first volume of Cymatics: The Study of Wave Phenomena, which documented several experiments performed by Jenny using sound frequencies on various materials including water, sand, liquid plastic and iron filings.

Many crystals are distorted by electric impulses and produce electric potentials when distorted. When a series of electric impulses is applied to such a crystal, the resulting distortions take on the character of real vibrations. These crystals allowed for a whole range of experimental possibilities, with the ability to control both frequency and amplitude: an oscillator is attached to the underside of the plate, and when a frequency is output, the material on the plate forms a pattern.

Jenny then went on to invent the tonoscope, which was constructed to make the human voice visible. By singing into a pipe, the air passes through and causes vibrations on a black diaphragm with quartz sand spread evenly across it.

Hans Jenny stated that with the same frequency and the same tension you would get the same form, with low tones generating simple patterns and high tones resulting in more complex designs.

The pattern is characteristic not only of the sound but also of the pitch of the speech. Jenny also used this device to visualise music, namely orchestral music such as Bach and Mozart.

Thomas Wilfred was born in Denmark in 1889. Upon moving to New York in 1919, he co-founded the Prometheans, a group dedicated to exploring spiritual matters through artistic expression.

Treating light as an art form, Wilfred invented the Clavilux in 1922; it is considered to be the first device designed for audio-visual shows.

The Clavilux had six projectors, controlled by a keyboard consisting of banks of sliders resembling a modern lighting desk. An arrangement of prisms was placed in front of each light source, and Wilfred mixed the intensity of colour with a selection of geometric patterns.

Most of Wilfred's performances with the Clavilux were presented in complete silence; it was not until 1926 that he collaborated with the Philadelphia Orchestra on a presentation of Rimsky-Korsakov's Scheherazade.

Thomas Wilfred produced roughly forty works before his death in 1968, but only eighteen pieces survive. The Clavilux was capable of creating complex light forms which mix together to create a depth of light; this could be seen as resembling the northern lights in Iceland.

Influenced by Thomas Wilfred's colour organ and Léon Theremin's music, Mary Ellen Bute began to develop a kinetic visual art form. She produced several abstract animations set to classical music by Bach and Shostakovich.

This was achieved by submerging tiny mirrors in tubs of oil and connecting them to an oscillator. Of these animations, Mary Ellen Bute said that she sought to "bring to the eyes a combination of visual forms unfolding along with the thematic development and rhythmic cadences of music".

She referred to some of her films as "seeing sound", and a few of Bute's abstract films were shown at Radio City Music Hall, often screened before Hollywood feature films (Center for Visual Music, 2014).

In 1921, the German painter and filmmaker Walter Ruttmann created Opus 1. He assembled each projection print of the film with an old college friend, who wrote the score. A string quintet performed live at each screening of Opus 1, which was shown in several cities across Germany. The abstract shapes moved across the screen in time with the music; Ruttmann achieved this by drawing colour pictures into the musical score so the musicians could synchronise their playing with the film.

After attending a rehearsal of Opus 1 in Frankfurt, Oskar Fischinger decided to make visual music. He started to experiment with slicing wax and clay images, and with silhouettes combined with drawn animations.

Fischinger made some of his earlier films using a colour organ controlled by several slide projectors and stage spotlights with changing colour filters and fading capabilities.

In 1925 he designed a new colour organ with five projectors, which added a more complex layering of colour. Fischinger created wooden cubes and cylinders, painted and covered with fabric, which were projected on screen to create his films.

After moving to America, Fischinger created great works such as "An Optical Poem", set to "Hungarian Rhapsody No. 2", and "Motion Painting No. 1", set to J.S. Bach's "Brandenburg Concerto No. 3".

While attending the "Art in Cinema Festival" in San Francisco in 1947, Fischinger met two painters who had been inspired by his work. One, Harry Smith, painted directly onto the filmstrip; the resulting film was accompanied by a jazz performance.

The other, Jordan Belson, began in 1957 to choreograph visual accompaniments to new electronic music. The composer Henry Jacobs wrote the electronic music while Belson created the visuals using multiple projection devices.

In 1961 he began to create live visuals through the manipulation of pure light. Taking the role of a modern VJ, with his custom-built optical bench with rotary tables, variable-speed motors and lights of varied intensity, he would create live visuals to accompany electronic music.

Belson did not want any of his material uploaded online, and therefore not many of his works are available.

Norman McLaren was born in Scotland in 1914. While studying art and interior design at the Glasgow School of Art in 1933, he began to make short experimental films.

McLaren wrote that while listening to music he would see abstract images in his mind, and after watching his first abstract film in 1934, he discovered a way to make the images in his head visible to others through film.

By painting directly onto the film, he had the ability to display a visual representation of music.

He incorporated a variety of musical styles into his films, including Indian music by Ravi Shankar, a Trinidadian string band and a jazz piano soundtrack by Oscar Peterson.

McLaren also used a technique he called "animated sound": by scratching directly onto the soundtrack of the film, he created unusual electronic sounds, which can be heard in his 1955 film "Blinkity Blank".

While an undergraduate student in electronic engineering and electronic music at the University of Illinois, the American video artist Stephen Beck first began to experiment with video and electronic waveforms to create images. In 1969 he designed the Beck Direct Video Synthesizer, a device that would construct an image using the basic visual elements of form, shape, colour, texture and motion. Using no camera, Beck's invention generated video from sound.

In his essay titled "Image Processing and Video Synthesis", the video artist described four distinct categories of electronic video instruments:

Camera Image Processing

Direct Video Synthesis

Scan Modulation/Rescan

Non-VTR Recordable

Camera image processing modified the signal from a black-and-white television camera by adding colour to it.

Direct video synthesisers were designed to operate without a camera, containing circuitry to generate a complete video signal: colour generators to produce colour, form generator circuitry designed to create shapes, and motion modulation to move the shapes using electronic waveforms such as curves, sines and other frequency wave patterns.

Scan modulation/rescan manipulated images by means of deflection and electronic modulation; images on the screen could be rotated, stretched and reflected.

The non-VTR recordable category covered imagery that could not be recorded to video tape and existed only on the TV displaying the output (Stephen Beck, 1975).

In 1973, a series of live performances took place titled "Illuminated Music", with Stephen Beck controlling the visuals and the electronic musician Warner Jepson performing the accompanying music on a Buchla 100 analogue modular synthesiser.

Beck and Jepson, both members of the National Center for Experiments in Television, worked together, performing Illuminated Music in front of audiences in Dallas, Boston and Washington DC.

These performances demonstrated the integration of electronic music and video synthesis, an art form which is still practised to this day.

The majority of electronic music concerts have a visual element present; this is either performed by the artist themselves or, more frequently, by video programmers who tour with the artist and develop and perform the visual element of the show.

Commonly used software for this includes Resolume, VDMX and MadMapper.

With graphics processing units (GPUs) and processors becoming more powerful over the years, many modern methods of developing and programming video have become available. Quartz Composer, Jitter and VVVV are all video synthesis tools used to create original visuals.

Thank you for reading. I hope you gained an insight into how music has been perceived visually over the years; the next post will be about modern digital artists and electronic musicians.

Summary of my Level 7 and 8 degree Music Projects

Over the last two years my interest in digital art and creative coding has increased. Initially I started with VVVV, which I used in my Level 7 degree final-year project: an Ableton Live user could trigger both video and audio simultaneously using a MIDI device I developed on the iPad with Lemur. By assigning the same MIDI control change message to both the audio and the video, both could be played at the same time using just one laptop.

With VVVV only supporting DirectX on Windows, I began to learn Max MSP and Jitter upon purchasing a MacBook Pro, and used them to develop my Level 8 degree final-year college project in music production. Jitter was a steep learning curve for me, and attending a four-day workshop on creative coding at the Digital Arts Studio in Belfast was extremely beneficial. For my final-year project I created a Max for Live patch which enables Ableton Live users to add a simple visual element to their performances. Using audio analysis, the audio from Ableton Live animates the selected gridshape in Max for Live, and the patch is MIDI-controllable using any recognised hardware MIDI controller.

[Screenshot: Level 8 Music Production Project]

Over the last month I have begun work on several pieces, which can be seen in the collage above. Computer-generated art, I suppose you could call it.

Thanks for reading this post; the next one will be on electronica music production.

Max for Live Jitter Project

[Screenshot: current Max for Live Jitter project]

The image above is a screenshot of a project that I am currently working on. The feed from my webcam is used as a texture, which is applied to the mesh. The mesh allows for different draw modes, changing how the OpenGL shape is drawn. The gridshape object is also used, allowing the shape and colour (in this case pink) to be changed. The shape is rotated onto its side as a matter of preference.

The patch is made audio reactive using the jit.catch~ object, which transforms signal data, essentially the audio, into the matrices that animate the gridshape. Increasing the amplitude allows quieter sounds to be represented on screen.

The idea of the patch is that the video content changes when someone moves in front of the webcam, creating a variation of some sort.

An Introduction

I'm an electronic musician who has recently graduated from Limerick Institute of Technology. Over the years I've developed an interest in digital art and in the relationship between electronic music and video.
I'm an avid user of Ableton Live, Cubase 8, Max MSP/Jitter, Max for Live and Processing. I intend to post screenshots, videos and audio clips of my work, and to blog about music production, mixing and mastering techniques, digital art and the odd tutorial.

Feel free to follow and comment on posts. Thank you for reading.
