Category Archives: Second Year

Audio Visual

Brief #1 – Short Narrative

The idea:

When deciding what to do for this brief, I came up with several ideas. My first was a stop-motion-style, abstract piece involving finger-puppet characters playing a life-sized piano together; the other was an over-dramatic portrayal of some mundane daily activity.

I settled on the latter and developed the idea into a sort of parody of the film noir genre, showing a cup of tea being made in a fast-cutting montage style.

I decided on this because it was a simple, straightforward idea that suited the brief. I chose to shoot the piece in black and white and set it to Beethoven's Moonlight Sonata, to add an almost humorous drama to the boring activity taking place.

Research:

My biggest influence was Epic Tea Time, which follows much the same lines as my idea. Epic Tea Time – With Alan Rickman

My first inspiration was the YouTube series “How To Basic”, whose videos depict daily chores in a really over-the-top, chaotic way. HowToBasic – How To Correctly Add Milk to Your Tea

And my second was the film technique of fast cutting, which involves editing very short clips together into a fast-paced, dynamic montage.

The Storyboard:

The Project:

_________________________________________________________________

Brief #2 – After Effects

Utilizing the skills we learned in class throughout this module, I began by creating a simple logo design in Photoshop and importing it into After Effects.

I then applied two separate, interchanging lights to create a hazy, waving animated effect by adjusting and keyframing their parameters. I finished the piece with a simple sound effect to add a bit of depth and atmosphere.

_________________________________________________________________

Final Brief 

Pre-production:
When thinking about what I could create for this project, I spent a while weighing up the pros and cons of each option, including the financial drawbacks and the technical ability each would require. It was imperative to pick a project that was viable given my time and resource constraints, otherwise the quality of the work would be compromised and lead to a poor outcome, so I was careful to analyse the viability of each of my ideas.

My first idea was to create an animation of some sort, set to a background score composed by myself and aimed at young adults aged fifteen to eighteen. The premise was a darkly humorous cartoon sketch featuring a quirky art style, an ambient electronic score and voice actors.

The positive points of this idea were that it would be different and challenging compared to media projects I've done in the past, and it would be interesting to venture into new territory. To pull it off I would need to commission animators and voice actors, or enlist students to collaborate. Commissioning professionals would potentially be costly, so I looked around and networked with a few animation and art students instead.

I had a hard time finding someone with the quirky, unusual stylistic qualities I was after, which was one of the reasons I didn't go with this idea for my final assessment piece. Other aspects that factored into this decision were that it would be a very time-consuming project involving a lot of extra work on the part of the artists (creating concept art and final designs would be a lot of work in itself, let alone producing a three-minute animation). Using voice actors would also involve a lot of liaising, studio time and post-production: getting the recorded audio to play correctly, adding sound effects and lip-syncing. With all this in mind, the idea was simply not suitable given the technical requirements in comparison to my time constraints.

My second idea was to film and edit a promotional video for a club event. Set to an '80s synth-pop/gothic electronic soundtrack, it would feature cinematic footage of a local monthly goth/alternative club night held in Swansea. I thought this would be a good choice because of the bold, striking looks associated with alternative sub-cultures; attendees usually dress up in very interesting, eye-catching costumes that I knew would add great aesthetic appeal to my promo. The event also regularly features fire dancers and burlesque/circus artists, which could provide some very dynamic and interesting footage to intrigue viewers.

This idea came with some risks, though. It would mean having only a few hours to capture my footage, with no opportunity for re-shoots because of the monthly nature of the event. Filming in a nightclub would also mean shooting in very poor light with no way of controlling it, as it is a public event. If I didn't capture enough interesting, well-lit footage I wouldn't be able to go back and try again, so I would only have one chance to get everything right; for this reason I didn't proceed with it.

My third and final idea was to create a simple music video for an existing track from my personal music project. For this idea I wanted to create an atmospherically dark piece with gritty, unnerving visuals revolving around a female character in various self-destructive and dangerous situations, with an emphasis on the concept of suffering to create art. I chose to proceed with this idea, as it was the most viable in terms of technical requirements and it was the one that excited me the most.

_________________________________________________________________

Inspiration & Research:
My research drew primarily on independent cinema and short films, but I also looked into the media as a whole through magazines and newspapers. It was important to me to do plenty of research so I could find ideas and themes to reference later on.

One big influence was a scene from the British independent film Franklyn (Emilia from Franklyn), in which a female character attempts suicide by taking an overdose on camera as part of an art project. The idea of suffering or even dying in the name of creating art was very interesting and unsettling to me, so I decided I would like to reference this in my piece somehow; it later became the main theme of my video's narrative.

Another big influence was the film Requiem for a Dream, which depicts four desperate drug addicts experiencing delusions as they struggle to achieve their dreams and ambitions. For example, the character Sara is a lonely housewife who dreams of being on TV and maintaining her youth; in the process of striving for perfection she becomes addicted to prescription drugs to the point of developing psychosis, and ends up institutionalised, lying in bed all day having hallucinations of herself as a television star. I found this to be a really strong metaphor for society's fixation on vanity and glamour, and for the consequences of fixating on the unrealistic world the media feeds us. Requiem for a Dream

The title of my song, “Hypodermic Model”, is a reference to the “hypodermic needle model”, a theory of media effects. The concept is that the media “injects” ideas into our heads, and we as an audience passively accept these ideas as truth and are immediately influenced by them: as a society we do not look for subtext, and we take ideas on at face value without critically analysing them.

With this in mind I wanted to create a piece that was a visceral, vividly realistic account of the unsettling nature of living in a consumerist society controlled by the media, with an economy fuelled by making people insecure and unhappy in order to sell them material goods; in short, to use media to illustrate how media can be damaging.

I did more research by looking at newspapers and women's magazines. I came across a lot of bizarre and contradictory content: weight-loss plans built around diet pills printed right next to advertisements for fast food, or advice on creating “effortless, natural beauty” looks that recommends spending hundreds of pounds on designer make-up. The beauty industry uses media in a powerful way to instil feelings of inadequacy in the public, so that people feel that to be beautiful they need to buy whatever product is being sold.

Another factor I came across during this research was the media's fixation on celebrities' suffering: examples like Britney Spears' breakdown in 2007, which resulted in jokey headlines like “Britney Shears” after she shaved her head, or the legacy of Amy Winehouse, where despite her being a very talented musician the world was more fascinated by her alcoholism and drug abuse. The tabloids loved to use Winehouse's deteriorating condition for humour, and events like her going into rehabilitation were written about in a lighthearted, funny way instead of being treated seriously.

The way society obsesses over and entertains itself with the flaws of desperately unwell celebrities like Spears and Winehouse only highlights the sick nature of the media as a whole. As the hypodermic needle theory suggests, the audience for these “scandals” enjoys watching celebrities break down but fails to recognise the real human beings going through difficult times behind them. This is the key point I aimed to use as the message of my piece.

_________________________________________________________________

Treatment:
After exploring all the relevant themes and doing plenty of research, I felt I had enough inspiration to start thinking about specifics in terms of exactly what I wanted to film and what the final product would look like. I made sure to carefully plan my ideas out beforehand to ensure I’d be happy with the footage captured and wouldn’t need to re-shoot or settle for anything of unsatisfactory quality.

I needed to start with the narrative itself. As mentioned in my initial brainstorm, the majority of the piece was planned to depict a female character going through things like self-harm, alcoholism and drug use for the sake of creating something she considers artistic. These would be simulated by the actress whose help I enlisted (the same person who contributed the female vocals for my track), and I needed to make sure they looked realistic. We shot these scenes using the “interview technique”, which uses framing and space to give the impression that someone else is in the room. This made the focal point of each shot her surroundings and overall environment rather than the acts themselves.

The “story”, as such, opens with the camera (representing the audience's point of view) entering a viewing room where a video projector is set up, followed by a montage of the character engaging in various forms of self-abuse before burning her house down in a suicide attempt (played in reverse order, beginning with the end). The piece ends with a zoom out from the projector, revealing that the woman is in fact alive and documented all of this as part of an art project.

For the structure, I came up with the idea of a linear storyline whose scenes play in an impressionistic, almost muddled-up order, with some footage played backwards. This hazy, vague style would let me create a cinematic experience for viewers despite the piece being paced like a music video. The track I used for my soundtrack runs at 80 BPM, so I needed the pace of the video to match (at 80 BPM each beat lasts 60 / 80 = 0.75 seconds, or three seconds per four-beat bar). I added motion blur to some of my footage to create an illusion of slowness, almost dragging, to match the slow pace of the song, and I also pulled focus in and out of certain shots to create a hazy, uncomfortable atmosphere where the viewer cannot tell exactly what is happening.

In terms of camera and editing techniques I planned to use a lot of speed variation, slowing down and speeding up certain footage to create a hazy or manic effect where needed and give a sense of unnatural motion. I also filmed a lot of shots from a low angle, to cast high shadows and give an overbearing, oppressive feeling around the female character.

I planned each scene and shot around the lyrics of my song, so piece by piece I had an idea in mind for every section. Whilst most of the scenes are not literal or direct illustrations of the lyrics (which I did intentionally to avoid the video coming across as contrived), there were a few I really wanted to include. I used the lyrics as a kind of storyboard to set the pace and flow of the video, framing each shot around each line to keep them in time with one another and to ensure the overall point I set out to make comes across.

I wanted a dark colour palette with very little use of bright colour besides the bright red that appears a few times throughout the video, which I exaggerated during colour correction so it stood out more. I did this intentionally because in media the colour red is taken to represent anger, danger or passion, so I thought it would be an interesting bit of symbolism to have some of the backgrounds be red, as well as all of the blood, the fire and the actress's hair. The contrast of the darkness against all the red makes a very interesting juxtaposition.

_________________________________________________________________

Conclusion:
I set out at the beginning to achieve a music video that would act as a social commentary on the damaging nature of media and the effect obsessing over it has on the public. I also wanted to explore the subjective nature of artistic expression and the meaning we place upon it.

It was important that the video had a bold aesthetic appeal and a stylised look, which I think I achieved to an extent; but if I could do this project again I would film in better lighting conditions so that I wouldn't have to spend so much time colour correcting. Through having to boost a lot of the colour, I compromised the quality of my footage, so this is something I will be careful with in future projects.

I also found that, as I filmed in HD, the file size by the end was very large, so due to time constraints I made a second, converted and compressed version of the file. This allowed me to upload and share it with websites and people much more easily and quickly, although this version was lower in quality and pixelated in places. If I were to work with HD again I would give myself plenty of time to upload such a large file. This made me realise just how many technical aspects you have to take into consideration when making a music video, and next time I will manage my time more wisely.

I am happy with how the pace of the video turned out. Utilizing effects like focus pulls, motion blur and reversing footage really created a distinctive atmosphere of haziness and discomfort and helped fit the footage to the slow pace of the song. I would have preferred to have had more shooting time to explore different angles and shots, but I am overall happy with how the pacing and structure turned out.

The research I gathered really helped to solidify my ideas and fine-tune the points I wanted to make. Having a broad range of influences made it easier to come up with a cohesive message and helped me understand what makes a successful media project that has an emotional effect on the viewer. Looking at magazines and the like gave me an interesting insight into the effect media has on the public and the way it encourages them to think and behave: for example, the depersonalisation of celebrities and the way we are encouraged to be entertained by their shortcomings whilst separating this from the real human being experiencing the pain.

Putting all of my research into my final idea proved to be a very effective way of giving my project emotional appeal as a piece of social commentary. Making use of symbolism and subtext allowed me to embed a message in my piece that appeals to the part of my target audience who enjoy independent/artistic cinema, while remaining simple enough for casual experimental-music fans to take what they want and apply their own meaning.

Something I learned from my research was that the protagonist of any piece of media needs to be believable and realistic in order to be empathised with, so my conscious decision to make my main character a stereotypical “punk” created a sense of realism that I believe my target audience would appreciate and relate to. I also used relatability in my brand placement, such as the bottle of Jack Daniel's whiskey the main character is seen drinking, which is a household name and easily identifiable.

Before releasing my music video online to the public as per my marketing plan, I did a “test screening” with some close friends. The feedback was generally positive, with most of the critique concerning the graphic re-enactments and dark subject matter. As these were focal points of my piece and crucial to the narrative, I was adamant that I did not want to remove them, and the fact that they made some people uncomfortable suggests the mood and atmosphere I wanted to create were also successful. When I released the final product to the public the feedback was the same.

Overall I am very happy with my project as a whole. I believe I achieved most of what I set out to do, and I was well prepared with a well-thought-out plan by the time the production stage came around. My use of camera and editing techniques was effective in creating a look and mood well suited to my narrative, although I would experiment even more were I to do this again. My use of lighting and sets could have been better, but I still feel my use of colouring and symbolism made up for this. I achieved a nice balance of subtle visual effects that didn't draw attention away from my narrative: throughout the process I tried out several transitions and numerous effects in different areas of the project, but I felt they were too distracting, so I kept to the subtlest of effects.

https://drive.google.com/a/students.southwales.ac.uk/file/d/0B55OXwYU5HDrWUZyNjlWVWxGLVk/view


Global Perspective Development

Initial Project Pitch:

For my Global Perspective project, I really enjoyed exploring game systems within improvised music, so for my final project I aim to develop an improvised battle system where two people can create music together using games controllers. The idea of the project is to put you into a gaming mindset whilst still creating music. I think the project will also allow gamers who aren't necessarily musicians to create music.

I plan to develop a MaxMSP patch to convert an Xbox 360 games controller into a MIDI device. I am hoping to get as much out of the controller as possible by utilizing as many buttons as possible, to give the player a greater range of sounds and possibilities.

I am hoping to create a system that is easy enough for people to simply pick up and play, and that allows for quick, fun improvisation games.

______________________________________________________

The Development:

Within another module this year I had already begun developing a MaxMSP patch that converts games controllers into MIDI devices, so by expanding upon that work I was able to get a fully functional patch.

These are a few images of my patch at several stages of development:

Once I was happy that the controllers were working correctly and I had thoroughly tested them several times, I started MIDI mapping within Ableton. I colour-coded the samples I used to match the colours on the controller.

Ableton Colour Code

I used the Start and Back buttons on the controller to trigger percussion and the Y, B, A and X buttons to trigger samples and sounds. I also MIDI-mapped the right (RB) and left (LB) shoulder buttons: one lets me loop any of the samples or sounds, and the other bypasses or turns on distortion. I mapped one analogue stick to change pitch/transpose and the other to control the amount of distortion. I repeated this process for the second controller, as I knew it would be important to keep the controls and functions the same for each.


Midi Mapping
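The actual conversion was done in MaxMSP, with the mapping handled in Ableton, so no code was written for the project itself. Purely as an illustration of the same controller-to-MIDI idea, though, a minimal C++ sketch (using the SDL2 and RtMidi libraries as stand-ins for the Max patch, with placeholder note numbers) could look like this:

```cpp
// Illustrative sketch only, not the MaxMSP patch: read the Start/Back and
// face buttons of an Xbox 360 pad with SDL2 and send MIDI note-on/off
// messages with RtMidi for a DAW such as Ableton to map onto samples.
#include <SDL2/SDL.h>
#include "RtMidi.h"
#include <array>
#include <utility>
#include <vector>

int main() {
    SDL_Init(SDL_INIT_GAMECONTROLLER);
    SDL_GameController *pad = SDL_GameControllerOpen(0);   // first connected pad
    if (!pad) return 1;

    RtMidiOut midi;
    midi.openVirtualPort("Xbox Pad");   // virtual MIDI port (macOS/Linux)

    // Start/Back for percussion, Y/B/A/X for samples; note numbers are arbitrary
    // placeholders for whatever the DAW maps them to.
    const std::array<std::pair<SDL_GameControllerButton, unsigned char>, 6> mapping = {{
        {SDL_CONTROLLER_BUTTON_START, 36}, {SDL_CONTROLLER_BUTTON_BACK, 38},
        {SDL_CONTROLLER_BUTTON_Y, 60},     {SDL_CONTROLLER_BUTTON_B, 62},
        {SDL_CONTROLLER_BUTTON_A, 64},     {SDL_CONTROLLER_BUTTON_X, 65},
    }};
    std::array<bool, 6> held{};   // previous state, so each press sends one note-on

    while (true) {                // runs until the process is killed
        SDL_GameControllerUpdate();
        for (size_t i = 0; i < mapping.size(); ++i) {
            bool down = SDL_GameControllerGetButton(pad, mapping[i].first);
            if (down != held[i]) {
                // 0x90 = note-on, 0x80 = note-off, on MIDI channel 1
                std::vector<unsigned char> msg = {
                    static_cast<unsigned char>(down ? 0x90 : 0x80), mapping[i].second, 100};
                midi.sendMessage(&msg);
                held[i] = down;
            }
        }
        SDL_Delay(5);             // ~200 polls per second is plenty for buttons
    }
}
```

A DAW would then see the virtual port and map the incoming notes to samples, in the same way the output of the Max patch was MIDI-mapped in Ableton.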

During development, I decided that one of the controllers should trigger bassier sounds than the other, so I selected higher-pitched sounds for the second. After I had found a variety of sounds for each controller I passed the system on to a friend to test, before finally testing it with the class. The feedback from both trial runs was along the same lines: it would work a lot better if the controls diagram was left on the screen to refer to at all times. I took this feedback on board and decided to keep the control diagram on screen for my final performance.

Controller Controls

The Game Controls

Another thing I took into account from the test runs was that improvisation can go on for a long time, so, in keeping with the theme of making it a games system, I took several space-style videos and cut them down to two minutes in length to create levels. In my final performance players can now choose from five selectable levels to improvise to. I felt that a time restriction not only allowed for a game-style challenge, but also made players think on their feet; it also allowed more people to get involved.


The Five Selectable Levels

 

After doing several trial runs, I also decided to pan the output of each controller to opposite sides, creating left versus right. This makes it clearer not only for the players, who can hear what they are individually creating, but also for viewers, who can hear the difference between what the two players are doing.

v-Games Controller Video Development Diary-v
(The Testing Process)

For my set-up, I plan to have two laptops, one on the left and the other on the right, correlating with the panned controllers (left controller on the left, right on the right). The two laptops will display a still image of the game's controls. In between the two laptops I also plan to use a projector to show the selected level; the two-minute video will hopefully put the two players into a certain mindset or zone.

______________________________________________________

Conclusion:

Overall I am rather happy with the outcome of my project. I feel I achieved what I set out to do, and it developed quite steadily, which brought it all together gradually. I feel I created a fairly simple, straightforward improvised game system that sits in a nice middle ground between being a video game and creating music. If I were to do it again, I'd give more of an explanation, and maybe a demonstration of the game system on the day, so people could get a better idea of what they were doing. I would also tighten up the video levels by adding an on-screen timer and a clear ending to each video, so people knew when the round was over. If I were to take this project further I would spend time tightening it up and making those few changes.

v-Final Performance-v

Research:

List Of Music Games

Looks Like a Game Controller, Plays Like a Chip Instrument

Cannon Fodder theme played on Game Controllers

Playing music with the Steam Controller Haptic Actuators : Still Alive

MUSICAL IMPROV GAMES


10th November – 25th Of January ~ Final Project.

The idea & Project pitch:

For my final Emerging Technologies project, I want to create and develop a live performance tool for myself that I can use as a unique playable instrument at a live show. I aim to take a Guitar Hero controller and convert it into a MIDI controller device. I am hoping to open up the case of the guitar and fit an Arduino/Teensy inside.

My initial idea is to use the buttons already found on the Guitar Hero controller to trigger sounds. I also want to add a rotary dial somewhere on the body of the guitar, possibly to change the pitch or a filter, and to run some metal wire through the frets of the guitar to give a larger range of sounds. Lastly, I want to convert the two buttons found near the bottom of the body into drum triggers, so that by pressing them you can create a beat using a kick drum and a snare.

v Initial Idea Diagram v

First template

______________________________________________________

Guitar Hero Development Process:

This picture shows the guitar with no alterations or modifications made: the foundation and device I would be working with. I first looked into whether the Guitar Hero controller could be read over USB, since that would have let me create a MaxMSP patch and convert it into a MIDI device from there. After I was unable to achieve this, I returned to my initial idea of modifying the guitar and running it through a Teensy.


Original Guitar Hero Case

I first opened up the neck of the Guitar Hero controller to expose the circuitry inside the neck. I then removed the board fixture and all of the wires within, hollowing it out.

I then realised that the board attached to the inside of the neck held the buttons in place and was also how the buttons worked, and it was difficult to figure out how I was going to make the buttons work with the Teensy. After a long process of experimentation I couldn't achieve this, so I set out to think of other possible ways I could still reach my goal. After a little while, I came up with the idea of covering the buttons in a metal coating and running metal wires from them to the touch-sensing pins on the Teensy. This worked and gave me the effect and functions I was after, and I then tested to make sure the method was reliable.

Once I was happy and felt confident that the buttons were functioning correctly, I moved on to the touchable frets. By drilling small holes into the neck of the guitar I was able to push thin wire through, which I positioned and pulled tightly against the fret ridges of the design.
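As a rough sketch of how the touch-sensing side of a build like this could work on a Teensy 3-series board (the pin numbers, threshold and note values below are assumptions for illustration, not the actual wiring), the firmware only needs to poll touchRead() on each wired pad and send a USB MIDI note whenever the reading crosses a threshold:

```cpp
// Illustrative Teensy 3.x sketch: capacitive touch pins -> USB MIDI notes.
// Pin numbers, threshold and note values are placeholders, not the real build.
// The board's USB Type must be set to "MIDI" in the Teensyduino Tools menu.

const int touchPins[5] = {0, 1, 15, 16, 17};   // touch-capable pins wired to the pads
const int notes[5]     = {60, 62, 64, 65, 67}; // MIDI note each pad should play
const int threshold    = 2000;                 // touchRead() value that counts as a press
bool held[5]           = {false};

void setup() {
  // Nothing needed here: touchRead() and usbMIDI work without explicit setup.
}

void loop() {
  for (int i = 0; i < 5; i++) {
    bool touched = touchRead(touchPins[i]) > threshold;
    if (touched && !held[i]) {
      usbMIDI.sendNoteOn(notes[i], 100, 1);    // note, velocity, channel
    } else if (!touched && held[i]) {
      usbMIDI.sendNoteOff(notes[i], 0, 1);
    }
    held[i] = touched;
  }
  delay(5);   // light debounce / polling rate
}
```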

I then moved on to removing the strumming mechanism of the Guitar Hero controller, because I wanted to be able to see the Teensy and turn it on and off directly. Opening up the body, I placed the Teensy inside and taped it down from behind, holding it firmly in place. This positioning also made it easy to attach the wires running down from the neck of the guitar. By also removing the whammy bar, I was able to run the wire from the Teensy out through the empty hole where the whammy bar used to be.

Once I was happy with all the alterations I had made, I tested the overall product several times and also got a friend to test it. I realised that, with no dials or changeable effects, the guitar didn't quite have the overall impact I was looking for; I knew it needed something else, as it was possible to do a lot more with it.

v-Testing Video Diary-v

______________________________________________________

Midi Xbox 360 Controller Process:

As I had really enjoyed creating and developing the games controller using MaxMSP, and it was something I wanted to take further, I came up with the idea of developing the controller into a MIDI device that would control and change the Guitar Hero instrument, giving it a lot more possibilities and functionality.

I began by taking my previously developed patch and expanding upon it. I looked into using MaxMSP as a MIDI device, deleting the sound-triggering functions from my previous patch and replacing them with a MIDI send function within Max.

Once I had the Xbox 360 controller fully functioning, I started MIDI mapping within Ableton. I colour-coded the samples I used to match the colours on the Xbox controller. I used the Start and Back buttons to trigger percussion and the Y, B, A and X buttons to trigger samples and sounds. I also MIDI-mapped the right (RB) and left (LB) shoulder buttons: one lets me loop any of the samples or sounds, and the other bypasses or turns on distortion. I mapped one analogue stick to change pitch/transpose and the other to control the amount of distortion. This works alongside the playing of the guitar, allowing you to change the pitch and level of distortion of each note you play.
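The stick scaling itself happened inside Max and Ableton's MIDI mapping, but the underlying arithmetic is simple: common controller APIs report a stick axis as a signed 16-bit value, which has to be squashed into MIDI's 0-127 controller range before it can drive pitch or distortion. A small illustrative C++ sketch of that mapping (the example readings are the only assumption here):

```cpp
// Sketch only: how a signed analogue-stick reading (-32768..32767) can be
// scaled onto a 0..127 MIDI CC value for pitch/transpose or distortion amount.
#include <cstdio>

int axisToCC(int axis) {
    return (axis + 32768) / 512;   // shift to 0..65535, then divide down to 0..127
}

int main() {
    const int positions[] = {-32768, -16000, 0, 16000, 32767};  // example stick readings
    for (int p : positions)
        std::printf("stick %6d -> CC value %3d\n", p, axisToCC(p));
    return 0;
}
```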


The Final Product.

Final Product Testing:

______________________________________________________

Research:

Guitar Hero:

DIY Arduino Ribbon Synth

Arduino Guitar

Servo Bender

Arduino MaxMSP Guitar

Electric Guitar, Arduino, and Max/MSP/Jitter

Dubstep Guitar Demo by Mukatu

Music with Guitar Hero Controller

Controller: 

Connecting a Joystick to MaxMSP/Jitter

Xbox 360 Controller MaxMSP Video

Max MSP Sampler/Looper Wireless Interface Patch

XBox 360 OSC Controller

MAXMSP – MIDI Tutorial 1: Basic MIDI

______________________________________________________

Conclusion:

I feel my project overall turned out rather well, and I personally feel I achieved what I set out to at the start. The development really challenged me to think of ways to overcome obstacles without losing sight of what I wanted to achieve, and I feel I managed to scale a rather complicated idea down to my own technological level, so I see this as a personal success and am happy with the outcome. If I were to do this again, I would focus more on building upon the Teensy and integrating the Xbox controller within the Guitar Hero controller body itself. Overall I wasn't too pleased with the final performance: I mistakenly brought an old template rather than the latest Ableton file, so it felt more like a short demonstration than a performance. If I were to do it again, I'd make sure I was more organised and would practise more with the instrument in an improvised situation.


27th – 3rd Of November ~ Assignment #2: MaxMSP Patch

Slide one – The Brief: Using the tools and techniques covered in sessions 4 & 5, create your own piece of software using MaxMSP that takes an external input, such as a camera feed, video, game controller or smartphone, to control sound in some way. We can use any of the technologies covered in class. The aim of the assignment is to think about who will use the work, what it is being used for, where it is being used, how, and why. It is in this sense that you can start to interrogate the needs, approach, benefits and competition of the application.

Slide two – The Idea: Using some of the techniques we covered in previous sessions of Emerging Technologies, I've decided to create and develop a MaxMSP patch that will allow a Microsoft games controller to trigger audio playback.

This item would be for personal live use. By self-branding this gimmicky item and making it a unique feature of a live stage show, you would bring something different to your viewers and stand out from the crowd. Hypothetically, after a while you could go on to brand and sell the item as a product.

Slide three – The Hypothetical Controls: This is the first draft of the controls I set out to achieve. I really wanted to get as much out of the controller as possible.

Controls

Slide four – The MaxMSP Patch: The Max patch took a lot of work and figuring out. I started by getting MaxMSP not only to identify the controller but to read all the various numbers coming in from it. I then moved on to separating those numbers and identifying which parts of the controller they came from. I found a great online tutorial on recreating the Xbox 360 controller, which I followed and which gave me a nice, clear layout of the controller in Max. Once I was happy that I had identified the components of the controller, I went on to assign sounds to the controls, based on the method we covered in class.

Slide five – The Conclusion: The controller didn't completely turn out the way I would have liked. Although most of the buttons worked fine and I got the controller making sounds, I would have preferred to have all of the controls working and running a little more smoothly. I would personally like to take this project further and convert the controller into a MIDI device, which would allow it to trigger sounds directly from DAW software.

v The Project Pitch Presentation v

2nd Assignment Prezi

Research Links:

Connecting a Joystick to MaxMSP/Jitter

Xbox 360 Controller MaxMSP Video

Max MSP Sampler/Looper Wireless Interface Patch

XBox 360 OSC Controller


27th Of October – #3.MaxMSP Recap

Looping Patch

Today we looked into developing a patch that not only loops but also allows you to manipulate a saw waveform. The phasor~ in the patch is the saw wave, which is split in stereo into two separate objects containing the *~ command; in MaxMSP this is a signal multiplier that outputs the multiplication of two signals. The next object is play~, which is MaxMSP's way of reading and playing back the loop files found on the computer. This then goes into gain~, and the gain meters allow you to adjust each of the levels separately to mix the two sounds or bring one in and out. Finally, the gain is wired up to ezdac~, which appears as a button that can be clicked with the mouse to turn audio on or off.

loop patch
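The patch itself is built from Max objects rather than code, but the idea behind it, a repeating 0-to-1 ramp (phasor~) driving the playback position through a loop while *~ and gain~ scale the signal, can be sketched in plain C++ purely as an illustration. The synthetic sine-wave buffer below stands in for a loop file:

```cpp
// Concept sketch only: a phasor-style ramp reads through a loop buffer,
// and a gain multiplier scales the output, like *~ and gain~ in the patch.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double pi = 3.14159265358979;
    const double sampleRate = 44100.0;

    // One second of a 220 Hz sine stands in for a loop file loaded from disk.
    std::vector<double> loop(static_cast<size_t>(sampleRate));
    for (size_t i = 0; i < loop.size(); ++i)
        loop[i] = std::sin(2.0 * pi * 220.0 * i / sampleRate);

    const double loopFreq  = 1.0;                 // one pass through the loop per second
    const double increment = loopFreq / sampleRate;
    const double gain      = 0.8;                 // gain~ equivalent
    double phase = 0.0;                           // phasor~ equivalent: 0..1 ramp

    for (int n = 0; n < 10; ++n) {                // print a few output samples
        size_t index = static_cast<size_t>(phase * (loop.size() - 1));
        double out = loop[index] * gain;          // *~ equivalent: signal multiplication
        std::printf("sample %d: %f\n", n, out);
        phase += increment;
        if (phase >= 1.0) phase -= 1.0;           // wrap around, like phasor~
    }
    return 0;
}
```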


23rd Of October – #2.MaxMSP Follow Up

My thoughts on Joseph Paradiso's 1998 analysis of Electronic Music Interfaces

Reading Joseph Paradiso's analysis, and knowing it was written in 1998, you can tell he predicted, or at least had an idea of, where music and especially electronic interfaces were heading. I found myself agreeing with his view that electronic music, in contrast, has no such legacy as classic acoustic instruments. He also noted that the field has only existed for under a century, giving electronic instruments far less time to mature, and I feel that is still true even now.

He also went on to describe the rapid and constant change in electronics, including the move to digital, and predicted that computers would play a major part in producing modern music with no additional hardware, which is undeniably true by today's standards. Finally, he foresaw the rise of performance away from conventional electronic interfaces: “In the not-too-distant future, perhaps we can envision quality musical performances being given on the multiple sensor systems and active objects in our smart rooms, where, for instance, spilling the active coffee cup in Cambridge can truly bring the house down in Rio.” What we've covered in class alone shows this is now possible and easier than ever.

In conclusion, I agreed with Joseph Paradiso's analysis and feel he had a really clear understanding of where music was heading; his predictions for the not-too-distant future are now truer than ever. I think the possibilities of music these days are almost endless, from producing to performing, and it is harder than ever to predict or foresee what further possibilities are in store. But it is undeniable that classic acoustic instruments will always play a big part in music's legacy.

_____________________________________________________

MaxMSP Research Links

A couple of further research links covering the methods we’ve used so far:

A remote controlled visualization of the sound generated in max/msp: Sound Sync in MaxMSP

This project combines a complex feedback system and a self-organising system in an interactive way in order to generate an amazing and spacious sound. It visualises the phase differences and transforms them into dramatic graphics, trying to control the unpredictability and diversity with endless, evolving feedback effects. People's motion affects the sound and therefore changes the final visual image as well as the feedback effects: Audio-Visual Feedback System on Max/MSP

A great project of someone who used a Nintendo Wii controller to wirelessly modify their guitar effects: Wii Guitar Max/Msp


20th Of October – #1.MaxMSP

Audio Reactive Video

In class we covered triggering and synchronising visuals with sound. Using the Macintosh's built-in microphone to pick up sound, we developed a patch that lip-synced a video whenever sound was detected. Using several object and message possibilities in Max, we picked a video and set its start and end points, i.e. the first and last frame. The incoming sound then drives the motion of the video, flicking and changing the frame between the start and end points as the sound comes in.

T (toggle) – an overall on/off switch.
ezadc~ – the object name for the microphone (audio input) interface.
gain~ – the level control for the microphone.
meter~ – a visual meter that shows the level of the output sound in dB.
scale – maps an input range of values onto an output range. The ranges can be specified with hi and lo reversed for invert mapping, and the mapping can optionally be exponential.
number – a number box used to display, input and output integers. Setting its maximum attribute in the inspector to the last frame means the video can't go past that point; when audio is off, the number box also shows a sensible frame to use as a start point.
M (message) frame $1, bang – a message that jumps the movie to the incoming frame number, then bangs jit.qt.movie to output that frame.
jit.qt.movie – the jit.qt.movie object provides a full suite of services for QuickTime movies: playback, editing, import, export, effect generation and direct-to-video-output-component streaming.
M – read cat.mov – a message displays and sends any given message, with the capability to handle specified arguments; in this case it tells jit.qt.movie to read/open the cat.mov file.
loadbang – loadbang's output is triggered automatically when the file is opened, or when the patch is part of another file that is opened.
jit.pwindow – the jit.pwindow object takes a Jitter matrix and displays the numerical values as a visual image in a window you can place in any patcher.

 Talking Patch
T (toggle) > ezadc~ >gain~ >meter~ >scale >number >M (message) frame $1, bang >jit.qt.movie >M – Read cat.mov >loadbang >Jit.pwindow.

v Screenshot of the final working patch v

Talking Patch (2)
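The heart of the patch is the scale object turning a microphone level into a frame number for the "frame $1" message. As a stand-alone illustration of that mapping (the input and output ranges below are made-up examples, not the ones used in class), the same behaviour in C++ is just a linear re-mapping with clamping:

```cpp
// Sketch of Max's [scale] behaviour: map a value from one range onto another.
#include <algorithm>
#include <cstdio>

double scaleValue(double x, double inLo, double inHi, double outLo, double outHi) {
    double t = (x - inLo) / (inHi - inLo);
    t = std::clamp(t, 0.0, 1.0);                 // keep the frame number inside the movie
    return outLo + t * (outHi - outLo);
}

int main() {
    // Assumed example: a mic level of 0.0-1.0 mapped onto frames 1-120 of the clip.
    double micLevel = 0.35;
    int frame = static_cast<int>(scaleValue(micLevel, 0.0, 1.0, 1.0, 120.0));
    std::printf("jump to frame %d\n", frame);    // in Max this number feeds "frame $1"
    return 0;
}
```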

_____________________________________________________

Basic Motion Detection. 

This patch, which we developed in class, covers the basics of motion detection. Using the built-in camera on the Macintosh, we triggered sound by creating a patch that read and translated any motion picked up by the webcam. The patch was set up so that a certain amount of motion would trigger one of four sound effects: as you can see in the patch, if the value exceeds 0.05 a change is triggered. A small amount of movement triggers the first sound effect, and more motion picked up by the webcam changes which sound effect is active.

Chop patch
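The class patch was built in Max/Jitter, but the same frame-differencing idea can be sketched with OpenCV purely as an illustration. The 0.05 threshold matches the patch; the webcam index and the way the four sound slots are chosen here are assumptions of mine:

```cpp
// Illustration only: frame differencing on a webcam feed, with the amount of
// motion choosing one of four "sound effect" slots, echoing the Max patch.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cstdio>

int main() {
    cv::VideoCapture cam(0);                      // built-in webcam
    if (!cam.isOpened()) return 1;

    cv::Mat frame, gray, previous, diff;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (!previous.empty()) {
            cv::absdiff(gray, previous, diff);            // pixel change between frames
            double motion = cv::mean(diff)[0] / 255.0;    // 0.0 = still, 1.0 = everything moved
            if (motion > 0.05) {
                // Pick one of four sounds depending on how much motion was seen.
                int soundIndex = std::min(3, static_cast<int>(motion / 0.05) - 1);
                std::printf("motion %.3f -> trigger sound %d\n", motion, soundIndex);
            }
        }
        previous = gray.clone();
        cv::imshow("camera", frame);
        if (cv::waitKey(30) == 27) break;         // Esc to quit
    }
    return 0;
}
```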