Innovations in Game Music

The closer video game departments work together, the greater the resulting outcome can be. And when we further blur the line between Sound Design and Music, and question whether there should be a line at all, then I believe this is where the future of game music lies.

“Have Sound Designers embraced games as a medium to a greater extent than composers? To this question I believe the answer is an undeniable yes” (Whitmore, 2016)

Guy Whitmore (writing at Designing Music Now) believes game composers are culturally less engaged with the interactive tools available to them than game sound designers are. “At least some ‘integration’ is done by 1 in 2 composers” (Game Sound Convention Survey, 2015), meaning that only 50% of the game composers surveyed used interactive techniques. Middleware such as Audiokinetic Wwise and FMOD is the current go-to for implementing music into games with some advanced level of interactivity. Figure 1 shows a system for side-chaining music in an upcoming game (Get Even, Bandai Namco, 2017); the effect in this example is the ducking of ambient SFX whenever the music pulses.

See http://www.gdcvault.com/play/1024046/Interactive-Music-2-0-Merging at 5:20 for the side-chaining example.


Figure 1
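As a rough sketch of what that side-chain is doing (my own illustration, not the Get Even implementation; the threshold and duck depth are invented values), an envelope follower on the music bus drives gain reduction on the ambient SFX bus:

```python
# Hypothetical sketch of music-keyed side-chain ducking (my own
# illustration): the music's envelope drives gain reduction on the
# ambient SFX bus, so ambience dips on every musical pulse.

def ambient_gain(music_envelope: float, threshold: float = 0.5,
                 depth: float = 0.7) -> float:
    """music_envelope: 0..1 level on the music bus. Above the threshold,
    ambience is ducked proportionally, down to (1 - depth) at full level."""
    over = max(0.0, music_envelope - threshold) / (1.0 - threshold)
    return 1.0 - depth * over

for level in (0.2, 0.5, 0.6, 0.8, 1.0):
    print(f"music {level:.1f} -> ambience gain {ambient_gain(level):.2f}")
```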

MIDI has been around since the 1980s as a way to assign a CC (Continuous Controller) to a parameter such as pitch, velocity or tempo, with a value of 0–127. Wwise can use MIDI in addition to audio files to create music, and this MIDI information can be used to send data to real-time processes and parameters. In this example (Figure 2) the tempo of the music is interactive. There is a layered combination of a drone sound, the louder pulsing sound and the environmental sounds, all sent through the same process. The steam and light-buzzing SFX have been pitched to match the music, purely for aurally pleasing reasons. I would encourage you to watch the entire video, as there are more examples of interactive music, such as dynamic tempo and parallel layered forms.

Figure 2
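A quick sketch of the core idea (my own illustration, not the Wwise setup itself): a 7-bit CC value rescaled onto a target parameter range, here the interactive tempo described above.

```python
# Hypothetical sketch of mapping a MIDI CC value (0-127) onto a game
# parameter range (my own illustration).

def cc_to_param(cc_value: int, lo: float, hi: float) -> float:
    """Linearly rescale a 7-bit MIDI CC value onto [lo, hi]."""
    cc_value = max(0, min(127, cc_value))   # clamp to the 7-bit range
    return lo + (cc_value / 127.0) * (hi - lo)

# e.g. a game-state value expressed as a CC driving interactive tempo:
for cc in (0, 64, 127):
    print(f"CC {cc:3d} -> {cc_to_param(cc, 70.0, 140.0):.1f} BPM")
```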

As for where things are heading, look to the use of granular synthesis in-engine, a powerful tool for both Music and Sound Design:

http://www.gdcvault.com/play/1023987/Data-Driven-Granular-Synthesis-for

 

Bibliography

Whitmore, G. (2016) ‘Scoring to Picture! Game Composing’s Final Frontier Part 2’, Designing Music Now, 1 August. Available at: http://www.designingmusicnow.com/2016/08/01/scoring-picture-game-composings-final-frontier-part-2/

http://www.gdcvault.com/play/1023987/Data-Driven-Granular-Synthesis-for

http://www.gdcvault.com/play/1024046/Interactive-Music-2-0-Merging

Left 4 Dead 2 – Monstrous Motifs

“All the music in Left 4 Dead – themes, motifs, hits and effects – are based on a hybrid scale derived from the main opening menu theme. This theme, representing the near death of humanity as it fades to a distance only to be accessed through a radio, is an incremental melodic modulation collecting up all the notes used along the way. This scale is not absolute, and some notes outside the scale are used occasionally to achieve even greater dissonance; but by using this singular and very chromatic scale, almost any piece written in it will generally dovetail pretty decently with any other. At the very least, the pieces seem to spring from the same musical universe. The resulting tunes are deceptively complex – easy to remember but difficult to sing accurately.” (Van Buren, 2008)

The use of motifs in film is a common practice for characterisation and memorability. One of the reasons we find film music memorable is that it uses distinctive melodic motifs to ‘catch’ the main characters it describes. The James Bond theme is a good example of this, but a modern composer who has had great success with memorable motifs across all his scores is John Williams (Jaws, Star Wars, Harry Potter).

“[We can] take themes and reshape them and put them in a major key, minor key, fast, slow, up, down, inverted, attenuated and crushed, and all the permutations that you can put a scene and a musical conception through.” (Williams, quoted in Thomas, 1991)

The following examples relate very closely to the Jaws theme in terms of tonality, length and instrumentation. These are the musical motifs for each ‘Special Infected’ enemy in the game Left 4 Dead 2.

 

Separate cues play as you are attacked by each different type of Special Infected enemy; these can be heard below.

 

Left 4 Dead has a more powerful Boss character called the Tank. The Tank has a piece of music that plays from when he is spawned into the game (alerting the player to his presence) until he is killed by the players. The nature of this game means that when this Boss can spawn is random, and as such the music has to be triggered responsively. Below is the Tank’s theme, followed by a level-specific variation of the same theme. The fictional band The Midnight Riders’ (diegetic) music is triggered by the player to call for help and start the fight; in the next video you can see what happens when one of the enemy motifs plays while the other music is present. The diegetic band music and the Tank motif also become the same piece of music when the Tank enters the area. This makes no logical sense, obviously, but it is a creative way of working around the problem of two songs competing with each other.

At 4:30

[Mike Morasky] “We base the music on what the player is actually experiencing and not on what we want them to experience. Working from artificial-life work our audio designers had done on Lord of the Rings and the Matrix sequels, we implemented a simple system to examine what’s going on in the player’s immediate environment, then added the appropriate reactive, scalar rule-sets to control the music and its volume level. Most of the more prominent musical cues are thus ‘reactive’ results from rule sets processing this input – making the musical experience specific to each player. This system also controls the dynamic mix system – another feature unique to Left 4 Dead.”
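As a rough illustration of what such a reactive, scalar rule-set might look like (my own hypothetical sketch, not Valve’s code; all inputs and weights are invented):

```python
# Hypothetical sketch of a reactive music rule-set (not Valve's actual
# code). Game-state inputs are polled each tick, and scalar rules map
# them to a music intensity the playback layer would consume.

from dataclasses import dataclass

@dataclass
class PlayerState:
    nearby_infected: int   # common infected in the immediate environment
    health: float          # 0.0 (dead) to 1.0 (full)
    boss_active: bool      # e.g. a Tank is alive

def music_intensity(state: PlayerState) -> float:
    """Scalar rules: each input nudges intensity, clamped to 0..1."""
    intensity = min(state.nearby_infected / 20.0, 0.6)  # mobs raise it
    intensity += (1.0 - state.health) * 0.3             # low health raises it
    if state.boss_active:
        intensity = 1.0                                  # boss overrides
    return min(intensity, 1.0)

# Each player evaluates the rules against their own surroundings,
# so the musical experience is specific to each player.
print(music_intensity(PlayerState(nearby_infected=4, health=0.9, boss_active=False)))   # ~0.23
print(music_intensity(PlayerState(nearby_infected=30, health=0.2, boss_active=False)))  # ~0.84
```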

 

Finally, here is an in-game example of another Special Infected, the Witch. With the Witch there are three different levels of intensity delivered by layers of music: FAR, NEAR and being killed by her. This enemy sits musically in between the other examples; her music is not a global cue like the Tank’s, it is only present when you are near her, more like an environmental hazard. The Witch can be evaded, or you can decide to kill her; the music notifies the player in the same way sound effects and voice lines might.
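A minimal sketch of how proximity-layered music like the Witch’s cue could work (my own illustration; the layer radii and linear crossfade are invented, not Valve’s values):

```python
# Hypothetical sketch of proximity-layered music (my own illustration).

def layer_gains(distance_m: float) -> dict:
    """Crossfade FAR and NEAR layers on player distance to the Witch."""
    FAR_RADIUS, NEAR_RADIUS = 30.0, 10.0   # invented values
    if distance_m >= FAR_RADIUS:
        return {"far": 0.0, "near": 0.0}   # out of earshot: silence
    if distance_m <= NEAR_RADIUS:
        return {"far": 0.0, "near": 1.0}   # right on top of her
    # Linear crossfade between the two radii.
    t = (FAR_RADIUS - distance_m) / (FAR_RADIUS - NEAR_RADIUS)
    return {"far": 1.0 - t, "near": t}

for d in (35, 25, 15, 5):
    print(d, layer_gains(d))
```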

 

 

Van Buren, B. (2008) Left 4 Dead in-game developer commentary.

Morasky, M. (2008) Left 4 Dead in-game developer commentary.

Williams, J., on Star Wars, quoted in Thomas, T. (1991) Film Score: The Art and Craft of Movie Music. Burbank, CA: Riverwood Press, pp. 334–335.

 

Shaa Li (2013) Left 4 Dead 2: All Special Infected Spawn Music (bacteria folder). Available at: https://www.youtube.com/watch?v=vAYd8ihcUVA (Accessed: 21/03/2017)

Manu0jedi (2009) Left 4 Dead 2 Special Infected Attack Music. Available at: https://www.youtube.com/watch?v=E3ECtGQcWgc (Accessed: 21/03/2017)

TheWilsonator (2008) Left 4 Dead Soundtrack – ‘Tank’. Available at: https://www.youtube.com/watch?v=WhiGnj7QX2A (Accessed: 21/03/2017)

Dragonsamurai (2010) Left 4 Dead 2: Dark Carnival – Concert. Available at: https://www.youtube.com/watch?time_continue=240&v=vC4C7F1exwQ (Accessed: 21/03/2017)

phenixflame123 (2009) Wandering Witch. Available at: https://www.youtube.com/watch?v=oaoD2YDgcQE (Accessed: 21/03/2017)

The Functions of Game Dialogue

Stevens and Raybould (2016, p.197) state that dialogue typically falls into the following categories:

Informational

This dialogue provides the player with information they need to play the game.
This dialogue provides the player with information they need to play the game. The technique is used when teaching new mechanics in tutorials or when using a new item for the first time. The aim is solely to give information, but developers often find a way to fit this dialogue to a character’s natural articulations and add some realism/immersion to what is being said. As the importance of this dialogue is high, mixing techniques are used to aid clarity, such as placing the audio in the middle of the stereo field and ducking less important audio.
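As a minimal sketch of that kind of priority ducking (my own illustration; the bus names and the -9 dB duck amount are assumed, not from any particular title):

```python
# Hypothetical sketch of dialogue-priority ducking (my own illustration):
# while an informational line plays, other buses are attenuated so the
# words stay intelligible.

def bus_gains(dialogue_playing: bool) -> dict:
    DUCK_DB = -9.0   # how far to duck less important buses (assumed value)
    duck = 10 ** (DUCK_DB / 20) if dialogue_playing else 1.0
    return {
        "dialogue": 1.0,    # centred, never ducked
        "music": duck,
        "ambience": duck,
        "sfx": duck,
    }

print(bus_gains(dialogue_playing=True))   # music/ambience/sfx at ~0.35
print(bus_gains(dialogue_playing=False))  # everything at unity
```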

Character

This dialogue is all about emotion and the development of narrative, character, game immersion or comedy in the scene. Mixing can either be spatialised (panned relative to the sound source, with distance fall-off etc.) or handled like informational dialogue; it depends on the busyness of the mix and the importance of the dialogue.

Below is a video I edited to show a few different examples of narration dialogue, which in this case tells the story, informs the player where to go and reacts to specific player actions, all whilst being comedic. The game is The Stanley Parable.

(Duck Dreams, 2017)

The Stanley Parable uses dialogue almost as a way to create gameplay. In the scene at 0:32, when the character gets to a choice of two doors and the narrator says “When Stanley came to a set of two open doors, he entered the door on his left”, the player is not only given a 50:50 choice, but a choice to either rebel or conform to the directions of the game. This ties into the overarching narrative that the character Stanley is an obedient worker drone who works for a company he hates without even knowing why. The narrator often crosses between most of the functions of dialogue in games: he tells the player directly what to do, and he gives the game story and character that would not otherwise be present. By giving the player choices in their actions deriving from the narration, the game makes the dialogue an essential part of its mechanics, function, story and entertainment.

Ambient/Incidental/Chatter/Grunts

This dialogue is for immersing the player, but often involves both information and character. Often, though, the information conveyed by the words is not necessarily important.

Enemy “grunts” or “barks” are exclamations that give the player feedback on the gameplay, often used in stealth action games. Here is an example at 0:20 seconds.

(HystrixSA, 2012)

This example is useful for understanding why variation is very important in game audio, especially for dialogue.

“The nature of games is that we are often in similar situations again and again and again and are consequently receiving similar dialogue (e.g., “Enemy spotted!”, “I’ve been hit!”). This indicates that, for the most heavily used dialogue for each character, we need to record lots of different versions of the same line.” (Stevens and Raybould, 2016, p.199)

There is a good reason to keep these lines short, and it goes back to variation: it is much easier to write and record lots of short lines than lots of specific, longer lines.
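A minimal sketch of how such recorded variations might be cycled at runtime (my own illustration; the file names are invented):

```python
# Hypothetical sketch of a bark picker (my own illustration): choose a
# random recorded variation of a line while avoiding immediate repeats.

import random

class BarkPool:
    def __init__(self, variations: list[str]):
        self.variations = variations
        self.last = None

    def play(self) -> str:
        # Exclude whichever take played last, so the same read
        # never triggers twice in a row.
        candidates = [v for v in self.variations if v != self.last] or self.variations
        self.last = random.choice(candidates)
        return self.last

enemy_spotted = BarkPool([
    "enemy_spotted_01.wav",
    "enemy_spotted_02.wav",
    "enemy_spotted_03.wav",
])
for _ in range(5):
    print(enemy_spotted.play())
```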

In the article “Why Video Game Characters Say Such Ridiculous Things”, Kirk Hamilton interviews Richard Dansky, the writer for Splinter Cell: Double Agent.

Trying to convey too much information in a single bark is a challenge: “Ideally, each bark should convey one piece of information – ‘We’re flanking’ or ‘Grenade’ or ‘I’ve been hit’. Trying to compress multiple tidbits in there means you have to be coming up with multiple variants of each chunk of each bark. That gets really awkward – and really long – really fast.” (Dansky, 2012)

“Games give us worlds to inhabit, and so characters’ dialogue must be much more flexible and reactive.” (Hamilton, K , 2012)

Below is an example of ambient dialogue in the game No One Lives Forever 2. Notice the mix, as well as the length and humour of the content; it is apparent that the player is intended to focus on this dialogue.



Here is an example of chatter/incidental dialogue: this section is an optional conversation with an NPC that shows variations of a question prompted by the player.

(Duck Dreams, 2012)

Conclusion

A theme that has presented itself is that humorous dialogue relies on variation and non-linearity, and it has therefore given good examples of what needs to be done in game dialogue generally. No one wants to hear the same joke twice, arguably even more so than a standard dialogue line.

 

 

Bibliography.

Duck Dreams (2017) The Stanley Parable Dialogue Examples. Available at: https://www.youtube.com/watch?v=k4KMzx9ebzk (Accessed: 28/02/2017)

Duck Dreams (2012) The Darkness II – Tony’s Bananas. Available at: https://www.youtube.com/watch?v=HLxTBRlvY8k (Accessed: 28/02/2017)

Humakt83 (2009) No One Lives Forever 2 Funniest Conversation. Available at: https://www.youtube.com/watch?v=zLF4WA2buoI (Accessed: 28/02/2017)

Coker, G. and Lackey, A. (2016) Ori and the Blind Forest: Sonic Polish through Distributed Development. GDC Vault. Available at: http://www.gdcvault.com/play/1023195/-Ori-and-the-Blind (Accessed: 28/02/2017)

Stevens, R. and Raybould, D. (2016) Game Audio Implementation. 1st edn. Boca Raton, FL: CRC Press/Taylor & Francis.

HystrixSA (2012) WHOSE FOOTSTEPS ARE THESE WHAT WAS THAT NOISE. Available at: https://www.youtube.com/watch?v=JjPpX1g1juo&feature=youtu.be&t=20 (Accessed: 28/02/2017)

Hamilton, K. (2012) ‘Why Video Game Characters Say Such Ridiculous Things’ (interview with Richard Dansky), Kotaku, 28 June. Available at: http://kotaku.com/5921878/why-video-game-characters-say-such-ridiculous-things

Procedural Sound Design in Video Games

 

This blog update will be about procedural sound design: defining what it is, how it functions, what the use cases and limitations are, and why it is used in video games.

What is Procedural Sound Design in terms of video games? “Procedural Sound Design is about sound design as a system, an algorithm, or a procedure that re-arranges, combines or manipulates sound assets” (Stevens and Raybould, 2016). It is the concept of audio becoming a dynamic and interactive experience for the player, typically achieved by parameterisation within a system such as a game engine or Max/MSP. The main benefits are efficiency, detailed control, variation and flexibility in the sound design.

Whilst doing the initial research for this blog post, it became apparent that the term ‘procedural sound design’ is not comprehensively definable. Misconceptions about procedural audio, procedural synthesis and procedural music are all prevalent. It could be said that it is a matter of opinion whether to define the exact differences or not to concern yourself with them, as long as the resulting audio works as intended, however you wish to define the process. “Andy Farnell, who coined or at least popularised the term ‘Procedural Audio’, sees it as any kind of system where the sound produced is the result of a process… So under that definition, as soon as you set up any kind of system of playback you could see it as being procedural audio.” (Andersen, 2016)

Why use a procedural approach to sound?
These sound systems are essentially a list of instructions that recall audio files or create synthesis in specific ways, which means they form an adaptable framework that can be repurposed for mechanically similar systems.

See Fig. 1 for an example of a car engine system created by Andy Farnell.

Fig. 1 (at 9:30):

 https://www.youtube.com/watch?v=-Ucv7EXwnCM&list=PLLHtPBwbWUW5EG-4ajfz5BQBOIm31ClhC&index=5

Andy goes on to say that this system can be reused for another type of car by building on the existing framework.
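To make the “reusable framework” point concrete, here is a toy, hypothetical sketch of my own (vastly simpler than Farnell’s Pure Data patch): one parameterised procedure renders different “cars” just by changing its inputs.

```python
# Hypothetical sketch of a reusable, parameterised engine-sound framework
# (my own toy model): the same procedure renders different "cars".

import numpy as np
import wave

def render_engine(path, rpm=900, n_harmonics=6, roughness=0.2,
                  seconds=2.0, sr=44100):
    t = np.arange(int(seconds * sr)) / sr
    f0 = rpm / 60.0 * 2.0                 # firing frequency from RPM (toy model)
    sig = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        sig += np.sin(2 * np.pi * f0 * k * t) / k   # harmonic stack
    sig += roughness * np.random.randn(len(t))       # combustion "grit"
    sig /= np.max(np.abs(sig))                       # normalise
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sr)
        w.writeframes((sig * 0.8 * 32767).astype(np.int16).tobytes())

# Same framework, two different cars: only the parameters change.
render_engine("idle_small_car.wav", rpm=800, n_harmonics=4, roughness=0.1)
render_engine("idle_muscle_car.wav", rpm=600, n_harmonics=10, roughness=0.3)
```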

“…What we’re doing now is moving from a state-based interaction model to a continuous interaction model… In the tests that I’ve done with players, it’s the interactivity of the object, the way it responds to your input, which defines realism. Realism is not a sonic quality, realism is a behavioural quality.” (Cheshire, 2013, part 4 at 11:05)
Flexibility also means future-proofing your sounds as they relate to other in-game assets that may be changed by another member of the development team, such as a character animation. My point is that it is easier to tweak a system to get a slight timing adjustment on a sound than it is to go back to a DAW and rework the asset. “Procedural Audio is a philosophy about sound being a process and not data. In its broadest sense, if I were to say it is the philosophy of sound design in a dynamic space, and the method is as irrelevant to Procedural Audio as whether you use oils or water colours is to painting.” (Farnell, 2012) Farnell states that whichever method gets the sound designer the desired result is the correct one, and that procedural audio is just another tool available alongside traditional methods of sound design.

Limitations have always been a factor when creating sound for video games. When computer games were first being made, the specific sound chip of the platform being developed on was the limitation. At the start of the CD era of gaming, with the PlayStation™, it was loading sounds from the disc responsively enough. Today the main limitations are platform-specific hardware restrictions in terms of how much RAM and processing power the audio system needs to run.

(Image: a footstep Sound Cue graph in Unreal Engine 4.)

Having lots of different variations of sounds inherently adds to the audio department’s memory budget, and this is where a lot of the benefits of a procedural system can be found. In Unreal Engine 4, Sound Cues give you control over single-shot assets, so you can alleviate the need for larger looping sections of audio, for example. In this footstep example, the heel and toe sections of the footsteps have been separated and are recombined randomly by the system; this technique, along with adding modulators for pitch and volume, creates a far larger variation of sounds than is possible with conventional means.
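A hypothetical sketch of that heel/toe recombination logic (my own Python illustration of what the Sound Cue graph expresses; it is not engine code, and the file names and modulation ranges are invented):

```python
# Hypothetical sketch of heel/toe footstep recombination (my own
# illustration, not UE4 code).

import random

HEELS = ["heel_01.wav", "heel_02.wav", "heel_03.wav"]
TOES  = ["toe_01.wav",  "toe_02.wav",  "toe_03.wav"]

def footstep_event():
    """One footstep = random heel + random toe, each with random
    pitch/volume modulation. 3 heels x 3 toes already gives 9 combinations
    before the continuous modulators multiply the variation further."""
    return [
        {"asset": random.choice(HEELS),
         "pitch": random.uniform(0.95, 1.05),
         "volume": random.uniform(0.8, 1.0)},
        {"asset": random.choice(TOES),
         "pitch": random.uniform(0.95, 1.05),
         "volume": random.uniform(0.8, 1.0)},
    ]

for _ in range(3):
    print(footstep_event())
```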
“In general, the fact that with procedural audio not all voices are created equal – and that some patches will use more CPU cycles than others – is usually not a very good selling point against the predictability of a fixed voice architecture.” (Fournel, 2012)

 

Bibliography

Stevens, R. and Raybould, D. (2016) Game Audio Implementation. 1st edn. Boca Raton, FL: CRC Press/Taylor & Francis.

Nair, V. (2012) ‘Procedural Audio: Interview with Andy Farnell’, Designing Sound. Available at: http://designingsound.org/2012/01/procedural-audio-interview-with-andy-farnell/ (Accessed: 13 February 2017)

Nair, V. (2012) ‘Procedural Audio: An Interview with Nicolas Fournel’, Designing Sound. Available at: http://designingsound.org/2012/06/procedural-audio-an-interview-with-nicolas-fournel/ (Accessed: 13 February 2017)

Andersen, A. (2016) ‘Why Procedural Game Sound Design Is So Useful – Demonstrated in the Unreal Engine’, A Sound Effect. Available at: https://www.asoundeffect.com/procedural-game-sound-design/ (Accessed: 13 February 2017)

Cheshire, M. (2013) Andy Farnell designing sound procedural / computational audio lecture, parts 1–5 [Online video]. 7 March 2013. Part 1: https://www.youtube.com/playlist?list=PLLHtPBwbWUW5EG-4ajfz5BQBOIm31ClhC ; part 4: https://www.youtube.com/watch?v=kwK7OSkg4Gs&list=PLLHtPBwbWUW5EG-4ajfz5BQBOIm31ClhC&index=4 ; part 5 (9:30): https://www.youtube.com/watch?v=-Ucv7EXwnCM&list=PLLHtPBwbWUW5EG-4ajfz5BQBOIm31ClhC&index=5 (Accessed: 13 February 2017)

 

Overwatch: Play by Sound

Overwatch is my personal favourite game right now, so I just had to cover it in some aspect or another. A fantastic talk at the Game Developers Conference 2016 called “Overwatch – The Elusive Goal: Play by Sound” by Scott Lawlor and Thomas Neumann is what I will mostly use to explain the technical side of implementation in this article.

(along with my 400+ hours of play time in the game)

Scott Lawlor and Thomas Neumann talk about how, at the early stages of design, they were given the goal of “being able to play with sound alone” by the Game Director Jeff Kaplan, and how the sound design should give as much relevant information to the player as possible.

They outline the main goals for the audio team in five categories or “pillars” of importance:

Pillars:

      • A Clear Mix

      • Pinpoint Accuracy

      • Gameplay Information

      • Informative Hero Voice Over

      • Pavlovian response

Each of these topics deserves its own in-depth discussion, but I would like to focus mainly on the mixing system and the gameplay information, as these two link to each other quite well.

Overwatch is a First Person Shooter based around two teams of six players.

All the game modes are fairly similar from a sound point of view, so this shouldn’t change anything in the overall discussion. In the most intense moments of action there can be 12+ sounds happening all at once in a very small area of combat. Without appropriate mixing this would sound overwhelming and cause the “Clear Mix” pillar to fail at exactly those moments, which are very common whilst playing the game.

The mixing process for most games is done either in-engine or through middleware such as Audiokinetic’s Wwise™. Custom software can also be developed to fit a specific need in any area of a game, and the Overwatch audio team developed a few good examples worth exploring further.

(Slide from the GDC talk: example “importance” considerations.)

The first is a simple program that assigns each piece of audio an “importance” value ranging from 0–120, with 120 being the most important. Pictured above is an example of what may be considered when assigning these values in real time. This data is then organised by the software into four “buckets” of importance depending on the number value: High, Medium, Low and Cull. The example below shows which of the four buckets each sound is currently in, the character and the importance number, and then how many sounds end up in each bucket.

(Slideshow from the GDC talk showing the buckets filling in real time.)

The developers decided to group the audio into these buckets because, without a clear winner for what is most important, the mix would be muddied by too many important things happening at one time; so they limit this by having only one high-priority sound at any one time, with the rest following in the other buckets.

This bucket value is the data actually sent to Wwise, which then adjusts volume and filtering parameters accordingly.

A Real-Time Parameter Control (RTPC) is used in Wwise to change the makeup gain on a sound depending on which of the four buckets the sound has been placed in.

Each category of sound in the game has its own specific values for how much or how little gain will be applied in-game.
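Pulling those pieces together, here is a hypothetical sketch of the bucketing logic (my own reconstruction from the talk; the thresholds, bucket sizes and gain values are all invented, and in the real game the bucket value is sent to Wwise as an RTPC rather than applied directly):

```python
# Hypothetical sketch of importance bucketing (my own reconstruction).

def bucket_sounds(sounds):
    """sounds: list of (name, importance 0-120). Rank by importance,
    allow exactly one High sound, then fill Medium/Low, cull the rest."""
    ranked = sorted(sounds, key=lambda s: s[1], reverse=True)
    buckets = {}
    for i, (name, imp) in enumerate(ranked):
        if i == 0 and imp >= 90:
            buckets[name] = "High"      # single clear winner
        elif i < 4:
            buckets[name] = "Medium"
        elif i < 10:
            buckets[name] = "Low"
        else:
            buckets[name] = "Cull"
    return buckets

# Assumed per-bucket makeup gain in dB; each sound category would have
# its own values in the real mixer.
MAKEUP_DB = {"High": +4.0, "Medium": 0.0, "Low": -6.0, "Cull": -96.0}

frame = [("enemy_ult", 115), ("ally_footsteps", 40), ("own_gunfire", 70),
         ("distant_gunfire", 25), ("ambient_wind", 10)]
for name, bucket in bucket_sounds(frame).items():
    print(f"{name:16s} {bucket:6s} {MAKEUP_DB[bucket]:+.1f} dB")
```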

An Ultimate ability, for example, will have the highest importance when activated. This is true for both friendly and enemy Ultimate usage.

A clever way of differentiating whether the ability being used is friendly or not is to have two different voice lines: one team hears one distinct line, and the other team hears the other. An example:

When the character Lúcio uses his Ult (Ultimate ability):
allies hear “Oh, let’s break it down!”

enemies hear “Drop the beat!”

Allied lines are still quieter than enemy lines, for the reasons above, but both are very high on importance.
There are 23 different heroes/playable characters in the game, and the same character can be used on both teams at the same time, so this simple difference helps the player understand what is going on in the fight with audio cues alone.

Of these heroes, six speak a language other than English, coming from all across the globe, plus Bastion the Omnic (the robots of the Overwatch universe), who talks in a series of beeps and boops (think WALL-E). The same idea as above is used here: a hero’s voice line plays in, for example, French from the enemy perspective and in English for allied players.
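A minimal sketch of that perspective-dependent line selection (my own illustration; the data layout is invented, though the Lúcio lines are the ones quoted above):

```python
# Hypothetical sketch of perspective-dependent ult lines (my own
# illustration).

ULT_LINES = {
    "lucio": {"ally": "Oh, let's break it down!",
              "enemy": "Drop the beat!"},
}

def ult_voice_line(hero: str, listener_team: str, caster_team: str) -> str:
    """Every listener hears a line, but which take depends on whether
    the caster is on their team -- so the same cast sounds different
    to each side."""
    perspective = "ally" if listener_team == caster_team else "enemy"
    return ULT_LINES[hero][perspective]

print(ult_voice_line("lucio", listener_team="blue", caster_team="blue"))  # ally line
print(ult_voice_line("lucio", listener_team="red",  caster_team="blue"))  # enemy line
```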

“Pavlovian conditioning: a method to cause a reflex response or behaviour by training with repetitive action.”

This example supports the pillar of a Pavlovian response in the player: these voice lines are consistent every time you hear them, so they are quickly learned, which is very important, as Ult usage is key to winning games in Overwatch, hence the need for clear communication every time.

Having repetitive sounds in games is usually thought of as a bad idea these days, as the sounds may become annoying or be seen as lazy development. The developers felt it was more important that players could instantly know what something is from a fraction of a second of listening than to avoid repetition and risk confusing them.

(Image: the Overwatch hero roster.)

But repetition isn’t the only way to teach players the sounds in the game.
Each hero is very different in all aspects: nationality, age, race, size, weapon type and the materials worn on their feet. This diversity in character attributes helps the player discern, from audio alone, who is walking up behind them ready to strike. This is very important information for winning a fight, as you may be dying or getting the upper hand depending on audio alone. The same holds true for a lot of online First Person Shooters, but here you know which specific character will be walking around that corner, and whether it is even a winnable fight.

This gameplay mechanic can be counter-played by crouch-walking to mute your movement sounds, at the expense of moving a lot slower than walking speed. Again, a common gameplay mechanic in other titles, but worth noting.
Knowing who is crucial, but what about where they are in the map?

Instead of using the standard occlusion settings in Wwise, the team customised a ray-tracing technique from the AI pathfinding used for the hero Pharah (who flies around the maps). This means the rays cast for the audio can travel around corners from source to listener, with the distance along that path determining volume, rather than a straight line passing through walls. The audio is also filtered with a high-pass filter when the source is behind walls and large obstacles, as needed.
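As a rough sketch of the idea (my own illustration; the distances, linear roll-off and occlusion threshold are invented, and the real system drives Wwise parameters rather than returning values):

```python
# Hypothetical sketch of path-based occlusion attenuation (my own
# illustration of the idea described above).

def occluded_volume(path_distance_m: float, straight_distance_m: float,
                    max_audible_m: float = 50.0) -> dict:
    """Attenuate on the distance travelled *around* geometry, not the
    straight line through it, and flag the filter when the path is
    meaningfully longer than line-of-sight."""
    volume = max(0.0, 1.0 - path_distance_m / max_audible_m)
    occluded = path_distance_m > straight_distance_m * 1.1
    return {"volume": round(volume, 2), "apply_filter": occluded}

print(occluded_volume(path_distance_m=12, straight_distance_m=12))  # open air
print(occluded_volume(path_distance_m=35, straight_distance_m=8))   # around a wall
```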

The team also developed a custom quad delay to give realistic tails to the sounds heard in-game. The four delay channels are assigned to the four surround speakers if the user has that setup. Four rays are cast every frame at 45-degree angles, and the distance measured, in in-game metres, directly correlates to one millisecond of delay time per metre for that specific channel of audio. This technique is useful for understanding the space you are in whilst playing, and it helps the realistic, immersive feel of the game world too.

(Diagram from the GDC talk: the custom quad delay.)
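The metres-to-milliseconds mapping is simple enough to show directly (my own hypothetical numbers):

```python
# Hypothetical sketch of the quad-delay idea (my own illustration):
# one ray per surround channel, and each metre of measured distance
# becomes one millisecond of delay on that channel.

# Assumed ray hit distances in in-game metres (front-left, front-right,
# rear-left, rear-right), e.g. standing nearer the left wall of a room.
ray_hits_m = {"FL": 3.0, "FR": 12.0, "RL": 4.5, "RR": 18.0}

delay_times_ms = {channel: dist * 1.0 for channel, dist in ray_hits_m.items()}
print(delay_times_ms)  # {'FL': 3.0, 'FR': 12.0, 'RL': 4.5, 'RR': 18.0}
# Short delays on the left, longer on the right: the reflections alone
# tell the player they are closer to the left wall.
```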

Overwatch also supports Dolby Atmos speaker setups, which is 9.1.2. This support enables the player to hear sounds directly above and below them in addition to a 9.1 speaker array. At the time of writing this technology is relatively new and unexplored to its fullest in terms of gaming; I’m sure it makes for a very immersive experience. They also support Dolby Atmos for stereo headphones, which is just fine; it should be said that this is the best option for listening to the game with standard equipment, giving better differentiation in the stereo field.

To summarise

Points I haven’t focused on are the music, the more standard User Interface sounds and the hero dialogue. These help communicate a lot of additional information about objectives, Ultimate ability readiness and other helpful features. The dialogue system also automatically and randomly chooses context-sensitive hero-to-hero story/lore dialogue. This brings a sense of fun and world-building/immersion in the universe that is great to hear at the start of rounds (when you are basically just waiting for a minute for the action to start); it fills these gaps in the action nicely. That’s all for now, thank you for reading. Calum Grant

 

The Dark Souls Trilogy: A Brief Audio Overview by Calum Grant

(Image: Solaire, © 2015–2017 Zedotagger)

Hello, “The Inter-webs”. My name is Calum. You may not know who I am quite yet, but some day I wish to be as proud of my own work as I am fond of, in love with you could say, the Sound Design in this game series.

I would very much like you, my lovely reader, to take the time to really think about a specific moment: THE moment you finally reach Firelink Shrine (the main hub of the game) for the first time, and how it MADE you feel when the music started to play; not just suggesting a mild mood to you, but MAKING you feel. It was probably a feeling of real relaxation and safety, a literal and connotative pause in the horror and death that surrounds this very area. Two or so minutes before is the first boss fight which, as I will delve into in a bit, is very different.

If for whatever reason you are reading this and you haven’t played it, this might sound like a simple, almost overlooked or obvious thing indeed. It’s just that this concept of contrast really stays true throughout the first game and the series.

(Image: Firelink Shrine, by riot23)

CONTRAST is the thing this game understands in all of its design philosophy, from the gameplay to the environments, and to what I will be focusing on: the sound.

Imagine a graph with Firelink Shrine at one extreme and the various boss themes at the other. These moments are pretty much the only times the player will experience any form of music in the game, which will be less than 10% of the total play time, if that. That’s not a lot of time, but it makes for a bloody great roller-coaster when you start to consider how the lack of music naturally emphasises the sound design and atmospheres. The mood this connotes to the player is one of insignificance, solitude and dread of the unknown. If you are more of a visual type, you can relate these feelings to, say, a wide camera shot and a pitch-black environment. These types of environments are, as it happens, recurring themes in all the titles.

This feeling of safety continues throughout the game as you reach the various bonfires (checkpoints) littered around the world; the distant crackle of fire means salvation and hope, if only for the briefest of moments. The player quickly learns to listen out for this sound, as a lot of the time these bonfires may be hard to find visually. I can’t quite put my finger on what it is exactly that makes the specific sound used for the bonfires so pleasing to the ears. It doesn’t sound like any old fire sound effect; it has a character to it that has become somewhat of a modern icon, dare I say it.

(Picture © 2015–2017 ralphdamiani)

Whether this is me just gushing now, I’m not so sure, but it has definitely left some sort of impact on me and the way I will design my own sounds in the future.

When checking the release date for this (2011), I realised that a lot of my favourite games came out in the same year, with titles such as Battlefield 3, Portal 2, The Elder Scrolls V: Skyrim, Rayman Origins and Dead Space 2! PHEW (we will have to make some more blogs very soon), and these all have absolutely stellar sound design, all from very different directions too.

Whether it’s the heavy-handed mickey-mousing of the Dead Space horror strings;
the vocal talent alone of Portal 2;
the bold and bone-rattling style of Battlefield 3; or
the back-to-basics design of Rayman Origins, with one hella catchy original soundtrack.

2011 could be my personal favourite year for modern game audio, as it happens. I say “modern” because Sonic the Hedgehog 2 does exist. Bye for now, and try to think of your own moment like this example as you go to sleep tonight.