At the time video games began to blossom as a form of entertainment in the 1970s, music was stored on physical media as analog waveforms, such as compact cassettes and phonograph records. Such components were expensive and prone to breakage under heavy use, making them less than ideal for use in an arcade cabinet, though in rare cases they were used (Journey). A more affordable method of including music in a video game was digital: a dedicated computer chip would convert electrical impulses from computer code into analog sound waves on the fly for output to a speaker. Sound effects for the games were also generated in this fashion.
While this allowed for the inclusion of music in arcade games of the 1970s, it was usually monophonic, looped, or used sparingly between stages or at the start of a new game, as in Pac-Man or Pole Position. Any music included in a video game had, at some point, to be transcribed into computer code by a programmer, whether or not that programmer had musical experience. Some music was original; some was public domain material such as folk songs. The popular Atari 2600 home system, for example, was capable of generating only two tones, or "notes," at a time. Some exceptions, such as arcade games developed by Exidy, took steps toward digitized, or "sampled," sounds.
This approach to game development carried on into the 1980s. Advances in silicon and falling technology costs ushered in a definitive new generation of arcade machines and home consoles. In arcades, machines based on the Motorola 68000 CPU and Yamaha YM sound-generator chips allowed for several more tones, or "channels," of sound, sometimes eight or more. Home console systems saw a comparable upgrade in sound ability, beginning with the ColecoVision in 1982, capable of four channels. More notable, however, was the Japanese release of the Famicom in 1983, which would later be known in the US as the NES in 1985. It was capable of five channels in total, one of which could play simple PCM sampled sound. Also of note was the Commodore 64 home computer, released in 1982, which was capable of early forms of filtering effects and different types of waveforms. Its comparatively low cost, as well as its ability to use a TV as a monitor, made it a popular alternative to other home computers.
The approach to game music development in this period usually involved simple tone generation and/or frequency modulation synthesis to simulate instruments for melodies, with a "noise channel" for simulating percussive sounds. Early use of PCM samples in this era was limited to short sound bites (Monopoly) or as an alternative to synthesized percussion (Super Mario Bros. 3). The music on home consoles often had to share the available channels with other sound effects. For example, if a spaceship fired a laser beam, and the laser used a 1400 Hz tone, then whichever channel the music occupied would stop playing the music and play the sound effect instead.
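The channel-sharing scheme described above can be sketched in a few lines. This is a hypothetical illustration, not any console's actual driver logic; the channel name and tone values are invented for the example.

```python
# Hypothetical sketch of channel sharing on an early console: the hardware
# can emit only one tone per channel, so an active sound effect temporarily
# evicts whatever music note occupies that channel.

class Channel:
    def __init__(self, name):
        self.name = name
        self.music_tone = None   # Hz of the music note currently held
        self.effect_tone = None  # Hz of a sound effect, which takes priority

    def play_music(self, hz):
        self.music_tone = hz

    def play_effect(self, hz):
        self.effect_tone = hz    # music on this channel is silenced

    def effect_done(self):
        self.effect_tone = None  # music becomes audible again

    def output(self):
        # One tone per channel: the effect always wins while active.
        return self.effect_tone if self.effect_tone is not None else self.music_tone

ch = Channel("pulse1")
ch.play_music(440)         # the melody holds an A4
assert ch.output() == 440
ch.play_effect(1400)       # laser fires: 1400 Hz replaces the melody
assert ch.output() == 1400
ch.effect_done()
assert ch.output() == 440  # the melody is heard again
```

Real sound drivers of the era were more elaborate (priority tables, voice stealing), but the core trade-off is the same: music and effects compete for a fixed set of hardware channels.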
The mid-to-late 1980s saw a tide of software releases for these platforms with music developed by people of greater musical experience than before. Quality of composition improved noticeably, and evidence of the popularity of this period's music remains even today. Composers who made a name for themselves with their software include Koji Kondo (Super Mario Bros., The Legend of Zelda), Koichi Sugiyama (Dragon Quest), Rob Hubbard (Monty On the Run), Hirokazu Tanaka (Metroid, Kid Icarus), Martin Galway (Times of Lore), Hiroshi Miyauchi (Out Run), Nobuo Uematsu (Final Fantasy), and Yuzo Koshiro (Ys). Toward the end of the Famicom's life, some cartridge games were released with additional tone-generating chips built in, further expanding the number of channels. This demonstrated a more dedicated attention to game sound and music, as the extra chip added to the developer's cost per cartridge.
The oncoming generation of arcade machines, home consoles, and home computers would reshape the approach to music in video games.
The first home computer to make use of digital signal processing in the form of sampling was the Commodore Amiga in 1985. The computer's sound chip featured four independent 8-bit digital-to-analog converters. Instead of simply generating a waveform that sounded like a simplistic "beep," as with FM synthesis, this allowed short samples of pre-recorded sound waves to be played back from memory through the computer's sound chip. A developer could take a "sample" of a real instrument or sound at significantly higher quality and fidelity than was previously available, or than would be available on home computers for several years. This was an early example of what would later be called wavetables and soundfonts. For being both first and affordable, the Amiga remained a staple tool of early sequenced music composing, especially in Europe.
The Amiga's main rival, the Atari ST, used the Yamaha YM2149 Programmable Sound Generator (PSG), which was limited even compared to the Commodore 64's SID chip; digitized sound could be produced on the Atari ST only through programming tricks that consumed processor time, making it impractical for games. Because it had built-in MIDI ports, however, the Atari ST was used by many professional musicians as a MIDI programming device.
IBM PC clones of 1985 would not see significant development in multimedia abilities for a few more years, and sampling would not become popular in other video game systems for several years. Though sampling had the potential to produce much more realistic sounds, each sample required far more data in memory, at a time when all memory, whether solid state (cartridge), magnetic (floppy disk), or otherwise, was still very costly per kilobyte. Sequenced, sound-chip-generated music, on the other hand, could be produced with a few lines of comparatively simple code and took up far less precious memory.
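The scale of that memory trade-off is easy to work out. The figures below are illustrative assumptions (a common early sample rate and a rough per-note event size), not measurements of any particular game or driver.

```python
# Illustrative arithmetic for the sampled-vs-sequenced memory trade-off.
# Sample rate and bytes-per-event are assumed round numbers for the example.

SAMPLE_RATE = 22050   # Hz, a common early playback rate
BITS_PER_SAMPLE = 8   # 8-bit mono PCM

def sampled_bytes(seconds):
    """Memory needed to store raw PCM audio of the given length."""
    return seconds * SAMPLE_RATE * (BITS_PER_SAMPLE // 8)

def sequenced_bytes(notes, bytes_per_event=3):
    """Memory for sequenced music: roughly pitch + duration + volume per note."""
    return notes * bytes_per_event

# A 60-second loop stored as raw PCM vs. the same loop as ~400 note events:
pcm = sampled_bytes(60)       # 1,323,000 bytes (about 1.3 MB)
seq = sequenced_bytes(400)    # 1,200 bytes
print(pcm, seq, pcm // seq)   # prints: 1323000 1200 1102
```

At a time when cartridge ROM was measured in tens or hundreds of kilobytes, a three-orders-of-magnitude difference made sequencing the only practical choice for full soundtracks.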
The previously mentioned hybrid approach (sampled and chip-generated) to music composing in the Third Generation of consoles continued into the Fourth Generation, or 16-bit era, of home game consoles with the Sega Mega Drive in 1988. The Mega Drive (Sega Genesis in the US) sported advanced graphics over the NES and improved sound synthesis, but largely held to the same approach to sound design. Ten channels in total, one of them for PCM samples, were available in stereo, versus the NES's five channels (two pulse, triangle, noise, and DPCM) in mono. As before, the sample channel was often used for percussion samples, or "drum kits" (Sonic the Hedgehog 3). The "16-bit" label referred to the CPU and should not be confused with 16-bit sound samples; the Genesis did not support 16-bit sampled sounds. Most musicians still considered the sound system rather limiting, and it forced much more imaginative use of the FM synthesizer to create an enjoyable listening experience.
As the cost of magnetic memory declined in the form of diskettes, the evolution of video game music on the Amiga, and some years later game music development in general, shifted toward sampling in some form. It took some years before Amiga game designers learned to wholly utilize digitized sound effects in music (an early exception was the title music of the text adventure The Pawn, 1986). By this time, computer and game music had already begun to form its own identity, and many music makers intentionally tried to produce music that sounded like that heard on the Commodore 64, which resulted in the chiptune genre.
The release of a freely distributed Amiga program named Sound Tracker by Karsten Obarski in 1987 started the era of the MOD format, which made it easy for anyone to produce music based on digitized samples. MOD files were made with programs called "trackers," after Obarski's Sound Tracker. This MOD/tracker tradition continued on PCs in the 1990s. Good examples of Amiga games using digitized instrument samples include David Whittaker's soundtrack for Shadow of the Beast, Chris Hulsbeck's soundtrack for Turrican 2, and Matt Furniss's tunes for Laser Squad. Richard Joseph also composed theme songs featuring vocals and lyrics for games by Sensible Software, the most famous being Cannon Fodder (1993), with the song "War Has Never Been So Much Fun," and Sensible World of Soccer (1994), with the song "Goal Scoring Superstar Hero." These songs used long vocal samples.
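The tracker idea behind the MOD format can be sketched as a grid: a pattern is a table of rows by channels, and each cell says which digitized sample to trigger at which pitch. The layout below is a toy simplification for illustration, not the real on-disk MOD structure (which also carries effect commands, pattern order lists, and sample headers).

```python
# Toy illustration of a tracker pattern: rows advance in time, columns are
# the Amiga's four channels, and each cell is (note, sample_number).
# Sample number 0 means "no new note on this row".

PATTERN = [
    # ch0 (bass),  ch1 (lead),  ch2 (drums), ch3 (unused)
    [("C-2", 1), ("C-4", 2), ("---", 0), ("---", 0)],   # row 0
    [("---", 0), ("E-4", 2), ("C-3", 3), ("---", 0)],   # row 1
    [("G-2", 1), ("G-4", 2), ("---", 0), ("---", 0)],   # row 2
    [("---", 0), ("C-5", 2), ("D-3", 4), ("---", 0)],   # row 3
]

def play(pattern):
    """Walk the pattern row by row, collecting each triggered note event."""
    events = []
    for row_number, row in enumerate(pattern):
        for channel, (note, sample) in enumerate(row):
            if sample:
                events.append((row_number, channel, note, sample))
    return events

for row, channel, note, sample in play(PATTERN):
    print(f"row {row}: channel {channel} plays sample {sample} at {note}")
```

A real tracker simply repeats this walk at a tempo-controlled rate, pitch-shifting each sample to the named note, which is why composing with one required no programming knowledge.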
A similar sample-based approach to sound and music development began to appear in certain specialized arcade system board revisions. In 1991, games like Street Fighter II on the CPS-1 used voice samples extensively along with sound effects and percussion. Neo Geo's MVS system also carried powerful sound hardware, which often included surround sound.
The evolution also carried into home console video games, most notably with the release of the Super Famicom in 1990 and its US/EU version, the SNES, in 1991. This system sported a specialized custom Sony chip for both sound generation and special hardware DSP. It was capable of eight channels of sampled sounds at up to 16-bit resolution, possessed an impressive selection of DSP effects, including a type of ADSR control usually seen in high-end synthesizers of the period, and offered full stereo sound. This allowed experimentation with applied acoustics in video games, such as musical acoustics (early games like Castlevania IV, F-Zero, Final Fantasy IV, and Gradius III, and later games like Chrono Trigger), directional acoustics (Star Fox), and spatial acoustics (Dolby Pro Logic was used in some games, like King Arthur's World and Jurassic Park), as well as environmental and architectural acoustics (Zelda III, Secret of Evermore). Many games also made heavy use of the high-quality sample playback capabilities (Super Star Wars, Tales of Phantasia). The only real limitation of this powerful setup was the still-costly solid state memory.
Other consoles of the generation could boast similar abilities. The Neo Geo home system was capable of powerful sample processing, but cost several times as much as a SNES. The Sega CD upgrade to the Genesis added multiple PCM channels, but few titles used them, instead simply streaming Red Book audio from the CD. Neither achieved the circulation of the SNES.
The popularity of the SNES and its software remained limited to regions where NTSC was the broadcast standard. Partly because of the difference in frame rates of PAL broadcast equipment, many titles were never redesigned to play appropriately and ran much slower than originally intended, or were simply never released. This produced a divergence in popular video game music between PAL and NTSC countries that still shows to this day. The divergence would lessen as the Fifth Generation of home consoles launched globally and as Commodore began to take a backseat to general-purpose PCs and Macs.
Though the Sega CD, and to a greater extent the PC Engine in Japan, would give gamers a preview of the direction video game music would take with streaming, both sampled and sequenced music continue to be used in game consoles even today. The huge storage benefit of optical media would be coupled with progressively more powerful audio generation hardware and higher-quality samples in the Fifth Generation. In 1994, the CD-based PlayStation supported 24 channels of 16-bit samples at up to a 44.1 kHz sample rate, equal to CD audio quality. It also sported a few hardware DSP effects, such as reverb. Many Squaresoft titles continued to use sequenced music, including Final Fantasy VII, Legend of Mana, and Final Fantasy Tactics. The Sega Saturn, also CD-based, supported 32 channels of PCM at the same resolution as the PlayStation. In 1996, the N64, still using a solid state cartridge, supported an integrated and scalable sound system potentially capable of 100 channels of PCM, and an improved sample rate of 48 kHz. Because of the cost of solid state memory, however, N64 games typically had lower-quality samples than the other two, and their music tended to be simpler in construction.
The more dominant approach for games based on CDs, however, was shifting toward streaming audio.
Using entirely pre-recorded music had many advantages over sequencing in terms of sound quality. Music could be produced freely with any kind and number of instruments, allowing developers to simply record one track to be played back during the game. Quality was limited only by the effort put into mastering the track itself. Memory cost, previously a major concern, was somewhat addressed as optical media became the dominant medium for game software. CD-quality audio allowed for music and voice with the potential to be truly indistinguishable from any other source or genre of music.
Over the same late-1980s-to-mid-1990s timeframe, the sampling approach largely skipped over PC games. Early PC gaming was limited to the 1-bit PC speaker, a legacy of the IBM PC standard that was poor at generating complex sounds. Expansion cards such as the AdLib sound card allowed for FM synthesis, and game developers used MIDI sequencing to drive it (Doom). A typical PC lacked both the specialized computing power to handle sample playback and a way to output it. Rather than have game developers do their own sampling, wavetable sequencing became a popular alternative: a wavetable of pre-made samples conforming to General MIDI would be installed on a sound card, either by design or via a daughterboard. The quality of these wavetable samples tended to vary wildly from one manufacturer to the next, and Roland's products served as a de facto standard until the release of Creative's Sound Blaster in 1989. The Sound Blaster represented an affordable catch-all solution for PC users wanting sound features. It included a joystick port, MIDI support with AdLib-compatible FM synthesis, a standardized connector for daughterboards (Creative's own Wave Blaster and other companies' products), and 8-bit 22.05 kHz (later 44.1 kHz) digital audio recording and playback of a single stereo channel. This still did not lead to wide use of sampling in PC games, because only one sample could be played at a time. Sequenced music remained the most common game music on PCs until the mid-1990s, when CD-ROM drives became a common feature of PCs and game software, and storage capacity generally increased, giving developers the space to stream their soundtracks.
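The division of labor in wavetable sequencing can be sketched simply: the MIDI stream carries only program and note events, and the sound card maps each General MIDI program number to one of its own pre-made samples. The sample file names below are invented for the illustration; General MIDI standardizes only the instrument list (program 1 is Acoustic Grand Piano, program 57 is Trumpet, and so on), not the samples themselves.

```python
# Simplified sketch of wavetable sequencing under General MIDI.
# The card's wavetable maps GM program numbers (0-based here) to samples;
# the file names are hypothetical stand-ins for a vendor's sample set.

GM_WAVETABLE = {
    0:  "acoustic_grand_piano.pcm",   # GM program 1
    24: "nylon_guitar.pcm",           # GM program 25
    56: "trumpet.pcm",                # GM program 57
}

def render_event(program, note, velocity):
    """Resolve a MIDI note event to the sample the card would pitch-shift."""
    sample = GM_WAVETABLE.get(program, "fallback_tone.pcm")
    return {"sample": sample, "note": note, "velocity": velocity}

# The same MIDI data sounds different on each manufacturer's card only
# because the wavetable samples differ -- the events are identical.
event = render_event(program=56, note=60, velocity=100)
print(event["sample"])  # prints: trumpet.pcm
```

This is why a game's MIDI soundtrack could vary so much between sound cards: the sequence data was fixed, but the instrument samples behind each program number were the manufacturer's own.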
In fourth-generation home video games and PCs, this was limited to playing a Red Book audio track from a CD while the game was in play (Sonic CD). Regular CD audio had several disadvantages, however. Optical drive technology was still limited in spindle speed, so playing an audio track from the game CD meant the system could not access other data until the track stopped playing. Looping, the most common form of game music, was also a problem: when the laser reached the end of a track, it had to move back to the beginning to start reading again, causing an audible gap in playback.
To address these drawbacks, some PC game developers designed their own container formats in house, in some cases for each application, to stream compressed audio. This cut back on the disc space used for music, allowed much lower latency and seek time when finding and starting a track, and also allowed much smoother looping, since the data could be buffered. A minor drawback was that compressed audio had to be decompressed, which put load on the system's CPU. As computing power increased, this load became minimal, and in some cases dedicated chips (such as those on a sound card) would handle all the decompression.
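Why buffering makes looping gapless can be shown with a minimal sketch. The "track" here is a stand-in list of decoded audio values; a real engine would be decompressing frames from disk into the buffer ahead of playback, but the wrap-around logic is the same.

```python
# Minimal sketch of buffered streaming with a seamless loop: when a chunk
# reaches the end of the track, it continues filling from the start, so
# playback hardware never sees a gap at the loop point.

class LoopingStream:
    def __init__(self, track, chunk_size):
        self.track = track          # decoded audio data (stand-in: a list)
        self.chunk_size = chunk_size
        self.pos = 0

    def next_chunk(self):
        """Fill one playback chunk, wrapping past the end of the track."""
        chunk = []
        while len(chunk) < self.chunk_size:
            remaining = len(self.track) - self.pos
            take = min(remaining, self.chunk_size - len(chunk))
            chunk.extend(self.track[self.pos:self.pos + take])
            self.pos = (self.pos + take) % len(self.track)
        return chunk

stream = LoopingStream(track=list(range(10)), chunk_size=4)
print(stream.next_chunk())  # [0, 1, 2, 3]
print(stream.next_chunk())  # [4, 5, 6, 7]
print(stream.next_chunk())  # [8, 9, 0, 1] -- wraps seamlessly mid-chunk
```

Contrast this with Red Book playback, where the loop point forces a physical seek of the laser back to the start of the track, which is exactly the audible gap described above.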
Fifth Generation home console systems also developed specialized streaming formats and containers for compressed audio playback. Sony would call theirs Yellow Book and offer the standard to other companies. Games took full advantage of this ability, sometimes with highly praised results (Castlevania: Symphony of the Night). Games ported from arcade machines, which continued to use FM synthesis, often saw superior pre-recorded music streams on their home console counterparts (Street Fighter Alpha 2). Even though the game systems were capable of "CD quality" sound, these compressed audio tracks were not true "CD quality": many had lower sampling rates, though not so much lower that most consumers would notice. Some games continued to use full Red Book CD audio for their soundtracks (the Wipeout series), which could even be played in a standard CD player.
This overall freedom gave video game music the equal footing with other popular music that it had long lacked. A musician could now produce music to their satisfaction independently, with no need to learn programming or the game architecture itself. This flexibility would be exercised as popular mainstream musicians lent their talents to video games. An early example is Way of the Warrior on the 3DO, with music by White Zombie; a better-known one is Trent Reznor's score for Quake.
An alternate approach, as with the TMNT arcade game, was to take pre-existing music not written exclusively for the game and use it in the game. Star Wars: X-Wing vs. TIE Fighter and subsequent Star Wars games took music composed by John Williams for the Star Wars films of the 1970s and 1980s and used it for the game soundtracks.
Both commissioning new music streams specifically for a game and licensing previously released recordings remain common approaches to developing soundtracks to this day. Extreme-sports video games commonly ship with recent releases by popular artists (SSX, Tony Hawk, Initial D), as does any game with a heavy cultural or demographic theme tied to music (Need for Speed: Underground, Grand Theft Auto). Sometimes a hybrid of the two is used, as in Dance Dance Revolution.
Sequenced samples continue to be used in modern gaming for many applications, mostly RPGs, and sometimes a cross between sequenced samples and streaming music is used. Games such as Republic: The Revolution (music composed by James Hannigan) and Command & Conquer: Generals (music composed by Bill Brown) have utilized sophisticated systems governing the flow of incidental music, stringing together short phrases based on the on-screen action and the player's most recent choices. Other games dynamically mix the sound based on cues from the game environment. In SSX, for example, if the player's snowboarder takes to the air after a jump, the music softens or muffles a bit, while the ambient noise of rushing wind grows louder to emphasize the sensation of being airborne; on landing, the music resumes regular playback until its next cue. LucasArts pioneered this interactive music technique with its iMUSE system, used in its early adventure games and the Star Wars flight simulators Star Wars: X-Wing and Star Wars: TIE Fighter. Action games such as these change the music dynamically to match the level of danger, and stealth-based games sometimes rely on such music as well.
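The phrase-stringing idea can be sketched as a small state machine, in the spirit of systems like iMUSE: at each phrase boundary, the engine picks the next phrase according to the current game state, and a transition table keeps the music from jumping jarringly. The state names and phrase names below are invented for the illustration.

```python
# Hedged sketch of adaptive music: short phrases are selected at phrase
# boundaries based on game state, with a table of allowed transitions.
# States and phrase names are hypothetical, not from any shipped engine.

import random

PHRASES = {
    "explore":  ["explore_a", "explore_b"],
    "airborne": ["airborne_pad"],
    "danger":   ["danger_sting", "danger_loop"],
}

TRANSITIONS = {
    # Which states may follow which, so the score never cuts abruptly.
    "explore":  {"explore", "airborne", "danger"},
    "airborne": {"airborne", "explore"},
    "danger":   {"danger", "explore"},
}

def next_phrase(current_state, requested_state, rng=random):
    """Pick the next phrase; stay in the current state if the jump is disallowed."""
    if requested_state in TRANSITIONS[current_state]:
        state = requested_state
    else:
        state = current_state
    return state, rng.choice(PHRASES[state])

state = "explore"
for event in ["explore", "airborne", "explore", "danger"]:
    state, phrase = next_phrase(state, event)
    print(f"game says {event!r} -> music plays {phrase!r}")
```

Production systems add crossfades, tempo-matched entry points, and per-phrase priorities, but the core is this: the game emits state cues, and the music engine resolves them into phrases only at musically sensible boundaries.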
In the past, playing one's own music during a game usually meant turning down the game audio and using a separate music player. Early exceptions were possible in PC/Windows gaming, where game audio could be adjusted independently while a music program ran in the background.
Some PlayStation games supported swapping the game CD for a music CD, though whenever the game needed data, the player had to swap the discs back. One of the earliest examples, Ridge Racer, loaded entirely into RAM, letting the player insert a music CD to provide a soundtrack for the entirety of gameplay. In Vib Ribbon, this became a gameplay feature, with the game generating levels based entirely on the music of whatever CD the player inserted.
Microsoft's Xbox, a competitor in the sixth generation of home consoles, opened new possibilities. Its ability to copy music from a CD onto its internal hard drive allowed gamers to utilize their own music more seamlessly with gameplay than ever before. The feature, called Custom Soundtrack, had to be enabled by the game developer. It carried over into the seventh generation with the Xbox 360.
The Wii is also able to play custom soundtracks if it is enabled by the game (Excite Truck).
The PlayStation Portable can, in games like Need for Speed Carbon: Own the City, let the player play their own music from a Memory Stick.
The PlayStation 3 has the ability to utilize custom soundtracks in games using music saved on the hard drive; however, no game developers have used this function so far.
The Xbox 360 offers Dolby Digital support, 16-bit sampling and playback at 48 kHz, hardware codec streaming, and up to 256 simultaneous audio channels. While powerful and flexible, none of these features represents a major change in how game music is made from the previous generation of console systems. PCs continue to rely on third-party devices for in-game sound reproduction, and Sound Blaster, despite being largely the only major player in the entertainment audio expansion card business, continues to advance its product development at a significant pace.
Future technology, while also powerful, does not represent any fundamental shift in video game music creation either. The PlayStation 3 will handle multiple types of surround sound technology, including Dolby TrueHD and DTS-HD. Nintendo's Wii console shares many audio components with the previous generation's GameCube, including Dolby Pro Logic II. These features are extensions of technology already in use.
The game developer of today has many choices for how to develop music. More likely, changes in video game music creation will have little to do with technology and more to do with other factors of game development as a business. As sales of video game music separate from the games themselves became marketable in the West (compared to Japan, where game music CDs had been selling for years), business considerations gained a level of influence they previously lacked. Musicians outside the game developer's immediate employment, such as film composers and pop artists, have been contracted to produce game music just as they would for a theatrical movie. Many other factors have growing influence, such as editing for content, politics at some level of development, and executive input.
Many of the games made for the Nintendo Entertainment System and other early game systems featured a similar style of music, one that may come closest to being described as a "video game genre" of musical composition, as opposed to music merely called "video game music" for being in a video game or played on a video game console. Some compositional features of this genre continue to influence certain music today, though game soundtracks now tend to emulate movie soundtracks more than this classic genre. The genre's compositional elements may have developed due to technological restraints; it might also have been influenced by technopop bands such as Yellow Magic Orchestra, which were quite popular during the period in which video game music took on its trademark sound.
Appreciation for video game music, particularly music from the third and fourth generations of home consoles and sometimes newer generations, remains strong today among both fans and composers, even outside the context of a video game. Melodies and themes from 20 years ago continue to be reused in newer generations of video games; themes from the original Metroid by Hirokazu Tanaka can still be heard in today's Metroid games, as arranged by Kenji Yamamoto.
Video game soundtracks were sold separately on CD in Japan well before the practice spread to other countries. Interpretive albums, remixes, and live performances were also common variations on original soundtracks (abbreviated OST). Koichi Sugiyama was an early figure in these practices: following the release of Dragon Quest in 1986, a CD of his compositions was released, performed live by the London Philharmonic Orchestra (and later by other groups, including the Tokyo Philharmonic Orchestra and the NHK Symphony). Yuzo Koshiro, another early figure, released a live performance of the Actraiser soundtrack. Both Koshiro's and fellow Falcom composer Mieko Ishikawa's contributions to the Ys music would have such long-lasting impact that more albums have been released of Ys music than of almost any other game's music.
Like anime soundtracks, these soundtracks and even sheet music books were usually marketed exclusively in Japan, so interested non-Japanese gamers had to import them through online or offline firms specifically dedicated to video game soundtrack imports. This has become somewhat less of an issue recently, as domestic publishers of anime and video games have been producing Western equivalents of the OSTs for sale in the UK and US, though in most cases only for the most popular titles.
Other composers of the lasting themes of this period have gone on to stage public symphonic concert performances exhibiting their game work. Koichi Sugiyama was again the first in this practice in 1987, with his "Family Classic Concert," and has continued concert performances almost annually. In 1991 he also formed a series called Orchestral Game Concerts, notable for featuring other talented game composers such as Yoko Kanno (Nobunaga's Ambition, Romance of the Three Kingdoms, Uncharted Waters), Nobuo Uematsu (Final Fantasy), Keiichi Suzuki (Mother/EarthBound), and Kentaro Haneda (Wizardry).
The global popularity of video game music began to surge with Squaresoft's 1990s successes, particularly Chrono Trigger, Final Fantasy VI, and Final Fantasy VII. Nobuo Uematsu's compositions for Final Fantasy IV were arranged into Final Fantasy IV: Celtic Moon, a live performance by string musicians with strong Celtic influence, recorded in Ireland. The love theme from the same game has been used as an instructional piece of music in Japanese schools.
On August 20, 2003, for the first time outside Japan, music written for video games from all over the world, ranging from Final Fantasy to The Legend of Zelda, was performed by a live orchestra, the Czech National Symphony Orchestra, in a Symphonic Game Music Concert at the Gewandhaus concert hall in Leipzig, Germany. The event was held as the official opening ceremony of Europe's biggest video game trade fair, the GC Games Convention, and was repeated in 2004, 2005, 2006, and 2007.
On November 17, 2003, Square Enix launched Final Fantasy Radio on America Online. The station initially featured complete tracks from Final Fantasy XI and Final Fantasy XI: Rise of the Zilart, and samplings from Final Fantasy VII through Final Fantasy X.
The first officially sanctioned Final Fantasy concert in the United States was performed by the Los Angeles Philharmonic Orchestra at Walt Disney Concert Hall in Los Angeles, California, on May 10, 2004. All seats sold out in a single day. The tour "Dear Friends: Music from Final Fantasy" followed and was performed in various cities across the United States.
On July 6, 2005, the Los Angeles Philharmonic Orchestra held Video Games Live, a concert founded by video game music composers Tommy Tallarico and Jack Wall, at the Hollywood Bowl. The concert featured a variety of video game music, ranging from Pong to Halo 2, and incorporated real-time video feeds synchronized with the music, as well as laser and light special effects. Video Games Live has toured worldwide since.
On August 20, 2006, the Malmö Symphony Orchestra, with host Orvar Säfström, performed an outdoor concert of game music before an audience of 17,000, currently the attendance record for a game music concert.
From April 20 to April 27, 2007, the Eminence Symphony Orchestra, the only orchestra dedicated to video game and anime music, performed the first part of its annual tour, the "A Night in Fantasia" concert series, in Australia. While Eminence had performed video game music as part of its concerts since its inception, the 2007 concert marked the first time the entire setlist comprised pieces from video games. Up to seven of the world's most famous game composers were also in attendance as special guests.
Other notable examples of video game music outside of games are listed in the timeline in this article.
In addition to these professional ventures, a huge network of English-speaking fandom has sprung up in recent years with the help of emulators and the Internet.
Listed below are some of the milestones in video game music having to do explicitly with the games themselves.