11 Keyboards With The Best Real Piano Sounds

These 11 keyboards all have strong real piano sounds relative to their size, complexity and retail price. This list is a mix of more modest consumer keyboards, home-centric units and professional, studio- and stage-ready upper-end models. All of them deliver good quality piano sounds compared to others in their respective categories, and the very top choices are landmark instruments in their own right.

The Vast And Elusive Art Of Recording Music

If you take into account the available methods, gear, musicians and processes, you could say that there are infinite ways to record music. So if you’re interested in the recording process and all the elements that can go into it, we’ve prepared a comprehensive guide. Not only will it cover the five main stages of a recording (music generation, production, recording, mixing and mastering), but we’ll also talk about how each of these processes differs depending on the type of artist.

The thing is, the way an independent rock band tackles a recording is quite different from what a teen pop idol would do. So for each stage we’ll look at how the process usually goes for Bands, Solo Artists, Pop Stars and Composers.

Now, I know those categories may overlap in some cases. You could argue that an artist like George Michael qualifies as a solo artist, a pop star and also a composer if he’s writing sheet music for a band or orchestra to play. So let’s clarify…

Types Of Musical Acts

Band means any ensemble of musicians that takes a collective creative approach to generating music. Even if one person writes the songs and the band executes them, it’s a band if the emphasis is on how the group plays each song together: other members add parts and arrangements, there are multiple songwriters, and the live act is presented as a group. It doesn’t matter if it’s a rock band, a jazz band, a French house duo, a hip hop trio, etc.

Solo Artist means that the same artist who is presenting the music is also doing most of the writing. This is either entirely alone (see Tame Impala) or with an interchangeable cast of collaborators, some of whom may be in the live band as well, but who are not really involved in the creative direction of the act. This category is where someone like George Michael would fit. It also covers an artist like Lady Gaga, most singer-songwriters, jazz musicians like Miles Davis, solo rappers, DJs and even one-man rock acts that use band names, like Queens Of The Stone Age or Nine Inch Nails.
Pop Star means that there is a person who is the face of the act and performs the music on tour, but mainly does not write it. It’s the type of act where there is a big record label and a team behind the artist or “talent”. Usually, a cast of songwriters writes the songs, a big-name producer chooses the musicians and prepares the tracks, then the artist, whose record label bought those songs, sings over those tracks and puts his or her pretty face on the album cover. These are your Rihannas, Katy Perrys, most boy bands, etc.

By Composers I mean most recording situations where a performer, or group of performers, adheres almost exactly to music written by one or more individuals. This would cover any works created by a deceased composer, as with much of the best-known classical music, but also most film scores, some musicals and even some types of experimental or avant-garde music.

So those are, in a very broad sense, the categories into which most sound recording situations fall. Now we’ll see what the stages we outlined above look like for each of them.

Music Songwriting

Some “jam bands” or improvisation groups and individuals may be the exception, but for the most part there has to be written music before recording is even an option. Don’t you agree? So let’s take a look at the way each of our artist types (in general terms) goes about the difficult and sometimes elusive task of creating music out of thin air.

With bands, in general terms, a given member will present the rest of the band with a musical idea, then develop it either through live playing or in smaller units before bringing it back to the whole band to finish the song. The beauty of this is that there can be many different ways of doing it. Pink Floyd, with such a long history, illustrates several of them.

In the beginning, lead singer Syd Barrett wrote the vast majority of the songs and then had the band play them. Naturally, he also sang most of them. This arrangement, having a chief songwriter, is repeated in numerous other groups, like The Strokes, Pulp and The Smashing Pumpkins, to name a few.

After Syd’s LSD-fueled breakdown, the other band members had to step into the songwriting role, most notably Roger Waters and David Gilmour. The band would finish most songs collaboratively, and they would also sing each other’s songs. The best-known Pink Floyd albums credit the music simply to “Pink Floyd” and the lyrics to individual members, mostly Waters. This is also the case with the Red Hot Chili Peppers, Interpol and many, many others.

Now Solo Artists, apart from the occasional collaboration, are usually completely alone in this process. Most of them will finish complete songs on their own, and even demo them to try out the arrangement and structural ideas that they will present to a producer afterwards. For example, American indie folk songwriter Justin Vernon famously wrote and recorded most of Bon Iver’s debut album while living in isolation in a cabin in northwestern Wisconsin.

In the case of Pop Stars, a producer will usually look to a bunch of professional songwriters for songs that he or she wants for a certain artist. The songwriters, for their part, will sometimes write on their own, in partnerships or even attend “songwriting camps” where they share ideas with other songwriters. Songs are then sold to the highest bidder. A song that could have been intended for, say, Britney Spears, can end up being bought by another team of producers/songwriters and end up being sung by Rihanna (see Umbrella).

And finally there are composers. Methods vary, but composing is also commonly an individual labor. These minds will usually write sheet music for other performers to play and, at least in earlier days, used little more than a piano to do it. Mozart, for example, would compose anywhere, just singing each instrument’s part and writing it down. Today, a lot of composers will use MIDI technology to craft everything in a computer before replicating it with a full-fledged orchestra. This is sometimes the case with people like Hans Zimmer and many other composers for film and television.
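As a tiny illustration of that MIDI mock-up workflow (a generic sketch of my own using the Python mido library, not any particular composer’s setup), here is a one-note “string” part written to a standard MIDI file that any DAW or notation program can open:

```python
import mido

# Build a one-note string sketch: program 48 is "String Ensemble 1" in General MIDI.
mid = mido.MidiFile()            # default 480 ticks per beat
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.Message('program_change', program=48, time=0))
track.append(mido.Message('note_on', note=60, velocity=80, time=0))    # middle C
track.append(mido.Message('note_off', note=60, velocity=0, time=480))  # one beat later

mid.save('mockup.mid')           # hypothetical filename; load into any DAW
```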

Music Production

Technically, everything that happens between having a musical idea and turning it into a finished recording could count as production. Production is the overall process of making a musical recording happen: choosing the recording methods, facilities, additional musicians and engineers, and in some cases even songwriting and playing. The approach varies from producer to producer, but it also depends greatly on the type of artist. Let’s see how producers work with each of the artist types we outlined for this guide.

With bands, producers mainly act as a liaison between the musicians and engineers. The band, being a collective entity of several brains, will probably have a very clear and autonomous vision of what kind of sound they want before stepping into the studio. Once they step in, the producer is there to facilitate the process.

In the words of Steve Albini (who has produced albums by bands such as Nirvana, The Stooges and Mogwai), “my role [as a producer] is subordinate to the band…while you’re working on a record it’s imperative, if you’re operating on a technical capacity, that you suspend your aesthetics about what kind of music you want to listen to.”

The case is often similar with solo musicians. Entire folk or rap albums, for example, are made with the solo artist and the producer playing the vast majority of the instruments. The producer usually acts as a guide for which gear or additional musicians to turn to when the artist isn’t sure.

Electronic musicians, on the other hand, will rarely go to an outside producer, since they are handling most of the decisions themselves. In a lot of cases, electronic acts will produce, mix and master their music themselves, sometimes from the comfort of their own homes.

Pop Stars, for their part, tend to follow the traditional model of music production. This means that the aesthetics of the producer are much more important, and he or she is usually employed by the label rather than by the artist. In some cases, the producer will even co-write and propose changes to the structure or arrangements of the songs while recording.

As you may imagine, that rarely happens with composers. Producers in this realm tend to operate with a much more passive attitude, simply being there to bring the composer’s vision to reality through recording resources and carefully selected personnel.

Music Recording

All of that music generation and production has its moment of truth when it’s time to hit record. Another facet of production, and of audio engineering as well, is deciding how exactly that will be done.

It is very common for bands and orchestras to record live. This may present the producer and engineers with an additional challenge, but a lot of people still prefer it to this day due to the emotion and honesty that is captured when the musicians are actually performing together.

With the challenge of recording live comes the task of selecting one or several rooms for the musicians to play in. Big studios such as Abbey Road have massive live rooms to house entire orchestras; others can very well fit smaller ensembles or rock bands. With the latter, several rooms may be used to prevent what is called “bleed”, which is when the sound of one instrument infiltrates the recording of another, such as the drums being audible on the guitar microphones or vice versa.

That’s why some producers prefer to put the drummer in one room and the bass guitar amp in a different room, and to keep some elements of the live takes only as a guide. The latter is common with vocals and guitar solos, which are usually overdubbed (recorded on top of everything else).

In the case of Solo Artists and Pop Stars, recording separately is usually the weapon of choice. Typically, a beat or a drum track will be laid down to a metronome and maybe a guide vocal, then everything else will be added on top sequentially. It’s usually the bass after the drums, then keyboards and synths, then guitars, then main vocals, then backup vocals; but each producer has his or her own particular way of doing things.

Some solo acts, including electronic ones, may prefer to record the basis of each track as a live take. A recent example is the Father John Misty album “Pure Comedy”. He’s a singer-songwriter and, like most solo artists, he records everything himself or has a few guest musicians record certain parts here and there. But on that album, he and producer Jonathan Wilson wrote every song and handed the sheet music to a full band, then recorded live takes of each song with Father John Misty singing over them.

Post-Production

After all the music has been produced and committed to tape (in the case of analog recording) or to a DAW (Digital Audio Workstation), it’s time for mixing and mastering. These processes are also known as “post-production” and are more or less similar regardless of the type of artist we’re talking about.

Music Mixing

Imagine that you are standing inside a big sphere. There is nothing in it besides you and your favorite song. If that song were unmixed, all the elements would be clustered in the center, kind of around your belly. You wouldn’t be able to make out most of it and it would all sound meshed together. To put it differently, your favorite song would be nearly unrecognizable.

But if you could arrange every separate element somewhere inside your bubble, you might start to make sense of it. You could put the bass and bass drum close to your feet, you could put the snare higher up, but in the center as well, and you could play with throwing some guitar to the right, keyboard to the left. You could play not only with height and left or right, but also with depth. Suddenly, you could arrange all the elements of your favorite song so everything becomes audible to a certain extent. More than that though, it’s about making it sound good.

In essence, this is what the mixing process is about. After recording, producers are left with all the instruments and elements on different tracks, and it is up to a mixing engineer to blend it all into one cohesive-sounding track.

The bubble is a metaphor for the sound image: the idea is for that image to emulate an actual room where the music exists. That’s where tools like frequency content, dynamics, panning and effects come into play.
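To make the panning idea concrete, here is a minimal sketch (a generic Python/NumPy illustration of my own, not taken from any console or DAW) of constant-power panning, a standard pan law for placing a mono element left or right in the stereo image:

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Place a mono signal in the stereo field.
    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    The constant-power law keeps perceived loudness steady as you pan."""
    theta = (pan + 1) * np.pi / 4          # map [-1, 1] -> [0, pi/2]
    left = mono * np.cos(theta)
    right = mono * np.sin(theta)
    return np.stack([left, right], axis=0)

# Example: a 440 Hz guitar-like tone panned slightly to the right
sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
guitar = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = constant_power_pan(guitar, pan=0.4)   # shape (2, 44100)
```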

In most cases, the producer or artist will give what are called “reference mixes” to the mixing engineer so that he or she can produce something in accordance with them. Then comes a back-and-forth process, with the parties involved suggesting little tweaks here and there, that results in the near-final version of the music.

Music Mastering

Once most parties involved are OK with the mixes, it’s time to master. Mastering is about transferring the final mix to a storage medium (referred to as the master) from which all copies will be reproduced. In earlier days these were usually tapes, but nowadays most mastering houses (the places where mastering engineers work) deliver several formats. The most common are CD pressings, web masters (.wav files ready to be uploaded to the internet), MFiT (iTunes has its own master standards) and vinyl pre-masters.
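As a small illustration of what a “web master” deliverable amounts to (a sketch of my own using the Python soundfile library, not a mastering house’s actual tooling): a 44.1 kHz, 16-bit PCM .wav file ready for upload.

```python
import numpy as np
import soundfile as sf

sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
mix = 0.5 * np.sin(2 * np.pi * 440 * t)      # stand-in for a final mix

# 16-bit PCM WAV at 44.1 kHz: the common "web master" delivery format
sf.write("master.wav", mix, sr, subtype="PCM_16")
```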

As you may imagine, this process dictates the way the music will finally sound. Mastering engineers work on professional equipment in specially treated rooms that allow them to hear the music exactly as it was recorded, that is, without the frequency enhancement most home audio equipment applies.

This, in turn, lets them prepare music for mass consumption, ensuring that the final mix will sound OK whether it’s played on a high-fidelity system, a car stereo or a pair of iPhone earbuds.

Afterword

In a nutshell, that’s how most music is made. Remember that this is a quick overview and a lot of generalizations were made. Hopefully you now have a much stronger grasp of what the recording process can be about.

If you’d like to know more, I’d recommend researching your favorite albums and finding out how they were recorded. In some cases you’ll find that they adhere very closely to some of the things described here. In other cases, you are bound to be surprised.

And that’s the beauty of recorded music. It’s an art where the technology and the people involved are ever changing. There’s not one way of doing it, so the possibilities are infinite.

That may sound overwhelming, but at the end of the day, it’s all about whether it sounds good or not.

Tutorial: Avoiding Mistakes Importing into Pro Tools

Overall, importing audio files and session data into Pro Tools is simple; however, there are many quirks of the Pro Tools DAW which must be understood to prevent files ending up in the wrong place – or even worse, missing for good. Knowing the proper operating procedure for importing and moving files around is especially crucial for systems using external hard drives or flash drives.

Important Quick Key Commands for Importing:

Starting a new session: COMMAND + N

Opening a previous session: COMMAND + O

Importing audio into current session: SHIFT + COMMAND + I

Importing session data into current session: SHIFT + OPTION + I

Setting Up the Session:

When creating a new session, what’s most important is ensuring the location – where on the system the session will be saved – is correct. In the window above, my session, “IMPORTING DEMO,” is going to be saved on my external Seagate hard drive in a folder labeled Studio 11. Always check your location to make sure your session is not saved in a strange or unwanted folder. Furthermore, when the new session is created, Pro Tools creates a session folder:

Some things to note with the session folder:
1) The “IMPORTING DEMO.ptx” file requires the entire session folder to operate, so if I ever needed to send somebody my session, I would need to send the entire “IMPORTING DEMO” folder, and not just the purple .ptx file.
2) Never, ever rename any item within the session folder. For example, your session will not function whatsoever if the Audio Files folder becomes “Audio Filezzz.” Pro Tools will not recognize the modified name and will not be able to read data from the renamed folder!

Importing Audio:

Undoubtedly, every engineer’s worst nightmare is opening a session to see grayed-out regions and this “box from hell:”

The missing files box appears when Pro Tools is unable to locate and read one or more files within the Audio Files folder. If a file is missing, it was most likely imported incorrectly in the first place.

When importing, the initial location of the file being imported matters. A file originating from the computer’s Downloads folder will bring up an import window like the one below, where the blue “Convert” button is used to move Clips in Current File into Clips to Import on the right. Nothing too complicated, right?

However, importing audio must be done very carefully if the file to import is coming from the desktop, an external hard drive, or a flash drive plugged into the computer. In those instances, a box like this will appear, where Pro Tools gives two options: Add or Copy:

This is the most common place where the grave mistake of Adding instead of Copying occurs. Copy must be selected to ensure the file is read from the Pro Tools session’s Audio Files folder. This step is easy to miss, since Pro Tools defaults to adding the file(s)! If a file is added rather than copied, the computer will read the imported file’s data from its original source, such as the removable flash drive, and not from the session’s Audio Files folder. In other words, if I plug in a flash drive and “add” files while importing, all those files will be missing if I ever open the session again without that same flash drive plugged in. Files must always be imported and copied so the computer never reads file data from anywhere other than the Audio Files folder. The same concept applies to dragging a file from the desktop into a Pro Tools edit window. Since the file was dragged in and not properly imported and copied, if the Pro Tools session were ever opened on a different computer (with a different desktop), the file would pop up as missing!

Importing Session Data:

Importing session data allows us to utilize any data from a previous session, such as channel settings or routing, in the current session. I often import session data to bring in various templates I keep saved on my desktop. Importing session data is also an area where mistakes must be avoided.

Select File and then Import Session Data. Once you have selected the purple .ptx session from which session data will be imported, select the specific tracks you wish to import (highlighted above in blue). I often do not want to import any clips or audio files from a previous session while importing session data, which I can deselect in the Track Data to Import menu:

Now that the imported session data appears in the Pro Tools edit window, one crucial step remains: disk allocation. Similar to copying in audio files while importing audio, disk allocation is essential for permanently integrating the imported session data into the current session. Disk allocation is found in the Setup menu:

Select Disk Allocation. In the new window, hold the shift key to select all the tracks of the current session. While the tracks are still highlighted, click on select folder.

The folder you must select is the Pro Tools session folder for your current session. Select Open, and finally, OK in the lower right corner of the Disk Allocation window. Now, the imported session data is allocated to your current session. Now is always a good time to save!

All in all, saving sessions in the appropriate location, importing audio and importing session data are procedures where mistakes are costly. Double-checking all these procedures is a smart habit to practice, especially when working on an unfamiliar system. In reality, today’s music production is more mobile than ever. Any given Pro Tools session may include files coming from the internet, email, or multiple flash drives being plugged in and out of the computer. Ultimately, there are countless instances where a file or data may be introduced into a Pro Tools session incorrectly. Opening sessions with missing files or unallocated session data puts projects at a standstill, and undergoing a scavenger hunt for files or data wastes precious time. Avoid the rookie mistakes of adding instead of copying, lazily dragging files into a session, or forgetting the process of disk allocation.

Chris Baylaender

Studio 11


Digital Over-Processing on Vocals

Essential Protocol to Avoid Over-Processing:

Less is more, and that couldn’t be more truthful when using digital plug-ins. Today’s plug-in repertoire is practically endless, with countless options to choose from in EQ, dynamics, effects, emulation and so on. Despite those limitless options, the reality is that a warm mix comes from using the least amount of digital processing possible – and using it correctly. More often than not, excess plug-in use takes away the integrity of the audio within a mix. I refer to this common mistake as over-processing. Again, less is more.

Certainly, the most important part of avoiding over-processing is attaining a proper recording at the source. All processes occurring before a signal enters the DAW are crucial, so experimenting with microphones and their position toward the source, preamps, cables, proper gain, and room acoustics cannot be overlooked. Furthermore, if the talent’s performance can be improved, record until an exceptional take is attained. Ultimately, even the best plug ins cannot make up for errors in this part of the recording process.

Additionally, when recording, ensure your DAW’s session is operating at a sample rate of 44.1 kHz in 24-bit. For music and audio, these are the best settings for attaining a recording with integrity, I assure you. In Pro Tools, these parameters are set in the first window when starting a new session. When a project is finally finished, export at 44.1 kHz and 16-bit, today’s standard CD playback format. Every so often I will receive files from a client to mix at a higher sample rate or bit depth than 44.1 kHz/24-bit. A myth floating around is that recording at a higher sample rate is better since more information will be sampled. While this is true, the audio can actually lose integrity in the mathematical conversion back down to 44.1 kHz/16-bit.
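To illustrate that this conversion is a real mathematical operation and not a free lunch, here is a minimal sketch (my own example, not part of the original article) of downsampling a 48 kHz signal to 44.1 kHz with SciPy’s polyphase resampler; the ratio 44100/48000 reduces to 147/160:

```python
import numpy as np
from scipy.signal import resample_poly

sr_in, sr_out = 48000, 44100                      # 44100/48000 = 147/160
t = np.linspace(0, 1.0, sr_in, endpoint=False)
audio_48k = 0.5 * np.sin(2 * np.pi * 1000 * t)    # 1 kHz test tone

# Polyphase resampling: upsample by 147, anti-alias filter, downsample by 160.
audio_44k = resample_poly(audio_48k, up=147, down=160)
print(len(audio_48k), "->", len(audio_44k))       # 48000 -> 44100
```

Every sample in the output is an interpolated, filtered estimate rather than an original measurement, which is the “lost in translation” risk the paragraph above describes.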

As I will cover in plug-in usage, no digital procedure in recording, mixing or mastering can improve the resolution of the source. Everything in the computer operates in binary code. Essentially, what is recorded literally becomes converted into numbers within the DAW. These numbers are fed into any given plug-in, and different numbers come out. A good engineer must always consider the delicacy of a digital signal, in that the integrity of digital audio can be lost in translation from plug-in to plug-in. A rule of thumb is to make the computer crunch as few numbers as possible.
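Here is a tiny demonstration of the “numbers in, different numbers out” point (a hypothetical illustration of my own, not the author’s code): each time a signal is rounded back to a fixed bit depth after processing, a small error is baked in, and repeated stages accumulate it.

```python
import numpy as np

def quantize_16bit(x):
    """Round a float signal to the nearest 16-bit step (65,536 levels)."""
    return np.round(x * 32767) / 32767

sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 440 * t)

stage = signal.copy()
for _ in range(10):                       # ten naive processing stages
    stage = quantize_16bit(stage * 0.99)  # tiny gain change, then re-quantize

err = np.max(np.abs(stage - signal * 0.99 ** 10))
print(f"peak rounding error after 10 stages: {err:.2e}")
```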

Making Efficient Processing Decisions:

Gain

Assuming I am approaching a mix with correctly recorded audio on each channel, I first ensure all audio is properly gain-staged. Overall, the whole mix should have decent headroom. Remember, in a Pro Tools session, the gain of the clips in the edit window is applied before the signal passes through the channel. Clip gain is significant since most digital plug-ins work optimally when the input signal has healthy gain. For example, an industry staple I use is the Renaissance Compressor from Waves, a solid dynamics tool. However, its algorithm does not function as well at a low threshold setting. With the Renaissance Compressor, adjusting clip gain will work better than having to duck the threshold. Importantly, like analog gear, digital plug-ins also have sweet spots in terms of gain staging.
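As a rough sketch of what “healthy gain with decent headroom” means numerically (my own illustration with a hypothetical 6 dB headroom target, not a Waves or Pro Tools setting):

```python
import numpy as np

def peak_dbfs(x):
    """Peak level of a float signal relative to full scale (0 dBFS)."""
    return 20 * np.log10(np.max(np.abs(x)))

def clip_gain_to_headroom(x, headroom_db=6.0):
    """Scale a clip so its peak sits headroom_db below 0 dBFS."""
    gain_db = -headroom_db - peak_dbfs(x)
    return x * 10 ** (gain_db / 20)

vocal = np.random.randn(44100) * 0.05          # stand-in for a quiet vocal clip
vocal = clip_gain_to_headroom(vocal, 6.0)
print(f"peak after clip gain: {peak_dbfs(vocal):.1f} dBFS")   # ~ -6.0
```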

Applying Plug Ins Carefully

Plug-ins and vocals can be tricky – and very susceptible to over-processing. Not only are vocals very dynamic and wide in frequency range, they can also contain offensive resonances due to the microphone or acoustic space used in the recording process. When dealing with vocals in the DAW, critical thinking and listening must always be practiced. Just as one can “paint oneself into a corner,” the same goes for mixing vocals. This happens when you are not attentive to what a vocal needs in the mix and what each plug-in facilitates. Vocal plug-ins must be implemented with a plan to avoid over-processing. Moreover, sometimes a plug-in helps one need of the vocal but undermines other elements while we pay attention to that one specific improvement. In particular, compression or reverb can reintroduce midrange frequencies previously scooped out. Overall, applying plug-ins to vocals can take one step forward and two steps back when a single plug-in’s function distracts us from the sound as a whole.

Most of my Pro Tools sessions contain vocal channels with reductive EQ, compression and a de-esser as my first plug-ins, in that order. I consistently try to use them as efficiently as possible, often correctively, to fix unwanted sonic characteristics. One thing I’ve learned is that if any surgical approach on vocals is executed without utmost accuracy, especially in the initial plug-ins, over-processing is bound to occur. With each plug-in you apply, you really have to nail it on the head. Inaccurate surgical EQ is never beneficial.

Reductive, Surgical Equalization in Depth

With respect to a reductive EQ, which is often my first plug-in on a vocal, I am usually notching out a specific, offensive frequency in the upper mids (between 2100 Hz and 5000 Hz). I would go as far as to claim that whistle tones in this range of vocal frequencies are the most detrimental factors responsible for harsh, cold-sounding music in today’s industry. These resonances can be found plaguing everyone from Kelly Clarkson to Drake, and in many cases are the reason music becomes uncomfortable to listen to after enough time at a live venue or wearing headphones. Please do not confuse musical brightness or crispness with vocals that are, in fact, strident and piercing! So if I hear an offensive whistle-tone resonance that consistently pokes through a vocal recording, I prefer to surgically cut it out, notching the frequency first. Remember, notching can hurt the integrity of a vocal recording if not executed accurately, creating a “phasey” sound. In fact, to avoid over-processing later in the signal flow, there is no margin for error in initial EQ notches – doing it wrong will come back to bite you later in the mix. The target frequency must be crystal clear and gone after reducing the surgical EQ’s gain. Importantly, when notching, experiment with the narrowness (and wideness) of the EQ band. Again, the subtracted frequency must be nailed to a T – an offensive resonance may seem properly removed at 4000 Hz, but be even more effectively taken care of upon setting the EQ to 4100 Hz – a very slight but imperative adjustment. Use your ears here! Ultimately, a proper surgical EQ cut will remove an unwanted frequency for good, and not uncover additional offensive frequencies in the signal. Excess surgical EQ is practically synonymous with over-processing; surgical notches, if needed, usually should not occur more than three or four times in a mix.
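For those curious how the narrowness and placement trade-off looks in code, here is a minimal notch-filter sketch using SciPy (a generic illustration, not a Pro Tools plug-in). The Q value plays the role of the EQ band’s narrowness, and moving the center from 4000 to 4100 Hz is exactly the kind of small adjustment described above:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 44100
f_notch = 4100.0      # target resonance; 4000 vs 4100 Hz can matter
Q = 30.0              # higher Q = narrower notch

b, a = iirnotch(w0=f_notch, Q=Q, fs=fs)

# Stand-in "vocal": a 220 Hz fundamental plus an offensive 4100 Hz whistle tone
t = np.linspace(0, 1.0, fs, endpoint=False)
vocal = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 4100 * t)

notched = filtfilt(b, a, vocal)   # zero-phase filtering avoids added phase shift
```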

Auxiliary Busses and EQ

Remember, a key pillar of avoiding over-processing is organizing plug-ins so the computer does not have to work as hard. A helpful strategy to limit number crunching is to send all vocal channels to a single bus for further EQ, compression, de-essing or effects processing. Often when mixing choruses containing stacked vocal recordings, I will send all channels to one stereo bus, where I tend to cut any midrange build-up as well as boost musical frequencies of the vocal. Applying these additional boosts and cuts to each individual channel would simply require too much digital signal processing. The bus is a great tool for keeping processing efficient and CPU-lightweight.

In particular, the EQs on my aux busses for vocals often apply a high-pass filter, one or two scoops to address offensive midrange build-up (usually between 200 and 550 Hz), as well as a high shelf for presence. Before scooping out any midrange, simply reducing the bus volume is worth testing! Applying a high shelf on the bus must also be done carefully, to avoid boosting harsh frequencies in the upper mids where the ugly whistle tones thrive. Furthermore, I am careful not to set the shelf gain too high. I also include an additional scoop in the upper mids around 2600 Hz, if necessary, to reduce harshness in the vocal.
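As a minimal sketch of one element of that bus chain, the high-pass filter (a generic SciPy illustration of my own; the scoops and shelf would be peaking and shelving filters in an actual plug-in):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
# 2nd-order Butterworth high-pass at 100 Hz to clear rumble below the vocal
sos = butter(2, 100.0, btype="highpass", fs=fs, output="sos")

vocal_bus = np.random.randn(fs) * 0.1   # stand-in for a summed vocal bus
filtered = sosfilt(sos, vocal_bus)
```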

In conclusion: less is more, still and always. I encourage applying any surgical EQ on individual channels when mixing vocals; vocals may then be sent to a bussed EQ to ease the number crunching on your machine. All in all, do not approach EQ nonchalantly or inattentively. Make sure the EQs are neat and orderly. Confirm your EQs and all plug-ins are improving the vocal signal without taking one step forward and two back. Using digital tools efficiently is key for a warm mix in the box. If you find yourself applying excess EQ and plug-ins, on the verge of over-processing, start over. There is likely a better way.

Chris Baylaender

Studio 11


Constructing a Good Mix: The Pyramid Concept

Step One: Seeing Sound

From early on in my musical career, I have visualized mixes as sonic paintings. Arguably, “seeing the sound” is as instantaneous as listening: right away, our imagination translates what is heard into some sort of visual representation. As a critical listener, I notice my brain perceives some instruments very literally. For example, when I analyze percussion within a mix, such as high hats, my visual imagination automatically responds by “painting” an actual high hat, or a snare – or tom. For other sounds such as vocals, what I visualize while listening can be very abstract, and sometimes impossible to describe beyond “energetic shapes of frequencies.” Ultimately, any critical listener’s imagined sonic painting will be different; however, as a mix engineer, getting lost within a sonic painting is not an option. There is a right way to build, deconstruct, and holistically analyze a sonic painting. In the act of mixing, the engineer, more accurately, is sculpting a mix rather than painting one. I believe the shape of this imaginary sculpture of sound is best described by a pyramid. In light of “seeing the sound” technically and professionally, sculpting the “sonic pyramid” is one of the best philosophies I have ever put into practice – for making mix decisions on individual instruments (the pyramid steps leading to the top), and the mix as a whole (the pyramid altogether).

The Pyramid Position and the Studio Monitors

Picture an equilateral triangle of sound in front of both the left and right studio monitors (and possibly a subwoofer underneath, if you have one). The left and right studio monitors are halfway between the top and bottom of the imaginary triangle, and below this triangle is your subwoofer. In turn, the triangle is widest toward its base, where the subwoofer is. Above the left and right monitors, the triangle finally comes to its peak. So now we have a triangle positioned with respect to the speakers – stay with me here!

Frequencies within the Pyramid: Where they Go and How Loud they Should Be

Audible frequencies range from 20 Hz to 20,000 Hz. Essentially, the golden rule of the sound pyramid is that low frequencies make up the bottom and are loudest, while high frequencies belong at the top and are quietest. Theoretically, the peak of the pyramid is 20,000 Hz and the base is 20 Hz. In turn, as the sonic pyramid ascends from bottom to top, frequencies become higher while volume must decrease. As a result, 500 Hz should be slightly louder than 1000 Hz in a mix, 1000 Hz should be louder than 4000 Hz, and so on. In another example, a hi-hat made up of high frequencies should not be louder than the snare drum, which is made up of midrange frequencies!

Above: The PAZ Analyzer from Waves applied to the master channel of a good mix reflects a downward frequency spectrum: volume gradually decreases as frequency increases.
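One way to check for that downward tilt numerically (a minimal sketch of my own, not the PAZ Analyzer): measure the energy in a few frequency bands of the master channel and confirm it falls as frequency rises.

```python
import numpy as np

def band_energy_db(x, fs, f_lo, f_hi):
    """Energy of x between f_lo and f_hi, in dB, via the FFT."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    band = spectrum[(freqs >= f_lo) & (freqs < f_hi)]
    return 10 * np.log10(np.sum(band) + 1e-12)

fs = 44100
t = np.linspace(0, 2.0, 2 * fs, endpoint=False)
master = np.random.randn(len(t)) * 0.1        # stand-in for a master channel

# In a well-built "pyramid" mix, these readings should step downward.
edges = [60, 120, 250, 500, 1000, 2000, 4000, 8000, 16000]
for lo, hi in zip(edges[:-1], edges[1:]):
    print(f"{lo:>5}-{hi:<5} Hz: {band_energy_db(master, fs, lo, hi):6.1f} dB")
```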


Sculpting the Pyramid:

I hear lots of poorly mixed music from the internet where, frankly, the sonic pyramid is nowhere near existent: beats have piercing hi-hats as loud as the bass drums, or the vocal is extremely loud and stepping all over the mix. In reality, once the pyramid is visualized, it becomes an easy mental strategy to use with tools such as EQ. The great thing about constructing the mix with the pyramid is the way relationships between instruments become conceptualized, since each frequency range occupies an exact position within the pyramid. With this in mind, you begin to EQ and compress soloed instruments, but still make decisions with the mix as a whole in mind. See the sound – and the precise geometry of each frequency’s pocket in the mix: the kick is louder and near the bottom of the sonic pyramid you see; the snare is less intense, near the middle, going up the pyramid. The same goes for snares and hi-hats: snares should be louder than hi-hats, which contain higher frequencies and therefore sit above them in the pyramid. If two sounds share a similar frequency range, or pocket in the pyramid, as snares and vocals sometimes do, adjust your faders so they are equally intense but never fighting for frequency content. Overall, for each instrument, consider its most musical frequency and pocket it into your pyramid. Adjust the instruments in each pocket with an equalizer, and compress instruments interfering with adjacent pockets higher up in the pyramid. For example, if your mix contains a bass guitar and a piano, your piano should not contain low frequencies interfering with the bass’s space in the pyramid. The piano belongs in the mids, and its low-end content may need to be removed with EQ, or controlled with compression.

All in all, next time you hear a mix from a great engineer, where all instruments are present, rich, and not fighting for space, observe the pyramid scheme at work. Once you understand the pyramid scheme, it should be impossible to see the sound of a mix any other way in front of studio monitors, or any speaker for that matter. As abstract as your sonic vision may be, never will you ever “see” a kick drum on top of a hi-hat.

Chris Baylaender

Studio 11
