Video for Joey Price's new mixtape "Barely Broke Intellect"

On April 27th, Studio 11 welcomed the richly talented, smooth flowin' Chicago Hip Hop artist Joey Price into our studio to record his brand new mixtape, "Barely Broke Intellect," dropping September 9th, 2014. "Barely Broke Intellect" was recorded and mixed in our B room by veteran engineer/producer Kris Anderson, the same room at Studio 11 where artists such as Kanye West, Lupe Fiasco, Rockie Fresh and King Louie started their careers.

Joey Price's smooth, laid-back vocal delivery was recorded with the AT4060 tube condenser microphone, known for capturing a rich, warm midrange tone that is not too heavy on the high end. The signal was then routed through the Manley VoxBox for signal processing and amplification into Pro Tools HD, the digital recording system in our B room.

Once all the vocal tracking had been finalized for each song on the mixtape, the vocals were mixed together with their instrumental accompaniment inside Pro Tools. During mixdown, Joey Price's lead and background vocals were treated with the Waves Renaissance Compressor and De-Esser plug-ins, along with the Sonnox GML EQ plug-in for equalization. For basic effects such as reverb and chorus, Joey's vocals were processed with Waves Renaissance Reverb, Waves MetaFlanger, Sonnox Reverb, Digidesign's Reverb One and D-Verb. Digidesign's Extra Long Delay and Waves H-Delay and SuperTap handled all the special delay effects. Distortion and phone-filter processing were done with Amp Farm, Digidesign's Lo-Fi and Digidesign's 7-band parametric EQ plug-in.

Check out Joey Price's new promotional video for "Barely Broke Intellect," where he discusses the life experiences, influences, music and words behind the mixtape.


Getting Bass to Translate

Getting bass to translate is one of the toughest things to accomplish as an engineer or producer. After many years of working with various genres (and making countless mistakes), I have finally compiled a list of tips that will help you get your bass to sit right in the mix and be heard on any playback system (including those wretched MacBook speakers). But before we delve deeper, we first need to understand a bit about the playback systems themselves, and how our human hearing affects the way we perceive "bass".

On a fundamental level, humans are able to hear sound because our ears, through a complex series of processes, pick up air molecule displacements (vibrations) and convert them into electrical impulses that our brain then interprets as "sound." On paper, humans are capable of hearing frequencies from 20 Hz to 20 kHz. That, however, is "perfect" hearing. Most of us do not have perfect hearing, and on top of that we begin to lose sensitivity to certain frequency ranges as we get older. We pick up vibrations through hair cells in the inner ear, and as we age, some of these cells begin to deteriorate. The first hair cells to go are typically those at the base of the cochlea, the part of our hearing responsible for detecting high frequency content. So depending on your age when you're reading this, you can have a very different hearing response than someone much younger or older than you. The reason your old man can't hear you isn't necessarily that he is losing all of his hearing, but most likely that he has lost some of those higher frequency hair cells (typically where the articulation of the human voice sits).

Because of the way we humans have evolved, we are most sensitive to mid frequencies around the human voice (2-5 kHz), and will hear these over other frequencies at the same SPL. This relationship is described by the Fletcher-Munson curves (the equal-loudness contours), and understanding it can help you greatly when mixing, and specifically when dealing with bass.

Fletcher-Munson Curve


The Fletcher-Munson curve describes how our frequency response changes with volume. As shown in the graph above, when 1 kHz is played at 60 dB, it takes about 80 dB for 50 Hz to be perceived at the same volume: a 20 dB difference. As the overall level increases, this relationship flattens out: 1 kHz at 110 dB will sound as loud as 50 Hz played only 10 dB louder. So what does this mean for you? If you listen to your mix at a loud volume, everything is going to sound equal and balanced: bass, mids and highs will all seem in their place. But it's a trick! Turn the level down, and all of a sudden the bass (and some highs) gets lost in the mix. You may not have chosen to turn the bass up at a high monitoring level because it sounded present, but played at a quiet or even reasonably loud level, it disappears. On top of the frequencies flattening out, music simply has more impact when it's loud, and that "greater" impact fools your ears into being satisfied. They aren't satisfied because things are clear in the mix; they are satisfied because the music is cranked and your body can "feel" the bass. You aren't going to think anything is really wrong with a mix if it's loud. So the solution? Monitor at low levels, and you won't trick your mind into thinking things are balanced when they aren't, especially when dealing with bass.
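To put rough numbers on this, here is a tiny Python sketch built from only the two data points cited above (illustrative approximations, not a full equal-loudness model):

```python
# Approximate equal-loudness gaps at 50 Hz, taken from the examples above.
# Key: 1 kHz level (dB SPL) -> extra dB needed at 50 Hz to sound as loud.
gap_at_50hz = {60: 20, 110: 10}

for level, extra in gap_at_50hz.items():
    print(f"1 kHz at {level} dB SPL sounds as loud as 50 Hz at {level + extra} dB SPL")
```

Notice how the gap shrinks as the monitoring level rises, which is exactly why a loud mix lies to you about its bass balance.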

Another reason to monitor at low levels is that the acoustics of your room won't interfere as much. If something is cranked in your not-so-perfect-sounding bedroom, that will reflect in your mix: low frequencies will be boosted, standing waves will cause strange phase issues, and your mix will come out all wrong. Listening at a low level minimizes these acoustic problems and gives you a more direct, unaffected sound.

If that isn't enough, yet another reason to monitor at low volumes is so your ears don't fatigue. If you spend enough time working on a mix at high volumes, you will certainly begin to lose sensitivity to certain frequency ranges. Mids will become washed out and hard to distinguish, highs will seem less harsh, and your decision making will be less accurate. We've all had at least one song where we thought we nailed the mix, then after checking it the next day thought, "Man, what the hell was I doing?" Well, that's ear fatigue, and it can seriously degrade the quality of your final mix (and especially your hearing). And after all, without your hearing, you'd be out of a job! If you want more information on what loud sounds can do to your hearing, from the perspective of a former engineer turned hearing specialist, check out this website: http://www.heartomorrow.org/

Monitoring at low volumes isn't at all a new concept; it has been a "secret" of mix engineers for decades. The "secret" is based on the idea that if it sounds good quiet, it'll sound good loud, but if it sounds good loud, it won't necessarily sound good quiet. After all, a good mix sounds good at all volumes, on all playback systems. So next time you listen to a professionally mixed and mastered song on a laptop or cheap playback system, notice how the bass is audible and clear. Even on your cheap 15 dollar portable whatever, you can still hear the bass, crystal clear. Why is that, you ask? Keep reading and you'll see why.

It's 2014, and we have entered an era where people are no longer listening to your mixes on vinyl through a good home stereo system. Now your audience is listening to your music as MP3s through their MacBook speakers, cheap iPod earbuds and iHomes. There is even more demand on getting your bass to translate because, with the exception of the earbuds, these are playback systems that generally struggle to reproduce fundamental bass frequencies. Here are two frequency response graphs that illustrate this lack of low end. The top graph is the frequency response of a MacBook's speakers, and the bottom is a Sony laptop's.

MacBook Pro Speaker Response


Sony Laptop Frequency Response

Looking at these graphs, one can see right away that there is a serious roll-off of low end from around 200-300 Hz down. As you may know, this is where most "bass" or low end sits in a mix. So why can you still hear the bass on professionally made albums through your laptop speakers? Because the artist and engineer learned to compensate for this issue. Similar to what I said before about a mix translating when monitored quietly versus loudly: if your bass sounds good through crappy speakers, it will sound good through great speakers. This is why you will see many professional studios, and even some home studios, running their mixes through "unflattering" speakers. Speakers like the Yamaha NS-10s are a staple in the recording industry not because they sound good, but precisely because they sound "bad."

There are two major steps in getting a good bass sound that will translate. And it all starts with the artist. (And if you're the engineer, don't worry, you still have options.)


For the Artist:

Proper Bass Arrangement

Many good artists are aware of this problem and consciously try to avoid it during the writing process. Something amateurs often do when making music, whether it's hip hop, house, rock or what have you, is to choose a bass that is bone-rattlingly low because it sounds "epic." While it may sound "epic" in your Beats by Dre headphones, it's not going to sound good anywhere else. Trust me. A good way to avoid a lost bass is to play either an octave higher than you normally would or, if that's too high, a different arrangement somewhere in between. As we discussed earlier, our ears are less sensitive to lower frequencies and tend to gravitate towards ones higher in the spectrum, so a bass with more upper mid/high end content is going to cut through more easily. But that's not bass anymore, you say? Actually, even a bass line played in an upper register will still contain a lot of low end content, and it will also point your ear to the fundamental bass frequency (the "missing fundamental" effect, a nice little trick you can thank your ears for).

Sub Bass

As a general rule, for most genres I'd say stay away from writing in a sub bass line if you're worried about bass translation. This is not to say sub bass can't be used, but it all depends on where you use it. If you have a bass with a lot of low end content already, adding a sub is only going to make things worse. If you have a bass that hardly contains any low mid "meat," you can put a sub under there. And if you do, filter out the highs and low mids so that, when combined with the actual bass, it won't sound muddy. Sub can easily destroy a mix if it isn't sitting in the right place, so take the time to make sure it is.
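As a rough illustration of that filtering move, here's a minimal Python sketch (assuming numpy/scipy, a mono sub track as a float array, and an arbitrary 80 Hz cutoff; the names and values are illustrative, not a prescription):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def isolate_sub(sub_track, sr, cutoff_hz=80.0):
    """Low-pass the sub layer so it only adds content below ~cutoff_hz,
    leaving the low mids and highs to the main bass."""
    sos = butter(4, cutoff_hz, btype="lowpass", fs=sr, output="sos")
    return sosfilt(sos, sub_track)

# Example: tuck the filtered sub in under the main bass at half level.
# combined = bass + 0.5 * isolate_sub(sub, sr=44100)
```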

For the Engineer:

If you're an engineer and you get a bass line that is too low to be heard on any small playback system and can't be changed, don't panic. There are a few things you can do.

1) Apply MaxxBass

The engineers over at Waves recognized the issue of bass translation and made a plug-in specifically designed to help bass translate, called MaxxBass. Without getting too technical, MaxxBass basically duplicates the signal, adds new upper harmonics to it, then mixes it back in with your original bass. It's essentially adding more easily heard frequencies to your bass to make it more audible to the listener.
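Waves doesn't publish the plug-in's internals, but a minimal sketch of the general idea just described (duplicate the lows, generate upper harmonics, blend back in) might look like this in Python, with numpy/scipy and entirely illustrative parameter values:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def psycho_bass(bass, sr, split_hz=120.0, amount=0.4):
    """Crude psychoacoustic-bass sketch: isolate the lows, waveshape them
    to create upper harmonics, then mix the result under the original."""
    lp = butter(4, split_hz, btype="lowpass", fs=sr, output="sos")
    lows = sosfilt(lp, bass)
    harmonics = np.tanh(4.0 * lows)  # nonlinearity generates new harmonics
    bp = butter(2, [split_hz, 2000.0], btype="bandpass", fs=sr, output="sos")
    audible = sosfilt(bp, harmonics)  # keep the band small speakers can reproduce
    return bass + amount * audible
```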

2) Add Harmonic Distortion

Similar to what MaxxBass is doing, a great way to get your bass heard is to add frequencies to it that are more easily heard. With a bit crusher or sample rate reducer you can add upper harmonics that give your bass a lift in the mids and highs and make it stand out in the mix, all without actually boosting its volume. Or try running your bass through a saturator or a subtle distortion effect. Used in parallel, these can work wonders on a bass.
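Here's a hedged sketch of the bit-crush variant in parallel (numpy; the bit depth and mix amount are arbitrary starting points, not magic numbers):

```python
import numpy as np

def bitcrush_parallel(bass, bits=8, mix=0.25):
    """Blend a bit-crushed copy under the clean bass. The quantization
    distortion adds mid/high harmonics that help the line cut through
    without raising the track's overall level."""
    step = 2.0 ** (bits - 1)
    crushed = np.round(bass * step) / step  # crude bit-depth reduction
    return (1.0 - mix) * bass + mix * crushed
```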

3) Run Your Bass Through a Transformer

When low frequencies pass through a transformer, especially an old one, the audio signal behaves more like slow-moving "DC." Transformers don't pass DC, so as the signal moves through them, various things begin to happen: saturation, new harmonics and interesting phase changes get added to the signal. The end result is a little more edge in the midrange, and the frequencies that were too low to be audible get shaped in a way that makes them sound colored and, surprisingly, louder! The added saturation and color of the transformer shift your bass a little higher and let your ears fill in the fundamental frequency. (Studio 11 offers individual track processing, so if you want to run your bass through one of our many units with transformers, it'll only cost you about 10 bucks. Hint, hint ;)

4) Compression

Compressing bass can increase its overall subjective volume and will help keep a more constant level throughout your track. Compression also naturally brings out the subtleties, such as the sound of the pick and slaps on the strings, that are more audible to our ears. Increasing these increases the overall presence of the bass. Compress away!
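For illustration, here is a bare-bones feed-forward compressor sketch in Python (numpy; the threshold, ratio and time constants are just example values, not a recommendation for every bass):

```python
import numpy as np

def compress(x, sr, thresh_db=-20.0, ratio=4.0, attack_ms=10.0, release_ms=100.0):
    """Very simple envelope-follower compressor: level above the threshold
    is reduced by the ratio, evening out the bass performance."""
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(np.abs(x)):
        a = a_att if s > env else a_rel
        env = a * env + (1.0 - a) * s                  # smooth level estimate
        over_db = max(20.0 * np.log10(max(env, 1e-9)) - thresh_db, 0.0)
        gain = 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
        out[i] = x[i] * gain
    return out
```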

5) Move Things Out of the Way 

Sometimes the kick drum has too much low end content and masks the bass. By sidechaining the bass to the kick, you can greatly increase the intelligibility between the two instruments, and as a result your bass will sound more present. Also, if the kick is in the way, roll off some of its low end content. When combined with the bass, the low end of the bass will fill in what the kick is missing, and your ears will assume it's coming from the kick.
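As a sketch, sidechain ducking can be as simple as following the kick's envelope and dipping the bass with it (numpy again; the depth and release values are illustrative):

```python
import numpy as np

def duck_bass(bass, kick, sr, depth=0.5, release_ms=80.0):
    """Dip the bass level whenever the kick hits, so the two instruments
    stop fighting for the same low-end space."""
    a = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    gain = np.empty_like(bass)
    for i, k in enumerate(np.abs(kick)):
        env = max(k, a * env)               # instant attack, smooth release
        gain[i] = 1.0 - depth * min(env, 1.0)
    return bass * gain
```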

6) EQ

Try simply boosting upper frequencies and taking out some mud to improve intelligibility. It doesn't hurt to roll off some sub on the bass either. Rolling off frequencies we can't hear well anyway (like 30 Hz) will only make the bass cleaner, and boosting upper frequencies, around 1 kHz for example, will bring out the presence of the bass.
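A minimal sketch of both moves, using a scipy high-pass plus an RBJ-cookbook peaking boost (the 30 Hz and 1 kHz values are just the examples above; the 3 dB gain is an assumption):

```python
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

def peaking(x, sr, f0, gain_db, q=1.0):
    """RBJ cookbook peaking biquad: boost (or cut) a bell around f0."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

def clean_and_lift(bass, sr):
    """Roll off inaudible sub below ~30 Hz, then add presence near 1 kHz."""
    hp = butter(2, 30.0, btype="highpass", fs=sr, output="sos")
    return peaking(sosfilt(hp, bass), sr, f0=1000.0, gain_db=3.0)
```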

Try listening to your finished product on various playback systems to get an average sense of how the bass sounds. Check it on the laptop: if you can hear your bass clearly on a MacBook, for instance, in my opinion you've nailed it. One of my favorite tests is to hear how it sounds on my iPod earbuds. Because I listen to a lot of music through them while I'm out and about, I have a good reference for how my bass stands next to other recordings. Or if you have a car, that's always a good test too. The point is to try it all over, so that you can make adjustments as necessary. Follow these tips and you'll be well on your way to crafting bass that translates!


Dan Zorn

For recording in Chicago, hit up Dan at Studio 11!


MASTERING -6dB | TOO LOUD? OR JUST RIGHT…

In the thousands of masters we've printed here at Studio 11 in Chicago, the first thing I can assure you of is that there is no silver bullet as far as RMS level is concerned when it comes to mastering. I am assuming that you, the reader, are in search of mastering at -6 dB RMS as your primary topic, that being the average level of an equalized, compressed and limited master recording. It may be, however, that you are inquiring about the PEAK level of your mix print prior to mastering. If that is the case, you're spot on in that thinking: you should leave about 6 dB of headroom on your mix print to allow the mastering engineer some room to work with. So in effect, print your mix to a peak level of -6 dB. If you are searching for the meaning of -6 dB RMS, read on…

In mastering there are three fundamental processes. These processes can be augmented by other processes, and ordered differently, but they are essentially equalization, compression and limiting. In the quest for a record that sounds great, has dynamics and is loud, all sorts of combinations of these mastering tools might be used. In the end we arrive at a mastered product with a peak level near the digital ceiling and an RMS level that is often between -10 dB and -6 dB, depending on the music. RMS stands for root mean square, and in a nutshell it is a method of calculating the average level of the program material.
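For the curious, here's roughly how an RMS figure in dB relative to full scale is computed from a block of samples (a Python/numpy sketch; real meters add windowing and weighting on top of this):

```python
import numpy as np

def rms_dbfs(samples):
    """Root mean square of a float audio block (full scale = 1.0),
    expressed in dB relative to full scale."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(max(rms, 1e-12))

# Sanity check: a full-scale sine wave reads about -3 dBFS RMS.
sr = 44100
t = np.arange(sr) / sr
print(round(rms_dbfs(np.sin(2 * np.pi * 1000.0 * t)), 1))  # ~ -3.0
```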

Personally, I have found most records to sit comfortably mastered at the -8 dB RMS level. If all processes are well executed, this provides a wonderful balance between loudness and dynamics. The significance of the -6 dB RMS level is that it is, in my opinion, as loud as one can smash a recording without it going to complete crap. Using clever mastering techniques, this level can still be made to sound dynamic. Most modern urban styles, which we specialize in mastering at Studio 11, are pressing in the -6 dB RMS range, as dance music and hip hop especially demand this sort of loudness. The instruments used, such as electronic drums and keyboards, are not very dynamic to begin with and do not suffer the loss of fidelity that live instruments do when pressed this hard. So there you have it: the only time I recommend -6 dB RMS as a mastering level is for urban electronic music styles. Frankly, it is too much for more organic musical textures.

Please feel free to contact me, Alex Gross, via the studio here for a free consultation and rates regarding your mastering project. Studio 11 is the highest quality and most affordable solution for online mastering.

Bouncing down your music project stems for use in another studio or DAW

Here at Studio 11, we usually get a few multi-track sessions every week from clients that need to be mixed out. That includes all the individual tracks from the beat and music, as well as each individual vocal track (verses, hooks, bridges, adlibs). One thing we've noticed is that clients rarely bring these projects into our studio correctly the first time around. The clients that do get it right are awarded free cookies. Yay! In today's blog, I am going to go over the best methods of rendering your files on the top DAW systems, as well as everything to look out for in your rendering, from sound quality to reliable multi-track transfers. So if you want that free cookie, read up.

The DAW systems I will be discussing today are Pro Tools, Logic, Ableton Live, FL Studio and, lastly, Digital Performer. The one thing to keep in mind, if you are using a different DAW, is that the concepts I will be discussing are pretty consistent across all DAWs. But honestly, change to one of the systems I am discussing. Why, you ask? So you can apply what I am talking about in this blog, that's why, you goober.

The first concept I am going to cover is the difference between offline, online and internal bouncing, and why I believe internal bouncing is the best way to go. I want to discuss this first because it affects the sound quality of your rendered files, and because internal bouncing is a method that can be used in any of the DAWs discussed to ensure multi-track transfer reliability. Lastly, because I am the guy writing the blog, not you.

Offline Bouncing

This term applies to rendering audio files in non-real time, meaning it does not take the full duration of playback to convert your file or files for use outside of your DAW. It is considered the quickest option for rendering a multi-track session, but depending on the DAW you use, certain features in your session (plug-ins, automation, MIDI) may not render well or at all. This is why it is always best to listen back to your offline bounced mix to be sure that it rendered correctly.

Online Bouncing

Also known as the real-time bounce, this term applies when it takes the full duration of playback to convert your file or files for use outside of your DAW. Online bouncing is good because it gives you the opportunity to listen to your final mix as it bounces down. However, if you are bouncing a very processor-intensive session or a mix with heavy automation, glitches may appear on certain tracks, forcing you to rebounce the session, freeze a track or two, or bounce down the individual tracks causing the glitch.

Internal Bouncing

I apply this term to bussing your individual or grouped tracks to new tracks, recording them in real time, and then exporting the recorded files for use outside your DAW. The reason I prefer this method: when you select bounce to disk, the DAW's mix engine architecture actually takes your mix out of the DAW's audio engine and into a SEPARATE mix engine. On the way there, it first truncates your data, then begins tossing out bits of information. If you're low on voices, the process is even more detrimental, as the operation needs more power and throws out more. The result is something lacking high end definition, with a logjam of midrange and a cluttered low end. With an internal layback, you stay in the DAW's audio engine and avoid all of that; essentially, what your tracks are is what you print. Another reason I like this option best is that it lets you listen to what you are bouncing down. And lastly, if a glitch occurs while you are recording your mix or tracks, you can punch in at the spot of the glitch and keep recording instead of having to go back to the start of playback and bounce again. The only time punching in might not work out is if you are using some form of modulation (tremolo, vibrato, flange, chorus, phaser) on your master mix or individual tracks: at the point of punch-in, the mix or track could make a sudden sonic jump, leaving it less smooth and organic sounding. If your modulation is tempo-synced to your DAW, then you should be fine.

Another important aspect I want to briefly touch on when bouncing your files down is your session's bit depth and sample rate. When bouncing your tracks down, always make sure you bounce your files at the same bit depth and sample rate as your session. If your session is at 24 bit / 44.1 kHz, make sure you bounce your tracks down as 24 bit / 44.1 kHz WAV or AIF files. If it is at 16 bit / 44.1 kHz, then bounce your tracks down as 16 bit / 44.1 kHz WAV or AIF files. If your session is at 24 bit / 48 kHz, bounce down your files at 24 bit / 48 kHz. One thing though: unless you are working on sound or music for picture, never create a session at 48 kHz. At the end of the day your file must be converted to 44.1 kHz for use on most listening devices and software, and the conversion from 48 kHz to 44.1 kHz will decrease the overall fidelity of your mix. That is a known fact, Jack.

The last thing I want to discuss before we get into specific instructions for bouncing down your project's tracks is the concept of group comping. Group comping is when you combine two or more tracks into a single mono or stereo track. It can help decrease the overall track count in your mix, and it also allows a combination of tracks to be processed and treated together in exactly the same way. Why put the same EQ and compression across six tracks when you can sum those six tracks and put EQ and compression on just one? It is far more efficient, and gives you more control over your mix at the end of the day.

The most common occurrence of group comping is when a project has a large vocal track count: several lead tracks, many background vocal tracks, several adlib tracks and anything else vocal related. The engineer may choose to comp the many background tracks down to two or three mono/stereo tracks. Usually backgrounds are comped together according to the part they are performing. For instance, chorus backgrounds might be grouped together on one stereo track, verse backgrounds comped to another, and bridge backgrounds to a third. If there are multiple melodies or harmonies within the backgrounds, then each harmony group is bounced separately to its own stereo audio track.

Instructions on how to perform Internal Bouncing in your DAW

Pro Tools

1. In your Pro Tools session, create as many stereo audio tracks as you need stems. If some of your stems-to-be are mono, be sure to create mono audio tracks for them.
2. Assign each newly created stem track's input to a different available bus.
3. Send whichever audio/instrument tracks are required for each stem to your newly created stem tracks. You do this by assigning the outputs of each audio/instrument track to the same busses feeding the inputs of your newly created stem tracks.
4. Arm all stem tracks and record all the stems in one pass.
5. Select the newly recorded stem tracks in the region view and choose 'export selected files as audio.'
6. Be sure to export your files at the same sample rate and bit depth as your session.
7. Lastly, create a new folder and label it with the name of your project, then 'stems,' followed by the BPM of the song, e.g. 'project_stems_bpm'. Place all exported files into the folder.

Digital Performer

1. The export process in Digital Performer is pretty much the same as in Pro Tools.
2. Create as many stereo audio tracks as you need stems, and assign each one a different bus as its input. If some of your session's tracks are mono, be sure to create mono audio tracks for those stems.
3. Send whichever audio/instrument tracks are required for each stem to the appropriate stem bus using aux sends.
4. Arm all stem tracks and record all the stems in one pass.
5. Select each newly recorded stem track in the arrangement, control-click on each file and choose 'export selected audio file.' Make sure each file is exported at the same sample rate and bit depth as your session.
6. Lastly, create a new folder, label it 'project_stems_bpm' as above, and place all exported files into it.

Logic

1. Logic is probably one of the easier DAWs to create stems in for export to another system.
2. Disengage any inserts on the tracks you are preparing to make stems of. If you wish for the inserts/FX to remain printed on a particular track, you don't have to disengage them.
3. Go to the File menu and open the export-to-audio option.
4. In the export window, select all the tracks you would like to make stems of.
5. Be sure to set the same sample rate and bit depth for your exported stems as your project's session. Click OK.
6. A window will pop up asking you to choose the destination of your exported files. Create a new folder, label it 'project_stems_bpm' as above, click save, and Logic will begin exporting all your tracks to the folder.

FL Studio

1. Make sure every signal is routed to a mixer channel.
2. Switch off every insert effect on the mixer unless you want the stem file printed with the effect. Sometimes it's a good idea to make stems both without FX inserts and, separately, with FX inserts on.
3. Right-click on every fader and press reset. If you want to keep your levels, skip that step, but definitely make sure every channel has some headroom and isn't clipping. Repeat for all the pan settings.
4. Make sure every mixer channel is labeled correctly.
5. Be aware that Fruity can be "buggy" with some virtual synths. Trilogy, for example, plays a bit late, which gets worse as you raise the latency, so I always render at as low a buffer setting as possible. (FL7)
6. If you don't want to render your send effects, turn them off to save some CPU power and make the rendering faster.
7. Go to File > Export > Wave File.
8. A menu pops up where you can select where you want to save your file. Since we want to split the tracks, just select the right folder and enter the song name.
9. Press Enter, and a menu pops up.
10. Set looping mode to "leave remainder." This way you make sure that nothing gets cut off, for example the long release of a note.
11. Quality: select 512-point sinc. NEVER select dithering. If you use the TS404 you can select "Alias free for TS404." Select "HQ for all plug-ins" and "Disable Max Poly."
12. Output: select WAV, not MP3. Only select MP3 if you really want to piss off your mixing engineer.
13. WAV: select "24bit float (0.24)" here for optimal sound quality.
14. Options: select "Split Mixer Tracks."
15. Press Start. Every channel will now be rendered to a separate WAV named "Song title_channel name."
16. Create a new folder and label it 'project_stems_bpm' as above.

Ableton Live

1. Highlight your song in the arrangement window from beginning to end. Be sure to highlight an extra 5-8 seconds after your song is done, in case any of your audio tracks have a long sustain.
2. If there are any insert/FX settings on your tracks, bypass them unless you want those particular tracks rendered down with the insert/FX settings.
3. Click on the File menu at the top left corner of the arrangement window and choose the 'Render to Disk' option.
4. Inside the 'Render to Disk' dialog, click on the Rendered Track tab. This defines which tracks will be exported out of Ableton. Choose 'All Tracks,' which will export all of your tracks.
5. Look for the audio file tab, click on it, and change it to WAV. Set the bit depth of the exported files to the bit depth of your session, or of the sample clips you are using. For example, if you are using all 16 bit samples in Ableton, render your audio files at 16 bit; if you are using 24 bit samples, render at 24 bit.
6. Once you are finished adjusting your audio file settings, click 'OK' at the bottom right corner of the dialog. This begins exporting your individual tracks out of your Ableton project for use in any DAW. A save-to-disk window will pop up; create a new folder and label it 'project_stems_bpm' as above.
7. Lastly, keep in mind that if any of the tracks in your project are mono, you should render them separately as mono tracks by choosing 'mono' in the audio files tab of the 'Render to Disk' dialog.

So that about wraps up this discussion on how to prepare your sessions for use or mixing in another DAW. A few quick things to remember: always label your files, and always make sure all your exported files start at the beginning of the song, even if there is dead space. That way, everything will line up perfectly when brought back into a new DAW system. And lastly, if any rendered/exported/bounced tracks aren't being used, delete them from the stems folder. This will save whoever works with the stem files time, and will make for a smaller stem folder when sending to another person, copying to a drive or burning to disc.

If you need assistance mixing your project, call us at 312-372-4460 or email studio11chicago@gmail.com. We've mixed thousands of projects for people just like you!
