3D Thriller: Mixing Michael Jackson with 8X GRAMMY-nominated Engineer Martin Nessi

Producer and mixing engineer Martin Nessi behind the board

When Michael Jackson’s fourteen-minute music video for Thriller debuted at the Avco Theatre in Los Angeles in 1983, it sold out for three weeks straight. Thirty-four years later, it premiered again at the 2017 Venice Film Festival, this time in 3D with the audio mixed in Dolby Atmos surround sound.

The audio engineer on the job? Eight-time GRAMMY nominee Martin Nessi, who has worked with artists like Andrea Bocelli, Celine Dion, Josh Groban, and Ariana Grande. We spoke with Nessi about mixing audio in 3D and using RX to restore the original Thriller audio.

"There were so many speakers when mixing Thriller in Dolby Atmos, and many audio elements stood on their own. That means if there was a bad edit or non-musical artifact, someone was going to hear it. I had to make sure every track was 100 percent perfect, because everything was more exposed. That was the biggest challenge from the forensic point of view, and it would have been impossible to do without RX." —Martin Nessi, 8X GRAMMY-nominated audio engineer


How did you get the opportunity to engineer and mix Thriller in 3D?

Michael Jackson’s estate got in touch with a senior executive from Sony and asked who would be the right person to remix Thriller for a modern delivery: 3D for the visuals, Dolby Atmos for the audio. The senior executive recommended producer and engineer Humberto Gatica.

Back in the day, he was one of the engineers on Thriller, along with Bruce Swedien. The executive recommended him, and he called me to work with him.

What’s an easy way of explaining what mixing in Atmos is and how it differs from other types of surround sound?

In surround sound you have 5.1, which is center, left, and right, plus two surround speakers and a sub. Then you have 7.1, which is left, center, right, left surround, right surround, left back surround, and right back surround, plus the sub.

Then there’s Dolby Atmos. Dolby Atmos is, basically, atmospheric audio: 3D sound. The goal is to make you feel like you’re inside the sound.

The main audio comes from a 7.1 array, the one I just explained. Then there are audio objects, which come from additional speakers placed on the ceiling and walls. Basically, you need a minimum of four of those speakers to be considered Dolby Atmos, but if you go to the Dolby Cinema in Hollywood, I think they have around 120 speakers or more.

For those objects, there's a Dolby Atmos plugin in Pro Tools that's like a 3D panner. You can move the panner to place the objects wherever you want in the 3D space.
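To make the bed-versus-objects idea concrete, here is a minimal sketch in Python (not any official Dolby or Pro Tools API; the channel abbreviations and function names are illustrative). It represents the 5.1 and 7.1 beds Nessi describes and the clamped 3D position an object panner would produce:

```python
# Illustrative sketch only: channel beds for standard surround formats.
# The ".1" in each name is the sub/LFE channel.
BED_LAYOUTS = {
    "5.1": ["L", "C", "R", "Ls", "Rs", "LFE"],
    "7.1": ["L", "C", "R", "Lss", "Rss", "Lsr", "Rsr", "LFE"],
}

def pan_object(x, y, z):
    """Clamp a 3D panner position into a unit cube.

    x: left (-1) to right (+1), y: back (-1) to front (+1),
    z: floor (0) to ceiling (1). Height (z) is what Atmos objects add
    on top of the 7.1 bed via the ceiling/wall speakers.
    """
    clamp = lambda v, lo, hi: max(lo, min(hi, v))
    return (clamp(x, -1.0, 1.0), clamp(y, -1.0, 1.0), clamp(z, 0.0, 1.0))
```

In this model, the renderer (rather than the mixer) decides which physical speakers reproduce an object at a given position, which is why the same mix can scale from four overhead speakers to a 120-speaker cinema.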

Mixing Thriller in Atmos, what were you able to convey to the listener that you might not have been able to in other listening environments like 5.1 or 7.1?

There are a lot of elements that you didn't really “hear” in the stereo version of the song. You were actually feeling them rather than hearing them 100 percent clearly. Now in 7.1, you can hear every component completely separated in the soundstage.

Then you have to add all the other stuff that's happening in the video, like the sound effects and zombies. They also did a lot of work on the video itself. It's Thriller on steroids.

What were some of the biggest audio restoration challenges on this project?

The people from Sony sent us what they thought was the digital multitrack of Thriller from many years ago. They had grabbed all the tapes (master and slaves) and printed them into Pro Tools. Most of the sessions from artists of that era have already been transferred to Pro Tools sessions for archival purposes.

They sent us a session that they thought was the master session used for mixing Thriller. When I opened it, I noticed elements that didn't make sense to be summed together: all the drums on one stereo track, for example. I knew it wasn’t the multitrack master used for the final mix.

To be sure, I checked with Humberto, who worked on the original project. I showed him the Pro Tools session, and he confirmed that it wasn’t the multitrack master used for the mix.

I suddenly remembered that many years ago I had read a magazine interview where Mick Guzauski said he had mixed the Thriller album in 5.1, back when Michael was still alive. I suggested getting in touch with him to see if he knew where to find it. The next day we got it from Sony.

It’s important to tell you that the multitrack was for the song, not the video. [The video is 14 minutes long; the song is five and a half minutes.] I knew I had to create a multitrack of the video, because if we mixed the song as-is, then when we were done I would have to start editing all the stems to fit the length of the video. It had to sync with the visual. And since you're automating to a visual, you have to be able to automate while you're looking at the picture, so I decided to create a multitrack of the video using the multitrack of the song.

I started putting together the multitrack, but if you remember, the video is not verse, verse, chorus, verse. It's actually three verses, then an interlude, and then the chorus repeated a couple of times.

What I found when I started putting that together is that the speed of the multitrack was different from the speed of the song in the video. That could have been for one of two reasons. Reason number one is that the song was sped up during mastering. Back then, when a producer was done with a production, he might say, "Wow, this could be a little bit faster." Instead of redoing everything, they would finish the production and speed up the final mix in mastering, so nobody would know, but the multitrack stayed at a different, usually slower, speed.

The other option was that it was sped up just for the video. To me, that didn't make sense, because if people saw the video and then heard the song on the radio, they would say, "What? It's not the same beat." I think they sped it up in mastering. When I saw that, I thought, "Wow, this is going to be interesting," because I had to sync the multitrack to a video with a faster version of Thriller.

I had to basically tempo-map the multitrack beat by beat, from beginning to end, and then I tempo-mapped the video. After I tempo-mapped the whole video, I started creating the multitrack of the video.

The moment I started moving the material onto the tempo map of the video, whatever speed was off would adjust automatically to the tempo map. That was the first forensic difficulty.
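The arithmetic behind that remapping step can be sketched in a few lines of Python. This is an illustration of the general technique, not Nessi's actual session data: the BPM values are invented, and the functions assume a constant tempo within each grid, whereas a real tempo map is marked beat by beat.

```python
# Illustrative sketch of syncing a multitrack to a sped-up video master.
# The tempos here are made up; Thriller's actual tempos are not stated
# in the interview.

def beat_times(bpm, num_beats, start=0.0):
    """Timestamps (in seconds) of each beat at a constant tempo."""
    seconds_per_beat = 60.0 / bpm
    return [start + i * seconds_per_beat for i in range(num_beats)]

def remap_to_grid(event_time, source_grid, target_grid):
    """Move an event from the multitrack's beat grid to the video's.

    Finds the event's position in beats on the source grid, then places
    it at the same beat position on the target grid.
    """
    spb_src = source_grid[1] - source_grid[0]   # seconds per beat, source
    beats = (event_time - source_grid[0]) / spb_src
    spb_tgt = target_grid[1] - target_grid[0]   # seconds per beat, target
    return target_grid[0] + beats * spb_tgt

multitrack = beat_times(bpm=118.0, num_beats=16)  # slower original speed
video = beat_times(bpm=120.0, num_beats=16)       # sped-up master
# An event on beat 8 of the multitrack lands on beat 8 of the video grid.
print(remap_to_grid(multitrack[8], multitrack, video))  # → 4.0
```

A DAW's tempo map does the same thing continuously: once audio is conformed to the map, any speed difference between the two versions is absorbed automatically, which is why the material "adjusted automatically" when moved onto the video's grid.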

When you started looking at the tracks, there was hiss from the tape. Edits were not as precise in some spots, because back then you would have to punch in, or literally cut the tape. There was no “undo” function in those days.

You could sometimes notice that a part had been cut on the tape, so I started cleaning up all of that. There were also a lot of tracks with headphone bleed, and that's when iZotope RX came into the picture.

I started using it extensively to fix almost everything that was a problem, because I knew that down the line, when we got to Dolby Atmos with so many speakers, there would be many opportunities for noises and other non-musical artifacts to be heard. I had to make sure that every track was 100 percent perfect, because when you make the soundstage bigger and every track is more exposed, every element in those tracks is going to be exposed too, like looking at it through a magnifying glass, and that includes noises and non-musical artifacts. That was the biggest challenge from the forensic point of view. Fortunately, RX made it possible to deal with all of it.

You mentioned hiss and tape dropouts. Were there modules from RX that came in particularly handy when dealing with these issues?

I used most of the modules, to be honest: Spectral Repair a lot, as well as De-noise and De-click. It depends on what needs to be addressed, but I usually found a solution with RX.

Anything else you want to mention about repairing audio or mixing on Thriller 3D?

We first mixed in stereo because we were not going to show up to a Dolby Atmos studio to get the sounds of everything (EQ, compression, reverbs, delays, etc.). We did the Atmos mix over at Dennis Sands' place. He's a score mixer who works with Danny Elfman, has been nominated for Oscars, and owns a private Dolby Atmos studio in Santa Barbara.

For the stereo mix, we used an analog console, analog gear, and a bunch of different plugins. One of the vocal reverbs was a modified preset in Nectar. I tried using reproductions of the same reverbs Bruce Swedien might have used, but I couldn’t get them to sound the same. So I started digging into Nectar, which I've known since the beginning, because I tried it while it was being developed. I started messing with a preset, and suddenly it clicked that it was the right sound for Michael and Thriller.

When we finished mixing in stereo, I printed stems of every component. For some tracks, I printed more than one stem. The lead vocal, for example, was not just one stem: there was the dry lead vocal plus separate stems for the reverbs, maybe split into two or more stems just to have more control. The delays were a stem of their own. When we went to the Dolby Atmos stage with Dennis, we had control over where to place each element of the mix and could also change the levels of the effects, which is important. When you mix in stereo, the reverbs and delays get hidden behind all the other elements in the mix, but when you put it up in Dolby Atmos and all of a sudden your soundstage is “ten times” bigger, everything is more open and exposed, including the effects.
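One way to picture the stem layout Nessi describes is as a mapping from mix components to printed stems, with effects kept separate so they can be rebalanced on the Atmos stage. The stem names below are hypothetical, not taken from the actual session:

```python
# Hypothetical stem plan in the spirit described above: dry signals and
# their effects are printed separately so each can be placed and
# re-leveled independently in the Atmos mix.
STEMS = {
    "lead_vocal": ["lead_vocal_dry", "lead_vocal_reverb_a",
                   "lead_vocal_reverb_b", "lead_vocal_delay"],
    "backing_vocals": ["backing_vocals_dry", "backing_vocals_reverb"],
    "drums": ["drums"],
    "bass": ["bass"],
    "synths": ["synths"],
}

def total_stems(stems):
    """Count how many printed stems travel to the Atmos stage."""
    return sum(len(printed) for printed in stems.values())
```

The design trade-off is storage and track count versus flexibility: every extra stem is another file to manage, but it preserves a decision (reverb level, delay placement) that would otherwise be baked into the stereo print.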

The mix didn’t change that much from stereo to Dolby Atmos, but you need that flexibility; otherwise, you end up with a mix that is not a true representation of your original vision. It also lets you take your vision even further, without worrying about getting “sounds” and basic balances. You’re just being creative with the placement of the different musical components of the song and, when necessary, adding some more high end or low end and automating in the 3D field, which includes the audio objects I mentioned earlier.

After the Dolby Atmos mix was done at Dennis’ place, we went to the Universal Pictures stage and worked with the team there, Jon Taylor in particular. I spent a couple of days on the stage making sure the Atmos mix from Dennis’ place transferred and translated correctly. Then there was the process of making sure the balances were not lost or blurred by all the other sonic elements of the video, because there is a lot of other stuff going on. The stage at Universal is even bigger, and everything is even more exposed. That meant we had to reach for RX a couple more times during the process.

That’s how we did it, and iZotope RX and Nectar were vital to achieving the final result.

Try RX 6 by downloading a free 30-day demo.