
MIDI Talk Episode 08: A Visit to Chase Bethea’s Interactive World

 

Video game composer Chase Bethea has a simple approach that guides him through the myriad complexities of his job: “I’m always thinking about the player first,” he offers. That is a perspective he comes by honestly. “I am a player first; I’ve been playing games since I was six. Most people start (learning) their first (musical) instrument at that age, but that was my instrument.”

 

Born in Chicago, Bethea received his higher education in Southern California, earning an Associate Degree in Audio Engineering from The Los Angeles Recording School, another AA in Music Theory and Composition from Moorpark College, and, finally, a BM in Media Composition from California State University, Northridge.

After finishing LA Recording School in 2007, Bethea was mixing popular bands around Los Angeles and working at Music Plus TV (which became Vlaze Media) when he came to the realization that composing music for games was a viable career option. “I started writing music in 2001, so through high school I was making tracks and experimenting with everything I could possibly think of, and people would tell me, ‘(Your music) sounds like it should be in a video game.’ I didn’t understand what that was and how to tap into it, until the IT person at Music Plus TV said, ‘Hey, this sounds like it should be in Castle Crashers,’ which was a very popular game. So I thought ‘You know what, I’ve been told this for seven years. I think I’ll look into this more.’”

Since that time, Bethea has shipped music in more than 20 games, including I Can’t Escape: Darkness, Super Happy Fun Block, Aground, Cubic Climber, and Potions Please. His Cubic Climber score earned a “Noteworthy” on Destructoid.com, and in 2016, Bethea was nominated for an Outstanding Artist–Independent Composer award from VGMO (Video Game Music Online). He also worked on pre-production for Virtual Reality Company’s Jurassic World VR Expedition, and with a half dozen projects currently in progress, it’s amazing Bethea finds the time to serve on the IASIG (Interactive Audio Special Interest Group) Steering Committee, too!

Simone Capitani from Audio Modeling pinned Bethea down for an extended discussion that took a deep dive into the process of composing music for games and VR. What appears here is only part of the conversation; for the complete exchange, point your browser to: A Visit to Chase Bethea’s Interactive World — MIDI Talk Ep. 8.

 

From Fruit to Flux

Bethea has used technology since he started writing music, working in Image-Line Software’s FruityLoops (which morphed into FL Studio) for years before eventually migrating to his current primary composing tool, Steinberg Cubase. His first real exposure to the requirements of composing music for games came when he was contracted to provide music for Tim Karwoski’s Electron Flux, a game for Android devices. There were many lessons to be learned, Bethea recalls, including “understanding what loops were and how they were (used), understanding the limitations of the device (on which the game would be played), and understanding how much your music is going to be (data) compressed.” He learned to generate his finished content at a high resolution, so that it would survive the often brutal bit rate reduction to its delivery format with at least a shred of fidelity. And then there was the issue of audio file formats.

“MP3s do not loop well in games; they have a gap,” Bethea explains, “so if you were to send those (to the developer), it would be jarring for the player.” (This is because MP3s encode based on blocks of data that rarely coincide with a given musical tempo, making precise looping impractical.) “But you can’t send WAV files, either, they’re way too big. I wasn’t using OGG files just yet, so, at the time, what I had to do was figure out a way to do a different version of the WAV. I was natively compressing the best way I could. Obviously, it wasn’t the best utilization, but it worked.”
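
A quick back-of-the-envelope calculation makes the looping problem concrete. The sketch below (our illustration, not Bethea’s actual workflow) checks whether a musical loop lands exactly on an MP3 frame boundary; because MPEG-1 Layer III frames are a fixed 1,152 samples regardless of tempo, it almost never does, and the encoder pads the final frame with the silence players hear as a gap.

```python
# Why MP3 loops gap: a seamless loop must end exactly on a frame
# boundary, but MP3 frames are a fixed 1152 samples (MPEG-1 Layer III),
# independent of the music's tempo.

SAMPLE_RATE = 44_100   # Hz
FRAME_SIZE = 1_152     # samples per MP3 frame

def loop_fits_mp3(bpm: float, beats: int) -> bool:
    """Return True if a loop of `beats` beats at `bpm` ends on a frame boundary."""
    loop_samples = beats * (60.0 / bpm) * SAMPLE_RATE
    return loop_samples % FRAME_SIZE == 0

# A 4-bar loop (16 beats) at 120 BPM is 352,800 samples = 306.25 frames,
# so the encoder pads the last frame with silence -- the audible gap.
print(loop_fits_mp3(120, 16))   # False
```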

 

We Control the Vertical and the Horizontal

As a composer for interactive media, Bethea views his work through an entirely different lens than composers working in linear media like film or TV. “You know where a movie is going to go. We design the game, but we never know what the player is going to do or at what speed, so things need to adapt to enhance the player experience overall,” he elucidates. “You really need to think in a design format to comprehend it, and this is what can trip up a lot of composers, because they typically won’t have that design mentality. You need to plan out what you’re going to do before you do it. Then, if the game needs an orchestra, you have to adapt to those things: you already wrote the music, you designed it, you designed the different layers – the vertical, the horizontal – but now you need an orchestra to perform it. It’s like an onion, with layers and layers.”

(Vertical and horizontal composition are two primary techniques used to create adaptive music. Horizontal composition, also called re-sequencing, stitches together fully composed and produced chunks of music, with the order of the chunks changing depending on gameplay. Vertical composition, also called layering, builds larger chunks of music from multiple synchronized layers that are brought in or out to change the texture and feeling in response to gameplay. The two techniques are commonly mixed and matched at different points in a game.)
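
To make the two techniques concrete, here is a minimal Python sketch of an adaptive-music controller — our illustration, not any particular engine’s API. The horizontal part chooses which chunk plays next based on game state; the vertical part brings synchronized stems in and out within the current chunk.

```python
# Minimal adaptive-music controller: horizontal re-sequencing picks the
# next chunk; vertical layering toggles stems inside the current chunk.

class AdaptiveMusic:
    def __init__(self, segments: dict[str, list[str]]):
        self.segments = segments            # segment name -> ordered stem list
        self.current = "calm_loop"
        self.active_layers: set[str] = set()

    def on_state_change(self, state: str) -> None:
        # Horizontal: queue a different fully composed chunk per game state.
        # (A real engine would wait for a bar line before switching.)
        self.current = {"explore": "calm_loop",
                        "combat": "battle_loop",
                        "victory": "fanfare"}.get(state, "calm_loop")

    def set_intensity(self, level: int) -> None:
        # Vertical: more intensity = more synchronized layers audible.
        stems = self.segments[self.current]
        self.active_layers = set(stems[: level + 1])

music = AdaptiveMusic({"calm_loop": ["pads", "melody"],
                       "battle_loop": ["drums", "bass", "brass", "choir"],
                       "fanfare": ["full_mix"]})
music.on_state_change("combat")
music.set_intensity(2)                      # drums + bass + brass
print(music.current, sorted(music.active_layers))
```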

 

The 11-Day Virtual Sprint With Dinosaurs

Media production is typically performed in a high-stress, fast-paced environment, but projects involving cutting-edge technology carry the added challenge of unforeseen issues cropping up, and interactive media is subject to constant changes in the fundamental design and structure of the project. The biggest and coolest projects tend to be the craziest, and so it proved with Bethea’s work on pre-production for The Virtual Reality Company’s Jurassic World VR Expedition.

“It was an 11-day sprint; I only had 11 days to conceptualize and get approved assets for this iconic IP (Intellectual Property). I have to say, it was pretty challenging,” recalls Bethea in a tone of awe. “(The project was being done) in the Unreal engine. I brought my hard drive of sounds and music things, and was trying to conceptualize those sounds that everybody knows.

“I’m in meetings every day, I’m driving down into Los Angeles, but I was not familiar with what pre-production was. Pre-production is something that radically changes almost every two hours! ‘We think we want this. OK, whatever meeting we had? We’re not doing that anymore. Now we’re doing this. Tomorrow, we’re doing this plus three other things. Oh, but, by the way, you better be in that meeting to do that, too, AND you’ve still got to get the work done.’ In 11 days!

“I freaked out for the first five days. I even went in on a weekend, but that weekend saved me, because when I did that, I actually finished a day early! I’m flying through Cubase doing these things and implementing the music into the system and giving feedback and testing the VR technology, finding limitations like: it doesn’t accept 24-bit (audio), it can only work with 16-bit. And it can have WAV files, but how do they interact with the nodes in the blueprint system? And using the hierarchies and the workflow of the repository, so that everyone is getting the check-ins and things are working together. You do the music, push it to the repository, demo it on the headset, listen, figure it out, it’s good, move on to the next thing, rinse, repeat, rinse, repeat. Long, long days, but good experience, I was pretty proud to finish in that time, and it was the most creative experience I could ever ask for. I would do it again; it was actually really great.”
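
As an aside for the technically inclined, the bit-depth limitation Bethea hit is the sort of thing a short conversion script solves. Here is a hedged sketch using Python’s soundfile library; the file names are hypothetical, and this is our illustration of the general fix, not the project’s actual pipeline.

```python
# Down-convert a 24-bit WAV to the 16-bit PCM an engine requires.
# Requires: pip install numpy soundfile

import numpy as np
import soundfile as sf

def convert_to_16_bit(src: str, dst: str) -> None:
    data, sr = sf.read(src, dtype="float64")        # decode to floats
    # A touch of triangular dither masks the quantization error
    # introduced by dropping from 24-bit to 16-bit depth.
    lsb = 1.0 / (2 ** 15)
    dither = (np.random.rand(*data.shape) - np.random.rand(*data.shape)) * lsb
    sf.write(dst, np.clip(data + dither, -1.0, 1.0), sr, subtype="PCM_16")

convert_to_16_bit("cue_24bit.wav", "cue_16bit.wav")  # hypothetical file names
```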

 

Chase’s AI Sidekick

As a composer deeply enmeshed in technology and having to produce creative content in short timeframes, Bethea has some thoughts on how he’d like to see technology serve him better. “I have had some epiphanies of what I would like to have,” says Bethea as he lays out his dream. “I would like an AI assistant. I would love to design a product where, when I’m writing music and I know my weaknesses, I can ask the AI assistant, ‘Hey, with this Eb minor can I do this?’ And I play it, and it helps me along the way. ‘Well, actually, I found some stuff online and I thought that you might do this, let me pull this in.’ It enacts a MIDI drop and says, ‘Do you like this?’ and I’ll say ‘No, I don’t think I like that, but what if I did this instead?’ You can come up with some really different things. Our brains and our minds can only absorb so much in a day. I can only have so many of the books behind me (gesturing to a bookshelf in the background) that I can read, but if (the assistant is) reading that stuff for me, and saying, ‘You mentioned that you like this person for inspiration. Did you know that they used this melody style or this theory set for this?’ ‘No, I didn’t.’ – that would be really, really cool. I think it would be dangerous, but it would be cool at the same time. I conceptualize it as being better than Google Assistant, but for music.”

 

Modeling’s Massive Difference

Having written for both electronic and orchestral instruments, Bethea has great appreciation for the strengths of the modeled instruments Audio Modeling produces and is enthused by his experience with them. “They’re so great. Wow. I was a conductor’s assistant, so I was able to be around an orchestra every single week for, like, two years, and hearing the technology of how you have the expression really down and the vibratos in the instruments…it’s incredible. I’m really, really, really, really impressed. A few of my composer friends said, ‘You have got to try this and have it integrated.’ And it really makes a massive difference with the musicality. Obviously, nothing beats live musicians, but this is the second best thing and they can sit next to each other. I would love a piano version, a supernatural one. There’s so many great, great products that you’re doing, and it’s fantastic.”

 

MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K, the camera used to produce this show.

 


Version 2.1 of Camelot is here!

 

Version 2.1 of Audio Modeling’s live performance environment for MIDI hardware devices, software instruments, and effects plugins adds audio input features, expanding it from a powerful live tool for keyboardists to the premier gigging rig for vocalists, guitarists, horn players, and, well, everybody!

Camelot lets users place items (virtual instruments, effects processors, and MIDI connectors) into layers, which are then combined into scenes. Scenes and prerecorded backing tracks are strung together into songs, which can be configured into setlists. This structure lets users instantly redefine sounds, processing, and PDF attachments for each section of each song, enabling entire shows to be performed with complex sound changes automated by Camelot.
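
To make the nesting concrete, here is a short sketch of that hierarchy as plain Python data structures. The names and fields are our illustration only, not Camelot’s actual file format or API.

```python
# Camelot's organizing hierarchy, modeled as simple nested data:
# layers -> scenes -> songs -> setlists.

from dataclasses import dataclass, field

@dataclass
class Layer:                 # one virtual instrument, effect, or MIDI connection
    item: str

@dataclass
class Scene:                 # the complete sound for one section of a song
    name: str
    layers: list[Layer] = field(default_factory=list)

@dataclass
class Song:
    title: str
    scenes: list[Scene] = field(default_factory=list)
    backing_track: str | None = None
    attachment: str | None = None      # e.g. a PDF chord chart

@dataclass
class Setlist:
    songs: list[Song] = field(default_factory=list)

verse = Scene("Verse", [Layer("Rhodes"), Layer("Reverb")])
chorus = Scene("Chorus", [Layer("Brass section"), Layer("Compressor")])
show = Setlist([Song("Opener", [verse, chorus], "opener_stems.wav", "opener.pdf")])
```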

Version 2.1 introduces an Audio Inputs feature, which can be added to any layer in a Camelot scene, allowing external audio signals to be processed and mixed with the outputs of virtual instruments, or fed to virtual instruments that have audio inputs. The Audio Inputs feature enables processing a vocal through a synth filter or vocoding a guitar, plus, of course, simply mixing external signals with synthesizer sounds or backing tracks. Vocalists and players of various acoustic instruments can now apply the best EQ, compression, reverb, and other effects available from the wide world of plugins, as well as make use of Camelot’s powerful processing, control, and automation capabilities.

Camelot 2.1 provides glitch-free switching between scenes that incorporate external audio inputs, and the new Audio Layer Connector enables routing audio between layers in a scene.

MIDI features have also received a major upgrade in Camelot 2.1, including MIDI transformers that can reassign MIDI continuous controller messages, a note-to-chord function that generates entire chords from a single note, and remapping that enables continuous controller and/or velocity messages to be scaled or otherwise modified. This is especially useful when using a single controller to modify multiple parameters that respond differently to controller messages. A smart scale function conforms incoming MIDI notes to selected scales, or generates harmonies that fit a selected scale.
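
As an illustration of what such transformations actually do, here is a hedged sketch operating on raw (status, data1, data2) MIDI message tuples. It shows the general idea only and is not Camelot’s implementation.

```python
# Three common MIDI transformations: CC reassignment, note-to-chord
# expansion, and velocity scaling.

NOTE_ON, CC = 0x90, 0xB0

def remap_cc(msg, src=1, dst=11):
    """Reassign one controller number to another (e.g. mod wheel -> expression)."""
    status, num, val = msg
    if status & 0xF0 == CC and num == src:
        return (status, dst, val)
    return msg

def note_to_chord(msg, intervals=(0, 4, 7)):
    """Expand a single note-on into a chord (default: a major triad)."""
    status, note, vel = msg
    if status & 0xF0 == NOTE_ON:
        return [(status, note + i, vel) for i in intervals]
    return [msg]

def scale_velocity(msg, factor=0.8):
    """Compress velocities so one controller drives parameters more gently."""
    status, note, vel = msg
    if status & 0xF0 == NOTE_ON:
        return (status, note, max(1, min(127, round(vel * factor))))
    return msg

print(note_to_chord((0x90, 60, 100)))   # middle C becomes a C major triad
```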

Learn more about Camelot 2.1

Version 2.1 will be available at https://audiomodeling.com/camelot/try-buy/ from December 15, and current Camelot users can upgrade at no cost. For new users, Camelot’s price remains the same: the iPad version costs 29.99 EUR/USD, the full Mac/Win version 149 EUR/USD, and the free version is, um, free!



MIDI Talk, Episode 07: Nick Petrillo’s Journey of Discovery

 

Sometimes, discovering that something does not work as you thought can bring about an epiphany that entirely alters how you approach a task. Revelation can be transformational. This is a common occurrence among students, and so it was with composer/music director Nick Petrillo.

“One of the things that really drove me into using music technology was film scoring,” Petrillo explains. “Growing up, I loved movies like Star Wars and Indiana Jones, and I fell in love with the music of John Williams. At the time, I thought film scoring was all about orchestral scoring, either with pen and paper or through Finale (music scoring software), doing a large orchestra (recording) session, getting a mixing engineer, going from music cue to cue, and placing the music into the picture. But I learned differently when I attended Berklee College of Music, where I studied film scoring. That was really a catalyst for me to get into software synthesizers and DAWs.”

Hailing from Bound Brook, New Jersey, Petrillo moved to Los Angeles after emerging from Berklee in 2010 with a dual Bachelor’s Degree in Film Scoring and Contemporary Writing/Production. Today, Petrillo writes music for film, TV, and advertising campaigns, is a resident orchestrator at Snuffy Walden Productions (Walden has composed music for hit TV series including Thirtysomething, The West Wing, and Friday Night Lights), and has toured as a Music Director for artists including Aubrey Logan (PMJ), Dave Koz & Friends, Frenchie Davis (The Voice) and David Hernandez (American Idol).

Audio Modeling’s Simone Capitani probed Petrillo’s views on both his post-production work and his live performance world, and the tools on which Petrillo relies to get through his projects. Petrillo began by setting out some context for how music is built for film and TV using current technology.

 

Scoring Music to Picture: The Invisible Art

 

“Nowadays, a lot of film composition is production; it’s a lot of drum loops, soundscape creation in things like Absynth, or Kontakt, where you’re layering different sounds and patches over each other, playing with attack and release and decay to round the sound over a given amount of time. So, you have a 20-bar cue – a cue is a piece of music that exists in the film – and sometimes you have a single note or two notes that are swelling based on their attack, decay, and release, and that is what is actually formulating the soundscape. That’s all sound design, that’s music synthesis.”
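
For readers who want to see that envelope shaping in concrete terms, here is a minimal sketch of a linear ADSR (attack, decay, sustain, release) amplitude envelope. The parameter values are illustrative; slow attacks and long releases are what turn one or two held notes into the swells Petrillo describes.

```python
# A minimal linear ADSR envelope generator.

import numpy as np

def adsr(attack, decay, sustain, release, hold, sr=44_100):
    """Amplitude envelope; times in seconds, sustain as a 0..1 level."""
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)   # swell up
    d = np.linspace(1.0, sustain, int(decay * sr), endpoint=False)
    s = np.full(int(hold * sr), sustain)                          # held note
    r = np.linspace(sustain, 0.0, int(release * sr))              # fade out
    return np.concatenate([a, d, s, r])

# A slow swell: 2 s attack, gentle decay, long 3 s release.
env = adsr(attack=2.0, decay=0.5, sustain=0.8, release=3.0, hold=1.0)
```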

Music for picture is a support role that amplifies the emotional content being evoked in a show, piece by piece, points out Petrillo. “Let’s take a TV show, for instance. A TV show might have 20 music cues, each of which can last a minute and a half, two minutes, up to five minutes. Say there’s an action sequence that goes into a very dramatic scene with somebody who has just died or is dying. One cue is the action sequence, we tackle that as a chunk. Then we tackle that emotional dramatic scene as a chunk. So we’re not scoring a 40-minute TV show, we’re scoring these different chunks.”

The objective, Petrillo insists, is entirely to complement the action. “What we’re always doing is adding to the emotional integrity (of a scene) and not detracting from it. The music is always secondary. The rule of film scoring is to always stay behind (the action); you shouldn’t really be heard.

“If you have this emotional scene where somebody is passing away, you don’t want a crazy violin line detracting from that moment, you want to stay beneath it and give some emotional chords and soundscape. Maybe you do have a solo violin doing something beautiful that’s not detracting, but you’re not doing anything from Rimsky-Korsakov or Tchaikovsky, there’s nothing huge and grandiose about it. A lot of what I’m doing is commercial film scoring for network TV, which is very cut-and-dried. We’re not doing much out of the box,” he concludes.

To get the big orchestral sounds he needs on the limited budgets and schedules of TV and ad campaigns, Petrillo relies on sample libraries, which have seen tremendous development since he was at Berklee. “Back in 2007, the best (sample) library you had was the Vienna Symphonic Library, and I think the platinum (version of VSL) was 20 grand, or something insane,” he recalls, shaking his head. “As a college student, I had Garritan Personal Orchestra, which was $300 or something, but the acoustical modeling was just not there. Now, I have a subscription to the East West Composer Cloud for maybe 200 bucks a year, and the sounds are incredible. You’re talking a little more than a decade from (when he was a student). It’s crazy how far it’s come and how far it’s going to probably go. There’s a lot of vocal libraries now where you can type in a sentence for the background vocal you want, and play a line on the keyboard, and (a sampled vocal) will sing it back to you.”

 

Love You Live

 

Audio post is a staple of Petrillo’s career, but he also spends a good deal of time out of the studio, traveling the world to work. Live performance today means presenting a show with production every bit as sophisticated as that heard on modern studio recordings. Touring as a player and/or a Music Director, Petrillo navigates an entirely different landscape of equipment and techniques than he does working on TV or ad campaigns. The challenge becomes one of coordinating all of the elements, live and technological.

“Nowadays, a lot of live performance is heavily (built around) production,” Petrillo asserts, “including arpeggiators, filters running at a certain BPM that we need to lock into a clock…and, for all of that, everyone needs to be on the same page, including the drummer, who may be running loop libraries, or different tracks that need to lock into the grid. And then, maybe you have horn tracks or background vocal tracks running on top of that.

“Sometimes a Music Director will get hired and fly out somewhere, and be basically working with an entirely new band, running tracks, working off charts…how do we integrate all of that stuff into something absolutely brand new that nobody’s seen before? That’s where things get tricky,” he reveals. “I know a lot of purists don’t like backing tracks, but I love them, because there’s a safety net there: this is what it will always sound like.”

Making sure all of those individual events were happening when they should in the way they needed to was a major hurdle for Petrillo in the past, requiring a pastiche of different software programs. Recently, however, he discovered Audio Modeling’s Camelot Pro, which is designed for exactly this purpose.

“It comes down to the ability to integrate hardware synthesizers, software synthesizers, and some sort of DAW rig, whether you’re running (Apple) Logic or Ableton (Live), or if the drummer is running some sort of clock system that is feeding into my keyboards via MIDI. Camelot is a huge help with things like that,” he notes.

“For about seven or eight years, I’ve been using Ableton, which has really been the only program I could use to run a track and a click track to the band, and then I had to lock all of my hardware synths and software synths into a DI and go straight to the (mixing) board and deal with it that way.

“But a few months ago, I was talking to everybody over at (the software distribution firm) ILIO about this program called Camelot. They said I should give it a shot because I could not find a program that integrated my synthesizers so that I could send out a click track to the drums (and) run my (prerecorded backing) tracks, and I could run a PDF chart of my song and lock it into my song, and lock all of that into setlists. Now, because of Camelot, I’ve been moving everything into MIDI keyboards and software. For a while, I was programming everything hardwired into a Yamaha MOTIF and a Nord Electro 3 for all my B3 samples, clavinet, Rhodes, and all that kind of stuff. Recently I picked up (Spectrasonics) Keyscape, so I can use that as a soft synth, and my Kontakt stuff – I run all of that straight through Camelot. I just run my patch changes through that, and it is absolutely brilliant.

“Camelot is a very exciting program. It’s so beautifully streamlined. It’s completely changed my workflow. I think everybody should be getting a copy of that.”

While Camelot Pro may meet many of Petrillo’s live performance needs, it still leaves at least one requirement unmet. “The only thing I really need is some type of universal click system. The reason I believe that is so crucial is that it locks all those big moments on stage, those pauses or big stops, into a certain time zone, and I think that’s going to help as far as perfecting music. Everybody nowadays needs perfect music, right? And some of the things in live music that can throw music off are fermatas, caesuras, and pauses within the music. The universal click track, getting everybody on the same page, is really going to change the game of live music production. It already has been that way for pop music; I think it’s on its way to becoming a thing in the cabaret world and the smaller niche markets,” Petrillo predicts.

 

Traveling the Rhodes To Music Technology

 

Having made his way from Berklee into the heart of the fray in music production, Petrillo has words of advice to those trying to get started with technology in live performance. “There’s many different approaches to trying to use technology (live). If you’re brand new and trying to get your feet wet, I would go with learning the basics of analog keyboards that have been digitized. What is a Fender Rhodes and what are the different things you can put on one? You can put a tremolo effect, you could put a chorus effect, you could put a phaser, you could throw it through a Fender tube guitar amp and let that do some warm distortion…right there, you can do a lot of stuff.

“Even with tremolo patterns, that tremolo needs to lock into a certain tempo, even though you may be working with rate. There’s two different ways time-based effects work: they’re either rate-based or BPM-based. (Using) BPM-based, you would obviously lock to a clock system, but with a rate, you’re going to have to work it around your tempo and find where it’s rotating correctly. Running a Rhodes through a distortion effect is going to make it almost sound like a synthesizer. So, already, just working with a Rhodes, you have an abundance of sounds you can work with,” Petrillo offers to illustrate how to approach a journey of discovery like his.
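
The rate-versus-BPM distinction Petrillo describes reduces to simple arithmetic; here is a quick worked example (our sketch, not any particular plugin’s behavior) for matching a free-running rate knob to a song’s tempo.

```python
# Convert a tempo into the Hz value a free-running tremolo "rate" knob
# needs in order to stay locked to the beat.

def tremolo_rate_hz(bpm: float, cycles_per_beat: float = 1.0) -> float:
    return (bpm / 60.0) * cycles_per_beat

# At 96 BPM, an eighth-note tremolo (2 cycles per beat) is 3.2 Hz.
print(tremolo_rate_hz(96, 2))   # 3.2
```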

“If you’re going to use digital technology, I’d pick up a program like Absynth or Massive, or something out of the Kontakt realm,” he advises. “I feel like Kontakt is one of the mainstays of technology right now. Dig into oscillators and see what two different waveforms sound like against each other. Then try to detune them and see what that sounds like, try to add some digital effects, and see what the synthesization of the whole thing does. Increasing your release time – what does that do? Or decreasing your attack time – what does that do? Get into the weeds a bit with the synthesization end of it.” Petrillo reflects for a moment on his own learning process, and realizes how overwhelming his advice might seem. “I’m speaking of it now like it’s easy, but back when I was in college, this stuff scared me to death, really,” he admits. “I had no idea how any of this stuff worked.”
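
Anyone wanting to try Petrillo’s two-oscillator experiment outside a synth can do it in a few lines of NumPy. This is a bare sketch: naive (non-band-limited) sawtooths, and the 7-cent detune is an arbitrary example.

```python
# Two sawtooth oscillators a few cents apart: the slow beating between
# them is what "fattens" the detuned sound.

import numpy as np

def saw(freq, seconds=2.0, sr=44_100):
    t = np.arange(int(seconds * sr)) / sr
    return 2.0 * (t * freq % 1.0) - 1.0        # naive sawtooth in [-1, 1]

def detune(freq, cents):
    return freq * 2 ** (cents / 1200.0)

a4 = saw(440.0)
a4_detuned = saw(detune(440.0, 7))             # 7 cents sharp (~441.8 Hz)
mix = 0.5 * (a4 + a4_detuned)                  # beats at roughly 1.8 Hz
```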

Certainly, Petrillo, at this point, has figured out how the stuff works! But he had much more to say to Capitani. To hear Petrillo detail the process of scoring to picture, how Quentin Tarantino gets away with using music that contrasts with picture in his films, the impact technology is having on jobs in music, and more, watch the full discussion.

MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K, the camera used to produce this show.

More about Nick Petrillo

Website: www.nickpetrillomusic.com

Facebook: https://www.facebook.com/nickpetrillomusic

Instagram: https://www.instagram.com/nickpmusic/


MIDI Talk Ep. 6 – The history and wisdom of Matt Johnson

 

Matt Johnson has spent nearly two decades in the band Jamiroquai. He sat down with Audio Modeling’s Simone Capitani and opened up about his background, the band, equipment, and how to be a professional musician. Read on to get the history and wisdom of a true road warrior and studio samurai.

From an early age, Matt Johnson seemed headed to a career playing music. “I started playing piano at about five years old. I played for a couple of years and sort of got bored with it.” Johnson chuckles at the irony, because, after turning next to trumpet and getting pretty good, synthesizers circled him back to the keys in the 1980s. “When I became a teenager, I didn’t want to play classical music on the trumpet,” he recounts, “so I gravitated towards the keyboards, and by the time I was 17 or 18, that’s what I was doing for a living.” That decision proved fateful, as it led Johnson to his 19-year-long (and counting) tenure as keyboardist, songwriter, and producer for British jazz-funk stalwarts Jamiroquai.

Having been recommended to the band by former Jamiroquai guitarist Simon Katz, Johnson replaced founding keyboardist Toby Smith in 2002. His role in the band grew steadily, until 2017 found him co-writing and co-producing the band’s Automaton album with lead singer Jay Kay. Johnson also became Jamiroquai’s Musical Director on the road.

Producing Jamiroquai was a natural step for Johnson. “I’ve always gravitated towards production,” he says. Having started with an eight-track reel-to-reel tape recorder, Johnson now does his work in the box. “Pro Tools has become the thing for me; I really love Pro Tools,” he enthuses. “I love the fact that it very much is coming from an audio recorder perspective, rather than (being like) a sequencer. You can manipulate audio and be so intricate in how you treat it in Pro Tools.”

Of course, synthesizers also loom large in Johnson’s approach, both live and in the studio. “The last album with Jamiroquai was based around the Roland Jupiter 8 because we were bringing a bit of ’80s sound into the mix, and that synth is really so nostalgic ’80s,” Johnson reveals. “It’s got a beautiful crystalline sound. It doesn’t have a massive feature set, but you can get a huge range of sounds out of it, and it’s really big sounding. That was the centerpiece on the last album.

“Lately, I’ve really been getting into the Moog One. It’s such a stunning synth, and it’s so deep,” he continues. “I’m discovering more things all the time: the modulation possibilities, and the sequencer possibilities – you can have three sequences, all polyphonic with sixty-four steps, and you can change the filter or the resonance or almost anything on the synth, and then go in and craft it in fine detail. I’ve been disappearing down the rabbit hole of the Moog One,” concludes Johnson with a laugh.

From a sound design standpoint, as important as basic sound quality is to him, it is through dynamics that Johnson brings his synthesizer sounds to life, particularly in live performance. “A modern synth is as much a living, breathing instrument as a piano, the difference being that you can tailor (a synth) to your own playing a hundred percent,” Johnson explains. “Originally, synths were absolutely linear; they had no soul, and that was what was cool about them. It felt very futuristic, the fact that they didn’t care about what the velocity was (that you played). It was always the same. But now, synths can be very dynamic and whether you hit them soft or hard can affect the filter or whatever you determine it to affect.

 

“My playing is very based around dynamics,” he emphasizes. “I think dynamics are crucial in music if you want to make people feel something. The way that you hit a note, how hard or soft you hit it, can really affect the listener emotionally. It’s something I’ve been thinking about a lot lately: how the way you hit a note will make someone feel different. And it’s something we probably don’t think about enough as musicians.”

 

Programming dynamic sounds is only one aspect of bringing highly produced records to the stage, however. Going from studio to stage, says Johnson, “is a big transition to make.

 

“For instance, on the last Jamiroquai album: while I’m producing the album, I’m not thinking about how we’re going to recreate it. I wanted to make the best-sounding record by any means (possible). So then, at the end of it, you have to think about ‘how are we going to do this live?’

 

“I had taken some precautions. We’d used a lot of vintage synths, and any time we got a part that was like ‘Oh, that’s definitely going to make the record, that was really good,’ what I did was sample the synth, note by note. Even with some of these old vintage ones that didn’t have any MIDI, where you couldn’t save the sound, I sampled them into Pro Tools and kept them. So I had all the really key sounds from the album. I have a Yamaha Montage 8, among some other keyboards, so I could sample the notes (into the Montage 8) and know that I had these sounds. In the past, I had to try and recreate them on another synth, and you can never quite get the same quality.

 

“Also on the last album, there were a lot more electronic elements than we’d had previously, because I’m really into electro stuff and I wanted to update the sound a little bit and bring a bit of that into the record. When it came to doing it live, it was a bit of a quandary, because the way Jay, the singer, likes to work, he doesn’t stick to an arrangement, he’ll come in when he feels like it, he might suddenly want to go to another chorus, or a solo or a breakdown. He’s almost a James Brown type of bandleader, so we couldn’t be slaved to a computer arrangement, we couldn’t just have a backing track running. I had to think about that. I spoke to a few people who were more into the tech side of things and they suggested getting someone on Ableton (Live) with Push. That turned out to be a perfect solution, because we could still have all the sequencer parts from the record that we wanted, put into Ableton, so if the arrangement changed, we had Howard (Whiddett) there, running Ableton, so he could change it on the fly, just as one of the band. For us, that was fantastic, because we’ve always been a live band and we don’t want to work with the computer (dictating performance), but we still had the option to have these electro elements and keep them absolutely live. The computer became a member of the band.”

Johnson has obviously learned a tremendous amount in the couple of decades he has spent touring and recording at the top of his field, and, when asked what advice he would offer young musicians, he has plenty to say. His main message is simple: just do it.

“It’s obviously very difficult at the moment (due to COVID),” he begins. “There’s nothing they can do but sit at home and write songs. But soon things will get back to normal and shows will start. My advice to young musicians is always: get out and play. Just get out and play as much as you can. It doesn’t matter what it is; it doesn’t matter if it’s to three old ladies down at the town hall. Just play, play, play, because that’s the only way you can build up your confidence and experience of being a performer.

“I’ve been in a few situations where I’ve worked with young artists who were 18 or 19, and suddenly they get a big record deal straightaway, but they’ve never gone out live. The first show they have to do is massive pressure, on the BBC or something, and, of course, it’s difficult for them. They can’t be great because they haven’t learned how to do that.

“You have to work hard at what you do. You might see musicians and think ‘Oh, they’re just geniuses. I could never be at that level.’ Of course, they weren’t like that at one point. They’re like that because they dedicated themselves to it and they kept trying harder and harder to get higher up. And it never stops; I’m still doing that now. I still try to improve my playing and learn all the time. You just have to try to be as good as the best in the world. That’s really hard, which means you have to try really hard.”

Even once a musician has worked hard enough to get really good at what they do, there is still another whole set of skills to master: that of the stage performer. Working in front of a live audience is its own set of challenges. What is the secret to that? “I think, personally, that the main thing is to be generous as an artist,” Johnson asserts. “There are some artists who, when they’re on, the audience almost feels nervous for them, because they’re just there in their own little world and they’re not really reaching out. The best artists are the ones that reach out to the audience. That means not thinking about yourself, but thinking about them – how to make them happy. Try to give out your energy to the crowd, because when you give it out to them, they send it back to you, and it becomes this sort of vortex, and that’s when you get that sort of ecstatic energy at gigs, people going nuts. It’s not just about the performer, it’s a relationship between the performer and the audience. Every time I get onstage, I’m just so grateful I can do a gig, and I make sure I give it my hundred percent, you know what I mean? I’m on it, I’m trying to give out energy the whole time.”

To hear Johnson talk about how the past reaches into the present, how influential the ‘80s were on his sound, how he sees the current information landscape for musicians, and more, watch his entire interview with Simone at: <link>.

MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K, the camera used to produce this show.

 


MIDI Talk Ep. 5 — Leveling Up with Level 42’s Mike Lindup

 

MIDI Talk is a podcast featuring Audio Modeling’s Simone Capitani in conversation with musicians and producers about intersections of music and technology.

In episode 5 of MIDI Talk, Simone Capitani catches up with Mike Lindup, keyboard player, singer, and founding member of Level 42.

Formed in London in 1979 as a jazz-funk band, Level 42 soon turned to writing songs and enjoyed a string of hits all through the ’80s. Highly influential worldwide, the band reached its commercial peak when the song “Something About You”, released in 1985, reached number seven on the Billboard Hot 100 chart in the United States. The next year, their single “Lessons in Love” reached number three on the UK Singles Chart, and number 12 on the US Billboard Hot 100. Through changing personnel, breaking up, and reforming, Lindup not only continues working with Level 42 today, but along the way has delved into other musical genres and released two solo albums.

 

To the Music Born

 

With a singer-songwriter mother and a TV and film composer father, music has been an important part of Lindup’s life since infancy. Lindup spent his childhood immersed in a diverse spectrum of music from Miles Davis and Duke Ellington to Bob Dylan, Pete Seeger, Yehudi Menuhin, Tchaikovsky, and various soundtracks from British shows.

“Music was organically part of life in the house when I was growing up,” he remembers. “My Mum used to sing around the house, and sometimes she’d be rehearsing with a few musicians in the living room. Our living room was like my playground. There was an upright piano, a guitar, some hand drums… There was also a tape recorder, because Mum sometimes used to record her rehearsals.”

Lindup started piano lessons at the age of six, and remembers hours spent at the piano recreating melodies and finding chords from songs he liked as a way to navigate the complex emotions of childhood. Teenage angst was handled similarly after his father bought him a drum kit. “I soon realized playing drums was great for processing angry feelings, which as a teenager I obviously had my share of,” he chuckles.

At 14, Lindup studied percussion, composition, and piano at Manchester’s Chetham’s School of Music, as well as singing in the school’s senior and chamber choirs. “Switching from piano to percussion was great,” he states, “because then I could play in the orchestra. Also, as a percussionist, I got to play on a bunch of different instruments.”

After graduation, Lindup entered London’s prestigious Guildhall School of Music & Drama as a percussionist. At Guildhall, he met Phil Gould, who then introduced him to Mark King and Gould’s brother Boon. From this core emerged Level 42 (and, yes, the name is a reference to The Hitchhiker’s Guide to the Galaxy).

 

Taking It To the Next Level

 

The band’s strong potential soon became clear. Phil and Boon’s older brother, John, was connected to Andy Sojka from Elite Records, who signed them. They recorded their first single, “Love Meeting Love,” while Lindup was still in college. The song was released in May 1980, Lindup finished college in July, and the band went straight into the studio to record what ended up being released as their second album, The Early Tapes.

“When ‘Love Meeting Love’ came out, we had no idea how it would be received,” Lindup comments. “We liked it, but we didn’t know how many other people would like it. We certainly weren’t thinking about a career, we just wanted to see what would happen with this.”

“Andy Sojka knew there was a slightly underground movement of jazz-funk happening in the UK at the time. He knew there were certain DJs playing this kind of music on the radio and in the clubs. There was a pretty big club scene and that became our first audience. Andy is the one who made that connection for us.”

“Then Polydor, our distributors for ‘Love Meeting Love’, started to take an interest in us. In those days, record companies were still looking for what was new: what are the new waves, who are the new bands in these waves, and so on. ”

“In 1981, we signed a five-year contract with the Polydor label,” continues Lindup. “It seems extraordinary, especially now with the way the music business is. But in fact, at the time it was pretty standard to sign a band and put them with a producer just to see if it would develop into something. They saw us and recognized some raw talent, but I mean, it was pretty raw. In reality, this gave us the opportunity to do a five-year apprenticeship. We wrote five albums in those five years.”

“We got to learn how to write songs because we were instrumentalists, not songwriters. Our musical influences were Mahavishnu Orchestra, Return to Forever, Miles Davis, and the whole ‘Bitches Brew’ diaspora: Herbie Hancock, Wayne Shorter, Joe Zawinul, Chick Corea, John McLaughlin, and so on. We didn’t know anything about songwriting until Andy told us ‘Make this into a song and you got a record deal’. It was a fantastic training opportunity.”

“At the same time, we learned how to become a live band,” Lindup reveals. “We could play our instruments quite well, but really putting on a show and making music that hits the audience is a different story. That’s especially true if you’ve been in the studio for a while, because sometimes, you can’t copy exactly what you did in the recordings; it doesn’t work live because the situation is different, the arrangements are different. We had all of that to learn and we were able to learn it because we had this sort of development time.”

 

Lindup’s Lessons in Stagecraft

 

Lindup is not shy about sharing the lessons in live performance Level 42 clearly learned quite well during that development time.

“Every time you go on stage, it’s a new challenge, even though you might know what you’re going to play,” he explains. “For example, at the beginning of a tour when the show is new, you don’t know exactly how it’ll be received. But you kind of need to have a mixture of self-confidence and trust that what you’re doing will be appreciated and not be put off by the voice in your head saying stuff like ‘Oh no, this is not going well’ or ‘this person is looking at their phone, which means they’re not enjoying it.’ You need to learn how to ignore these things going on in your mind.”

“At the same time, it’s important to be authentic. It’s OK to be vulnerable, and it’s OK to make mistakes. With experience, you kind of get used to the fact that most of the time, the audience is on your side, and they’re there because they just want to enjoy the evening. It’s no good trying to pretend everything is cool when it’s obviously not, because then, you’re not being genuine.”

Ah, but size matters, Lindup admits. “We had a huge lesson about performing when we did our first show in front of a big audience in 1981. Up until then, we’d been playing small clubs, up to about 300 people, around the UK, mostly in the jazz-funk scene. Then Miles Copeland (manager of The Police) got in touch with our manager, John Gould, and invited us to support The Police for eight shows in Germany. So we went from an audience of 300 to 8,000, which is a big jump. The very first show was almost a disaster.”

Of course, the band grew beyond these first challenges, as shown by the international success they enjoyed in the following years. But you know you’re curious about the near-disaster in that first show with The Police, aren’t you? Well, you can get all the juicy details, as well as hear Lindup’s perspective on technology and advice for getting started as a professional musician, by listening to the full episode 5 of MIDI Talk.

MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K, the camera used to produce this show.

 


Combining the Best of Different Musical Worlds with Peter Major aka OPOLOPO — MIDI Talk Ep. 4

 

In this week’s episode of MIDI Talk—a podcast dedicated to topics around music-making in relation to technology—Simone Capitani interviews Swedish music producer and remixer Peter Major aka OPOLOPO.

Born into a musical family, Peter discovered music through his father’s love for jazz and fusion. “Dad brought a lot of records back home,” Peter says. “I liked all the harmonies and melodies but I also was very drawn to electronic sounds. My dad used to listen to a lot of ’70s fusion and I remember being drawn to synth sounds very early on.”

“I don’t have any traditional music training. I tried to take lessons from my dad but that never really worked out. First, because it’s not a good idea to take lessons with your dad, and second, because I was never motivated enough to learn how to play other people’s written music. Instead, I was always messing around trying to play my own little melodies on the piano.”

 

Discovering the Experimental World of Club Music

 

“Growing up, I wasn’t so much into club music, even though I did like the beats and the electronic sounds it used. It’s only later on, when I discovered I could combine jazz and funky stuff with club music, that something really clicked for me.”

“I love jazz. I love the harmonies and I can definitely appreciate the rules behind doing things a certain way. But what’s so fascinating about club music is that many of the pioneers in that genre didn’t know anything about music theory. They were just messing around with these machines and getting amazing sounds out of them, even though they didn’t use them the intended way.”

“Like the Roland TB-303 Bass Line, for example. My dad brought one of these back home at some point. He tried to play around with it but he thought it was rubbish because it didn’t sound like a real electric bass. Meanwhile, maybe somebody in Detroit or Chicago creates this totally new sound with it that nobody thought about before just because they approached the same machine from a completely different angle.”

“I love this aspect of club music. You can just experiment and come up with all these weird new concepts and ways of doing things. You don’t find that in jazz or classical music where everything is kind of strict and things sound the same for 30 years. In club music, things move at a different pace.”

 

Achieving Success by Falling in Love With a Song

 

Sometimes, remixes have the power to breathe new life into an original song. That’s what happened in the case of Gregory Porter’s song “1960 What?”.

“At the time, I was working a lot as a DJ around town here in Stockholm. That song, ‘1960 What?’ is from Gregory Porter’s first album. Nobody knew about him back then. I used to play it in my DJ sets and every time, I thought ‘this would work well in a clubby setting if you add a bit of a kick to it and tighten it up a bit.’ So one day I decided to just try it out.”

“I loaded it up in Ableton, tightened up the track, and quantized it. Then, I added some percussion and a kick drum, and played a new bassline, just one take from start to finish. The whole process was really quick, but I liked the result, so I put it up on SoundCloud, not as a download but just for people to listen to.”

“Immediately, I started getting messages in my inbox. I thought ‘ok, something is going on here.’ Obviously, the success of that track comes from the original because it’s a very powerful song. I just gave it a little nudge to the dance floor. But still, a lot of people discovered the track through this edit.”

“The day after I put it out on Soundcloud, I received an email from Gregory Porter’s record label. The subject was something like ‘Regarding your Gregory Porter Remix’ and I thought ‘oh no, now I’ll have to put it down and they’re all angry and everything.’ But actually, they were super cool and very happy and grateful I did this edit. They asked me if they could use it and after a while, they licensed it and bought it as an official remix.”

“It was an interesting project because it’s something I just did out of love for the music. I didn’t think too much about it. You can spend three weeks on a remix and nothing will happen, and this one took me something like two hours to make and it took off. But like I said, the original is really powerful and the lyrics resonate with a lot of the racial stuff still going on in the U.S. today, so that’s obviously one of the main reasons why it took off that way.”

 

Having a Clear Preference for Software Instruments

 

Some producers have a preference for hardware instruments; others combine both hardware and software tools. Peter Major, by his own account, works exclusively with software.

“From the first time I tried Propellerhead’s ReBirth and experienced creating songs directly in the software I thought ‘Wow, this was amazing! What if one day we could have a whole production environment on the computer?’ I’ve always been fascinated by that idea. Then obviously, over time plug-ins became better and better and computers became more powerful. So after a while, I completely gave up on the hardware side of things. Not because of the sound but because it was just more convenient.”

“For me, it’s a way to achieve what I want, which is good music. I don’t care too much about the actual tools. I don’t need to have the actual hardware to get inspired, I can get inspired by what’s in the computer. I used to work a lot with Ableton for performing live while using Cubase for all my production work and official remixes but nowadays, I rarely use Ableton. I almost work exclusively with Cubase.”

“When it comes to drums, I use drum plug-ins from a Swedish company called XLN Audio. They have a drum plug-in called Addictive Drums and I use that on pretty much everything. Then, I use Kontakt and Battery for straight-up one-shot sample sounds, and also something called Kick by Sonic Academy, which does analog 909-style kicks.”

“I love Rhodes sounds so I’ve been using Scarbee’s EP-88S, which is the largest Rhodes sample library. For synths, I use a lot of the Arturia stuff lately. In the past, I used DCAM Synth Squad from FXpansion. I like soft synths that give you a lot of modulation possibilities.”

“There’s a great Oberheim emulation called OP-X I use quite a lot as well, and also Massive X, because it has such a unique sound and gives you so much control over the very fine details of the sound. I try to use fewer instruments and know them well instead of buying everything that’s out there just to have the latest and greatest.”

 

A Special Request to Audio Modeling: Emulating an Imaginary Instrument

 

When asked if there are any frustration points in his use of software instruments that Audio Modeling could perhaps address, Peter’s response was an interesting one.

“Before trying your SWAM instruments, I wasn’t aware how far the physical modeling technology has come. The realism of these instruments is amazing. Obviously, now that I hear what you’re doing with trumpets or strings, how about a drum or a piano? I just want to hear it applied to everything! I’m looking forward to seeing what else you’ll come up with.”

“But then again, you’re emulating something that already exists. Whatever magical engine you’re using to model real instruments, if you applied that to something crazy, something that doesn’t exist but has that same feel of realism, that would be fascinating.”

Listen to the full episode of MIDI Talk with Peter Major to hear his advice on getting started in the world of remixes and discover how he broke into the music industry.

MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K, the camera used to produce this show.


Hustling Your Way to the Top of the Music Industry by Staying True to Yourself — MIDI Talk Ep. 3 with Martin Iveson

 

In the third episode of MIDI Talk—Audio Modeling’s podcast dedicated to music and specifically the relationship between music and technology—Simone Capitani interviews world-renowned, multi-award-winning music producer, sound designer, and remixer Martin Iveson, aka Atjazz.

Martin is the founder of Atjazz Record Company and is now based in the UK. His work in the music industry started with composing music for video games. In 1996, he released his first EP, which was very well received and led to the album “That Something”, licensed in Japan by Sony/Kioon in 1998.

Since then, top vocalists such as Clara Hill, Replife, and Deborah Jordan have sought Martin’s trademark sound for their own album projects. As a remixer, his work includes collaborations with artists such as Nitin Sawhney, Bob Sinclar, Jazzanova, and Zed Bias & Omar, to name but a few.

 

A “Total Geek” in the Making

 

Atjazz’s passion for music has always been rooted in a deep fascination for technology. “I’ve had no musical training in my life apart from the odd music lesson at school when I was young,” he says. “Music has always been about feeling it out. I started to like music when I was a small kid because I loved the melodies of the cartoons I watched on TV.”

When Martin was 6 or 7 years old, his brother bought him his first Casio keyboard. His unorthodox self-taught music training started with him trying to replicate the melodies of the cartoons he loved so much. As he grew slightly older, Martin became interested in computer games. Through some clever exchanges of toys that not everyone in the family was happy about, he got his hands on his first Commodore 64.

“The music in the Commodore 64 games was brilliant. The SID chip is still used in applications and synths today because it’s a really classic sound. I remember playing a video game called Delta. The music was composed by a very famous video game composer called Rob Hubbard. He created this 4 mono-channel synth loader. You could change the beats on it, you could change the melody, the chords… I used to sit for hours making music with this thing.”

As a teenager coming from a modest family, Martin hustled his way into making enough money to buy more equipment by working as a milk boy. He first got a Commodore Amiga on which it was possible to use free software to create music. He then saved up enough money to buy an 8-bit sampler and started sampling his tapes and the radio into his Commodore Amiga to make tunes.

“My first music creations were made using samples, which still happens quite a lot today even though I’m mostly sampling myself, my synths, or my drum machines.”

“Over time, I got to know some really geeky people all over the world—mainly in Europe but also in the U.S. and in Japan. People used to put their information at the bottom of their productions. If you picked up a demo from a shop, you’d pop it in your computer to discover some cool text scrolling around on the screen with some good music. Everyone would be credited at the end of the demo and we would contact each other and send each other discs through the post. That’s how I got to know this guy from the south of England called Dan Scott really well.”

“Dan got a job at a company called Core Design. They needed an in-house musician so they called me in for an interview and I got the job. I was only 17 at the time. A month later, my boss bought a Roland JD-800, the big mothership keyboard. With that, I was able to experiment a bit more and sample myself into my Commodore Amiga. We were still making Amiga games at the time, also Atari ST games, and PC games…”

“Things grew from there. Later, I started to use MIDI and Steinberg Pro 24. I’m still using Cubase today, but it was quite a difficult transition from working with ‘everything in a box’ to using equipment and recording it to DAT. You couldn’t just reload it back up and change things a bit — everything was recorded — but things had to move on and we moved into the world of CD. A few years later, the company became much bigger. We turned around the ‘Tomb Raider’ game, which I was a sound designer on and, eventually, also a composer.”

Martin started working at Core Design in ’91, and it was while still working at that company that he decided to start his record label in ’95.

“I would work from 9:30 am to 5:30 pm, walk across town to my office where my studio was, start my work there around 6-7 pm, work until 2-3 am, go home, get about five hours of sleep, and start again the next morning. I’m just a total geek, I love my equipment, I love making sound.”

 

Learning About Musical Integrity With Charles Webster

 

It was in 1998, while working on his album under his own label, that Martin met musician and producer Charles Webster. “Charles is a very private person so I was very lucky that he invited me to the studio because no one ever got an invite.”

They quickly became friends and have been collaborating on different music projects to this day.

“When I was a few years into my career, he was the person who told me ‘Always do what you do.’ He meant to always do things from the inside rather than try to be something I’m not, or try to follow a trend.”

“The moment he told me this was an important time in my career because I think I could’ve gone down the wrong route. I could’ve started to make music that meant nothing just for the sake of being popular but Charles always told me to always stick to what I know. ‘You’re good at what you do’, he said, ‘you got here on your own, just continue doing what you’re doing.’ I respect him for that.”

“Even today when I’m making music in the studio and I get stuck in my head—when I feel I’m doing something that isn’t natural or for the wrong reasons because I’m trying to fit in or just because it’s popular—I’ll delete it and start again. I remember what he said to me so that I can let the music flow properly.”

Iveson believes this is the way to bring out the individuality and the personality of an artist.

“Many people say I have a sound. Maybe that’s because I’m constantly battling with myself to stay in my center and make music from the inside instead of constantly watching what’s going on in the scene and trying to be part of it. I think that carving out my own thing was the only way I could do it. If I ever had a mentor, it was definitely Charles Webster.”

 

SWAM Instruments Are Part of the Band

 

Simone Capitani was eager to know if Iveson thought we were going in the right direction in the development of our SWAM instruments.

“I don’t normally use plug-ins. I’m surrounded by synthesizers. I like to be able to play the notes on the synthesizers that are making the sound. I’m not one of these guys that will load the MIDI in and then flip through the plug-ins to try and find a sound that works. That’s not writing music for me, that’s guessing and hoping for the best.”

“You make such beautiful happy accidents when you’re being expressive with synthesizers. I’ve got quite a lot in the studio: Korg Prologue, Moog Matriarch, ARP 2600, lots of modular synths… So I don’t need to use plug-ins. But one thing that was always missing was orchestral instruments. I never had any sample library that was expressive enough, nothing that would give me an instant connection to the instrument like an analog synth does.”

“An instrument has many different sounds it can make if you know how to manipulate it. The way I think about SWAM is more like using a synthesizer for me because of the level of expression they allow me to have. Solo instruments never worked for me in sample libraries. SWAM instruments became part of my band.”

Listen to the full episode to hear a live demo of Martin using SWAM, discover his secret for interacting with an audience, get an inside look at his experience remixing iconic songs like ‘Giant Steps’ and St Germain’s ‘Rose Rouge’, and hear some of the stories behind his love for South African culture.

The full episode of MIDI Talk with Martin Iveson, aka Atjazz, is available as a podcast or as a video on our YouTube channel. Don’t forget to subscribe and hit the bell button to get notified of future episodes!

 


MIDI Talk Ep. 2: Dom Sigalas

A Music Producer’s Workflow, Fighting Elitism, and the Importance of Music Education With Dom Sigalas — MIDI Talk Ep. 2

 

In this second episode of MIDI Talk, a series dedicated to meeting creators, artists, and technicians from all around the world to discuss their unique musical experience and the role of technology in their profession, Simone Capitani met with music producer and composer Dom Sigalas.

Classically trained as a pianist from a young age, Dom is now based in London, UK. He has worked as a professional producer and mixing and mastering engineer for several top-level studios, as well as a sound designer for well-established audio technology companies. From film music to advertising to developing sounds and presets, Dom’s musical experience is diverse and covers a wide range of skills and technologies related to music.

 

Make Music, Not War

 

“The reason why I got into music and stopped watching sports is that I felt music was something that brought people together instead of making them fight with each other.”

“I used to be a huge football fan. I watched the games, cheered for my team, was happy when we won and angry when we lost. With my friends, we fed on this energy of competition. Music doesn’t do that. Now, if I meet someone who plays different kinds of music than me, I’m curious to know what their process is.”

It’s true that differences between musicians and artists can be a great way to connect and learn new things, but they can also create all kinds of friction between people from different musical backgrounds.

“I was with friends talking about different harmony approaches — the different approaches between jazz and classical for example — and how we hear music in our heads. Because we’re all open-minded and curious, we learned a lot from each other from this conversation. But sometimes, there are ‘camps’ forming, for example, the traditional musicians vs the tech guys.”

“Many people have to face a lot of music snobs out there. I’ve seen it. Some people don’t know how to play an instrument very well but they can create amazing stuff. Sometimes, these people are even more creative than musicians who understand music in a traditional way. But unfortunately, a lot of times one type of musician is afraid of the other. The one who doesn’t know how to play is afraid of the skills of the player, and the players are intimidated by people who are amazing creators without knowing music in a traditional way.”

“I think the people using stuff like modular synths or other technological means to create music would become better musicians if they also knew a minimum about music theory. On the other hand, traditional musicians would gain a lot by embracing the possibilities technology brings to the table.”

“This snobbery is also present in music education. I’ve visited many schools and sometimes there’s a bit of a problem with the way teachers approach technology. When I was a student at university, many traditional professors frowned upon people going into music and technology because, to them, that’s not real ‘academic’ material. They would say things like ‘oh yeah, you’re going with the buttons and knobs and stuff’ as if this meant taking some kind of shortcut.”

“Now, when I visit schools, I always say to students, ‘Look, you can go on and not use computers or tech. But if you want to get into the music industry, you’ll find it much easier if you know all this stuff.’ Kids and young people should know about all the possibilities that exist out there because that’s what gets them excited. I remember the first time I tried out presets on instruments when I was younger. I remember thinking, ‘There’s no way I’ll be able to create a preset!’ And now here I am, loving sound design and creating presets for big companies.”

 

Workflow of a Diverse Music Producer

 

“One of the great things about working in music is how diverse the work is. Even working on a different piece of music makes the day different from the previous one.”

Still, every day starts the same way. “On a normal day, I get to the studio, turn on the equipment, and check what needs to get done.”

“Sometimes, I need to produce specific lines or replicate something the client wants me to replicate. Other times, clients send just the vocals and my task is to create the song — the chords, arrangement, drums, everything. Sometimes, they want a very specific sound, for example, a sound they heard on a hit record. That’s when I wear my sound designer hat.”

“When composing, I tend to work differently if I’m working on my own music or if I’m working for a client. For a client, my goal is to do the job to the best of my abilities and as fast as I can. For my music, I don’t have this time pressure so I take the time I need to explore and experiment with more ideas. It’s a very different mindset.”

“It’s important to have reliable gear and to know your gear very well. You don’t need a lot of equipment, but whatever you have, it’s important to know it inside out. This is true for sounds and sample libraries too. Some libraries are good for one thing but bad for another. It’s important to do your research, try everything, and play. I’m a huge library geek! I have maybe 8 TB of sample libraries on my system.”

“My setup is hybrid. I have synths, sample instruments, and of course SWAM instruments. When I first saw the SWAM Violin and SWAM Brass, I was completely blown away. SWAM blends really well with orchestral sample libraries. It’s one of those things that can add that extra realism, that missing ingredient. I know very few libraries that can do solo instruments well.”

 

Instant Gratification and the Value of Good Musicians

 

“I got interested in music education because I was passionate about passing on the correct information to people getting into digital music. But at the same time, what I noticed is that sometimes people ask the most obvious questions because they’re not used to doing their own research and putting in the work.”

“We are in the age of instant gratification — we take our phones, download an app, and everything is ready… But because we didn’t really have to work for it, we never open the app again. I think good musicians are even more valuable now because they had to fight this tendency toward instant gratification everyone has nowadays. Learning an instrument is hard work and takes practice. Fewer people are willing to put in the time these days, I think.”

This episode of MIDI Talk with Dom Sigalas is available as a podcast or as a video on our YouTube channel. Don’t forget to subscribe and hit the bell button to get notified of future episodes! Upcoming guests include Martin Iveson, aka Atjazz, and Peter Major, aka OPOLOPO.

MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K, the camera used to produce this show.


Award-Winning Film Composer Talks About SWAM, Music, and Creativity

Award-Winning Film Composer John Powell Talks About How He Uses SWAM Instruments, Music, and Creativity

 

With over fifty scored motion pictures under his belt, John Powell is, to say the least, a prolific and influential film composer. His credits include Shrek (2001), Robots (2005), the second and third Ice Age films (2006 and 2009), the Happy Feet films (2006 and 2011), and the How To Train Your Dragon trilogy (2010-2019), among many others.

He has earned three Grammy nominations for his work on Happy Feet, Ferdinand, and Solo: A Star Wars Story, and his score for the first How To Train Your Dragon earned him an Academy Award nomination.

So when Emanuele Parravicini, Audio Modeling’s CTO and co-developer of the SWAM engine, received an email from Mr. John Powell’s office, the whole team got excited. It’s not every day an award-winning film composer takes the time to write to us to express his appreciation of our products!

What started as a friendly email exchange developed into an hour-long interview where we had the chance to talk to the man himself not only about SWAM but also about his perspective on music, the creative process, and the music industry.

 

Meeting the Man Behind Those Iconic Film Music Scores

 

We logged into Zoom at the appointed time. John Powell greeted us warmly, and we introduced ourselves and the company. The excitement seemed mutual as Mr. Powell settled in to answer our first and most obvious question, the one we were dying to know the answer to: how did he discover our SWAM instruments?

“Honestly, I’m not sure how I found you guys. It was definitely through the internet, maybe something I read on Apple News or some other tech-related news platform. No one told me about SWAM, that I remember. Then, I saw one of your demonstration videos on your YouTube channel.”

“That video caught my attention because, at the moment, I’m re-writing an opera I wrote 28 years ago with composer Gavin Greenaway and librettist Michael Petry. We didn’t have a big budget back then, so it was written only for fourteen instruments. I tried to program it the usual way I write but since it’s written for solo instruments, I couldn’t find good samples to work with.”

“With SWAM instruments, I can perform from the score. They’ve been particularly useful in this case because all the instruments in this piece are solo instruments.”

 

Discussing John Powell’s Use of SWAM Instruments

 

We listened to John Powell explain why he enjoys working with SWAM.

“When you’re film scoring, you’re not sketching something out on paper and handing it over to someone to have it orchestrated. As composers, we need to write every note played by every single instrument. Film composers need to become masters of every instrument they write for, or at least of the keyboard version of it. That’s something I understood working with Hans Zimmer back in ’93-’94. Because of that, I’m drawn to any sound that is very playable.”

“I hate sample libraries that are endless patches. You have to load up a patch to do this thing and then load up a patch to do that thing… Then you end up with ten different tracks. Or when you’re trying to get a result that sounds natural and performable from an articulation set and you end up cross-fading with MIDI from one type of sound to another to get the slides, portamentos, the right vibrato… It’s annoying.”
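
For readers unfamiliar with the workflow Powell is describing here, articulation crossfades are usually drawn as MIDI control-change (CC) data underneath the notes. The sketch below is a minimal illustration in Python using the open-source mido library: it writes a MIDI file in which a single held note is morphed from one sound layer to another by ramping CC1 (the mod wheel, a common crossfade control). The note, resolution, and CC assignment are illustrative assumptions, not the mapping of any particular sample library.

    # Illustrative sketch only: a held note whose timbre is crossfaded by
    # ramping CC1 (mod wheel). The CC and note choices are assumptions, not
    # any specific library's articulation mapping. Requires the mido package.
    import mido

    mid = mido.MidiFile(ticks_per_beat=480)
    track = mido.MidiTrack()
    mid.tracks.append(track)

    # Start with the crossfade control at zero, then hold middle C
    track.append(mido.Message('control_change', control=1, value=0, time=0))
    track.append(mido.Message('note_on', note=60, velocity=90, time=0))

    # Ramp CC1 from 0 to 127 across four beats; an instrument mapped to
    # crossfade on CC1 will slide from the first layer to the second
    steps = 32
    for i in range(1, steps + 1):
        track.append(mido.Message('control_change', control=1,
                                  value=round(127 * i / steps),
                                  time=480 * 4 // steps))

    track.append(mido.Message('note_off', note=60, velocity=0, time=0))
    mid.save('crossfade_sketch.mid')

Multiply that bookkeeping by every phrase and every instrument, and the technical-versus-creative tension Powell describes next becomes easy to appreciate.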

“I always loved sounds that allow me to do it all. That’s what drew me to your instruments — you seem to have gotten everything inside one performable instrument, and that’s not really possible with samples at the moment.”

“When I’m working with samples, I need to shift between a technical and a creative mindset all the time. To counter that, I set up huge auto-loads on my systems, just because I don’t want to have to go through that technical mindset when I’m in the process of creating. But with SWAM, I can stay in this creative state longer because I can play the instruments as I go.”

After the praise came time for some constructive feedback.

“There’s one thing I’m lacking in your instruments and that’s the relationship between the instrument and the room.”

At that moment, we understood why Mr. Powell had asked us for a meeting. He had some questions of his own about our technology, questions Emanuele Parravicini eagerly answered. What followed was an enthusiastic conversation between audio software experts about how to model not only the sound of an instrument but also the sound of the room it’s recorded in, and even the sound of the specific microphones used to record it.

Audio Modeling has been aware of this issue for quite some time and is actively conducting research to determine the best approach for achieving this kind of result.

“Whatever you do, I think you shouldn’t include the sound of speakers,” John said. “That’s the big problem with acoustic modeling at the moment, it always includes the sound of speakers and I think that’s a disaster. We already have speakers at the end of the chain, so it’s like having them twice.”

“Regardless of that aspect, it’s fascinating to me how you’ve developed such an unlikely level of quality that I haven’t seen before. We are so used to working with samples, and we know the problems that come with using samples, but this is such a different approach.”
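
To give a rough sense of what modeling the instrument-room relationship can involve, the sketch below shows the standard convolution-reverb technique: a dry signal is convolved with a room impulse response that captures only the space. To be clear, this is a generic textbook approach with hypothetical file names, not Audio Modeling’s method, which, as noted above, is still the subject of research.

    # Generic convolution-reverb sketch, not Audio Modeling's method.
    # Assumes two mono 16-bit WAV files (names are hypothetical): a dry,
    # close-miked instrument and a room impulse response (IR).
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    sr_dry, dry = wavfile.read('dry_trumpet.wav')      # hypothetical dry signal
    sr_ir, ir = wavfile.read('concert_hall_ir.wav')    # hypothetical room IR
    assert sr_dry == sr_ir, 'resample first if the sample rates differ'

    # Work in floating point so the convolution cannot overflow integer samples
    dry = dry.astype(np.float64) / 32768.0
    ir = ir.astype(np.float64) / 32768.0

    # FFT-based convolution: every dry sample excites the room's measured response
    wet = fftconvolve(dry, ir)

    # Blend dry and wet signals, then normalize to avoid clipping on export
    mix = 0.6 * np.pad(dry, (0, len(wet) - len(dry))) + 0.4 * wet
    mix /= np.max(np.abs(mix))

    wavfile.write('trumpet_in_hall.wav', sr_dry, (mix * 32767).astype(np.int16))

The detail that matters for Powell’s point is what the impulse response contains: the room and, ideally, the microphones, but never a playback chain, so that speakers do not end up in the signal twice.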

 

The Importance of Aligning the Use of Technology With a Human Connection in Music Interpretation

 

We were curious to know whether working with SWAM would influence the way he writes music in the future.

“Admittedly, I’m not approaching these instruments from a creative perspective. I’m looking for accuracy so that I can create a performance-based audio representation of the score.”

“It’s true that when using technology, one thing that’s interesting is exploring possibilities that go away from realism. But if we decide to move away from what real instruments can do, we need to keep sight of the fundamental reasons we love these instruments in the first place and why they work for us.”

“Let me give you an example. In the original recording of ‘An American in Paris’, there’s a specific scene, a very sexy dance between the main characters. There’s one trumpet note there, just a single note that slowly does a crescendo. Many people have played that same piece beautifully since, but no one has played it quite like Uan Rasey, the trumpet player on the original recording.”

“One day, I arranged for Uan to come to my studio and sort of ‘bless’ my trumpet section, since he had taught many of them. I talked to him about that note in ‘An American in Paris’. I told him that for me, I hear everything in that note — everything I ever felt about love, sex, life, death… Everything! Just in that one note. Something about the way it changes from one thing to another and how it blossoms. Every time I hear it, I see the Universe open, and I see all human experience. I told him all this and he kind of looked at me and said, ‘You know, it was just a gig that day.’ But when I asked him what happened in the studio and how he came to play and record it this way, he said they did the first take and then the director came to him and said, ‘Listen, this needs to be the sexiest note you ever played.’”

“Now, ‘sexy’ is a difficult word to use in music, and in the end, what I got from that note is not sex, it’s much more than that. But his response was a very deep and human contact with this word and he made it blossom because he was a master of his instrument.”

“It’s not just the note, it’s also the arrangement and where it goes, but that note itself, how the timbre changes, always struck me as the epitome of what musical expression is. Singers can do it, great players can do it. Gershwin wrote that single note with a crescendo and a slur over it and, like I said, others have played it magnificently, but no one has played it quite as magnificently as him, in my opinion.”

“That’s the musical and human connection I will always say is required for everything you do in music. So if you’re taking an instrument away from reality, you need to try and hold on to that. If there’s a synth note that doesn’t sound at all real but it blossoms in some way, or its timbre changes in a way that means something, that’s what I’ll always be looking for, even if the sound is unrealistic.”

How to Develop Your Own Unique Sound

 

So what gives a musician this kind of unique and recognizable sound? And how can music composers and producers achieve this kind of sound quality while using technologies like MIDI and sample libraries?

“You can make your own sounds. Hans Zimmer has always created new libraries of sounds, and I’ve done this in the past too: recording instruments, then sampling and processing them to make new sounds. For example, one reason the Bourne films sound the way they do is that I was using a particular TC Electronic FireworX box as my processor, and I chose to use a lot of bass and very specific guitars, playing them in a certain way with different tunings. But the most important part came afterward, when editing.”

“It’s the choices you make when writing music, and the technique you have, that create your sound. People ask me which sample libraries I use. Honestly, I use the same ones as everybody else! But it’s the choices I make when I’m working with these libraries that create my sound. When you work with samples, or even when you work with something like SWAM, you have the possibility to change everything. You can change the acoustics and see what happens if you go with a drier or a wetter sound. You can place instruments differently in the space and see what happens if you have instruments far away from each other that are normally close, or the other way around.”

“For example, I always loved ensembles, especially taking solo instruments and making ensembles with them. In The Call of the Wild, I had an ensemble of fourteen banjos. It’s not a sound you usually hear. That’s one way of developing your own sound, to just think and do things differently in some way. It sounds cliché, but it’s difficult to do and for me, it comes down to my fascination for other people’s music.”

“I have many musical obsessions I always come back to. There’s a four-bar phrase in a Vaughan Williams piece I always remember, there’s a record from Judie Tzuke that has a string arrangement I always remember, and then there’s my own experience of playing certain pieces that I always remember. You go through your whole life, you hear music, and it does certain things to you, depending on what is happening in your life at that moment. Or you simply remember that music because it sparks something in you.”

“All these connections music makes, they are like emotional diamonds buried inside us that we carry around. Then when you write music, really all you do is start pulling them out and using them. I think artists and musicians with very unique sounds simply carry around slightly weirder diamonds or they pick diamonds that are very different.”

“If you can remember all of Star Wars’ music and write like that, it’s great, but it’s not very useful to anybody else in the world. We need people to write like Star Wars, but not. We need something new that doesn’t sound exactly like Star Wars. A person might try to write like Star Wars but can’t, so the result comes out as something different but equally wonderful.”

“That missing accuracy is important. You need the memory of these emotional diamonds, but you also need to forget the details of what you heard so it can become whatever feels right at the moment. Some people are very accurate in recording those emotional moments and it just comes out exactly like the thing they are remembering. That’s fine, but it doesn’t move anything forward; people won’t see it as anything new.”

“In my case, at my best, I’m remembering the strengths of the emotions but completely forgetting the details of how the piece was played or written. When you fight to achieve the same kind of emotion while forgetting the details of how it was done originally, the result becomes something else. Then it has a chance of being unique.”

“That’s because, in the end, if I’m remembering Ravel, I can’t forget I also love Esquivel or Timbaland beats, but Vaughan Williams and Ravel never heard those things. It makes no sense to me to leave out some of these influences when I’m trying to reach that same emotion. Why would I do that? Because they are different? They’re not different, they feel the same to me. I get the same emotion from one as from the other, so why not use both? Then, if I’m lucky, it becomes something different but with a strong feeling to it that people can recognize.”

 

Finding Joy in the Process of Music Creation

 

The art of music composition is one thing, but breaking into the industry is something else entirely. We asked Mr. Powell for his advice to anyone who dreams of one day being where he is.

“In many ways, I had as much pleasure working on my first advert, many years ago back in the UK, as I had working on How To Train Your Dragon, even though that advert was terrible. I probably made something like $150 for the demo and $450 for the final. It wasn’t anything great at all, but what I liked was pursuing the idea of making it work, of making it right, and I enjoyed the act of creation more than the effect my work had on people. If you don’t enjoy the creation process, it’s very hard to balance that against the amount of rejection you’ll get.”

“For some, it’s worse. Take actors, for example. They can get rejected from the moment they walk into a room, just because of their looks. At least for composers, looks are not that important… Even though, admittedly, we all look like this,” he said, pointing at himself. “We’re all white men. It’s embarrassing and I really hope this will change soon.”

“If you enjoy writing, you’ll do it again. And if you get rejected, you’ll figure out why and how you can write better. Tenacity is the key to that.”

“Making money in this business is hell, and it’s worse now than it ever was because so many people can do it. Technology has made it possible for anyone to write and record music. I squeezed in at the end of a period when it was very much about people’s talents. I don’t think I had as much talent as many of the people surrounding me, but I had a talent for looking into technology to find different ways to do things. It’s not much of a vision, it’s not a Philip Glass kind of vision, but it was enough for me to keep pursuing technology and its use along with a musical understanding of things.”

“I got through in the industry because I was lucky and tenacious, and I was tenacious because I enjoyed the process. Not the process of people liking my work, even though it’s wonderful when they do; the process of me liking my work was more important.”

“If I had anything to say to anybody, it’s that if you don’t enjoy the process all the time, and if it’s not all about the moment of creation rather than the moment of presentation, you shouldn’t do it. It’s going to be too painful. It’s difficult, you’ll get rejected a lot, and the stress is huge. Obviously, compared to many other jobs, real jobs, it’s much easier. It’s the hardest ‘easy job’ in the world in my opinion.”

 

What’s Coming Next

 

We didn’t want to end this interview without hearing a bit more about the opera John is working on at the moment.

“The opera is written for four soloists, a women’s chorus, a small orchestra, and also a larger orchestra for the recording. It’s going to be a bit on the shorter side, about 65 minutes long.”

Grinning, he said, “I like to think of it as an ‘operatainment’ because it’s perhaps too much fun to be an opera.”

“We are planning to record it in September and release a CD by the end of the year. But as far as performing it goes, classical music has a very long process. So even though we’ll use the recording to promote the opera, find places to perform it, and approach different opera groups, and even if people love it, we probably won’t be able to perform it before 2023.”

His last words to us were “Long live the nerds!”, to which we enthusiastically cheered. Yes, Mr. Powell, we are indeed marching proudly toward the future, and we are all looking forward to seeing how you use our technology and to hearing all the incredible music you’ll create in the years to come.

 

Written by: Mynah Marie
Photo credits: Rebecca Morellato


MIDI Talk Ep. 1: Claudio Passavanti

Thoughts on Succeeding in the Music Industry with Claudio Passavanti — MIDI Talk Ep. 1

 

MIDI Talk is a new podcast and video series about making music, music creation, and specifically the relationship between music and technology.

In this series, Simone Capitani, UX/UI designer of Camelot Pro and marketing expert for Audio Modeling, chats with creators, artists, and technicians from all around the world to discuss their unique musical experiences and the role technology plays in their professions.

 

Introducing Our First Guest — Claudio Passavanti

 

Claudio Passavanti is a British-Italian pianist, music producer, and digital entrepreneur, known as Sunlightsquare and Doctor Mix.

In his early twenties, Claudio moved to Los Angeles, California to study Orchestral Composing and Arranging at The Grove School Of Music.

Back in Italy after completing his studies and collaborating with many prestigious musicians in the L.A. music scene, he played keyboards, produced albums, and conducted orchestras for artists such as Zucchero, Lucio Dalla, Pino Daniele, Alex Britti, Niccolò Fabi, Gabin, Bryan Adams (the Italian leg of his tour), Ambra Angiolini, and many others.

“My interest in music started when I was really young. My mom says that, apparently, I was already playing with my brother’s melodica at the age of three. Then, around the age of 8 or 9, I was experimenting with record players. I think I was around 8 when I had my first ‘multi-track’ experience using my dad’s cassette tape recorder.”

“When I was a kid, I was always fascinated with computers”, explains Claudio. “I remember when I got my first computer, a Commodore 64, that’s what really got me started in technology.”

 

Clash of Cultures Leading to a Unique Sound

 

“In Italy, I hit a point in my career when I got tired of being put in this box of a pop musician. The music wasn’t fulfilling me anymore, I wanted to discover my own sound. So I moved to London when I turned 30.”

Claudio goes on to explain the difficulties of reinventing himself in a new country, one where he didn’t have the professional recognition he already had in Italy and where he had everything to learn.

“I remember going to jam sessions and feeling so out of place because I didn’t know the songs everyone else knew.”

There’s an incredible opportunity for growth and learning when someone is able to approach the unknown with humility, an open mind, and the drive to learn and work hard. Claudio shares how the musicians he met in the London music scene completely blew his mind with their way of creating electronic music. By discovering new styles, genres, and production techniques, he developed the very unique sound for which he later became recognized.

He became passionate about creating electronic music people would want to dance to, and that became the inspiration for Sunlightsquare.

 

Advice From the Experts on Making Your Place in the Music Business

 

Reminiscing about the challenges of adapting to a new culture and carving out a place in a new industry brought up interesting and constructive thoughts on the topic of “making it” in the music business.

“Too many artists think that success comes from ‘being discovered’ when, in fact, most success stories happen because someone put in the energy to put themselves out there, meet people, and learn from them.”

“One of the major keys to success is being able to listen to the needs of an industry, and then apply your knowledge to a different area so that you can create a unique combination, something truly ‘you’, that the industry wants and needs.”

The full episode of MIDI Talk with Claudio Passavanti is available as a podcast or as a video on our YouTube channel. Don’t forget to subscribe and hit the bell button to get notified of future episodes! Upcoming guests include Dom Sigalas, Martin Iveson, and Peter Major.

MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K, the camera used to produce this show.

SWAM for iPhone is out now