MIDI Talk, Season 2, Episode 2: Andrew Dugros

Andrew Dugros was born in Italy, but it would be fair to say that his first language is music. Born into a musical family, Dugros started learning music at the tender age of 6, when he began studying classical piano, often taught by a cousin who has long been a music teacher.

Barely a year later, Dugros had his first encounter with music technology. “There was an electromechanical organ at my uncle’s home,” recalls Dugros. “When I went to this instrument and was able to change its sound (with the drawbars), that was the moment when I decided that technology and music was unique work. The (role) of technology in music is very important for me.”

Dugros got his first electronic instrument, a Yamaha Portasound keyboard, before he was even a teenager. “It was a little keyboard, but, for me, it was a big experience,” he smiles. In a common pattern for keyboardists, Dugros soon acquired another keyboard, and then another.

At age 15, Dugros’ journey into music technology took another major turn. “The big change was when I bought my first sequencer and learned MIDI. This day changed my life as a musician,” Dugros emphasizes. “When I was able to record a track with the keyboard, and then record many more tracks to create a song with MIDI, it was fantastic! Obviously, I continue to play the piano as my first instrument, but keyboards and synthesizers are very important in my life.”

The first years of the new millennium found Dugros digging into all these areas by becoming a certified Cubase teacher, studying at Milan’s famed CPM Music Institute, founding AMM (a music and technology academy you can find at http://www.accademiamusicamodernaaosta.it/), and, finally, becoming an engineer at CPM and starting his own recording studio.

Beyond all of his studio work, Dugros has also maintained an active performing career across a broad range of musical genres, including jazz, pop, gospel (Dugros is also an accomplished vocalist, often singing lead), and folk. As if that were not challenge enough, Dugros has also performed solo piano concerts.

His studio work has similarly crossed genre barriers; he has produced more than 50 albums for artists in classical, pop, folk, and rock.

This broad range of experience taught Dugros key lessons about what the true priorities of a musician and composer should be. With acoustic instruments, the heart of mastering them is endless practicing of fundamentals. With software, that idea translates into having a good workflow.

“We are musicians, not programmers or engineers,” Dugros begins, “We want to play, we want to compose, we want to arrange. When the computer has a problem, this is a situation in which a musician is not happy. So the workflow with a computer is very important.”

During the years when Dugros was really coming of age in music production, the quantity of available musical tools, software and hardware, was exploding. Many musicians loaded their DAWs up with endlessly scrolling menus of software instruments, but Dugros realized that was not a viable path to true mastery.

“I say to my students that it is not important to have a million applications or a million effects in the computer. You must have the instruments that you really use, and you must be able to use them. I have seen students of mine – and other musicians – that have millions of instruments but are not able to use them.

“I have instruments from four or five, maybe six manufacturers. I’ve made all of my productions and my live shows using products from only these five or six developers, plus two or three other synthesizers. If I can use an instrument very well, I can make things that are impossible to think.

“Now, I am curious, so when I see a new product, I’m curious to see it and try it. On my YouTube channel, I try many products, but the number of products I use for real production is very restricted.”

The biggest reason Dugros is so concerned with having full control over his instruments is that he believes such control is necessary for expressivity, which is his number one objective.

“You can play the correct notes and you can play the correct chords, but if you don’t play expressively, you don’t play. For me, expressivity is all (that matters) in music. It’s not necessary to play 100 notes, because if you play the correct notes at the correct time with the correct expressivity, the music transmits an emotion, it transmits a sensation.

“This is one of the most important things in all of music, in everything that I do in the studio, in what I do in life, and in what I teach to my students.”

This emphasis on expressivity is what captivated Dugros when he first tried Audio Modeling’s modeled instruments, which enable expressivity through real-time control by modeling the behavior of acoustic instruments. “When I was younger, playing a saxophone sound from a keyboard was just a dream because the sound was very unnatural, very ridiculous in some cases. But when I played SWAM Saxophone for the first time, that dream became reality; I could realize the dream.”

But saying this brings Dugros right back to his argument for truly knowing your instrument in order to be able to get music out of it, because the true sound of each instrument is born largely from the idiomatic differences imposed by its physical mechanisms. “When you play a saxophone or an organ or an electric or acoustic piano, you play them in different ways. You can’t play every instrument in the same way, and expressivity changes with the sound that you use. You need to change finger position, for example, and you need to change your brain and think that you are that musician. When I play a saxophone or trumpet sound, I think like I am a saxophone or trumpet player. You need to change your mind and your technique, because (playing one of these other instruments) is not the same as simply having your fingers on a keyboard.”

While Dugros insists that both the instrument and one’s approach to it must be expressive, both of those are simply means to an end. In the final analysis, it is the approach to the music itself that determines expressivity, and that, says Dugros, comes down to immersion in the music and being in touch with your own musical voice.

“It is important to enter into the soul of the song. It’s not only about the current note or chord, you need to take your mind into the song. Expressivity is unique, so when you play a song, your version is different than any other version in the world because you are a unique person.”

Of course, this is true whether in the studio or performing live. “When you go on the stage, you’re not just a musician, you are saying something emotional. That’s one of the most important things a musician can do.”

When technology is mastered and under control, it can enable great expressivity in performance, but when things go awry, it can destroy the whole message the performer is trying to convey. Many performances today employing a lot of music technology are heavily mapped out and programmed, and Dugros feels that preparing well and then sticking with the plan is the key to everything turning out well in such situations.

“From my experience, it is very important to prepare and then not change anything. If you get on stage and begin improvising, it’s a big mistake. It is vital to go onstage and not change anything that you have prepared. That is important for the MIDI programming, but also for the music. So, when I go onstage, I don’t change a note that I have prepared.”

It was this notion that caused Dugros to put Audio Modeling’s Camelot performance environment at the center of his performance system. “It is a great frustration when you have a big setup and go to play live. It is very difficult to play and change to the correct sound at the right moment in the song. I discovered that Camelot Pro is the solution for all of these problems, both live and in the studio.”

Certainly, there are always situations that force changes, and a performer needs to be prepared to “wing it” when necessary. Dugros remembers an incident when his guitarist suddenly had a problem, which Dugros had to cover by jumping into a solo. The lesson: always have one or two things you can grab on the fly, if needed.

That is not to say that Dugros doesn’t believe in improvisation; in fact, improvising is key to his compositional method. “When I compose, I start with an improvisation. I start to play, and an idea might be right for a song. So improvisation is very important.”

The last half-century has seen synthesizers and music technology take hold and then dominate nearly every genre of music and every situation in which music is produced or heard. In all that time, the determinant of technology’s success has rested on a musician’s ability to take sophisticated instruments requiring some degree of electronic or computer savvy and get sounds out of them that evoke feelings in people. Andrew Dugros has fully immersed himself in this effort, recognizing that the mysterious art of extracting human warmth from cold electron flows and digital bits is the real objective in working with music technology. And that can only be accomplished by diving deep and mastering the technology.

Andrew Dugros has learned well that, as the song says, “You gotta get into it if you wanna get out of it.”
Omri’s Touring Rig: SWAM, Camelot, and Lots of Talent!
A few months back, we spoke with saxophonist/EWI player Omri Abramov for Audio Modeling’s “MIDI Talk” video podcast series. Recently, we caught up with Omri again, as he passed through Italy playing behind Israeli singer Noa on her summer 2022 tour of Europe, to talk about how he built his performing rig for the tour around our SWAM instruments and Camelot live performance software.

Omri’s setup provides a great deal of flexibility from a relatively simple configuration. He gave us all of the juicy details, which we pass on to you with a thorough explanation and high-quality graphics, in this video:

Find out how a top musician puts SWAM and Camelot to work in performance!

MIDI Talk, Season 2, Episode 1: Omri Abramov

Omri Abramov (https://abramovmusic.com/) is a musician of his times: global in scope, drawing on tradition and breaking new ground, playing acoustic instruments and exploring the boundless possibilities of technology, and all in the service of his expression of the human experience.

Born in Israel, Abramov took up his first instrument, the recorder (a first musical experience for many children), at the tender age of five. He moved through the recorder family, on to flute, and then, in high school, picked up the saxophone. “There’s something that has always clicked for me with the movement of fingering the wind instrument,” Abramov observes.

Abramov studied jazz at Tel Aviv’s Israel Conservatory of Music and made tenor saxophone his primary instrument. Receiving designation as an Outstanding Musician from the Israel Defense Forces, he spent his time in the military playing shows all over Israel, including multiple appearances at the Tel Aviv and Red Sea Jazz Festivals.

Since being discharged, Abramov has played in the Idan Raichel Project, co-founded jazz-fusion band Niogi with keyboardist Dr. Guy Shkolnik, relocated to Berlin for better access to Europe’s jazz community, joined Sistanaglia (an ethnic/jazz group of Israeli and Iranian musicians with a message of peace), and formed Hovercraft, a New York City-based trio. Add in composing and producing, and Omri Abramov is a pretty busy dude.

Omri at Play

Today, Abramov plays a lot of tenor and soprano saxophone, but also has taken a deep dive into electronic wind instruments, playing both the Akai EWI (Electronic Wind Instrument – https://www.akaipro.com/products/ewi-series) and Aodyo Sylphyo (https://www.aodyo.com/?langue=en). Few outside of wind players are familiar with these instruments.

“(Wind controllers) deal with electronic signals, but have this humanizing effect produced through a sensor that is based on breath,” explains Abramov. “This sensor is converted to MIDI data, like expression (controller). You can assign this to different MIDI parameters and humanize virtual instruments or analog synthesizers, or whatever you would like to control.”
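
To make that mapping concrete, here is a minimal sketch of the kind of translation a breath sensor performs, assuming a normalized pressure reading and a simple power-law response curve. The gamma value and the choice of CC2 (the MIDI Breath Controller) are illustrative assumptions, not the spec of any particular instrument.

```python
def breath_to_cc(pressure: float, gamma: float = 0.6) -> int:
    """Map a normalized breath-pressure reading (0.0-1.0) to a 7-bit MIDI value.

    gamma < 1.0 boosts the low-breath region so soft playing still speaks;
    gamma > 1.0 compresses it. 0.6 is an arbitrary starting point.
    """
    pressure = min(max(pressure, 0.0), 1.0)  # clamp out sensor noise
    return round((pressure ** gamma) * 127)

# Hypothetical stream of sensor readings turned into CC2 (Breath) values.
for p in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"pressure {p:.1f} -> CC2 {breath_to_cc(p)}")
```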

The difference between this and playing an acoustic instrument is not always easy to grasp. “People often ask, ‘You are playing an electric saxophone?’ It’s not electric saxophone, it’s an instrument in itself,” Abramov clarifies. “You have to really dig into it and explore it, and then the possibilities are endless.”

When asked which is his favorite between tenor sax, soprano sax, EWI, and Sylphyo, Abramov glances skyward as he ponders the question. After a moment, he realizes the reason for his hesitation. “I don’t have a favorite instrument,” he begins, “because everything is connected to what (the instrument’s) functionality is in a specific situation.” Further, the experience of playing each instrument impacts his playing on the others. “The saxophone playing benefits my playing on the EWI and vice versa. They all contribute to reaching my vision.”

The Deep Dive

“My relationship with technology started about 13 years ago, in my old jazz fusion band, Niogi,” says Abramov. “That’s where I got into sound synthesis through playing the EWI, getting a bit out of the traditional acoustic zone that I was in.

“I started doing music of my own in the line of fusion and sound design, and thought, ‘I’m playing all that uptempo jazz on my sax, it will be easy: I’ll just go to the store and pick up the EWI and start shredding.’ And I couldn’t play a note because it was so different. Still, I was brave enough to get one home, sit with it and practice a lot.

“I got a MacBook and a version of Apple Logic, and the first synthesizer I used was the ES-2 that came with Logic. Then I went to Spectrasonics Omnisphere, and explored sound design combinations with samples and stuff. I’ve also worked in the area of plugin development, like my connection with Audio Modeling, and with Aodyo, the manufacturer of the Sylphyo.

“Each of these requires me to dive deeper into its own zone, and then the combinations, for example the combinations you can achieve with Sylphyo and its wave controller controlling the parameters you can assign in the SWAM instruments. In the new SWAM v3 especially, what you can assign is amazing.”

Drawing Expressivity Out of Technology

Having spent years studying playing techniques on saxophone and learning how to employ them expressively, Abramov is essentially repeating that journey with electronic instruments.

“There’s definitely (challenges in technology) I had to find a way around. For example, vibrato. There’s a great vibrato response in the parameters on the SWAM instruments, and on the EWI, you can get vibrato through biting the mouthpiece. I tried that a bit and ruined a few mouthpieces because my bite is apparently too strong. I started using pitch bend, and it felt more like a natural response. The EWI has a pitch bend up on the upper side of the thumb and pitch bend down on the bottom side. I use many upward pitch bends, and I do it with my thumb. Naturally searching for that feeling from the sax, I found this thing and it started to be part of my playing.

“Maybe that is something personal to myself, I found my way to do it. Many EWI players ask me, ‘oh, you actually use the thumb to do vibrato?’ Yeah, but it’s a tiny thing, because if you do it too much, it’s like oowahoowahoowah. But that’s the beauty of music: you search for this feeling, you suddenly create this thing that maybe if you were coming from the outside and tried to analyze it like a computer, you’d be like, “ah, no, that’s not the way to do it,” but then you put a person in this process trying to reach for something from the heart, and it creates something.”

The other example Abramov cites is the importance of dialing in the velocity response of the sound source, which is an iterative process involving small adjustments on the software instrument, then adjustments to the controller, and back and forth until it feels natural to play. “It’s not really like I look for what resembles the saxophone, because if I play a trumpet sound, or more than that, a synth combined with a trumpet sound, that’s a new thing. I’m just looking for how it will have a soul, how it will have a character that you can hear someone behind it playing it like it was something natural in the room.

“What we’re trying to do is give technology and electronic instruments and software instruments the human element that keeps them so exciting for years without becoming dated. Acoustic instruments are never getting old. Synths, usually the ones that are flashy and super hip right now, in 20 years will not be hip at all, because fashions change. But things that are more subtle and grounded, that have soul inside and feel more natural–it is more likely they will endure the test of time.”

Swimming in SWAM

Abramov discovered Audio Modeling’s SWAM instruments at a couple of NAMM (National Association of Music Merchants) trade shows. “The first time was at NAMM 2019,” Abramov recounts. “ I connected my EWI to one of the SWAM instruments, and was blown away. Before that, I was doing stuff that was more connected to the internal EWI sounds. But then I used the EWI as a controller for this amazing SWAM sound engine, and I was really intrigued. To be frank, it changed my views about virtual instruments and what you can do with modeling synthesis to emulate an actual instrument.

“Even now, I love taking several SWAM instruments and combining them together to make new sounds. These possibilities hit me right away. I asked, ‘Can I combine this clarinet sound with the flute sound that you just played?’ And (the Audio Modeling person) said, ‘Sure,’ and then he set up two sound engines and put the sounds together, and it was so fun to play with that I was hooked. There’s one big video called SWAMMED (https://www.youtube.com/watch?v=OocavJfNiv0) where I did an orchestra of wind instruments. It was a whole process. I was diving more and more deeply into it.”

Abramov’s interest in Camelot had just as much impact on him. “Camelot is really inviting, that would be the word for it. It’s not messy,” asserts Abramov. “The most crucial things, like volume control, are really easy to approach in Camelot. I can create all my sound combinations and give them different effects, run them through different processing–even use different controllers–see the level of each of those layers (containing the various instruments) and what processors they are running through. Perhaps 50 percent of my time is spent creating those combinations, that’s a lot. Camelot became my go-to for that purpose. My iPad works hard, and Camelot works hard.”

Abramov’s attention to features that aid his ability to play expressively has led him to focus heavily in Camelot on one thing: the response curves that translate controller response into sound source parameter response. Camelot provides powerful tools for shaping these curves to match the style of a player or the characteristics of a controller to the way that a sound changes. “I use the SWAM MIDI Learn function to assign different parameters, then I modify the curves,” he details.

“I use that for things like opening a filter. If I rotate the Sylphyo one way or the other, it opens the filter in different ways. What I like to do is apply just a bit of filtering; even on SWAM Trumpet, I’ll put an external filter on it. The filter is not 100 percent open, but almost, and every small movement I do opens it a little bit. It’s like I’m trying to emulate the feeling of what happens when the trumpeter is moving his head and changes the angle of the lip.

“It’s a territory I’ve started to explore relatively recently, but I feel like there’s a lot there, because I didn’t realize how much expressivity happens from the actual movement of the player. I mean the small gestures you do when you’re excited or you move for a higher or lower register. The aspect of movement is one thing I think Aodyo did great in the Sylphyo, with its movement sensor. When you approach it in a delicate way, the small gestures in the curves, in the volume combinations between layers, in the movement, that is where the magic is. I can spend hours on that.”
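
Camelot edits these curves graphically, but the underlying idea, a controller value passed through a shaped response on its way to a parameter, can be sketched as a piecewise-linear breakpoint curve. The breakpoints below are invented to mirror Abramov’s “filter almost open, small movements finish the job” example; this is a sketch of the concept, not Camelot’s internal implementation.

```python
from bisect import bisect_right

def apply_curve(value: int, points: list[tuple[int, int]]) -> int:
    """Pass a 7-bit controller value through a piecewise-linear response curve.

    points: (input, output) breakpoints sorted by input, spanning 0-127.
    """
    xs = [x for x, _ in points]
    i = bisect_right(xs, value) - 1
    if i >= len(points) - 1:
        return points[-1][1]
    (x0, y0), (x1, y1) = points[i], points[i + 1]
    return round(y0 + (y1 - y0) * (value - x0) / (x1 - x0))

# Invented curve: the filter sits almost open (96 of 127), and small
# controller motions nudge it the rest of the way to fully open.
curve = [(0, 96), (64, 110), (127, 127)]
print([apply_curve(v, curve) for v in (0, 32, 64, 96, 127)])
```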

The Never-Ending Story

Omri Abramov has his nimble fingers into many musical pies, but while maintaining his deep love of the saxophone, he is clearly transfixed by the unexplored territory that music technology lays before him. He still views himself as being barely past the starting line.

“All this is a constant search for new ways of expressing myself through technology. Of course music is the engine, but technology is the tool. This is a constant journey since (starting with the EWI). I’m trying to get deeper and deeper. I feel like I’m just scratching the surface.”

MIDI Talk 10: Whynot Jansveld Takes Tech On Tour

Making a living in music is a hustle, but diversification strengthens your ability to ride the winds of change that so often sweep through the industry. Whynot Jansveld is a case in point. As a bassist, Jansveld has toured with far more artists–well-known to obscure–than there is space in this article to name, but we’ll throw in a few: The Wallflowers, Richard Marx, Butch Walker, Natasha Bedingfield, Gavin DeGraw, Sara Bareilles…OK, we better stop there if we want to get to his words. You can hear the entire conversation here.

He has appeared on both daytime and late night talk shows, and worked with numerous producers of note. But his bass career is not his only one.

Jansveld has also built himself secondary careers as a composer, particularly writing a lot for Founder Music, a stock music library, and as a mastering engineer, many of his clients being the same folks for whom he plays bass.

A native of The Netherlands, Jansveld emigrated to the US in his 20s to attend Berklee College of Music in Boston. “I grew up on a lot of rock and pop music on the radio, and a lot of that came from (the US). Ultimately, that’s why I wanted to explore it and became a professional musician here,” recalls Jansveld.

After coming to Berklee to do one semester and graduating after spending three years there instead, Jansveld slipped down the road to New York, where he worked for 16 years before relocating to Los Angeles 10 years ago.

Tech On Tour

Jansveld’s introduction to music technology came at age 18, when he traveled for a year playing bass with Up With People, a US nonprofit organization founded in 1968 to foster cross-cultural understanding amongst young adults through travel, performing arts, and community service. The troupe traveled with early digital music technology: a Roland MC Series sequencer and a Yamaha RX5 drum machine. Jansveld took the bait. “I just started messing around (with the sequencer and drum machine) in our free time. I wanted to learn how that all worked and I thought I could somehow make something out of it. And I did.”

At Berklee, Jansveld took the next step when a bandmate sold him a Macintosh SE computer loaded with Opcode Systems Vision, a very early (and, in fact, visionary) MIDI sequencer program.

Today, Jansveld’s primary gig remains touring as a bass player, which puts a lot of his emphasis on music technology on mobile systems. Powerful devices in small, robust packages are valuable to him. His heaviest use of music technology on the road is for composing and mastering, but some of it naturally finds its way onstage, as well.

Jansveld’s mobile technology use is mostly behind the scenes. At his level of touring, musicians are expected to arrive at rehearsals for a tour already knowing all of the material, so Jansveld often needs to create song charts while travelling.

“It used to be that you’d have a stack of papers (of charts), but now, I write all my charts on my iPad with an Apple Pencil.” This works for Jansveld because he doesn’t need to be hands-on with his instrument in order to transcribe. “I think I’m a little bit different than most musicians, in that I almost never touch a bass guitar while I’m preparing for any of this, even up to the point where I get to the rehearsal. I do it all in my head. I get a big kick out of being on a five-hour flight and transcribing a whole set of tunes. I have my in-ears (monitors) plugged into the iPad, and three-quarters of my (iPad) screen is where I write my chart, and the little quarter of a screen is my music program. I’m playing the music, rewinding a little bit, and writing as I go along.”

Jansveld also carries a mobile production rig for composing on the road. “I have a laptop and a little bag with an Apogee Jam (USB guitar interface for iPad), an Apogee mic, some hard drives, and a couple of cables, and that’s it. I can work in my studio at home, disconnect everything, close the laptop, put it in a bag and take it on the road. (I) open it up in a coffee shop, and everything that I was working on is there and I can keep working on stuff.”

Mastering is more of a challenge. “(That), obviously, is a little hard to do on the road because you are on headphones. I try to avoid doing it on the road unless somebody needs something quickly.”

The Show Must Go On

Jansveld sees the impact of technology in support infrastructure for performances, as well, especially in the areas of monitoring and show control.

“Having in-ears, you can have a beautiful mix, but it does feel like you’re cut off from the world,” he laments. “Sensaphonics have this technology where the in-ears also have microphones on them and you can mix the sound of the microphones with the board feed coming from the monitor side of things.

“It’s a little bit of a hassle to deal with, but at the time (I got them), it was worth it to me to do, because (I could) feel the room. I can’t overstate how important that is to me, because I’m not just there to play the notes, I’m there to perform for sometimes an incredibly huge room with a lot of people in it, and I want a (strong sense of) how that feels to me and how that makes me play and perform.” Hearing the “outside world” also improves Jansveld’s onstage experience, because “even just walking towards the drums, the drums get louder.” Onstage communication is improved, as well. “If the singer walks up to you and says something while you have normal in-ears, you just nod and hope it wasn’t something important.

“I now use ACS in-ear monitors instead of Sensaphonics, because they came up with a system that also lets in ambient sound from around you, but instead of using microphones and a second belt pack, it simply uses a vent with a filter (like a custom earplug) that keeps the seal intact.”

Digital mixers can store in-ear monitor mixes as files that can be ported to another mixer of the same type. “That is super, super helpful if you’re doing a run of a few weeks, because every time you get to sound check, it’s already been mixed from the night before. It’s an incredible timesaver because, (without that,) you spend a lot of time on, ‘OK, kick..more kick. OK, snare…more snare,’ and it seems incredibly repetitive for no reason.”

Show automation is another area where Jansveld sees the effect of technology on touring. “Currently, I’m touring with the Wallflowers, and before that was Butch Walker, and both of those were really rock shows,” he points out. “Changing something up was just a matter of changing up the setlist.

“(On) some tours, everything is run by Ableton Live including the lights and all of that kind of stuff, and it takes a lot more to change things up because somebody has to go in and reprogram stuff. But for a band that doesn’t have a lot of money to spend on the tour and wants to make an impact, it looks incredible, because everything is timed. The chorus hits and the lights go crazy and come back down on the verse.”

Jansveld also sees greater reliability than existed before in the ability to play backing tracks, another highly automated task.

Mastering Kept Simple

“Never in a million years growing up did I think that mastering was something I wanted to do, or that I had the skills or ears to do,” Jansveld muses. “I ended up doing it four or five years ago simply because friends of mine had some new tracks mastered and were pretty unhappy with how it sounded. I had started composing music, mostly for commercial stuff, and had used those (same) tools to make my stuff sound as good as I thought it could be, so I just told him, ‘Why don’t you send me your tracks and I’ll see if I can beat (the other mastering efforts).’ That led to a whole bunch of other stuff with him and people that he recommended me to, and it took off from there.

“I don’t do anything special and I don’t have any tools that you or I or anybody else don’t have access to. But I take the time, and I pay attention, and I trust my ears enough and my experience in music enough to know what I want to hear. I don’t start altering things just because someone is paying me money to master it. If the mix sounds great, I make it louder but I keep it dynamic. It’s a simple concept, but apparently it goes wrong often enough that there’s room for me to do this.”

Jansveld tries not to overcomplicate things in mastering. “If I had to put it very, very simply: ‘nice and loud’ still works all of the time. It kind of works everywhere: on CD, on vinyl, it works for MP3, it works coming out of my phone. It sounds dumb, and it can’t always be that simple, but that’s my experience. You’re definitely pushing up against 0 dB (peak level) or a little bit less, you’re definitely compressing and limiting things, but with the tools we have these days, you can get things nice and loud, still have them be dynamic, and not really experience a feeling that things are squashed or pumping or just made worse.”
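
The arithmetic behind “nice and loud” but under 0 dB is simple gain staging. A minimal sketch, assuming a measured peak level and an arbitrary -0.3 dBFS safety ceiling (not a figure Jansveld cites):

```python
import math

def gain_to_ceiling(peak_dbfs: float, ceiling_dbfs: float = -0.3) -> float:
    """Linear gain that lifts a measured peak level up to a chosen ceiling."""
    return 10 ** ((ceiling_dbfs - peak_dbfs) / 20)

# A mix peaking at -6.2 dBFS, pushed toward (but not over) 0 dBFS.
g = gain_to_ceiling(-6.2)
print(f"apply x{g:.2f} linear gain ({20 * math.log10(g):+.1f} dB)")
```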

What Do You Want From Life?

When asked what he would ask from music manufacturers, Whynot Jansveld’s request is to harness more powerful technology for his bread-and-butter needs: “I would love a floor box with a bunch of switches on it that can load Audio Units plugins. Plain and simple, just a big old hefty processor, a really amazing CPU, a bunch of RAM. I have incredible stuff I can use on my computer, and I’d love to use all of it live.”

As we prepare to take our leave of Jansveld, he raises one more point on which to comment: “We haven’t talked at all about what Audio Modeling does, but it’s certainly exciting technology for me. What you guys do, where you basically create these sounds out of nothing, is pure magic for me, and there’s no limit as to how far that can go. I can’t wait to see what’s next and I’m having a lot of fun playing your instruments. I’m going to be using them a lot.”

MIDI Talk 09 – Don Lewis: Living At the Corner of Humanity and Technology

It would be an understatement to say that Don Lewis is a legend of music technology because that would not begin to cover his innovations and their impact. Hailing from Dayton, Ohio, in the heartland of the US, Lewis became interested in music as a child watching a church organist, and in high school was introduced to electronics. When things would go wrong with the electropneumatic organ at his church, he would crawl up to the organ chamber and try to fix it. “I was very inquisitive,” Lewis states. “I think that is why we are here as human beings: to be inquisitive and explore.”

Lewis’s curious nature led him to study Electronic Engineering at Alabama’s historic Tuskegee Institute (now Tuskegee University), during which time he played at rallies led by Dr. Martin Luther King. From there, Lewis became a nuclear weapons specialist in the US Air Force, then did technical and musical jobs in Denver, Colorado, until he was able to become a musician full-time.

After moving to Los Angeles, Lewis worked with the likes of Quincy Jones, Michael Jackson, and Sergio Mendes, toured opening for the Beach Boys, and played the Newport Jazz Festival at Carnegie Hall.

In the late 1970s, most of a decade before the advent of MIDI, Lewis designed and built LEO (Live Electronic Orchestra), a system that integrated more than 16 synthesizers and processors in a live performance rig that took the phrase “one man band” to an entirely new level. So new, in fact, that it frightened the Musicians Union, who declared Lewis a “National Enemy” in the 1980s and worked to stop him from being able to perform. (Lewis notes that Michael Iceberg, a white musician also doing a solo electronic orchestra act at the time, never faced the opposition that Lewis, who is black, encountered.)

But Lewis kept innovating, digging into an entirely different role: contributing to the design of new electronic musical instruments. A long, fruitful collaboration with Roland Corporation’s Ikutaro Kakehashi brought Lewis’s influence to the design of numerous drum machines and rhythm units, including the iconic TR-808, as well as numerous synthesizer keyboards.

A conversation with Lewis, however, only occasionally touches on all of these accomplishments. Mostly, he weaves together philosophy, humanism, spiritual and cosmic aspects of musicmaking, and the application of all of those to technology to create an emotional experience for listeners. Audio Modeling’s Simone Capitani took the ride and followed Lewis to every corner of his universe in a discussion so wide-ranging we can only touch on a very limited portion of it here. But you can hear more by watching the video.

And now, ladies and gentlemen, the one and only Don Lewis.

The Wiggle and the Two Songs

Lewis views music as a fundamental ingredient of the human universe. “If we look at quantum physics, in string theory, there is a wiggle,” he begins. “Even if you can’t see (something) move, it is wiggling on some scale that is beyond nano. Working with sound and music is a very macro perspective of this nano thing that’s happening, but we can control the wiggle. That’s one way to look at why music and sound are so pervasive and so innate to our being – because we are working with something on a level that everything else is made out of.”

The art of music, then, lies in how we control the wiggle. Lewis poses a simple answer to how humans accomplish this, once again going to the foundations of existence.

“I have this feeling that every human being is musical. When you came into this world, you could sing two songs. What is a song? It is a sound and a language that someone else can understand.

“Before we learned our language – Italian, English – we sang two songs: the first, crying, the second song, laughter. We sang those songs before we could say ‘mama’ or ‘papa.’ We live our lives between those two songs: one of need, one of joy. We need each other to be joyful. When we are against each other, we are not joyful.”

How can electronic instruments express such profound abstractions? A good first step, suggests Lewis, is to relate the properties of acoustic musical instruments to those of electronic instruments.

“Conventional musical instruments are mechanical synthesizers. The voice is made up of three components of subtractive synthesis: the vocal cord is the oscillator, the mouth cavity is the filter, the envelope generator is the movement of the mouth,” describes Lewis.

“Looking at those ingredients tells you that, for the most part, this is why we have extended the range of the human voice through technology. Musical instruments extended the range: the bass is lower than any bass singer could sing, and the highs are higher than any soprano could sing. So, what we have done with mechanical synthesizers, we are now (doing with electronic instruments,) creating another palette of sound and color that extends it even more, and also articulates much more.

“But the articulation is the main ingredient of how you create emotion: that crying and that laughter. Those are the two emotions that get us – the crying and the laughter. How do you get those two emotions to be represented in the envelopes and the pitches and the filtering? How do you get Louis Armstrong’s and Joe Cocker’s and Janis Joplin’s and Aretha Franklin’s and Ray Charles’s voices, that emotion?“
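
Lewis’s analogy maps directly onto the classic subtractive signal chain. Below is a minimal sketch, with a sawtooth standing in for the “vocal cord,” a one-pole low-pass for the “mouth cavity,” and a linear attack/release envelope for the “mouth movement”; all parameter values are illustrative.

```python
import math

SAMPLE_RATE = 44100

def voice(freq: float, dur: float, cutoff: float = 1200.0) -> list[float]:
    """Oscillator -> low-pass filter -> envelope: Lewis's three components."""
    n = int(SAMPLE_RATE * dur)
    # One-pole low-pass coefficient for the chosen cutoff ("mouth cavity").
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / SAMPLE_RATE)
    y, out = 0.0, []
    for i in range(n):
        saw = 2.0 * ((i * freq / SAMPLE_RATE) % 1.0) - 1.0  # "vocal cord"
        y += alpha * (saw - y)                              # filter the timbre
        # "Mouth movement": 20 ms linear attack, 100 ms linear release.
        env = min(1.0, i / (0.02 * SAMPLE_RATE), (n - i) / (0.1 * SAMPLE_RATE))
        out.append(y * env)
    return out

samples = voice(220.0, 0.5)  # half a second of a filtered, enveloped tone
```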

The Delicate Sound of Thunder

Lewis does not limit himself to acoustic instruments and the voice in trying to generate emotion. “When we compose and when we sound design, we can pull from nature, the things external to us that still make sound: the wind, the rain, the thunder, the lightning.”

Lewis has long been captivated by thunder. “One of the most exciting things that would happen to me when I was a kid in Dayton, Ohio, especially in the summertime, there would be these afternoon thunderstorms, and everybody would go hide and get scared. Me, I’m sitting there looking at the lightning and going ‘Ooh, this is so cool.’ One Sunday, my grandfather and I were walking to church from our home, going to Sunday School. In the middle of us walking, a big thunderstorm started. There was this paper factory that had this really tall chimney all the way from the ground, it must have been 100 feet high. Lightning struck that chimney, and some bricks from that chimney fell. My grandfather went ‘whooo!’ and I went ‘Wow!’ These are things that prompted me to want to make this thunder (on a synthesizer),” relates Lewis.

“But the other thing was that, on Wendy Carlos’s Sonic Seasonings album, the ‘Spring’ episode had thunderstorms going on while this music was going on. I thought the Moog synthesizer was doing that. I was working with ARP, and I said, ‘Oh, I have to learn how to do this on the ARP.’ But I come to find out, it wasn’t the Moog, somebody recorded this and integrated it and mixed it in. So I did something for the first time: ARP thunder!”

The Importance of Intention

Don Lewis is a thinking man who has seen a lot, played a lot, and done a lot. When asked what advice he would give to young musicians working with technology, he pauses, then comes back with the wisdom he has gathered over his decades making music.

“What are you going to do with the music you produce with these tools? What is your intention? Are you going to be inspiring? Is there something you want to express? If it’s going to make a mark on anybody else’s life, do you have an intention here? If you can figure out what’s going inside of you that you think needs to be expressed, then use that tool for that. Otherwise, you will be distracted forever, because there are so many other ideas going on in the world.

“I look at it this way: the first 20 years of your life (is spent) ingesting everything everybody else thinks you should have. Your education and the whole bit. The next 20 years, you try to put all of that stuff to work. Then you find out at the end of that 20 years – this is working or this is not working. And then the next 20 years, you try to erase all the things (from) the first 20 years that didn’t work for you. So, those first years are very susceptible to being ones of confusion, especially now.

“You have to be not only about you, (but) about others, and when you become about others, you are actually being more about you. How would this make a difference not only in my life, but in others’ lives? Is my protest song actually going to help the movement? Or is it going to stop the movement? In the days of civil rights, the protest songs were songs, they were things that people could sing and march. They weren’t chanting, they were singing. The difference between chanting and singing is that chanting only takes in the left side of the brain, which is only speech. Singing takes in the musical side and the language side, the creative side and the logic side. And then you get more power and you get more people participating.

“I know I would not be here if it had not been for my ancestors, who sang their way through slavery. They sang those work songs, they sang those spirituals, and that’s what helped them to survive. What helped our civil rights movement in the United States was the singing, the marching, Martin Luther King, John Lewis, and others. I met both of these people, I knew them. So I understand those rudiments, and I hope that the ability to access the creative efforts of (Audio Modeling) and others in making music accessible, can create this atmosphere; this is the atmosphere we need.

“If music disappeared from the earth, the earth will continue, but our existence may not, if we don’t become in harmony, first, with ourselves. And that’s what music is all about.”

Visit www.DonLewisLEO.com for more information about Don Lewis and the upcoming documentary about his life and career.

MIDI Talk is sponsored by Zoom, the maker of Q2n-4k, the camera used to produce this show.

MIDI Talk Episode 08: A Visit to Chase Bethea’s Interactive World

Video game composer Chase Bethea has a simple approach that guides him through the myriad complexities of his job: “I’m always thinking about the player first,” he offers. That is a perspective he comes by honestly. “I am a player first; I’ve been playing games since I was six. Most people start (learning) their first (musical) instrument at that age, but that was my instrument.”


Born in Chicago, Bethea received his higher education in Southern California, earning an Associate’s Degree in Audio Engineering from The Los Angeles Recording School, another AA in Music Theory and Composition from Moorpark College, and, finally, a BM in Media Composition from California State University Northridge.

After finishing LA Recording School in 2007, Bethea was mixing popular bands around Los Angeles and working at Music Plus TV (which became Vlaze Media) when he came to the realization that composing music for games was a viable career option. “I started writing music in 2001, so through high school I was making tracks and experimenting with everything I could possibly think of, and people would tell me, ‘(Your music) sounds like it should be in a video game.’ I didn’t understand what that was and how to tap into it, until the IT person at Music Plus TV said, ‘Hey, this sounds like it should be in Castle Crashers,’ which was a very popular game. So I thought ‘You know what, I’ve been told this for seven years. I think I’ll look into this more.’”

Since that time, Bethea has shipped music in more than 20 games, including I Can’t Escape: Darkness, Super Happy Fun Block, Aground, Cubic Climber, and Potions Please. His Cubic Climber score earned a “Noteworthy” on Destructoid.com, and in 2016, Bethea was nominated for an Outstanding Artist–Independent Composer award from VGMO (Video Game Music Online). He also worked on pre-production for Virtual Reality Company’s Jurassic World VR Expedition, and with a half dozen projects currently in progress, it’s amazing Bethea finds the time to serve on the IASIG (Interactive Audio Special Interest Group) Steering Committee, too!

Simone Capitani from Audio Modeling pinned Bethea down for an extended discussion that took a deep dive into the process of composing music for games and VR. What appears here is only a part of the conversation; for the complete exchange, point your browser to: A Visit to Chase Bethea’s Interactive World — MIDI Talk Ep. 8.


From Fruit to Flux

Bethea has used technology since he started writing music, working in Image-Line Software’s FruityLoops (which morphed into FL Studio) for years before eventually migrating to his current primary composing tool, Steinberg Cubase. His first real exposure to the requirements of composing music for games came when he was contracted to provide music for Tim Karwoski’s Electron Flux, a game for Android devices. There were many lessons to be learned, Bethea recalls, including “understanding what loops were and how they were (used), understanding the limitations of the device (on which the game would be played), and understanding how much your music is going to be (data) compressed.” He learned to generate his finished content at a high resolution, so that it would survive the often brutal bit rate reduction to its delivery format with at least a shred of fidelity. And then there was the issue of audio file formats.

“MP3s do not loop well in games; they have a gap,” Bethea explains, “so if you were to send those (to the developer), it would be jarring for the player.” (This is because MP3s encode based on blocks of data that rarely coincide with a given musical tempo, making precise looping impractical.) “But you can’t send WAV files, either, they’re way too big. I wasn’t using OGG files just yet, so, at the time, what I had to do was figure out a way to do a different version of the WAV. I was natively compressing the best way I could. Obviously, it wasn’t the best utilization, but it worked.”
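
One crude way to see, and partially repair, the gap Bethea describes: once an MP3 loop has been decoded to raw samples, the encoder’s padding shows up as near-silence at the head of the buffer. The sketch below assumes NumPy and an already-decoded mono float array; the real fix, as he notes, is delivering a gapless format such as OGG (or WAV where size permits).

```python
import numpy as np  # assumption: the loop is already decoded to float samples

def trim_encoder_padding(samples: np.ndarray, threshold: float = 1e-4) -> np.ndarray:
    """Cut the near-silent padding an MP3 encoder leaves at the head of a loop."""
    audible = np.abs(samples) > threshold
    if not audible.any():
        return samples  # all silence: nothing sensible to trim
    return samples[np.argmax(audible):]  # argmax returns the first True index
```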


We Control the Vertical and the Horizontal

As a composer for interactive media, Bethea views his work through an entirely different lens than composers working in linear media like film or TV. “You know where a movie is going to go. We design the game, but we never know what the player is going to do or at what speed, so things need to adapt to enhance the player experience overall,” he elucidates. “You really need to think in a design format to comprehend it, and this is what can trip up a lot of composers, because they typically won’t have that design mentality. You need to plan out what you’re going to do before you do it. Then, if the game needs an orchestra, you have to adapt to those things: you already wrote the music, you designed it, you designed the different layers – the vertical, the horizontal – but now you need an orchestra to perform it. It’s like an onion, with layers and layers.”

(Vertical and horizontal composition are two primary techniques used to create adaptive music. Horizontal composition, or re-sequencing, is the process of stitching together fully composed and produced chunks of music, where the order of the chunks changes depending on gameplay. In vertical composition, or layering, the music is composed in multiple synchronized layers that are brought in or out to change the texture and feeling in response to gameplay. The two techniques are commonly mixed and matched at different points in a game.)
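
In code, the two techniques reduce to two small decisions made at musical boundaries. A sketch under invented names (the segment files, layer names, and intensity thresholds are hypothetical, not from any engine Bethea mentions):

```python
# Horizontal re-sequencing: choose which chunk plays next from game state.
SEGMENTS = {
    "explore": ["explore_a.ogg", "explore_b.ogg"],
    "combat": ["combat_a.ogg", "combat_b.ogg"],
}

def next_segment(state: str, bar: int) -> str:
    """Queue a chunk for the next bar boundary; order depends on gameplay."""
    options = SEGMENTS[state]
    return options[bar % len(options)]

# Vertical layering: fade synchronized stems in or out within the chunk.
def layer_gains(intensity: float) -> dict[str, float]:
    """Map a 0-1 gameplay intensity to per-stem gains (thresholds arbitrary)."""
    return {
        "pads": 1.0,
        "percussion": 1.0 if intensity > 0.3 else 0.0,
        "lead": 1.0 if intensity > 0.7 else 0.0,
    }

print(next_segment("combat", bar=5), layer_gains(0.8))
```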


The 11-Day Virtual Sprint With Dinosaurs

Media production is typically performed in a high-stress, fast-paced environment, but projects involving cutting edge technology have the added challenge of unforeseen issues cropping up, and interactive media is subject to constant changes in the fundamental design and structure of the project. The biggest and coolest projects tend to be the craziest, and so it proved to be with Bethea’s work on pre-production for The Virtual Reality Company’s Jurassic World VR Expedition.

“It was an 11-day sprint; I only had 11 days to conceptualize and get approved assets for this iconic IP (Intellectual Property). I have to say, it was pretty challenging,” recalls Bethea in a tone of awe. “(The project was being done) in the Unreal engine. I brought my hard drive of sounds and music things, and was trying to conceptualize those sounds that everybody knows.

“I’m in meetings everyday, I’m driving down into Los Angeles, but I was not familiar with what pre-production was. Pre-production is something that radically changes almost every two hours! ‘We think we want this. OK, whatever meeting we had? We’re not doing that anymore. Now we’re doing this. Tomorrow, we’re doing this plus three other things. Oh, but, by the way, you better be in that meeting to do that, too, AND you’ve still got to get the work done.’ In 11 days!

“I freaked out for the first five days. I even went in on a weekend, but that weekend saved me, because when I did that, I actually finished a day early! I’m flying through Cubase doing these things and implementing the music into the system and giving feedback and testing the VR technology, finding limitations like: it doesn’t accept 24-bit (audio), it can only work with 16-bit. And it can have WAV files, but how do they interact with the nodes in the blueprint system? And using the hierarchies and the workflow of the repository, so that everyone is getting the check-ins and things are working together. You do the music, push it to the repository, demo it on the headset, listen, figure it out, it’s good, move on to the next thing, rinse, repeat, rinse, repeat. Long, long days, but good experience, I was pretty proud to finish in that time, and it was the most creative experience I could ever ask for. I would do it again; it was actually really great.”
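
The 24-bit limitation Bethea ran into has a compact fix: in little-endian PCM, the top two bytes of each 3-byte sample are the 16 significant bits, so conversion amounts to dropping each sample’s low byte (plain truncation; a production pipeline would usually add dither first). A standard-library sketch:

```python
import wave

def wav24_to_wav16(src_path: str, dst_path: str) -> None:
    """Rewrite a 24-bit PCM WAV as 16-bit by dropping each sample's low byte."""
    with wave.open(src_path, "rb") as src:
        if src.getsampwidth() != 3:
            raise ValueError("expected 24-bit (3 bytes per sample) input")
        params = src.getparams()
        frames = src.readframes(src.getnframes())
    # Little-endian: bytes 1-2 of every 3-byte sample hold the 16 significant bits.
    out = b"".join(frames[i + 1:i + 3] for i in range(0, len(frames), 3))
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(params.nchannels)
        dst.setsampwidth(2)
        dst.setframerate(params.framerate)
        dst.writeframes(out)
```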


Chase’s AI Sidekick

As a composer deeply enmeshed in technology and having to produce creative content in short timeframes, Bethea has some thoughts on how he’d like to see technology serve him better. “I have had some epiphanies of what I would like to have,” says Bethea as he lays out his dream. “I would like an AI assistant. I would love to design a product where, when I’m writing music and I know my weaknesses, I can ask the AI assistant, ‘Hey, with this Eb minor can I do this?’ And I play it, and it helps me along the way. ‘Well, actually, I found some stuff online and I thought that you might do this, let me pull this in.’ It enacts a MIDI drop and says, ‘Do you like this?’ and I’ll say ‘No, I don’t think I like that, but what if I did this instead?’ You can come up with some really different things. Our brains and our minds can only absorb so much in a day. I can only have so many of the books behind me (gesturing to a bookshelf in the background) that I can read, but if (the assistant is) reading that stuff for me, and saying, ‘You mentioned that you like this person for inspiration. Did you know that they used this melody style or this theory set for this?’ ‘No, I didn’t.’ – that would be really, really cool. I think it would be dangerous, but it would be cool at the same time. I conceptualize it as being better than Google Assistant, but for music.”


Modeling’s Massive Difference

Having written for both electronic and orchestral instruments, Bethea has great appreciation for the strengths of the modeled instruments Audio Modeling produces and is enthused by his experience with them. “They’re so great. Wow. I was a conductor’s assistant, so I was able to be around an orchestra every single week for, like, two years, and hearing the technology of how you have the expression really down and the vibratos in the instruments…it’s incredible. I’m really, really, really, really impressed. A few of my composer friends said, ‘You have got to try this and have it integrated.’ And it really makes a massive difference with the musicality. Obviously, nothing beats live musicians, but this is the second best thing and they can sit next to each other. I would love a piano version, a supernatural one. There’s so many great, great products that you’re doing, and it’s fantastic.”


MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K, the camera used to produce this show.

MIDI Talk, Episode 07: Nick Petrillo’s Journey of Discovery


Sometimes, discovering that something does not work as you thought can bring about an epiphany that entirely alters how you approach a task. Revelation can be transformational. This is a common occurrence among students, and so it was with composer/music director Nick Petrillo.

“One of the things that really drove me into using music technology was film scoring,” Petrillo explains. “Growing up, I loved movies like Star Wars and Indiana Jones, and I fell in love with the music of John Williams. At the time, I thought film scoring was all about orchestral scoring, either with pen and paper or through Finale (music scoring software), doing a large orchestra (recording) session, getting a mixing engineer, going from music cue to cue, and placing the music into the picture. But I learned differently when I attended Berklee College of Music, where I studied film scoring. That was really a catalyst for me to get into software synthesizers and DAWs.”

Hailing from Bound Brook, New Jersey, Petrillo moved to Los Angeles after emerging from Berklee in 2010 with a dual Bachelor's degree in Film Scoring and Contemporary Writing/Production. Today, Petrillo writes music for film, TV, and advertising campaigns, is a resident orchestrator at Snuffy Walden Productions (Walden has composed music for hit TV series including Thirtysomething, The West Wing, and Friday Night Lights), and has toured as a Music Director for artists including Aubrey Logan (PMJ), Dave Koz & Friends, Frenchie Davis (The Voice) and David Hernandez (American Idol).

Audio Modeling’s Simone Capitani probed Petrillo’s views on both his post-production work and his live performance world, and the tools on which Petrillo relies to get him through his projects. Petrillo began by setting out some context for how music is built for film and TV using current technology.


Scoring Music to Picture: The Invisible Art


“Nowadays, a lot of film composition is production; it’s a lot of drum loops, soundscape creation in things like Absynth or Kontakt, where you’re layering different sounds and patches over each other, playing with attack and release and decay to round the sound over a given amount of time. So, you have a 20-bar cue – a cue is a piece of music that exists in the film – and sometimes you have a single note or two notes that are swelling based on their attack, decay, and release, and that is what is actually formulating the soundscape. That’s all sound design, that’s music synthesis.”
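(The swell Petrillo describes is, at bottom, just an amplitude envelope. As a minimal sketch, assuming nothing beyond Python and numpy, here is one note shaped entirely by its attack, decay, sustain, and release; the timing values are arbitrary.)

    # One sustained note whose "swell" comes entirely from its ADSR envelope.
    # Illustrative values only; requires numpy.
    import numpy as np

    SR = 44100
    def adsr(attack, decay, sustain, release, hold, sr=SR):
        a = np.linspace(0.0, 1.0, int(attack * sr))       # rise to full level
        d = np.linspace(1.0, sustain, int(decay * sr))    # fall to sustain level
        s = np.full(int(hold * sr), sustain)              # hold
        r = np.linspace(sustain, 0.0, int(release * sr))  # fade out
        return np.concatenate([a, d, s, r])

    env = adsr(attack=3.0, decay=1.0, sustain=0.7, release=4.0, hold=2.0)  # a slow swell
    t = np.arange(env.size) / SR
    note = np.sin(2 * np.pi * 220.0 * t) * env  # a single A3 shaped by the envelope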

Music for picture is a support role that amplifies the emotional content being evoked in a show, piece by piece, points out Petrillo. “Let’s take a TV show, for instance. A TV show might have 20 music cues, each of which can last a minute and a half, two minutes, up to five minutes. Say there’s an action sequence that goes into a very dramatic scene with somebody who has just died or is dying. One cue is the action sequence, we tackle that as a chunk. Then we tackle that emotional dramatic scene as a chunk. So we’re not scoring a 40-minute TV show, we’re scoring these different chunks.”

The objective, Petrillo insists, is entirely to complement the action. “What we’re always doing is adding to the emotional integrity (of a scene) and not detracting from it. The music is always secondary. The rule of film scoring is to always stay behind (the action); you shouldn’t really be heard.

“If you have this emotional scene where somebody is passing away, you don’t want a crazy violin line detracting from that moment, you want to stay beneath it and give some emotional chords and soundscape. Maybe you do have a solo violin doing something beautiful that’s not detracting, but you’re not doing anything from Rimsky-Korsakov or Tchaikovsky, there’s nothing huge and grandiose about it. A lot of what I’m doing is commercial film scoring for network TV, which is very cut-and-dried. We’re not doing much out of the box,” he concludes.

To get the big orchestral sounds he needs on the limited budgets and schedules of TV and ad campaigns, Petrillo relies on sample libraries, which have seen tremendous development since he was at Berklee. “Back in 2007, the best (sample) library you had was the Vienna Symphonic Library, and I think the platinum (version of VSL) was 20 grand, or something insane,” he recalls, shaking his head. “As a college student, I had Garritan Personal Orchestra, which was $300 or something, but the acoustical modeling was just not there. Now, I have a subscription to the East West Composer Cloud for maybe 200 bucks a year, and the sounds are incredible. You’re talking a little more than a decade from (when he was a student). It’s crazy how far it’s come and how far it’s going to probably go. There’s a lot of vocal libraries now where you can type in a sentence for the background vocal you want, and play a line on the keyboard, and (a sampled vocal) will sing it back to you.”


Love You Live


Audio post is a staple of Petrillo’s career, but he also spends a good deal of time out of the studio, traveling the world to work. Live performance today means presenting a show with production every bit as sophisticated as that heard on modern studio recordings. Touring as a player and/or a Music Director, Petrillo navigates an entirely different landscape of equipment and techniques than he does working on TV or ad campaigns. The challenge becomes one of coordinating all of the elements, live and technological.

“Nowadays, a lot of live performance is heavily (built around) production,” Petrillo asserts, “including arpeggiators, filters running at a certain BPM that we need to lock into a clock…and, for all of that, everyone needs to be on the same page, including the drummer, who may be running loop libraries, or different tracks that need to lock into the grid. And then, maybe you have horn tracks or background vocal tracks running on top of that.

“Sometimes a Music Director will get hired and fly out somewhere, and be basically working with an entirely new band, running tracks, working off charts…how do we integrate all of that stuff into something absolutely brand new that nobody’s seen before? That’s where things get tricky,” he reveals. “I know a lot of purists don’t like backing tracks, but I love them, because there’s a safety net there: this is what it will always sound like.”

Making sure all of those individual events happened when and how they should was a major hurdle for Petrillo in the past, requiring a patchwork of different software programs. Recently, however, he discovered Audio Modeling’s Camelot Pro, which is designed for exactly this purpose.

“It comes down to the ability to integrate hardware synthesizers, software synthesizers, and some sort of DAW rig, whether you’re running (Apple) Logic or Ableton (Live), or if the drummer is running some sort of clock system that is feeding into my keyboards via MIDI. Camelot is a huge help with things like that,” he notes.

“For about seven or eight years, I’ve been using Ableton, which has really been the only program I could use to run a track and a click track to the band, and then I had to lock all of my hardware synths and software synths into a DI and go straight to the (mixing) board and deal with it that way.

“But a few months ago, I was talking to everybody over at (the software distribution firm) ILIO about this program called Camelot. They said I should give it a shot because I could not find a program that integrated my synthesizers so that I could send out a click track to the drums (and) run my (prerecorded backing) tracks, and I could run a PDF chart of my song and lock it into my song, and lock all of that into setlists. Now, because of Camelot, I’ve been moving everything into MIDI keyboards and software. For a while, I was programming everything hardwired into a Yamaha MOTIF and a Nord Electro 3 for all my B3 samples, clavinet, Rhodes, and all that kind of stuff. Recently I picked up (Spectrasonics) Keyscape, so I can use that as a soft synth, and my Kontakt stuff – I run all of that straight through Camelot. I just run my patch changes through that, and it is absolutely brilliant.

“Camelot is a very exciting program. It’s so beautifully streamlined. It’s completely changed my workflow. I think everybody should be getting a copy of that.”
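(Camelot's internals aren't documented here, so purely as a generic illustration: at the MIDI level, the "patch change" Petrillo mentions boils down to a program change message, optionally preceded by a bank select. A sketch with the mido library, with placeholder patch numbers:)

    # Generic MIDI patch change, not Camelot's actual API. Assumes mido and a
    # connected MIDI output; bank and program numbers are placeholders.
    import mido

    with mido.open_output() as port:  # opens the system default MIDI output
        port.send(mido.Message("control_change", control=0, value=1))     # bank select MSB
        port.send(mido.Message("program_change", program=12, channel=0))  # patch 13 on channel 1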

While Camelot Pro may meet many of Petrillo’s live performance needs, it still leaves at least one requirement unmet. “The only thing I really need is some type of universal click system. The reason I believe that is so crucial is that it locks all those big moments on stage, those pauses or big stops, into a certain time zone, and I think that’s going to help as far as perfecting music. Everybody nowadays needs perfect music, right? And some of the things in live music that can throw music off are fermatas, caesuras, and pauses within the music. The universal click track, getting everybody on the same page, is really going to change the game of live music production. It already has been that way for pop music; I think it’s on its way to becoming a thing in the cabaret world and the smaller niche markets,” Petrillo predicts.
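(For readers wondering what such a shared clock looks like under the hood: MIDI clock, the nearest existing standard to what Petrillo describes, runs at 24 pulses per quarter note, so the tick interval is simply 60 / (bpm * 24) seconds. A rough sketch with the mido library, timing caveats and all:)

    # The arithmetic behind a shared clock. Assumes mido and a connected output;
    # time.sleep() timing is coarse, so treat this as a sketch, not a stage rig.
    import time
    import mido

    def send_clock(bpm, beats):
        interval = 60.0 / (bpm * 24)  # seconds between MIDI clock ticks (24 ppqn)
        with mido.open_output() as port:
            port.send(mido.Message("start"))
            for _ in range(beats * 24):
                port.send(mido.Message("clock"))
                time.sleep(interval)
            port.send(mido.Message("stop"))

    send_clock(bpm=120, beats=8)  # at 120 BPM, a tick roughly every 20.8 ms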


Traveling the Rhodes To Music Technology


Having made his way from Berklee into the heart of the fray in music production, Petrillo has words of advice to those trying to get started with technology in live performance. “There’s many different approaches to trying to use technology (live). If you’re brand new and trying to get your feet wet, I would go with learning the basics of analog keyboards that have been digitized. What is a Fender Rhodes and what are the different things you can put on one? You can put a tremolo effect, you could put a chorus effect, you could put a phaser, you could throw it through a Fender tube guitar amp and let that do some warm distortion…right there, you can do a lot of stuff.

“Even with tremolo patterns, that tremolo needs to lock into a certain tempo, even though you may be working with rate. There’s two different ways time-based effects work: there’s rate-based or BPM-based. (Using) BPM-based, you would obviously lock to a clock system, but with a rate, you’re going to have to work it around your tempo and find where it’s rotating correctly. Running a Rhodes through a distortion effect is going to make it almost sound like a synthesizer. So, already, just working with a Rhodes, you have an abundance of sounds you can work with,” Petrillo offers to illustrate how to approach a journey of discovery like his.
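(The rate-versus-BPM distinction Petrillo raises reduces to one conversion: cycles per beat times beats per second. A one-function sketch, with illustrative numbers rather than any particular pedal's ranges:)

    # Converting a tempo-synced tremolo division into a free-running rate in Hz.
    # Illustrative helper, not tied to any specific effect unit.
    def tremolo_hz(bpm, cycles_per_beat=1.0):
        return (bpm / 60.0) * cycles_per_beat

    print(tremolo_hz(96, 2.0))  # eighth-note tremolo at 96 BPM -> 3.2 Hz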

“If you’re going to use digital technology, I’d pick up a program like Absynth or Massive, or something out of the Kontakt realm,” he advises. “I feel like Kontakt is one of the mainstays of technology right now. Dig into oscillators and see what two different waveforms sound like against each other. Then try to detune them and see what that sounds like, try to add some digital effects, and see what the synthesization of the whole thing does. Increasing your release time – what does that do? Or decreasing your attack time – what does that do? Get into the weeds a bit with the synthesization end of it.” Petrillo reflects for a moment on his own learning process, and realizes how overwhelming his advice might seem. “I’m speaking of it now like it’s easy, but back when I was in college, this stuff scared me to death, really,” he admits. “I had no idea how any of this stuff worked.”
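(Petrillo's detuning experiment is easy to try in code as well. A minimal numpy sketch, with arbitrary starting values: two oscillators a few cents apart, summed, producing the slow beating that makes detuned stacks sound alive.)

    # Two detuned oscillators summed, as Petrillo suggests trying. numpy only.
    import numpy as np

    SR = 44100
    t = np.arange(2 * SR) / SR            # two seconds of audio
    f1 = 110.0                            # A2
    detune_cents = 12
    f2 = f1 * 2 ** (detune_cents / 1200)  # second oscillator, 12 cents sharp
    mix = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
    # Audible beat rate = f2 - f1, about 0.77 Hz here: a slow, chorus-like movement.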

Certainly, Petrillo, at this point, has figured out how the stuff works! But he had much more to say to Capitani. To hear Petrillo detail the process of scoring to picture, how Quentin Tarantino gets away with using music that contrasts with picture in his films, the impact technology is having on jobs in music, and more, watch the full discussion.

MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K, the camera used to produce this show.

More about Nick Petrillo

Website: www.nickpetrillomusic.com

Facebook: https://www.facebook.com/nickpetrillomusic

Instagram: https://www.instagram.com/nickpmusic/

MIDI Talk Ep. 6 – The history and wisdom of Matt Johnson


Matt Johnson has spent nearly two decades in the band Jamiroquai. He sat down with Audio Modeling’s Simone Capitani and opened up about his background, the band, equipment, and how to be a professional musician. Read on to get the history and wisdom of a true road warrior and studio samurai.

From an early age, Matt Johnson seemed headed for a career playing music. “I started playing piano at about five years old. I played for a couple of years and sort of got bored with it.” Johnson chuckles at the irony, because, after turning next to trumpet and getting pretty good, synthesizers circled him back to the keys in the 1980s. “When I became a teenager, I didn’t want to play classical music on the trumpet,” he recounts, “so I gravitated towards the keyboards, and by the time I was 17 or 18, that’s what I was doing for a living.” That decision proved fateful, as it led Johnson to his 19-year (and counting) tenure as keyboardist, songwriter, and producer for British jazz-funk stalwarts Jamiroquai.

Having been recommended to the band by former Jamiroquai guitarist Simon Katz, Johnson replaced founding keyboardist Toby Smith in 2002. His role in the band grew steadily, until 2017 found him co-writing and co-producing the band’s Automaton album with lead singer Jay Kay. Johnson also became Jamiroquai’s Musical Director on the road.

Producing Jamiroquai was a natural step for Johnson. “I’ve always gravitated towards production,” he says. Having started with an eight-track reel-to-reel tape recorder, Johnson now does his work in the box. “Pro Tools has become the thing for me; I really love Pro Tools,” he enthuses. “I love the fact that it very much is coming from an audio recorder perspective, rather than (being like) a sequencer. You can manipulate audio and be so intricate in how you treat it in Pro Tools.”

Of course, synthesizers also loom large in Johnson’s approach, both live and in the studio. “The last album with Jamiroquai was based around the Roland Jupiter 8 because we were bringing a bit of ‘80s sound into the mix, and that synth is really so nostalgic ‘80s.” Johnson reveals. “It’s got a beautiful crystalline sound. It doesn’t have a massive feature set, but you can get a huge range of sounds out of it, and it’s really big sounding. That was the centerpiece on the last album.

“Lately, I’ve really been getting into the Moog One. It’s such a stunning synth, and it’s so deep,” he continues. “I’m discovering more things all the time: the modulation possibilities, and the sequencer possibilities – you can have three sequences, all polyphonic with sixty-four steps, and you can change the filter or the resonance or almost anything on the synth, and then go in and craft it in fine detail. I’ve been disappearing down the rabbit hole of the Moog One,” concludes Johnson with a laugh.

From a sound design standpoint, as important as basic sound quality is to him, it is through dynamics that Johnson brings his synthesizer sounds to life, particularly in live performance. “A modern synth is as much a living, breathing instrument as a piano, the difference being that you can tailor (a synth) to your own playing a hundred percent,” Johnson explains. “Originally, synths were absolutely linear; they had no soul, and that was what was cool about them. It felt very futuristic, the fact that they didn’t care about what the velocity was (that you played). It was always the same. But now, synths can be very dynamic and whether you hit them soft or hard can affect the filter or whatever you determine it to affect.


“My playing is very based around dynamics,” he emphasizes. “I think dynamics are crucial in music if you want to make people feel something. The way that you hit a note, how hard or soft you hit it, can really affect the listener emotionally. It’s something I’ve been thinking about a lot lately: how the way you hit a note will make someone feel different. And it’s something we probably don’t think about enough as musicians.”
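(Under the hood, the velocity-to-filter behavior Johnson describes is just a mapping from MIDI velocity to cutoff frequency. As a sketch, with made-up ranges and a curve chosen so soft playing darkens the sound quickly, not any specific synth's response:)

    # One common shape for velocity-driven filter cutoff: an exponential sweep
    # between a low and high cutoff. All ranges here are illustrative.
    def velocity_to_cutoff(velocity, lo_hz=200.0, hi_hz=8000.0, curve=2.0):
        x = max(1, min(velocity, 127)) / 127.0      # normalize MIDI velocity
        return lo_hz * (hi_hz / lo_hz) ** (x ** curve)

    for v in (20, 64, 127):                          # soft, medium, hard hits
        print(v, round(velocity_to_cutoff(v)), "Hz")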


Programming dynamic sounds is only one aspect of bringing highly produced records to the stage, however. Going from studio to stage, says Johnson, “is a big transition to make.


“For instance, on the last Jamiroquai album: while I’m producing the album, I’m not thinking about how we’re going to recreate it. I wanted to make the best-sounding record by any means (possible). So then, at the end of it, you have to think about ‘how are we going to do this live?’


“I had taken some precautions. We’d used a lot of vintage synths, and any time we got a part that was like ‘Oh, that’s definitely going to make the record, that was really good,’ what I did was that I sampled the synth, note by note. Even some of these old vintage ones that didn’t have any MIDI or you couldn’t save the sound, I sampled it into Pro Tools and kept it. So I had all the really key sounds from the album. I have a Yamaha Montage 8, among some other keyboards, so I could sample the notes (into the Montage 8) and know that I had these sounds. In the past, I had to try and recreate them on another synth, and you can never quite get the same quality.


“Also on the last album, there were a lot more electronic elements than we’d had previously, because I’m really into electro stuff and I wanted to update the sound a little bit and bring a bit of that into the record. When it came to doing it live, it was a bit of a quandary, because the way Jay, the singer, likes to work, he doesn’t stick to an arrangement, he’ll come in when he feels like it, he might suddenly want to go to another chorus, or a solo or a breakdown. He’s almost a James Brown type of bandleader, so we couldn’t be slaved to a computer arrangement, we couldn’t just have a backing track running. I had to think about that. I spoke to a few people who were more into the tech side of things and they suggested getting someone on Ableton (Live) with Push. That turned out to be a perfect solution, because we could still have all the sequencer parts from the record that we wanted, put into Ableton, so if the arrangement changed, we had Howard (Whiddett) there, running Ableton, so he could change it on the fly, just as one of the band. For us, that was fantastic, because we’ve always been a live band and we don’t want to work with the computer (dictating performance), but we still had the option to have these electro elements and keep them absolutely live. The computer became a member of the band.”
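(Johnson did his note-by-note capture inside Pro Tools. Purely to illustrate the idea, the same chore can be scripted; the sketch below assumes the mido, sounddevice, and soundfile libraries, with the synth on the default MIDI output and audio input, and is emphatically not his actual workflow.)

    # Scripted note-by-note sampling of an external synth: play each note,
    # record it, save one WAV per note. Assumes mido, sounddevice, soundfile.
    import time
    import mido
    import sounddevice as sd
    import soundfile as sf

    SR = 48000
    with mido.open_output() as port:
        for note in range(36, 97):  # C2 through C7, one file per note
            rec = sd.rec(int(4 * SR), samplerate=SR, channels=2)  # 4 s capture
            port.send(mido.Message("note_on", note=note, velocity=100))
            time.sleep(3.0)  # hold the note, leave a second for the release tail
            port.send(mido.Message("note_off", note=note))
            sd.wait()  # block until the recording buffer is full
            sf.write(f"sample_{note:03d}.wav", rec, SR)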

Johnson has obviously learned a tremendous amount in the couple of decades he has spent touring and recording at the top of his field, and, when asked what advice he would offer young musicians, he has plenty to say. His main message is simple: just do it.

“It’s obviously very difficult at the moment (due to COVID),” he begins. “There’s nothing they can do but sit at home and write songs. But soon things will get back to normal and shows will start. My advice to young musicians is always: get out and play. Just get out and play as much as you can. It doesn’t matter what it is; it doesn’t matter if it’s to three old ladies down at the town hall. Just play, play, play, because that’s the only way you can build up your confidence and experience of being a performer.

“I’ve been in a few situations where I’ve worked with young artists who were 18 or 19, and suddenly they get a big record deal straightaway, but they’ve never gone out live. The first show they have to do is massive pressure, on the BBC or something, and, of course, it’s difficult for them. They can’t be great because they haven’t learned how to do that.

“You have to work hard at what you do. You might see musicians and think ‘Oh, they’re just geniuses. I could never be at that level.’ Of course, they weren’t like that at one point. They’re like that because they dedicated themselves to it and they kept trying harder and harder to get higher up. And it never stops; I’m still doing that now. I still try to improve my playing and learn all the time. You just have to try to be as good as the best in the world. That’s really hard, which means you have to try really hard.”

Even once a musician has worked hard enough to get really good at what they do, there is still another whole set of skills to master: that of the stage performer. Working in front of a live audience is its own set of challenges. What is the secret to that? “I think, personally, that the main thing is to be generous as an artist,” Johnson asserts. “There are some artists who, when they’re on, the audience almost feels nervous for them, because they’re just there in their own little world and they’re not really reaching out. The best artists are the ones that reach out to the audience. That means not thinking about yourself, but thinking about them – how to make them happy. Try to give out your energy to the crowd, because when you give it out to them, they send it back to you, and it becomes this sort of vortex, and that’s when you get that sort of ecstatic energy at gigs, people going nuts. It’s not just about the performer, it’s a relationship between the performer and the audience. Every time I get onstage, I’m just so grateful I can do a gig, and I make sure I give it my hundred percent, you know what I mean? I’m on it, I’m trying to give out energy the whole time.”

To hear Johnson talk about how the past reaches into the present, how influential the ‘80s were on his sound, how he sees the current information landscape for musicians, and more, watch his entire interview with Simone at: <link>.

MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K, the camera used to produce this show.

MIDI Talk Ep. 5 — Leveling Up with Level 42’s Mike Lindup


MIDI Talk is a podcast featuring Audio Modeling’s Simone Capitani in conversation with musicians and producers about intersections of music and technology.

In episode 5 of MIDI Talk, Simone Capitani catches up with Mike Lindup, keyboard player, singer, and founding member of Level 42.

Formed in London in 1979 as a jazz-funk band, Level 42 soon turned to writing songs and enjoyed a string of hits all through the ’80s. Highly influential worldwide, the band reached its commercial peak when “Something About You”, released in 1985, hit number seven on the Billboard Hot 100 in the United States. The next year, their single “Lessons in Love” reached number three on the UK Singles Chart and number 12 on the US Billboard Hot 100. Through changing personnel, breaking up, and reforming, Lindup not only continues working with Level 42 today, but along the way has delved into other musical genres and released two solo albums.


To the Music Born


With a singer-songwriter mother and a TV and film composer father, music has been an important part of Lindup’s life since infancy. Lindup spent his childhood immersed in a diverse spectrum of music from Miles Davis and Duke Ellington to Bob Dylan, Pete Seeger, Yehudi Menuhin, Tchaikovsky, and various soundtracks from British shows.

“Music was organically part of life in the house when I was growing up,” he remembers. “My Mum used to sing around the house, and sometimes she’d be rehearsing with a few musicians in the living room. Our living room was like my playground. There was an upright piano, a guitar, some hand drums… There was also a tape recorder, because Mum sometimes used to record her rehearsals.”

Lindup started piano lessons at the age of six, and remembers hours spent at the piano recreating melodies and finding chords from songs he liked as a way to navigate the complex emotions of childhood. Teenage angst was handled similarly after his father bought him a drum kit. “I soon realized playing drums was great for processing angry feelings, which as a teenager I obviously had my share of,” he chuckles.

At 14, Lindup studied percussion, composition, and piano at Manchester’s Chetham’s School of Music, as well as singing in the school’s senior and chamber choirs. “Switching from piano to percussion was great,” he states, “because then I could play in the orchestra. Also, as a percussionist, I got to play on a bunch of different instruments.”

After graduation, Lindup entered London’s prestigious Guildhall School of Music & Drama as a percussionist. At Guildhall, he met Phil Gould, who then introduced him to Mark King and Gould’s brother Boon. From this core emerged Level 42 (and, yes, the name is a reference to The Hitchhiker’s Guide to the Galaxy).


Taking It To the Next Level


The band’s strong potential soon became clear. Phil and Boon’s older brother, John, was connected to Andy Sojka from Elite Records, who signed them. They recorded their first single, “Love Meeting Love,” while Lindup was still in college. The song was released in May 1980, Lindup finished college in July, and the band went straight into the studio to record what ended up being released as their second album, The Early Tapes.

“When ‘Love Meeting Love’ came out, we had no idea how it would be received,” Lindup comments. “We liked it, but we didn’t know how many other people would like it. We certainly weren’t thinking about a career, we just wanted to see what would happen with this.”

“Andy Sojka knew there was a slightly underground movement of jazz-funk happening in the UK at the time. He knew there were certain DJs playing this kind of music on the radio and in the clubs. There was a pretty big club scene and that became our first audience. Andy is the one who made that connection for us.”

“Then Polydor, our distributors for ‘Love Meeting Love’, started to take an interest in us. In those days, record companies were still looking for what was new: what are the new waves, who are the new bands in these waves, and so on.”

“In 1981, we signed a five-year contract with the Polydor label,” continues Lindup. “It seems extraordinary, especially now with the way the music business is. But in fact, at the time it was pretty standard to sign a band and put them with a producer just to see if it would develop into something. They saw us and recognized some raw talent, but I mean, it was pretty raw. In reality, this gave us the opportunity to do a five-year apprenticeship. We wrote five albums in those five years.”

“We got to learn how to write songs because we were instrumentalists, not songwriters. Our musical influences were Mahavishnu Orchestra, Return to Forever, Miles Davis, and the whole ‘Bitches Brew’ diaspora: Herbie Hancock, Wayne Shorter, Joe Zawinul, Chick Corea, John McLaughlin, and so on. We didn’t know anything about songwriting until Andy told us ‘Make this into a song and you got a record deal’. It was a fantastic training opportunity.”

“At the same time, we learned how to become a live band,” Lindup reveals. “We could play our instruments quite well, but really putting on a show and making music that hits the audience is a different story. That’s especially true if you’ve been in the studio for a while, because sometimes, you can’t copy exactly what you did in the recordings; it doesn’t work live because the situation is different, the arrangements are different. We had all of that to learn and we were able to learn it because we had this sort of development time.”


Lindup’s Lessons in Stagecraft


Lindup is not shy about sharing the lessons in live performance Level 42 clearly learned quite well during that development time.

“Every time you go on stage, it’s a new challenge, even though you might know what you’re going to play,” he explains. “For example, at the beginning of a tour when the show is new, you don’t know exactly how it’ll be received. But you kind of need to have a mixture of self-confidence and trust that what you’re doing will be appreciated and not be put off by the voice in your head saying stuff like ‘Oh no, this is not going well’ or ‘this person is looking at their phone, which means they’re not enjoying it.’ You need to learn how to ignore these things going on in your mind.”

“At the same time, it’s important to be authentic. It’s OK to be vulnerable, and it’s OK to make mistakes. With experience, you kind of get used to the fact that most of the time, the audience is on your side, and they’re there because they just want to enjoy the evening. It’s no good trying to pretend everything is cool when it’s obviously not, because then, you’re not being genuine.”

Ah, but size matters, Lindup admits. “We had a huge lesson about performing when we did our first show in front of a big audience in 1981. Up until then, we’d been playing small clubs, up to about 300 people, around the UK, mostly in the jazz-funk scene. Then Miles Copeland (manager of The Police) got in touch with our manager, John Gould, and invited us to support The Police for eight shows in Germany. So we went from an audience of 300 to 8,000, which is a big jump. The very first show was almost a disaster.”

Of course, the band grew beyond these first challenges, as shown by the international success they enjoyed in the following years. But you know you’re curious about the near-disaster in that first show with The Police, aren’t you? Well, you can get all the juicy details, as well as hear Lindup’s perspective on technology and advice for getting started as a professional musician, by listening to the full episode 5 of MIDI Talk.

MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K, the camera used to produce this show.

Combining the Best of Different Musical Worlds with Peter Major aka OPOLOPO — MIDI Talk Ep. 4


In this week’s episode of MIDI Talk—a podcast dedicated to topics around music-making in relation to technology—Simone Capitani interviews Swedish music producer and remixer Peter Major aka OPOLOPO.

Born into a musical family, Peter discovered music through his father’s love for jazz and fusion. “Dad brought a lot of records back home,” Peter says. “I liked all the harmonies and melodies but I also was very drawn to electronic sounds. My dad used to listen to a lot of ’70s fusion and I remember being drawn to synth sounds very early on.”

“I don’t have any traditional music training. I tried to take lessons from my dad but that never really worked out. First, because it’s not a good idea to take lessons with your dad, and second, because I was never motivated enough to learn how to play other people’s written music. Instead, I was always messing around trying to play my own little melodies on the piano.”


Discovering the Experimental World of Club Music


“Growing up, I wasn’t so much into club music, even though I did like the beats and the electronic sounds it used. It’s only later on, when I discovered I could combine jazz and funky stuff with club music, that something really clicked for me.”

“I love jazz. I love the harmonies and I can definitely appreciate the rules behind doing things a certain way. But what’s so fascinating about club music is that many of the pioneers in that genre didn’t know anything about music theory. They were just messing around with these machines and getting amazing sounds out of them, even though they didn’t use them the intended way.”

“Like the Roland TB-303 Bass Line, for example. My dad brought one of these back home at some point. He tried to play around with it but he thought it was rubbish because it didn’t sound like a real electric bass. Meanwhile, maybe somebody in Detroit or Chicago creates this totally new sound with it that nobody thought about before just because they approached the same machine from a completely different angle.”

“I love this aspect of club music. You can just experiment and come up with all these weird new concepts and ways of doing things. You don’t find that in jazz or classical music where everything is kind of strict and things sound the same for 30 years. In club music, things move at a different pace.”


Achieving Success by Falling in Love With a Song


Sometimes, remixes have the power to breathe new life into an original song. That’s what happened in the case of Gregory Porter’s song “1960 What?”.

“At the time, I was working a lot as a DJ around town here in Stockholm. That song, ‘1960 What?’ is from Gregory Porter’s first album. Nobody knew about him back then. I used to play it in my DJ sets and every time, I thought ‘this would work well in a clubby setting if you add a bit of a kick to it and tighten it up a bit.’ So one day I decided to just try it out.”

“I loaded it up in Ableton, tightened up the track, and quantized it. Then, I added some percussion and a kick drum, and played a new bassline, just one take from start to finish. The whole process was really quick but I liked the result so I put it up on SoundCloud, not as a download but just for people to listen to it.”

“Immediately, I started getting messages in my inbox. I thought ‘ok, something is going on here.’ Obviously, the success of that track comes from the original because it’s a very powerful song. I just gave it a little nudge to the dance floor. But still, a lot of people discovered the track through this edit.”

“The day after I put it out on Soundcloud, I received an email from Gregory Porter’s record label. The subject was something like ‘Regarding your Gregory Porter Remix’ and I thought ‘oh no, now I’ll have to put it down and they’re all angry and everything.’ But actually, they were super cool and very happy and grateful I did this edit. They asked me if they could use it and after a while, they licensed it and bought it as an official remix.”

“It was an interesting project because it’s something I just did out of love for the music. I didn’t think too much about it. You can spend three weeks on a remix and nothing will happen, and this one took me something like two hours to make and it took off. But like I said, the original is really powerful and the lyrics resonate with a lot of the racial stuff still going on in the U.S. today so that’s obviously one of the main reasons why it took off that way.”


Having a Clear Preference for Software Instruments


Some producers prefer hardware instruments; others combine hardware and software tools. Peter Major, by his own account, works exclusively with software.

“From the first time I tried Propellerhead’s ReBirth and experienced creating songs directly in the software, I thought ‘Wow, this was amazing! What if one day we could have a whole production environment on the computer?’ I’ve always been fascinated by that idea. Then obviously, over time plug-ins became better and better and computers became more powerful. So after a while, I completely gave up on the hardware side of things. Not because of the sound but because it was just more convenient.”

“For me, it’s a way to achieve what I want, which is good music. I don’t care too much about the actual tools. I don’t need to have the actual hardware to get inspired, I can get inspired by what’s in the computer. I used to work a lot with Ableton for performing live while using Cubase for all my production work and official remixes but nowadays, I rarely use Ableton. I almost work exclusively with Cubase.”

“When it comes to drums, I use drum plug-ins from a Swedish company called XLN Audio. They have a drum plug-in called Addictive Drums and I use that on pretty much everything. Then, I use Kontakt Battery for straight-up one-shot sample sounds, and also something called Kick by Sonic Academy which does analog 909-style of kicks.”

“I love Rhodes sounds so I’ve been using Scarbee’s EP-88S, which is the largest Rhodes sample library. For synths, I use a lot of the Arturia stuff lately. In the past, I used DCAM Synth Squad from FXpansion. I like soft synths that give you a lot of modulation possibilities.”

“There’s a great Oberheim extension called OP-X I use quite a lot as well and also, Massive X because it has such a unique sound and gives you so much control over the very fine details of the sound. I try to use fewer instruments and know them well instead of buying everything that’s out there just to have the latest and greatest.”


A Special Request to Audio Modeling: Emulating an Imaginary Instrument


When asked whether there are any frustration points in his use of software instruments that Audio Modeling could perhaps address, Peter’s response was an interesting one.

“Before trying your SWAM instruments, I wasn’t aware of how far physical modeling technology has come. The realism of these instruments is amazing. Obviously, now that I hear what you’re doing with trumpets or strings, how about a drum or a piano? I just want to hear it applied to everything! I’m looking forward to seeing what else you’ll come up with.”

“But then again, you’re emulating something that already exists. Whatever magical engine you’re using to model real instruments, if you applied that to something crazy, something that doesn’t exist but has that same feel of realism, that would be fascinating.”

Listen to the full episode of MIDI Talk with Peter Major to hear his advice on getting started into the world of remixes and discover how he broke into the music industry.

MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K, the camera used to produce this show.
