
News


The 21st Century Legend of Camelot and the Hardware and Software Instruments

A legend of a thousand years past tells of how, after pulling the sword Excalibur from the stone in which the wizard Merlin had placed it, Arthur became king and would gather his knights at a round table. At this table, King Arthur would command his knights to adventures and glory.

Now come back to the 21st century. In this time, YOU are the king (or queen, as you please), Excalibur is Audio Modeling’s powerful Camelot live performance environment, and the knights are all of your instruments, be they hardware-based, software instruments, or even acoustic instruments played into microphones. Using Camelot, you are able to command all of your instruments, in unified fashion, to adventures in performance, and the greater glory of music (ok, yeah, and maybe a little glory for yourself, too, sure, why not?).

Camelot allows you to press a single button (or step on a footswitch) and, in an instant, change presets on all of your instruments at once, for a complete transition to a new section of a song. Or maybe, instead of changing a sound, it might entirely swap out which instruments are being used for that section.

Camelot can cast spells that let you play and make real-time gestures on your MIDI controller and have each instrument be controlled differently, with the gestures scaled or mapped to curves that optimize their effect for each instrument. Camelot even gives you the sorcery to mix and process the audio from all of your instruments: hardware devices and microphones connect to your audio interface inputs, while software instruments are routed directly.

Camelot’s magic is so awesome that it has Smart Maps that let you recall presets from your hardware devices without any kind of MIDI programming, just by choosing your sound from a preset list. It casts such powerful enchantment that, far from being limited to controlling hardware synthesizers, Camelot serves guitarists just as well as synthesists, easily controlling a Line 6 Helix, Kemper Profiler, or Fractal Audio Axe-Fx.

Be the king. Be the wizard, too. This is the 21st century, and the magic of Camelot will let you be whoever you want to be. You can start right now with the many incredible magic spells in this tutorial on how to make your hardware and software instruments serve you as knights at your round table:

READ THE FULL ARTICLE


Camelot for Wind Players: An Example

For a wind player, performing live with music technology presents a number of challenges, some of the most common being:

  • playing with backing tracks
  • recalling the right sound preset for each part of a song
  • using a single lead voice (your horn or wind controller) to generate harmony parts on different instruments
  • setting up MIDI controllers to remotely control software instruments and FX
  • displaying music scores

In the past, accomplishing these tasks could be difficult, and you might have given up on some of them because you couldn’t find the right tool with which to do them. You probably bashed your head against a wall once or twice in frustrating attempts to cobble together several devices or software programs only to get some compromised version of what you wanted.

Yeah, we’ve been there, done that. At the end of the day, we only want to play and have fun, and be able to focus on music-making without being distracted by infuriating technicalities. So we created Camelot to let us–and now, you–have more fun and less head bashing.

READ THE FULL ARTICLE


Introducing Audio Modeling Software Center


 

You are taking the plunge. You’re going to buy upgrades for the SWAM instruments you already own, buy another SWAM instrument bundle, upgrade to Camelot Pro, and wholly commit to live performance with Audio Modeling instruments and whatever other instruments and audio plugins you may have. How exciting!

But…you are afraid. You think it is going to be a long, frustrating process to get everything installed, authorized, updated, and ready to rock.

We say: No, it’s not.

How can we say this so definitely, with such absolute certainty? I mean, we ARE talking about computers here, so how can the Audio Modeling team be SO sure?

Easy. Because we have rolled out the faboo Audio Modeling Software Center to handle all of those processes for you with no more trouble (and not much more time) than it takes for you to make an espresso.

OK, we admit, Software Center doesn’t have the rich, strong flavor of espresso, and it won’t give you quite the same jolt of energy. But it DOES have the sweet taste of success, so that, by the time you sit back down in front of the computer with your steaming espresso, your Audio Modeling products will be waiting for you to take that coffee energy and make music with it. And, even BETTER than espresso (if such a thing is even possible), it will keep putting that sweet taste in your mouth as it makes it effortless to always have the most current versions of all of your Audio Modeling apps.

So, fear not. Software Center is here, and it makes managing all of your Audio Modeling apps simple. Which leaves only one question: do you want a little steamed milk with that?

 

READ THE FULL ARTICLE


Audio Modeling Names Three New Resellers

Audio Modeling is pleased to announce the addition of three new resellers: SudeepAudio.com in India, Gear4music in the UK, and Best Service in Germany.

 

Sudeep Audio

One of India’s leading pro audio dealers, Sudeep Audio carries more than 100 major brands…and now one more, with the addition of Audio Modeling!

Audio Modeling products are available online at https://www.sudeepaudio.com/.

 

The late Mr. Nikhil Mehta founded Sudeep Studio–Sudeep Audio’s parent company–in Mumbai in 1977, with the idea of providing a resource for talented musicians working to launch or further their careers. In 2003, SudeepAudio.com was created by Mehta’s son Aditya to carry on his father’s vision of always being a true friend to musicians.

“We are proud to partner with Audio Modeling as their exclusive Indian reseller,” glows Aditya Mehta, now one of two partners running Sudeep. “Audio Modeling is a global leader in multi-vector expressive acoustic instrument emulation, and technology is what excites us and connects us to our Indian music community. SWAM truly is a breakthrough in the world of music production. In time, we hope to work with Audio Modeling on creating SWAM versions of Indian instruments, which have unique characteristics.”

With 45 years in business, Sudeep is very well regarded in India. “Sudeep Audio has introduced several revolutionary products, in addition to distributing music production software in India from across the globe,” said renowned Grammy- and Oscar-winning composer A.R. Rahman in 2008. “Their competitive pricing, quick deliveries, and strong after-sales support are world-class. I hope they go further in the business, contributing to every musician’s success.”

 

For those who are wondering, “Sudeep” is not the name of anyone in the organization or the founder’s family. In Hindi, “su” means “pure,” and “deep” means “light,” so “Sudeep” translates basically to “pure light.” Mr. Mehta decided on this name to encourage the next generation to carry on with the “pure light” of music.

 

 

Gear4music

Founded by sound engineer Andrew Wass in 2003, UK-based Gear4music has grown into a leading retailer of musical instruments and music equipment, with more than a million active customers, nearly 62,000 products available, and distribution centers in Germany, Spain, Sweden, and Ireland, as well as the UK. And now Audio Modeling products can be found and purchased at https://www.gear4music.com/, as well.

 

 

 

Best Service

Munich-based Best Service, founded in 1986, was a pioneer in the emerging sampling market. Committed to providing the best possible virtual instruments and sound library experience, Best Service provides access to an exceptional range of inspiring tools for musicians.

“We are beyond excited to offer Audio Modeling’s outstanding virtual instruments through our webshop,” enthuses Managing Director Robert Leuthner. “Authentic sound isn’t the only feature of Audio Modeling’s products we find impressive. We feel that the SWAM instruments’ intuitive workflow and user-friendly graphical interface with smart controls lets musicians focus on making music, instead of worrying about technical considerations. Plus, for those who want to play virtual instruments live, Audio Modeling offers Camelot, a convenient tool that combines a setlist manager, mixer, MIDI processor, and effect host in a single, integrated performance environment.”

Audio Modeling products are available online at https://www.bestservice.com/.

We at Audio Modeling are very excited to collaborate with these amazing companies. Welcome aboard, and we look forward to making beautiful music together!

 


MIDI Talk, Season 2, Episode 1: Omri Abramov


Omri Abramov (https://abramovmusic.com/) is a musician of his times: global in scope, drawing on tradition and breaking new ground, playing acoustic instruments and exploring the boundless possibilities of technology, and all in the service of his expression of the human experience.

Born in Israel, Abramov took up his first instrument, the recorder, a first musical experience for many children, at the tender age of five. He moved through the recorder family, on to flute, and then, in high school, picked up the saxophone. “There’s something that has always clicked for me with the movement of fingering the wind instrument,” Abramov observes.

Abramov studied jazz at Tel Aviv’s Israel Conservatory of Music and made tenor saxophone his primary instrument. Receiving designation as an Outstanding Musician from the Israel Defense Forces, he spent his time in the military playing shows all over Israel, including multiple appearances at the Tel Aviv and Red Sea Jazz Festivals.

Since being discharged, Abramov has played in the Idan Raichel Project, co-founded jazz-fusion band Niogi with keyboardist Dr. Guy Shkolnik, relocated to Berlin for better access to Europe’s jazz community, joined Sistanaglia (an ethnic/jazz group of Israeli and Iranian musicians with a message of peace), and formed Hovercraft, a New York City-based trio. Add in composing and producing, and Omri Abramov is a pretty busy dude.

 

Omri at Play

 

Today, Abramov plays a lot of tenor and soprano saxophone, but also has taken a deep dive into electronic wind instruments, playing both the Akai EWI (Electronic Wind Instrument – https://www.akaipro.com/products/ewi-series) and Aodyo Sylphyo (https://www.aodyo.com/?langue=en). Few outside of wind players are familiar with these instruments.

“(Wind controllers) deal with electronic signals, but have this humanizing effect produced through a sensor that is based on breath,” explains Abramov. “This sensor is converted to MIDI data, like expression (controller). You can assign this to different MIDI parameters and humanize virtual instruments or analog synthesizers, or whatever you would like to control.”

The difference between this and playing an acoustic instrument is not always easy to grasp. “People often ask, ‘You are playing an electric saxophone?’ It’s not electric saxophone, it’s an instrument in itself,” Abramov clarifies. “You have to really dig into it and explore it, and then the possibilities are endless.”

When asked which is his favorite between tenor sax, soprano sax, EWI, and Sylphyo, Abramov glances skyward as he ponders the question. After a moment, he realizes the reason for his hesitation. “I don’t have a favorite instrument,” he begins, “because everything is connected to what (the instrument’s) functionality is in a specific situation.” Further, the experience of playing each instrument impacts his playing on the others. “The saxophone playing benefits my playing on the EWI and vice versa. They all contribute to reaching my vision.”

 

The Deep Dive

 

“My relationship with technology started about 13 years ago, in my old jazz fusion band, Niogi,” says Abramov. “That’s where I got into sound synthesis through playing the EWI, getting a bit out of the traditional acoustic zone that I was in.

“I started doing music of my own in the line of fusion and sound design, and thought, ‘I’m playing all that uptempo jazz on my sax, it will be easy: I’ll just go to the store and pick up the EWI and start shredding.’ And I couldn’t play a note because it was so different. Still, I was brave enough to get one home, sit with it and practice a lot.

“I got a MacBook and a version of Apple Logic, and the first synthesizer I used was the ES-2 that came with Logic. Then I went to Spectrasonics Omnisphere, and explored sound design combinations with samples and stuff. I’ve also worked in the area of plugin development, like my connection with Audio Modeling, and with Aodyo, the manufacturer of the Sylphyo.

“Each of these requires me to dive deeper into its own zone, and then the combinations, for example the combinations you can achieve with Sylphyo and its wave controller controlling the parameters you can assign in the SWAM instruments. In the new SWAM v3 especially, what you can assign is amazing.”

 

Drawing Expressivity Out of Technology

 

Having spent years studying playing techniques on saxophone and learning how to employ them expressively, Abramov is essentially repeating that journey with electronic instruments.

“There are definitely (challenges in technology) I had to find a way around. For example, vibrato. There’s a great vibrato response in the parameters on the SWAM instruments, and on the EWI, you can get vibrato through biting the mouthpiece. I tried that a bit and ruined a few mouthpieces because my bite is apparently too strong. I started using pitch bend, and it felt more like a natural response. The EWI has a pitch bend up on the upper side of the thumb and pitch bend down on the bottom side. I use many upward pitch bends, and I do it with my thumb. Naturally searching for that feeling from the sax, I found this thing and it started to be part of my playing.

“Maybe that is something personal to myself, I found my way to do it. Many EWI players ask me, ‘oh, you actually use the thumb to do vibrato?’ Yeah, but it’s a tiny thing, because if you do it too much, it’s like oowahoowahoowah. But that’s the beauty of music: you search for this feeling, you suddenly create this thing that maybe if you were coming from the outside and tried to analyze it like a computer, you’d be like, “ah, no, that’s not the way to do it,” but then you put a person in this process trying to reach for something from the heart, and it creates something.”

The other example Abramov cites is the importance of dialing in the velocity response of the sound source, which is an iterative process involving small adjustments on the software instrument, then adjustments to the controller, and back and forth until it feels natural to play. “It’s not really like I look for what resembles the saxophone, because if I play a trumpet sound, or more than that, a synth combined with a trumpet sound, that’s a new thing. I’m just looking for how it will have a soul, how it will have a character that you can hear someone behind it playing it like it was something natural in the room.

“What we’re trying to do is give technology and electronic instruments and software instruments the human element that keeps them so exciting for years without becoming dated. Acoustic instruments are never getting old. Synths, usually the ones that are flashy and super hip right now, in 20 years will not be hip at all, because fashions change. But things that are more subtle and grounded, that have soul inside and feel more natural–it is more likely they will endure the test of time.”

 

Swimming in SWAM

 

Abramov discovered Audio Modeling’s SWAM instruments at a couple of NAMM (National Association of Music Merchants) trade shows. “The first time was at NAMM 2019,” Abramov recounts. “I connected my EWI to one of the SWAM instruments, and was blown away. Before that, I was doing stuff that was more connected to the internal EWI sounds. But then I used the EWI as a controller for this amazing SWAM sound engine, and I was really intrigued. To be frank, it changed my views about virtual instruments and what you can do with modeling synthesis to emulate an actual instrument.

“Even now, I love taking several SWAM instruments and combining them together to make new sounds. These possibilities hit me right away. I asked, ‘Can I combine this clarinet sound with the flute sound that you just played?’ And (the Audio Modeling person) said, ‘Sure,’ and then he set up two sound engines and put the sounds together, and it was so fun to play with that I was hooked. There’s one big video called SWAMMED (https://www.youtube.com/watch?v=OocavJfNiv0) where I did an orchestra of wind instruments. It was a whole process. I was diving more and more deeply into it.”

Camelot has had just as much impact on Abramov. “Camelot is really inviting, that would be the word for it. It’s not messy,” asserts Abramov. “The most crucial things, like volume control, are really easy to approach in Camelot. I can create all my sound combinations and give them different effects, run them through different processing–even use different controllers–see the level of each of those layers (containing the various instruments) and what processors they are running through. Perhaps 50 percent of my time is spent creating those combinations, that’s a lot. Camelot became my go-to for that purpose. My iPad works hard, and Camelot works hard.”

Abramov’s attention to features that aid his ability to play expressively has led him to focus heavily in Camelot on one thing: the response curves that translate controller response into sound source parameter response. Camelot provides powerful tools for shaping these curves to match the style of a player or the characteristics of a controller to the way that a sound changes. “I use the SWAM MIDI Learn function to assign different parameters, then I modify the curves,” he details.

“I use that for things like opening a filter. If I rotate the Sylphyo one way or the other, it opens the filter in different ways. What I like to do is apply just a bit of filtering; even on SWAM Trumpet, I’ll put an external filter on it. The filter is not 100 percent open, but almost, and every small movement I do opens it a little bit. It’s like I’m trying to emulate the feeling of what happens when the trumpeter is moving his head and changes the angle of the lip.

“It’s a territory I’ve started to explore relatively recently, but I feel like there’s a lot there, because I didn’t realize how much expressivity happens from the actual movement of the player. I mean the small gestures you do when you’re excited or you move for a higher or lower register. The aspect of movement is one thing I think Aodyo did great in the Sylphyo, with its movement sensor. When you approach it in a delicate way, the small gestures in the curves, in the volume combinations between layers, in the movement, that is where the magic is. I can spend hours on that.”
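For readers who want a concrete picture of what such a response curve does, here is a minimal sketch in Python. It is not Camelot’s actual implementation, just an illustration of the general idea, assuming a simple power-law curve applied to a 7-bit controller value:

    # A minimal sketch of a controller response curve (not Camelot's actual
    # implementation): an incoming 7-bit MIDI value is reshaped by a power
    # curve before it reaches the destination sound parameter.
    def apply_curve(cc_value: int, exponent: float = 2.0) -> int:
        """Map a controller value (0-127) through a power curve.

        exponent > 1 leaves more resolution for small gestures at the low
        end; exponent < 1 makes the response more sensitive early on.
        """
        normalized = cc_value / 127.0
        return round((normalized ** exponent) * 127)

    # With exponent=2.0, a half-way physical gesture (value 64) only opens
    # the destination parameter to about a quarter of its range.
    print(apply_curve(64))   # -> 32

Camelot exposes this kind of shaping through its curve tools; the exponent here simply stands in for whatever shape a player dials in.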

 

The Never-Ending Story

 

Omri Abramov has his nimble fingers in many musical pies, and while maintaining his deep love of the saxophone, he is clearly transfixed by the unexplored territory that music technology lays before him. He still views himself as being barely past the starting line.

“All this is a constant search for new ways of expressing myself through technology. Of course music is the engine, but technology is the tool. This is a constant journey since (starting with the EWI). I’m trying to get deeper and deeper. I feel like I’m just scratching the surface.”


Camelot for Bass Players


 

Bobby Bigboddum keeps his plate pretty full. He plays both fretted and fretless electric bass in two wedding bands and a rock band. He also plays acoustic double bass in a jazz group and a bluegrass band. As if that weren’t enough, Bobby is a pretty good singer, too. Because he is so versatile and a solid player and singer, Bobby frequently gets booked to play bass with acts on tour, but finds it exhausting hauling around a heavy bass amp and a big pedalboard of effects to get the various sounds he needs to cover gigs.

 

Recently, Bobby played a gig with a guitarist who brought nothing to the gig but his guitar, a small audio and MIDI interface and an iPad. He plugged his guitar into an interface instrument input, ran a cable from the interface’s output to the sound system, and sounded great all night, with a different sound for each song, and sometimes several sounds within the same song. Impressed, Bobby asked him about his rig, which is when Bobby was introduced to Audio Modeling’s Camelot Pro.

 

The guitarist explained that he could host any plugin in Camelot and, since the audio inputs got added in Camelot 2.1, run his guitar through it, using a MIDI pedalboard to change sounds and control parameters like sweeping his wah wah. Bobby instantly knew he had found his new rig. He learned that Camelot ran on macOS and Windows, as well as iOS, and, since he already had a laptop, an audio/MIDI interface, and some great plugins, he decided to run Camelot on that. So Bobby downloaded a copy of Camelot Free and was off to a new adventure. In less than a week he decided to buy a license for Camelot Pro to get its greater power.

 

Although Bobby’s interface has a nice, clean sound on its instrument input, he wants the grit and beef of a real bass amp, so an amp simulator is the basis of his sound. As we shall see in a moment, getting Bobby’s bass into the world of Camelot is fast and easy.

 

Bobby uses different amp sounds for each of his bands, and the wedding bands call for a wide variety of different sounds. Camelot’s Layers make it easy for Bobby to accommodate all of these different needs.

 

Each Layer has its own signal path, with whatever processing and effects plugins Bobby might want, and even the ability to route his bass signal to other layers, if need be. One or more Layers can be stored as a Scene, and Scenes can be changed manually or automatically at specified times. This makes it easy for Bobby to change his bass sound at different points within a song.

 

But Bobby often wants the same amp sound for a whole song or, in the case of the rock band, for the entire evening. Camelot has two special-purpose sections that make this easy. Layers in Camelot’s Setlist Rack stay the same through every song of the entire set, while Layers in the Song Rack change with each song, but not within a song.

 

So, for the rock band, Bobby makes a Layer in the Setlist Rack, puts his amp sim there, and selects the interface audio input as the audio input to the amp sim. Since the amp sim has some built-in effects, that’s often all he needs for an entire gig with that band.

 

Figure 1 – On the right, Bobby has stompbox effects and an amp simulator in a Setlist Rack Layer. On the left, his bass is selected as the audio input to the layer.

 

There are a few songs, however, with spots needing a sound that is in some way unusual (for example, one song has a section where he uses a Leslie sound). Bobby’s amp sim can call up presets from MIDI program change commands, so when those songs are in the setlist for the night, he has Scenes that send the appropriate program change command to the amp sim. Each Scene has a Layer that sends the appropriate program change command, and an item that routes the MIDI to the Layer in the Setlist Rack that has the amp sim. When he calls a Scene, it changes the amp sim program. He uses this same method to change presets on the amp sim for songs where he plays the fretless electric bass, which needs different settings than his regular bass.
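For the technically curious, the message a Scene like this sends is an ordinary MIDI Program Change. The sketch below, written in Python with the third-party mido library, shows what that message looks like; the port name and program number are purely illustrative assumptions, and Camelot builds and routes the message for you through its Scene and Layer settings:

    import mido

    # Hypothetical port name; list the real ones with mido.get_output_names().
    out = mido.open_output("MIDI Interface Port 1")

    # A Program Change is MIDI's standard "recall a preset" message. Program 4
    # would select the amp sim's fifth preset (program numbers are 0-based).
    out.send(mido.Message('program_change', channel=0, program=4))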

 

To call up these different Scenes, Bobby turns to Camelot’s Timeline view. Many users (including the guitarist who first showed him Camelot) use the Timeline to play backing tracks during a performance. In that situation, Scene changes can be programmed to happen at exact times while the backing tracks are playing. Bobby places his Scene changes on the Timeline, but doesn’t have backing tracks and doesn’t play the Timeline. Instead, he uses his MIDI foot controller to manually step Camelot to the next Scene change on the Timeline.

 

Figure 2 – The Timeline (on the left) here holds Scene changes Bobby calls manually. On the right is shown the MIDI command used to advance to the next Scene.

 

Some of Bobby’s bluegrass and jazz gigs don’t require amplification at all, but the ones that do call for the same sound all night, just like his rock gigs. For these bands, he’s looking for a totally different sound than he can get from an amp sim. Instead of the amp sim, Bobby’s Layer in the Setlist Rack is built around a plugin that emulates a high-class mixing console channel strip, which gives him the sound of a nice recording preamp, plus EQ and compression.

 

Figure 3 – Bobby’s basic rig for his bluegrass gigs is a nice channel strip (shown on the right), followed by a graphic EQ he uses to adjust for each room. Since he uses the same sound all night, he puts everything in the Setlist Rack.

 

Bobby has a pickup on his double bass, but sometimes it works out better to use a microphone. So he has two copies of his jazz and bluegrass Camelot Setlists, one copy with levels, EQ, and compression settings optimized for the pickup, the other identical but with settings optimized for the microphone. (He tried just having separate Scenes, instead of separate Setlists, but he found having so many Scenes confusing.) He also has a graphic EQ plugin that follows the channel strip, which he uses to make adjustments at each gig for the sound of the particular room. There are a few rooms he works regularly, and he simply stored additional copies of his Setlist documents named for those venues. Bobby can go to a gig and use either a pickup or a microphone, with perfect correction for the room at regular gigs – all by doing nothing more than opening the right Camelot document at the beginning of the night.

 

The wedding band gigs are a whole different story. The band plays well-known songs and tries to come as close as possible to the original versions, so Bobby needs a different sound for each and every song. The Setlist Rack is not as much help to him on these gigs, so he relies more on putting his amp sim or channel strip (depending on the demands of the gig) in a Layer in his Song Rack, where it changes for each song. As before, when there are special needs in the middle of a song, he uses Scene changes to let him bring other sounds in and out.

 

Having all of his sounds for all of his bands instantly available in Camelot comes in very handy in one other way. When Bobby is on the road doing a tour, he often needs to learn new material, sometimes for additions to the setlist, sometimes for other upcoming tours or gigs. Here, the Timeline view serves him once again.

Figure 4 – Bobby practices new songs by playing over a recording or a minus-bass mix of the song as a backing track. He also can program scene changes as he learns the song.

 

But instead of using the Timeline for programmed changes, Bobby uses it for rehearsal. He gets recordings of the songs he needs to learn and inserts them as backing tracks in yet another copy of one of his Setlist documents. He creates a new Song with a backing track for each piece he has to learn. He runs it as many times as necessary, stopping to program Scene changes where necessary, then moves to the next Song. Since he works while listening over headphones, he is able to learn while traveling, in motel rooms before or after a gig, or anywhere, really, that he can have his bass with him and play along. If he needs a sound he doesn’t already have, he creates a new Scene, adds whatever plugins he needs, and then stores the Scene as a template, which he can easily import to any of his existing Setlists.

 

When Bobby sings, his microphone goes directly to the sound system on larger gigs, but for some of the smaller gigs the bluegrass band often does, he plugs his mic into his interface’s second audio input and selects that as the audio input to another Layer in the Setlist Rack, in which he has put another channel strip plugin. He usually sends a mix of his bass and vocal out a single interface output, but there have been occasions when he has routed them to separate outputs. This is exactly the kind of flexibility that convinced Bobby to try Camelot Pro in the first place.

 

Figure 5 – When Bobby sings, as well as playing bass, his Setlist Rack has a second layer for processing vocals. Note the addition of reverb after the vocal channel strip.

 

Sometimes people think bass players have it easy. It seems to them that only having four strings instead of six is easier, and that nobody expects lots of crazy sounds and effects out of the bass player. But the art of playing bass lies in large part in how the part is played: the sound has to be right, the touch has to be right, and the phrasing has to be exactly what is needed at any moment. Bobby’s favorite quote is from Little Feat’s Lowell George, who noted on an album cover, “Do not underestimate, nor take lightly, this thing we call ‘bass.’” Bobby understands his role as a bassist very well, which is why he now relies on Camelot Pro to give him all of the tools he needs (beyond his fingers and instrument, of course) to be the player Lowell George was talking about. Judging from his packed gig schedule, no one is underestimating Bobby, or taking him lightly.


Audio In, Out, and All About in Camelot 2.1


 

Perhaps the most exciting improvement in Camelot 2.1 is the addition of new capabilities for getting audio into Camelot and moving it around within a Scene. Layers, hardware device items, and sidechain or aux inputs for items can all now accept external audio through their Audio & MIDI Settings. Let’s take a closer look.

 

Layer Audio Input

 

Each Layer can now be fed with a selected audio input. This means you can bring vocals into Camelot, process them through your best compressor and delay or reverb plugins, then mix the result with all of your software and hardware instruments. You can even create effects loops that send audio from Camelot to an external processor and bring the processor returns back into layer audio inputs. The fact is, Camelot now can be the mixer for your entire rig.

 

Using a Layer audio input takes only three steps: first, be sure that your microphone or external processor is connected to inputs on your audio interface.

 

Figure 1 – Connect mics and instruments to interface

 

Second, go to the Audio>Audio Input pane of the System Settings view. Click the plus sign at the bottom of the Audio Inputs pane to add additional Camelot audio inputs, if you need them. Click on the name of one of the Camelot audio inputs shown there, and choose your interface’s audio inputs from the list that is displayed.

 

Figure 2 – System Settings>Audio Inputs

 

Click on the name of a Camelot audio input to edit its mapping to your interface. Note the toggle field in the upper right that lets you make any input be mono or stereo. It is possible to change from one to the other after an input has been assigned, but, obviously, doing that necessarily changes the assignment, so be careful.

 

Figure 3 – Map interface inputs to Camelot inputs

 

 

If you want to delete an audio input, click the “three dots” icon in the upper right and select Delete from the menu that drops down.

 

Figure 4 – Delete input

 

Finally, open the Audio and MIDI Settings>Audio Input pane on the layer and click the name of the Camelot audio input you want to feed that layer. When the Settings view opens, the whole panel will be displaying MIDI Settings. Just click the arrow in the upper right corner to collapse those and you will see the Audio settings and be able to click the Audio Input pane.

 

Figure 5 – Map audio input to layer

 

Click Done in the upper right corner, and your external audio is now flowing into that layer and any processors in the layer. When you look at the layer, you’ll see a triangle icon on the left side with the name of the Camelot audio input mapped to that layer.

 

Figure 6 – Layer audio input indicator

 

The layer volume fader in the Mixer view lets you balance your external input with everything else. The meter next to the fader shows you the level in the layer, but you might want to check meters in any plugins you’re using to monitor levels through the signal chain.

 

MIDI Device Item Audio Input

 

Up until now, you could control a hardware MIDI synthesizer or other device from Camelot, but you had to mix its audio externally. Now, Camelot’s Hardware Devices items (whether Smart Maps or custom maps) can accept audio inputs, so Camelot can both control an external synth and mix its audio.

 

Setting up an audio input for an external device is pretty much the same process as for a Layer audio input, except that you want to access the Hardware Device item’s Audio & MIDI Settings, instead of the layer’s settings. Click on the name of the item and its settings will show up. If preset selection is being displayed, click the arrow where it says Audio & MIDI Settings at the bottom to collapse the preset display and show the audio settings. Click Audio Inputs and select the input(s) you want to use.

 

Figure 7 – Showing Audio & MIDI Settings

 

Beneath the Audio Input setting is the Item Main Knob and Pan setting. When this is set to MIDI, the big knob that appears on an Item on a Layer will send out MIDI CC 11 messages by default, and the associated Pan control sends out CC 10 (which is the CC dedicated to Pan). But the main knob can be changed to send out any MIDI CC number you wish by expanding the Item and clicking on the CC11 legend at the top of the fader.

Figure 8 – The Item Main Knob and Pan setting (shown on the left) lets you assign an Item’s main knob and pan control to either be sent as MIDI messages or controlled in Camelot’s audio engine. If MIDI is chosen, the main knob can be assigned to send any MIDI CC number, as shown on the right.
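As a point of reference, CC 11 (Expression) and CC 10 (Pan) are ordinary three-byte Control Change messages defined by the MIDI specification. The short Python sketch below builds them from raw bytes purely to show what Camelot is sending on your behalf; it is an illustration, not something you need to write yourself:

    def control_change(channel: int, controller: int, value: int) -> bytes:
        """Build a raw 3-byte MIDI Control Change message: the status byte is
        0xB0 ORed with the 0-based channel, followed by the controller number
        and the value (both 0-127)."""
        return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

    # What the Item Main Knob and Pan control send by default:
    expression = control_change(channel=0, controller=11, value=100)  # CC 11 = Expression
    pan_center = control_change(channel=0, controller=10, value=64)   # CC 10 = Pan, 64 = center
    print(expression.hex(), pan_center.hex())  # b00b64 b00a40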

You can mix several external devices in a single layer by selecting audio inputs for each item, and then mix in a microphone as a layer audio input. With the ability to accept audio inputs to items and layers, you can get quite sophisticated with mixing in Camelot. Your biggest limitation is likely to be the number of audio inputs on your interface.

 

Plugin Sidechain Inputs

 

Some plugins have the ability to accept auxiliary audio inputs. That could mean inputs to a vocoder or ring modulator, or a sidechain input for a compressor. Sidechain inputs are accessed exactly as they are for external devices: click the item name to bring up Audio & MIDI Settings and then go to the Audio Input pane, where you will see two settings. SideChain Bus displays the inputs available in the plugin, while SideChain Input allows you to select the Camelot audio input that will feed the plugin input selected in the SideChain Bus setting.

 

Figure 9 – Sidechain inputs

 

Routing audio between layers

 

In addition to accepting external audio, Camelot 2.1 also adds the ability to route audio between layers. Simply insert an Audio Layer Connector item in the source layer, choose the destination layer, and set the level to be sent to the destination. This feature makes it easy to set up a layer as a dedicated processing chain and then send audio from other layers to it, with a separate send level for each layer. In addition, the send can be placed anywhere in the source layer, so it can be tapped from any point in the layer’s signal path.

 

Figure 10 – Layer Audio Send

 

To insert an audio layer connector, add a new item, choose Post-Processors>Audio Layer Connector, click Audio Send, and select the destination layer from the list that’s shown.

 

Figure 11 – Choosing layer Audio Send destination

 

Glitchless switching

 

One of Camelot’s most important capabilities is being able to switch scenes, which lets you completely redefine the sonic world in an instant. That’s so easy that it hides how difficult it is to actually make such a big switch seamlessly. Camelot allows no glitching or interruption in the audio when scenes are changed, and MIDI changes are handled appropriately to avoid problems, as well.

 

This means, for example, that harmony vocal microphones can be enabled for song choruses, with compression, reverb, or any other processing, then muted for the quiet verses where there is only lead vocal.

 

Figure 12 – Full band mix

 



Camelot for iPad: Mark Basile’s iOS Virtual Instrument Faves

 

As Apple’s iPad has grown in computing power, musicians have increasingly turned to it as a platform for music-making, both live and for recording. The raw capability of today’s iPads can’t be denied, and, even though the iPad is still not as powerful or flexible in some respects as a laptop or desktop computer, the stability of its OS and the familiarity of its interface have been key factors in the growth of the iPad as a music machine. iOS apps have grown to be numerous and sophisticated, and we at Audio Modeling are often asked how truly viable an iPad is as a standalone music platform. So we decided to take a shot at an answer.

 

Unsurprisingly, we’ll start with this: Audio Modeling’s Camelot live performance environment runs on the iPad, as well as under macOS and Windows, which makes all of those great instruments and effects usable for live performance. Camelot can host the full range of virtual instrument and effects plugins available for iOS, and provide control for them from a MIDI controller plugged into an iOS MIDI interface. USB controllers can be used, too, but require a USB adapter.

 

To get a bigger picture of just what is out there for iOS, Audio Modeling turned to Mark Basile, vocalist for long-established progressive metal band DGM, and Musical Director and Vocal Coach for the Echoes Music Academy in Naples, Italy. We asked Mark to share some of his favorite iOS instruments, which, by a funny coincidence, he just happens to run within Camelot. By no means is this a comprehensive survey of iOS virtual instruments, as a prowl through Apple’s App Store will quickly show. It should be more properly thought of as a curated collection from the musical mind of Mark Basile.

 

Camelot Pro

 

Camelot is a complete live performance environment, with virtual instrument and effects plugin hosting, MIDI controller management and processing, multitrack playback, music score display, and much more. In addition to MIDI control of iOS instruments, Camelot can also accept external audio inputs from an audio interface, so you can even use Camelot as a mixing and processing environment for vocals or acoustic instruments.

 

Camelot lets you construct entire VI (Virtual Instrument) and audio rigs, store them, and recall them, enabling you to call up an entirely different sound and, in fact, complete system, for each song or section of a song. Then you can build setlists out of the songs. Camelot is a powerful way of putting iOS plugins to work on a gig.

https://apps.apple.com/us/app/camelot-pro/id1326331127

 

SWAM Solo Instruments

 

SWAM Solo Strings, Solo Woodwinds, and Solo Brass are modeled instruments, not sample instruments, which means that, rather than playing recordings of acoustic instruments, they emulate the behaviors of the sound-producing mechanisms that make acoustic instruments distinctive. This makes modeled instruments far better and more intuitive for playing live and imparting instrumental style. Modeled instruments also need far less memory than sample instruments.

 

Audio Modeling’s SWAM instruments provide a collection of models of many of the instruments that dominate classical, jazz, and other styles of music, from violin, cello, and double bass, to trumpet, trombone, and tuba, to saxophones, oboe, and flutes. But the SWAM instrument bundles also go further afield to include instruments like piccolo trumpet, euphonium, and bass clarinet.

 

“SWAM instruments are very important collections of Solo Strings, saxophones, Solo Woodwinds (which includes the saxophones), flutes (also in the Solo Woodwinds family)…in short, a lot of things,” enthuses Basile. He finds the SWAM instruments easy to program and play. “We always have a graphic interface that is extremely clear, which is very helpful for programming and during interaction. The graphics allow you to have everything under control: vibrato, velocity, and with the expression pedal.”

 

“The legatos are beautiful, and the breath noise….amazing!”

https://audiomodeling.com/iosproducts/

 

Acoustic Piano

 

“For acoustic piano, I’m using a Ravenscroft app,” Basile reveals. “I’ve used this app for quite some time and it gives me great satisfaction.” Ravenscroft Pianos are ultra-high end, hand-built pianos, and Ravenscroft 275 for iOS is a virtual instrument constructed from exacting recordings of an original Ravenscroft Model 275 Titanium Grand Piano.

 

Careful scripting brings out fine detail in hammer attacks, natural resonances, and other low-level properties that make for a rich, convincing piano sound.

 

Ravenscroft 275 Piano by UVItouch

https://apps.apple.com/us/app/ravenscroft-275-piano/id966586407

 

Strings and Pads for layering

 

Basile frequently layers the VI Labs Ravenscroft 275 piano with strings and pads from Korg’s Module Pro. Module Pro is a sound library player that comes with a core library that includes keyboard, strings, brass, and synth sounds, but it really comes into its own with additions from the large selection of expansion sounds available for it. The expansion libraries add more keyboard instruments, sounds from (of course) the Korg Triton, orchestral and choir sounds, cinematic sound effects, house music sounds, and so on and so forth. Module Pro is an all-arounder.

 

KORG Module Pro

https://apps.apple.com/us/app/korg-module-pro/id932191687

 

 

Expansion Sound Libraries

https://www.korg.com/us/products/software/korg_module/modules.php#expansion

 

Electric Piano

 

Neo-Soul Keys Studio 2 is focused on electric piano, and, most particularly–though not exclusively–Rhodes sounds. Gospel Musicians is fond of telling how the late, great George Duke bought Neo-Soul Keys Studio 2 because he felt it had more grit and funk than other Rhodes emulators. Another plus for Neo-Soul Keys Studio 2 is its onboard effects, which are licensed from Overloud.

 

Gospel Musicians/MIDIculous LLC Neo-Soul Keys Studio 2

https://apps.apple.com/us/app/neo-soul-keys-studio-2/id1482448438

 

B3 Organ

 

Guido Scognamiglio is an Italian countryman of ours, and his company, Genuine Soundware & Instruments, is renowned for their VB3 emulation of the Hammond B3 organ. VB3m is the iOS version of this instrument, and boasts lots of authentic detail, from the drawbars to tube overdrive to the Leslie cabinet emulation, as well as various well-known B3 features like percussion and vibrato. Like Audio Modeling SWAM instruments, VB3m is a physical model, not a sample instrument. VB3m also includes flexible MIDI features.

 

GSi VB3m

https://apps.apple.com/us/app/vb3m/id1560880479

 

Synth Brass

 

discoDSP makes a number of virtual instruments, but the OB-Xd captures the magic of one of the most-loved synths of the 1980s, the Oberheim OB-X. Possibly the greatest fame of the early Oberheim OB-series synths came from producing the keyboard riff that anchored Van Halen’s “Jump,” and, yes, the OB-Xd can resurrect that sound very well, indeed.

 

The OB-Xd starts with an OB-X emulation, but then adds improvements, since, well, things have advanced some in the last 40 years! Basile pronounces the OB-Xd to be “one of the best Oberheim emulations,” though he then adds, “The only issue with this app is its lack of internal effects. But I can absolutely address that with Overloud TH-U effects, like chorus, delay, and reverb, that are part of the equipment to use with the Oberheim (sound) to get more fatness, richness, presence, and a wider sound palette, and also stronger stereo positioning.”

 

discoDSP OB-Xd

https://apps.apple.com/app/id1465262829#?platform=ipad

 

Delay, Reverb, Effects

 

Basile uses Overloud’s TH-U as an outboard effects device following a virtual instrument in Camelot. TH-U iOS can share presets with the desktop version of TH-U. Oh, and by the way, TH-U is an amazing guitar amp simulator, too. The free version has a limited set of amp sims and effects, but there are many, many more available as inexpensive paid add-ons.

 

Overloud TH-U iOS

https://apps.apple.com/us/app/th-u/id1478394489?ls=1

 

Synth Lead, Pads, Arpeggiators

 

KV331 Audio’s SynthMaster One is a wavetable synthesizer with a straightforward interface and lots of features like scales, filters that emulate a few famous analog designs, microtuning, and lots and lots of presets.

 

The voice structure on SynthMaster One is quite powerful, with two stereo oscillators, two sub oscillators, two filters, four envelopes, two LFOs, and a 16-step arpeggiator/sequencer. Eleven different effect types round out the package.

“An amazing app. I really do everything with it,” says Basile, “not just my signature lead, but there are synth basses, cinematic atmospheres and soundscapes…I can program so many things.”

 

KV331 Audio SynthMaster One

https://apps.apple.com/us/app/synthmaster-one/id1254353266

 

Pads/Juno 60

Roland’s beloved Juno 60 arrived in 1982 and became a favorite instrument for making pads, synth brass sounds, and for its gritty stereo chorus. The TAL-U-No-LX captures the warmth and funk of the Juno 60’s sound without its noisiness. The TAL-U-No-LX goes the Juno 60 one better in that it has 12 voices, as opposed to the original’s six.

 

The Juno 60 was the first of the Juno series, which included the Juno 106 that was one of Basile’s favorites. TAL-U-No-LX, says Basile, “is available both on desktop and iPad, and it is faboo, with a layout that is just a wonder, extremely clear in its emulative intention. It has an incredible sound.”

 

TAL Software GmbH TAL-U-No-LX

https://apps.apple.com/us/app/tal-u-no-lx/id1505608326

 

Sample Player/Rompler

 

Pure Synth Platinum is a rompler synthesizer, despite having no actual ROM (since it is all software). Nevertheless, it has a ton of tones, plus four layers per voice and effects licensed from Overloud. If your iPad has limited storage, samples can be stored on an external SSD or thumb drive.

 

Basile explains that Pure Synth Platinum 2 “has a sample library of FM sounds, like the DX7 and beyond,” but he appreciates the range of sounds available: “I have a sound we will call ‘Stratovarius-ish’ or ‘Malmsteen-ish,’ created using Pure Synth Platinum 2, that combines sounds from multiple internal parts.”

 

Gospel Musicians LLC Pure Synth Platinum

https://apps.apple.com/us/app/pure-synth-platinum/id1459688500?ls=1

 

Synth Bass

 

It’s the Minimoog. From Moog Music, no less. Do we really need to say anything more about it?

 

Moog Music Inc. Minimoog Model D


 



MIDI Talk 10: Whynot Jansveld Takes Tech On Tour

Making a living in music is a hustle, but diversification strengthens your ability to ride the winds of change that so often sweep through the industry. Whynot Jansveld is a case in point. As a bassist, Jansveld has toured with far more artists–well-known to obscure–than there is space in this article to name, but we’ll throw in a few: The Wallflowers, Richard Marx, Butch Walker, Natasha Bedingfield, Gavin DeGraw, Sara Bareilles…OK, we better stop there if we want to get to his words. You can hear the entire conversation here.

He has appeared on both daytime and late night talk shows, and worked with numerous producers of note. But his bass career is not his only one.

Jansveld has also built himself secondary careers as a composer, particularly writing a lot for Founder Music, a stock music library, and as a mastering engineer, many of his clients being the same folks for whom he plays bass.

A native of The Netherlands, Jansveld emigrated to the US in his 20s to attend Berklee College of Music in Boston. “I grew up on a lot of rock and pop music on the radio, and a lot of that came from (the US). Ultimately, that’s why I wanted to explore it and became a professional musician here,” recalls Jansveld.

After coming to Berklee to do one semester and instead graduating after three years there, Jansveld slipped down the road to New York, where he worked for 16 years before relocating to Los Angeles 10 years ago.

 

Tech On Tour

 

Jansveld’s introduction to music technology came while, at age 18, he traveled for a year playing bass with Up With People, a US nonprofit organization founded in 1968 to foster cross-cultural understanding amongst young adults through travel, performing arts, and community service. The troupe traveled with early digital music technology: a Roland MC Series sequencer and a Yamaha RX5 drum machine. Jansveld took the bait. “I just started messing around (with the sequencer and drum machine) in our free time. I wanted to learn how that all worked and I thought I could somehow make something out of it. And I did.”

At Berklee, Jansveld took the next step when a bandmate sold him a Macintosh SE computer loaded with Opcode Systems Vision, a very early (and, in fact, visionary) MIDI sequencer program.

Today, Jansveld’s primary gig remains touring as a bass player, which means much of his music technology use centers on mobile systems. Powerful devices in small, robust packages are valuable to him. His heaviest use of music technology on the road is for composing and mastering, but some of it naturally finds its way onstage, as well.

Jansveld’s mobile technology use is mostly behind the scenes. At his level of touring, musicians are expected to arrive at rehearsals for a tour already knowing all of the material, so Jansveld often needs to create song charts while he is travelling.

“It used to be that you’d have a stack of papers (of charts), but now, I write all my charts on my iPad with an Apple Pencil.” This works for Jansveld because he doesn’t need to be hands-on with his instrument in order to transcribe. “I think I’m a little bit different than most musicians, in that I almost never touch a bass guitar while I’m preparing for any of this, even up to the point where I get to the rehearsal. I do it all in my head. I get a big kick out of being on a five-hour flight and transcribing a whole set of tunes. I have my in-ears (monitors) plugged into the iPad, and three-quarters of my (iPad) screen is where I write my chart, and the little quarter of a screen is my music program. I’m playing the music, rewinding a little bit, and writing as I go along.”

Jansveld also carries a mobile production rig for composing on the road. “I have a laptop and a little bag with an Apogee Jam (USB guitar interface for iPad), an Apogee mic, some hard drives, and a couple of cables, and that’s it. I can work in my studio at home, disconnect everything, close the laptop, put it in a bag and take it on the road. (I) open it up in a coffee shop, and everything that I was working on is there and I can keep working on stuff.”

Mastering is more of a challenge. “(That), obviously, is a little hard to do on the road because you are on headphones. I try to avoid doing it on the road unless somebody needs something quickly.”

 

The Show Must Go On

 

Jansveld sees the impact of technology in support infrastructure for performances, as well, especially in the areas of monitoring and show control.

“Having in-ears, you can have a beautiful mix, but it does feel like you’re cut off from the world,” he laments. “Sensaphonics have this technology where the in-ears also have microphones on them and you can mix the sound of the microphones with the board feed coming from the monitor side of things.

“It’s a little bit of a hassle to deal with, but at the time (I got them), it was worth it to me to do, because (I could) feel the room. I can’t overstate how important that is to me, because I’m not just there to play the notes, I’m there to perform for sometimes an incredibly huge room with a lot of people in it, and I want a (strong sense of) how that feels to me and how that makes me play and perform.“ Hearing the “outside world” also improves Jansveld’s onstage experience, because “even just walking towards the drums, the drums get louder.” Onstage communication is improved, as well. “If the singer walks up to you and says something while you have normal in-ears, you just nod and hope it wasn’t something important.

“I now use ACS in-ear monitors instead of Sensaphonics, because they came up with a system that also lets in ambient sound from around you, but instead of using microphones and a second belt pack, it simply uses a vent with a filter (like a custom earplug) that keeps the seal intact.”

Digital mixers can store in-ear monitor mixes as files that can be ported to another mixer of the same type. “That is super, super helpful if you’re doing a run of a few weeks, because every time you get to sound check, it’s already been mixed from the night before. It’s an incredible timesaver because, (without that,) you spend a lot of time on, ‘OK, kick…more kick. OK, snare…more snare,’ and it seems incredibly repetitive for no reason.”

Show automation is another area where Jansveld sees the effect of technology on touring. “Currently, I’m touring with the Wallflowers, and before that was Butch Walker, and both of those were really rock shows,” he points out. “Changing something up was just a matter of changing up the setlist.

“(On) some tours, everything is run by Ableton Live including the lights and all of that kind of stuff, and it takes a lot more to change things up because somebody has to go in and reprogram stuff. But for a band that doesn’t have a lot of money to spend on the tour and wants to make an impact, it looks incredible, because everything is timed. The chorus hits and the lights go crazy and come back down on the verse.”

Jansveld also sees greater reliability than existed before in the ability to play backing tracks, another highly automated task.

 

Mastering Kept Simple

 

“Never in a million years growing up did I think that mastering was something I wanted to do, or that I had the skills or ears to do,” Jansveld muses. “I ended up doing it four or five years ago simply because a friend of mine had some new tracks mastered and was pretty unhappy with how they sounded. I had started composing music, mostly for commercial stuff, and had used those (same) tools to make my stuff sound as good as I thought it could be, so I just told him, ‘Why don’t you send me your tracks and I’ll see if I can beat (the other mastering efforts).’ That led to a whole bunch of other stuff with him and people that he recommended me to, and it took off from there.

“I don’t do anything special and I don’t have any tools that you or I or anybody else don’t have access to. But I take the time, and I pay attention, and I trust my ears enough and my experience in music enough to know what I want to hear. I don’t start altering things just because someone is paying me money to master it. If the mix sounds great, I make it louder but I keep it dynamic. It’s a simple concept, but apparently it goes wrong often enough that there’s room for me to do this.”

Jansveld tries not to overcomplicate things in mastering. “If I had to put it very, very simply: ‘nice and loud’ still works all of the time. It kind of works everywhere: on CD, on vinyl, it works for MP3, it works coming out of my phone. It sounds dumb, and it can’t always be that simple, but that’s my experience. You’re definitely pushing up against 0 dB (peak level) or a little bit less, you’re definitely compressing and limiting things, but with the tools we have these days, you can get things nice and loud, still have them be dynamic, and not really experience a feeling that things are squashed or pumping or just made worse.”
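To put the “nice and loud, still dynamic” recipe in concrete terms, here is a minimal sketch in Python with NumPy: add make-up gain, then keep peaks just under a ceiling below 0 dBFS. The function names, the 6 dB of gain, and the -0.3 dBFS ceiling are illustrative assumptions, and the tanh curve is a simple stand-in for a real limiter, not a description of Jansveld’s actual tools.

```python
# Illustrative sketch only (assumed names and values), not Jansveld's actual chain.
import numpy as np

def db_to_linear(db: float) -> float:
    # Convert decibels to a linear gain factor
    return 10.0 ** (db / 20.0)

def loud_but_dynamic(mix: np.ndarray,
                     makeup_gain_db: float = 6.0,
                     ceiling_db: float = -0.3) -> np.ndarray:
    """Apply make-up gain, then smoothly tame anything that would exceed the ceiling."""
    gained = mix * db_to_linear(makeup_gain_db)
    ceiling = db_to_linear(ceiling_db)
    # tanh gives a gentle, saturation-style curve instead of hard digital clipping;
    # a real mastering limiter would use look-ahead gain reduction instead.
    return ceiling * np.tanh(gained / ceiling)

# Usage: one second of a fake stereo mix at 48 kHz
mix = 0.5 * np.random.randn(48000, 2).astype(np.float32)
mastered = loud_but_dynamic(mix)
print(20 * np.log10(np.abs(mastered).max()))  # peaks stay under the -0.3 dBFS ceiling
```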

 

What Do You Want From Life?

 

When asked what he wants from music manufacturers, Jansveld’s wish is to harness more powerful technology for his bread-and-butter needs: “I would love a floor box with a bunch of switches on it that can load Audio Unit plugins. Plain and simple, just a big old hefty processor, a really amazing CPU, a bunch of RAM. I have incredible stuff I can use on my computer, and I’d love to use all of it live.”

As we prepare to take our leave of Jansveld, he raises one more point on which to comment: “We haven’t talked at all about what Audio Modeling does, but it’s certainly exciting technology for me. What you guys do, where you basically create these sounds out of nothing, is pure magic for me, and there’s no limit as to how far that can go. I can’t wait to see what’s next and I’m having a lot of fun playing your instruments. I’m going to be using them a lot.”


MIDI Talk Ep. 9 – Don Lewis

MIDI Talk 09 – Don Lewis: Living At the Corner of Humanity and Technology

 

It would be an understatement to say that Don Lewis is a legend of music technology because that would not begin to cover his innovations and their impact. Hailing from Dayton, Ohio, in the heartland of the US, Lewis became interested in music as a child watching a church organist, and in high school was introduced to electronics. When things would go wrong with the electropneumatic organ at his church, he would crawl up to the organ chamber and try to fix it. “I was very inquisitive,” Lewis states. “I think that is why we are here as human beings: to be inquisitive and explore.”

Lewis’s curious nature led him to study Electronic Engineering at Alabama’s historic Tuskegee Institute (now Tuskegee University), during which time he played at rallies led by Dr. Martin Luther King. From there, Lewis became a nuclear weapons specialist in the US Air Force, then did technical and musical jobs in Denver, Colorado, until he was able to become a musician full-time.

After moving to Los Angeles, Lewis worked with the likes of Quincy Jones, Michael Jackson, and Sergio Mendes, toured opening for the Beach Boys, and played the Newport Jazz Festival at Carnegie Hall.

In the late 1970s, most of a decade before the advent of MIDI, Lewis designed and built LEO (Live Electronic Orchestra), a system that integrated more than 16 synthesizers and processors in a live performance rig that took the phrase “one man band” to an entirely new level. So new, in fact, that it frightened the Musicians Union, which declared Lewis a “National Enemy” in the 1980s and worked to stop him from being able to perform. (Lewis notes that Michael Iceberg, a white musician also doing a solo electronic orchestra act at the time, never faced the opposition that Lewis, who is black, encountered.)

But Lewis kept innovating, digging into an entirely different role contributing to the design of new electronic musical instruments. A long, fruitful collaboration with Roland Corporation’s Ikutaro Kakehashi brought Lewis’s influence to the design of numerous drum machines and rhythm units, including the iconic TR-808, as well as a number of synthesizer keyboards.

A conversation with Lewis, however, only occasionally touches on all of these accomplishments. Mostly, he weaves together philosophy, humanism, spiritual and cosmic aspects of music-making, and the application of all of those to technology to create an emotional experience for listeners. Audio Modeling’s Simone Capitani took the ride and followed Lewis to every corner of his universe in a discussion so wide-ranging we can only touch on a very limited portion of it here. But you can hear more by watching the video.

And now, ladies and gentlemen, the one and only Don Lewis.

 

The Wiggle and the Two Songs

 

Lewis views music as a fundamental ingredient of the human universe. “If we look at quantum physics, in string theory, there is a wiggle,” he begins. “Even if you can’t see (something) move, it is wiggling on some scale that is beyond nano. Working with sound and music is a very macro perspective of this nano thing that’s happening, but we can control the wiggle. That’s one way to look at why music and sound are so pervasive and so innate to our being – because we are working with something on a level that everything else is made out of.”

The art of music, then, lies in how we control the wiggle. Lewis poses a simple answer to how humans accomplish this, once again going to the foundations of existence.

“I have this feeling that every human being is musical. When you came into this world, you could sing two songs. What is a song? It is a sound and a language that someone else can understand.

“Before we learned our language – Italian, English – we sang two songs: the first, crying, the second song, laughter. We sang those songs before we could say ‘mama’ or ‘papa.’ We live our lives between those two songs: one of need, one of joy. We need each other to be joyful. When we are against each other, we are not joyful.”

How can electronic instruments express such profound abstractions? A good first step, suggests Lewis, is to relate the properties of acoustic musical instruments to those of electronic instruments.

“Conventional musical instruments are mechanical synthesizers. The voice is made up of three components of subtractive synthesis: the vocal cord is the oscillator, the mouth cavity is the filter, the envelope generator is the movement of the mouth,” describes Lewis.

“Looking at those ingredients tells you why, for the most part, we have extended the range of the human voice through technology. Musical instruments extended the range: the bass is lower than any bass singer could sing, and the highs are higher than any soprano could sing. So, what we have done with mechanical synthesizers, we are now (doing with electronic instruments), creating another palette of sound and color that extends it even more, and also articulates much more.

“But the articulation is the main ingredient of how you create emotion: that crying and that laughter. Those are the two emotions that get us – the crying and the laughter. How do you get those two emotions to be represented in the envelopes and the pitches and the filtering? How do you get Louis Armstrong’s and Joe Cocker’s and Janis Joplin’s and Aretha Franklin’s and Ray Charles’s voices, that emotion?”
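The mapping Lewis describes (oscillator for the vocal cords, filter for the mouth cavity, envelope for the movement of the mouth) can be sketched in a few lines of code. The following is a minimal subtractive-synthesis illustration in Python with NumPy; every function name and parameter value is an assumption made for the example, not anything Lewis built. The articulation he talks about, the crying and the laughter, would live in how the filter cutoff and the envelope times are shaped over time.

```python
# Minimal oscillator -> filter -> envelope sketch (illustrative values only).
import numpy as np

SR = 48000  # sample rate in Hz

def saw_oscillator(freq: float, seconds: float) -> np.ndarray:
    """Naive sawtooth: the 'vocal cord' of the patch."""
    t = np.arange(int(SR * seconds)) / SR
    return 2.0 * (t * freq - np.floor(0.5 + t * freq))

def one_pole_lowpass(signal: np.ndarray, cutoff: float) -> np.ndarray:
    """Very simple one-pole lowpass: the 'mouth cavity' shaping the tone."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / SR)
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)
        out[i] = y
    return out

def ad_envelope(length: int, attack: float, decay: float) -> np.ndarray:
    """Attack/decay envelope: the 'movement of the mouth' articulating the note."""
    a = int(SR * attack)
    d = min(int(SR * decay), length - a)
    env = np.zeros(length)
    env[:a] = np.linspace(0.0, 1.0, a, endpoint=False)
    env[a:a + d] = np.linspace(1.0, 0.0, d)
    return env

voice = saw_oscillator(220.0, 1.0)           # oscillator
voice = one_pole_lowpass(voice, 1200.0)      # filter
voice *= ad_envelope(len(voice), 0.01, 0.8)  # envelope
```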

 

The Delicate Sound of Thunder

 

Lewis does not limit himself to acoustic instruments and the voice in trying to generate emotion. “When we compose and when we sound design, we can pull from nature, the things external to us that still make sound: the wind, the rain, the thunder, the lightning.”

Lewis has long been captivated by thunder. “One of the most exciting things that would happen to me when I was a kid in Dayton, Ohio, especially in the summertime, was that there would be these afternoon thunderstorms, and everybody would go hide and get scared. Me, I’m sitting there looking at the lightning and going ‘Ooh, this is so cool.’ One Sunday, my grandfather and I were walking to church from our home, going to Sunday School. In the middle of us walking, a big thunderstorm started. There was this paper factory that had this really tall chimney all the way from the ground; it must have been 100 feet high. Lightning struck that chimney, and some bricks from that chimney fell. My grandfather went ‘whooo!’ and I went ‘Wow!’ These are things that prompted me to want to make this thunder (on a synthesizer),” relates Lewis.

“But the other thing was that, on Wendy Carlos’s Sonic Seasonings album, the ‘Spring’ episode had thunderstorms going on while this music was going on. I thought the Moog synthesizer was doing that. I was working with ARP, and I said, ‘Oh, I have to learn how to do this on the ARP.’ But I come to find out it wasn’t the Moog; somebody recorded this and integrated it and mixed it in. So I did something for the first time: ARP thunder!”

 

The Importance of Intention

 

Don Lewis is a thinking man who has seen a lot, played a lot, and done a lot. When asked what advice he would give to young musicians working with technology, he pauses, then comes back with the wisdom he has gathered over his decades making music.

“What are you going to do with the music you produce with these tools? What is your intention? Are you going to be inspiring? Is there something you want to express? If it’s going to make a mark on anybody else’s life, do you have an intention here? If you can figure out what’s going on inside of you that you think needs to be expressed, then use that tool for that. Otherwise, you will be distracted forever, because there are so many other ideas going on in the world.

“I look at it this way: the first 20 years of your life (is spent) ingesting everything everybody else thinks you should have. Your education and the whole bit. The next 20 years, you try to put all of that stuff to work. Then you find out at the end of that 20 years – this is working or this is not working. And then the next 20 years, you try to erase all the things (from) the first 20 years that didn’t work for you. So, those first years are very susceptible to being ones of confusion, especially now.

“You have to be not only about you, (but) about others, and when you become about others, you are actually being more about you. How would this make a difference not only in my life, but in others’ lives? Is my protest song actually going to help the movement? Or is it going to stop the movement? In the days of civil rights, the protest songs were songs, they were things that people could sing and march to. They weren’t chanting, they were singing. The difference between chanting and singing is that chanting only takes in the left side of the brain, which is only speech. Singing takes in the musical side and the language side, the creative side and the logic side. And then you get more power and you get more people participating.

“I know I would not be here if it had not been for my ancestors, who sang their way through slavery. They sang those work songs, they sang those spirituals, and that’s what helped them to survive. What helped our civil rights movement in the United States was the singing, the marching, Martin Luther King, John Lewis, and others. I met both of these people, I knew them. So I understand those rudiments, and I hope that the ability to access the creative efforts of (Audio Modeling) and others in making music accessible can create this atmosphere; this is the atmosphere we need.

“If music disappeared from the earth, the earth would continue, but our existence might not, if we don’t become in harmony, first, with ourselves. And that’s what music is all about.”

Visit www.DonLewisLEO.com for more information about Don Lewis and the upcoming documentary about his life and career.

MIDI Talk is sponsored by Zoom, the maker of the Q2n-4K camera used to produce this show.

SWAM for iPhone is out now