
Camelot for Wind Players: An Example

As a wind player, performing live with music technology presents a number of challenges, some of the most common ones being:

  • playing with backing tracks
  • recalling the right sound preset for each part of a song
  • using a single lead voice (your horn or wind controller) to generate harmony parts on different instruments
  • setting up MIDI controllers to remotely control software instruments and FX
  • displaying music scores

In the past, accomplishing these tasks could be difficult, and you might have given up on some of them because you couldn’t find the right tool with which to do them. You probably bashed your head against a wall once or twice in frustrating attempts to cobble together several devices or software programs only to get some compromised version of what you wanted.

Yeah, we’ve been there, done that. At the end of the day, we only want to play and have fun, and be able to focus on music-making without being distracted by infuriating technicalities. So we created Camelot to let us–and now, you–have more fun and less head bashing.



Introducing Audio Modeling Software Center


 

You are taking the plunge. You’re going to buy upgrades for the SWAM instruments you already own, buy another SWAM instrument bundle, upgrade to Camelot Pro, and wholly commit to live performance with Audio Modeling instruments and whatever other instruments and audio plugins you may have. How exciting!

But…you are afraid. You think it is going to be a long, frustrating process to get everything installed, authorized, updated, and ready to rock.

We say: No, it’s not.

How can we say this so definitely, with such absolute certainty? I mean, we ARE talking about computers here, how can the Audio Modeling team be SO sure?

Easy. Because we have rolled out the faboo Audio Modeling Software Center to handle all of those processes for you with no more trouble (and not much more time) than it takes for you to make an espresso.

OK, we admit, Software Center doesn’t have the rich, strong flavor of espresso, and it won’t give you quite the same jolt of energy. But it DOES have the sweet taste of success, so that, by the time you sit back down in front of the computer with your steaming espresso, your Audio Modeling products will be waiting for you to take that coffee energy and make music with it. And, even BETTER than espresso (if such a thing is even possible), it will keep putting that sweet taste in your mouth as it makes it effortless to always have the most current versions of all of your Audio Modeling apps.

So, fear not. Software Center is here and it will be simple to manage all of your Audio Modeling apps. Which leaves only one question: do you want a little steamed milk with that?

 



Audio Modeling Names Three New Resellers

Audio Modeling is pleased to announce the addition of three new resellers: SudeepAudio.com in India, Gear4music in the UK, and Best Service in Germany.

 

Sudeep Audio

One of India’s leading pro audio dealers, Sudeep Audio carries more than 100 major brands…and now one more, with the addition of Audio Modeling!

Audio Modeling products are available online at https://www.sudeepaudio.com/.

 

The late Mr. Nikhil Mehta founded Sudeep Studio–Sudeep Audio’s parent company–in Mumbai in 1977, with the idea of providing a resource for talented musicians working to launch or further their careers. In 2003, SudeepAudio.com was created by Mehta’s son Aditya to carry on his father’s vision of always being a true friend to musicians.

“We are proud to partner with Audio Modeling as their exclusive Indian reseller,” glows Aditya Mehta, now one of two partners running Sudeep. “Audio Modeling is a global leader in multi-vector expressive acoustic instrument emulation, and technology is what excites us and connects us to our Indian music community. SWAM truly is a breakthrough in the world of music production. In time, we hope to work with Audio Modeling on creating SWAM versions of Indian instruments, which have unique characteristics.”

With 45 years in business, Sudeep is very well-regarded in India, according to renowned Grammy- and Oscar-winning composer A.R. Rahman. “Sudeep Audio has introduced several revolutionary products, in addition to distributing music production software in India from across the globe,” said Rahman in 2008. “Their competitive pricing, quick deliveries, and strong after-sales support are world-class. I hope they go further in the business, contributing to every musician’s success.”

 

For those who are wondering, “Sudeep” is not the name of anyone in the organization or the founder’s family. In Hindi, “su” means “pure,” and “deep” means “light,” so “Sudeep” translates basically to “pure light.” Mr. Mehta decided on this name to encourage the next generation to carry on with the “pure light” of music.

 

 

Gear4music

Since its founding by sound engineer Andrew Wass in 2003, UK-based Gear4music has grown into a leading retailer of musical instruments and music equipment, with more than a million active customers, nearly 62,000 products available, and distribution centers in Germany, Spain, Sweden, and Ireland, as well as the UK. Audio Modeling products can now be found and purchased at https://www.gear4music.com/, as well.

 

 

 

Best Service

Munich-based Best Service, founded in 1986, was a pioneer in the emerging sampling market. Committed to providing the best possible virtual instruments and sound library experience, Best Service provides access to an exceptional range of inspiring tools for musicians.

“We are beyond excited to offer Audio Modeling’s outstanding virtual instruments through our webshop,” enthuses Managing Director Robert Leuthner. “Authentic sound isn’t the only feature of Audio Modeling’s products we find impressive. We feel that the SWAM instruments’ intuitive workflow and user-friendly graphical interface with smart controls lets musicians focus on making music, instead of worrying about technical considerations. Plus, for those who want to play virtual instruments live, Audio Modeling offers Camelot, a convenient tool that combines a setlist manager, mixer, MIDI processor, and effect host in a single, integrated performance environment.”

Audio Modeling products are available online at https://www.bestservice.com/

We at Audio Modeling are very excited to collaborate with these amazing companies. Welcome aboard, and we look forward to making beautiful music together!

 


MIDI Talk, Season 2, Episode 1: Omri Abramov


Omri Abramov (https://abramovmusic.com/) is a musician of his times: global in scope, drawing on tradition and breaking new ground, playing acoustic instruments and exploring the boundless possibilities of technology, and all in the service of his expression of the human experience.

Born in Israel, Abramov began, at the tender age of five, on the recorder, a first instrument for many children. He moved through the recorder family, on to flute, and then, in high school, picked up the saxophone. “There’s something that has always clicked for me with the movement of fingering the wind instrument,” Abramov observes.

Abramov studied jazz at Tel Aviv’s Israel Conservatory of Music and made tenor saxophone his primary instrument. Receiving designation as an Outstanding Musician from the Israel Defense Forces, he spent his time in the military playing shows all over Israel, including multiple appearances at the Tel Aviv and Red Sea Jazz Festivals.

Since being discharged, Abramov has played in the Idan Raichel Project, co-founded jazz-fusion band Niogi with keyboardist Dr. Guy Shkolnik, relocated to Berlin for better access to Europe’s jazz community, joined Sistanaglia (an ethnic/jazz group of Israeli and Iranian musicians with a message of peace), and formed Hovercraft, a New York City-based trio. Add in composing and producing, and Omri Abramov is a pretty busy dude.

 

Omri at Play

 

Today, Abramov plays a lot of tenor and soprano saxophone, but also has taken a deep dive into electronic wind instruments, playing both the Akai EWI (Electronic Wind Instrument – https://www.akaipro.com/products/ewi-series) and Aodyo Sylphyo (https://www.aodyo.com/?langue=en). Few outside of wind players are familiar with these instruments.

“(Wind controllers) deal with electronic signals, but have this humanizing effect produced through a sensor that is based on breath,” explains Abramov. “This sensor is converted to MIDI data, like expression (controller). You can assign this to different MIDI parameters and humanize virtual instruments or analog synthesizers, or whatever you would like to control.”
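The breath-to-MIDI conversion Abramov describes can be sketched in a few lines. This is purely illustrative — the function name and scaling below are my own, not any controller’s actual firmware: a normalized breath-sensor reading is clamped to MIDI’s 7-bit range and wrapped in a Control Change message, with CC 2 (Breath Controller) as a typical target.

```python
# Illustrative sketch only (not real controller firmware): scale a
# normalized breath reading into a 3-byte MIDI Control Change message.
# CC 2 is the standard Breath Controller number.

def breath_to_cc(pressure: float, channel: int = 0, cc: int = 2) -> bytes:
    """Map a 0.0-1.0 breath reading to a MIDI Control Change message."""
    value = max(0, min(127, round(pressure * 127)))  # clamp to 7-bit range
    status = 0xB0 | (channel & 0x0F)                 # Control Change status byte
    return bytes([status, cc, value])

# A half-strength breath on channel 1 becomes CC 2 with value 64:
print(breath_to_cc(0.5).hex())  # -> 'b00240'
```

Exactly the same scaling idea applies when the breath value is reassigned to Expression (CC 11) or any other parameter, which is what makes the sensor such a flexible humanizing source.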

The difference between this and playing an acoustic instrument is not always easy to grasp. “People often ask, ‘You are playing an electric saxophone?’ It’s not electric saxophone, it’s an instrument in itself,” Abramov clarifies. “You have to really dig into it and explore it, and then the possibilities are endless.”

When asked which is his favorite between tenor sax, soprano sax, EWI, and Sylphyo, Abramov glances skyward as he ponders the question. After a moment, he realizes the reason for his hesitation. “I don’t have a favorite instrument,” he begins, “because everything is connected to what (the instrument’s) functionality is in a specific situation.” Further, the experience of playing each instrument impacts his playing on the others. “The saxophone playing benefits my playing on the EWI and vice versa. They all contribute to reaching my vision.”

 

The Deep Dive

 

“My relationship with technology started about 13 years ago, in my old jazz fusion band, Niogi,” says Abramov. “That’s where I got into sound synthesis through playing the EWI, getting a bit out of the traditional acoustic zone that I was in.

“I started doing music of my own in the line of fusion and sound design, and thought, ‘I’m playing all that uptempo jazz on my sax, it will be easy: I’ll just go to the store and pick up the EWI and start shredding.’ And I couldn’t play a note because it was so different. Still, I was brave enough to get one home, sit with it and practice a lot.

“I got a MacBook and a version of Apple Logic, and the first synthesizer I used was the ES-2 that came with Logic. Then I went to Spectrasonics Omnisphere, and explored sound design combinations with samples and stuff. I’ve also worked in the area of plugin development, like my connection with Audio Modeling, and with Aodyo, the manufacturer of the Sylphyo.

“Each of these requires me to dive deeper into its own zone, and then the combinations, for example the combinations you can achieve with Sylphyo and its wave controller controlling the parameters you can assign in the SWAM instruments. In the new SWAM v3 especially, what you can assign is amazing.”

 

Drawing Expressivity Out of Technology

 

Having spent years studying playing techniques on saxophone and learning how to employ them expressively, Abramov is essentially repeating that journey with electronic instruments.

“There’s definitely (challenges in technology) I had to find a way around. For example, vibrato. There’s a great vibrato response in the parameters on the SWAM instruments, and on the EWI, you can get vibrato through biting the mouthpiece. I tried that a bit and ruined a few mouthpieces because my bite is apparently too strong. I started using pitch bend, and it felt more like a natural response. The EWI has a pitch bend up on the upper side of the thumb and pitch bend down on the bottom side. I use many upward pitch bends, and I do it with my thumb. Naturally searching for that feeling from the sax, I found this thing and it started to be part of my playing.

“Maybe that is something personal to myself, I found my way to do it. Many EWI players ask me, ‘oh, you actually use the thumb to do vibrato?’ Yeah, but it’s a tiny thing, because if you do it too much, it’s like oowahoowahoowah. But that’s the beauty of music: you search for this feeling, you suddenly create this thing that maybe if you were coming from the outside and tried to analyze it like a computer, you’d be like, “ah, no, that’s not the way to do it,” but then you put a person in this process trying to reach for something from the heart, and it creates something.”

The other example Abramov cites is the importance of dialing in the velocity response of the sound source, which is an iterative process involving small adjustments on the software instrument, then adjustments to the controller, and back and forth until it feels natural to play. “It’s not really like I look for what resembles the saxophone, because if I play a trumpet sound, or more than that, a synth combined with a trumpet sound, that’s a new thing. I’m just looking for how it will have a soul, how it will have a character that you can hear someone behind it playing it like it was something natural in the room.

“What we’re trying to do is give technology and electronic instruments and software instruments the human element that keeps them so exciting for years without becoming dated. Acoustic instruments are never getting old. Synths, usually the ones that are flashy and super hip right now, in 20 years will not be hip at all, because fashions change. But things that are more subtle and grounded, that have soul inside and feel more natural–it is more likely they will endure the test of time.

 

Swimming in SWAM

 

Abramov discovered Audio Modeling’s SWAM instruments at a couple of NAMM (National Association of Music Merchants) trade shows. “The first time was at NAMM 2019,” Abramov recounts. “I connected my EWI to one of the SWAM instruments, and was blown away. Before that, I was doing stuff that was more connected to the internal EWI sounds. But then I used the EWI as a controller for this amazing SWAM sound engine, and I was really intrigued. To be frank, it changed my views about virtual instruments and what you can do with modeling synthesis to emulate an actual instrument.

“Even now, I love taking several SWAM instruments and combining them together to make new sounds. These possibilities hit me right away. I asked, ‘Can I combine this clarinet sound with the flute sound that you just played?’ And (the Audio Modeling person) said, ‘Sure,’ and then he set up two sound engines and put the sounds together, and it was so fun to play with that I was hooked. There’s one big video called SWAMMED (https://www.youtube.com/watch?v=OocavJfNiv0) where I did an orchestra of wind instruments. It was a whole process. I was diving more and more deeply into it.”

Abramov’s interest in Camelot had just as much impact on him. “Camelot is really inviting, that would be the word for it. It’s not messy,” asserts Abramov. “The most crucial things, like volume control, are really easy to approach in Camelot. I can create all my sound combinations and give them different effects, run them through different processing–even use different controllers–see the level of each of those layers (containing the various instruments) and what processors they are running through. Perhaps 50 percent of my time is spent creating those combinations, that’s a lot. Camelot became my go-to for that purpose. My iPad works hard, and Camelot works hard.”

Abramov’s attention to features that aid his ability to play expressively has led him to focus heavily in Camelot on one thing: the response curves that translate controller response into sound source parameter response. Camelot provides powerful tools for shaping these curves to match the style of a player or the characteristics of a controller to the way that a sound changes. “I use the SWAM MIDI Learn function to assign different parameters, then I modify the curves,” he details.

“I use that for things like opening a filter. If I rotate the Sylphyo one way or the other, it opens the filter in different ways. What I like to do is apply just a bit of filtering; even on SWAM Trumpet, I’ll put an external filter on it. The filter is not 100 percent open, but almost, and every small movement I do opens it a little bit. It’s like I’m trying to emulate the feeling of what happens when the trumpeter is moving his head and changes the angle of the lip.

“It’s a territory I’ve started to explore relatively recently, but I feel like there’s a lot there, because I didn’t realize how much expressivity happens from the actual movement of the player. I mean the small gestures you do when you’re excited or you move for a higher or lower register. The aspect of movement is one thing I think Aodyo did great in the Sylphyo, with its movement sensor. When you approach it in a delicate way, the small gestures in the curves, in the volume combinations between layers, in the movement, that is where the magic is. I can spend hours on that.”

 

The Never-Ending Story

 

Omri Abramov has his nimble fingers into many musical pies, but while maintaining his deep love of the saxophone, he is clearly transfixed by the unexplored territory that music technology lays before him. He still views himself as being barely past the starting line.

“All this is a constant search for new ways of expressing myself through technology. Of course music is the engine, but technology is the tool. This is a constant journey since (starting with the EWI). I’m trying to get deeper and deeper. I feel like I’m just scratching the surface.”


Camelot for Bass Players


 

Bobby Bigboddum keeps his plate pretty full. He plays both fretted and fretless electric bass in two wedding bands and a rock band. He also plays acoustic double bass in a jazz group and a bluegrass band. As if that weren’t enough, Bobby is a pretty good singer, too. Because he is so versatile and a solid player and singer, Bobby frequently gets booked to play bass with acts on tour, but finds it exhausting hauling around a heavy bass amp and a big pedalboard of effects to get the various sounds he needs to cover gigs.

 

Recently, Bobby played a gig with a guitarist who brought nothing to the gig but his guitar, a small audio and MIDI interface and an iPad. He plugged his guitar into an interface instrument input, ran a cable from the interface’s output to the sound system, and sounded great all night, with a different sound for each song, and sometimes several sounds within the same song. Impressed, Bobby asked him about his rig, which is when Bobby was introduced to Audio Modeling’s Camelot Pro.

 

The guitarist explained that he could host any plugin in Camelot and, since the audio inputs got added in Camelot 2.1, run his guitar through it, using a MIDI pedalboard to change sounds and control parameters like sweeping his wah wah. Bobby instantly knew he had found his new rig. He learned that Camelot ran on macOS and Windows, as well as iOS, and, since he already had a laptop, an audio/MIDI interface, and some great plugins, he decided to run Camelot on that. So Bobby downloaded a copy of Camelot Free and was off to a new adventure. In less than a week he decided to buy a license for Camelot Pro to get its greater power.

 

Although Bobby’s interface has a nice, clean sound on its instrument input, he wants the grit and beef of a real bass amp, so an amp simulator is the basis of his sound. As we shall see in a moment, getting Bobby’s bass into the world of Camelot is fast and easy.

 

Bobby uses different amp sounds for each of his bands, and the wedding bands call for a wide variety of different sounds. Camelot’s Layers make it easy for Bobby to accommodate all of these different needs.

 

Each Layer has its own signal path, with whatever processing and effects plugins Bobby might want, and even the ability to route his bass signal to other layers, if need be. One or more Layers can be stored as a Scene, and Scenes can be changed manually or automatically at specified times. This makes it easy for Bobby to change his bass sound at different points within a song.

 

But Bobby often wants the same amp sound for a whole song or, in the case of the rock band, for the entire evening. Camelot has two special-purpose sections that make this easy. Layers in Camelot’s Setlist Rack stay the same through every song of the entire set, while Layers in the Song Rack change with each song, but not within a song.

 

So, for the rock band, Bobby makes a Layer in the Setlist Rack, puts his amp sim there, and selects the interface audio input as the audio input to the amp sim. Since the amp sim has some built-in effects, that’s often all he needs for an entire gig with that band.

 

Figure 1 – On the right, Bobby has stompbox effects and an amp simulator in a Setlist Rack Layer. On the left, his bass is selected as the audio input to the layer.

 

There are a few songs, however, with spots needing a sound that is in some way unusual (for example, one song has a section where he uses a Leslie sound). Bobby’s amp sim can call up presets from MIDI program change commands, so when those songs are in the setlist for the night, he has Scenes that send the appropriate program change command to the amp sim. Each Scene has a Layer that sends the appropriate program change command, and an item that routes the MIDI to the Layer in the Setlist Rack that has the amp sim. When he calls a Scene, it changes the amp sim program. He uses this same method to change presets on the amp sim for songs where he plays the fretless electric bass, which needs different settings than his regular bass.
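The Scene-to-amp-sim routing described above ultimately boils down to ordinary MIDI Program Change messages. As a hedged illustration — the function name and the preset number 12 are hypothetical, and this is not Camelot’s code — a Program Change is just two bytes on the wire:

```python
# Hypothetical sketch of the MIDI plumbing described above: a Scene
# emits a Program Change, which gets routed to the amp sim's Layer.
# A Program Change is 2 bytes: status 0xC0 | channel, then the program.

def program_change(program: int, channel: int = 0) -> bytes:
    """Build a MIDI Program Change message (programs 0-127)."""
    if not 0 <= program <= 127:
        raise ValueError("MIDI programs range from 0 to 127")
    return bytes([0xC0 | (channel & 0x0F), program])

# e.g. calling up a Leslie preset stored (hypothetically) at program 12:
leslie = program_change(12)
print(leslie.hex())  # -> 'c00c'
```

Because the message is so simple, any plugin that maps presets to program numbers, like Bobby’s amp sim, can be switched this way.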

 

To call up these different Scenes, Bobby turns to Camelot’s Timeline view. Many users (including the guitarist who first showed him Camelot) use the Timeline to play backing tracks during a performance. In that situation, Scene changes can be programmed to happen at exact times while the backing tracks are playing. Bobby places his Scene changes on the Timeline, but doesn’t have backing tracks and doesn’t play the Timeline. Instead, he uses his MIDI foot controller to manually step Camelot to the next Scene change on the Timeline.

 

Figure 2 – The Timeline (on the left) here holds Scene changes Bobby calls manually. On the right is shown the MIDI command used to advance to the next Scene.

 

Some of Bobby’s bluegrass and jazz gigs don’t require amplification at all, but the ones that do call for the same sound all night, just like his rock gigs. For these bands, he’s looking for a totally different sound than he can get from an amp sim. Instead of the amp sim, Bobby’s Layer in the Setlist Rack is built around a plugin that emulates a high-class mixing console channel strip, which gives him the sound of a nice recording preamp, plus EQ and compression.

 

Figure 3 – Bobby’s basic rig for his bluegrass gigs is a nice channel strip (shown on the R), followed by a graphic EQ he uses to adjust for each room. Since he uses the same sound all night, he puts everything in the Setlist Rack.

 

Bobby has a pickup on his double bass, but sometimes it works out better to use a microphone. So he has two copies of his jazz and bluegrass Camelot Setlists, one copy with levels, EQ, and compression settings optimized for the pickup, the other identical but with settings optimized for the microphone. (He tried just having separate Scenes, instead of separate Setlists, but he found having so many Scenes confusing.) He also has a graphic EQ plugin that follows the channel strip, which he uses to make adjustments at each gig for the sound of the particular room. There are a few rooms he works regularly, and he simply stored additional copies of his Setlist documents named for those venues. Bobby can go to a gig and use either a pickup or a microphone, with perfect correction for the room at regular gigs – all by doing nothing more than opening the right Camelot document at the beginning of the night.

 

The wedding band gigs are a whole different story. The band plays well-known songs and tries to come as close as possible to the original versions, so Bobby needs a different sound for each and every song. The Setlist Rack is not as much help to him on these gigs, so he relies more on putting his amp sim or channel strip (depending on the demands of the gig) in a Layer in his Song Rack, where it changes for each song. As before, when there are special needs in the middle of a song, he uses Scene changes to let him bring other sounds in and out.

 

Having all of his sounds for all of his bands instantly available in Camelot comes in very handy in one other way. When Bobby is on the road doing a tour, he often needs to learn new material, sometimes for additions to the setlist, sometimes for other upcoming tours or gigs. Here, the Timeline view proves useful once again.

Figure 4 – Bobby practices new songs by playing over a recording or a minus-bass mix of the song as a backing track. He also can program scene changes as he learns the song.

 

But instead of using the Timeline for programmed changes, Bobby uses it for rehearsal. He gets recordings of the songs he needs to learn and inserts them as backing tracks in yet another copy of one of his Setlist documents. He creates a new Song with a backing track for each piece he has to learn. He runs it as many times as necessary, stopping to program Scene changes where necessary, then moves to the next Song. Since he works while listening over headphones, he is able to learn while traveling, in motel rooms before or after a gig, or anywhere, really, that he can have his bass with him and play along. If he needs a sound he doesn’t already have, he creates a new Scene, adds whatever plugins he needs, and then stores the Scene as a template, which he can easily import to any of his existing Setlists.

 

When Bobby sings, his microphone goes directly to the sound system on larger gigs, but for some of the smaller gigs the bluegrass band often does, he plugs his mic into his interface’s second audio input and selects that as the audio input to another Layer in the Setlist Rack, in which he has put another channel strip plugin. He usually sends a mix of his bass and vocal out a single interface output, but there have been occasions when he has routed them to separate outputs. This is exactly the kind of flexibility that convinced Bobby to try Camelot Pro in the first place.

 

Figure 5 – When Bobby sings, as well as playing bass, his Setlist Rack has a second layer for processing vocals. Note the addition of reverb after the vocal channel strip.

 

Sometimes people think bass players have it easy. It seems to them that only having four strings instead of six is easier, and that nobody expects lots of crazy sounds and effects out of the bass player. But the art of playing bass lies in large part on how the part is played: the sound has to be right, the touch has to be right, and the phrasing has to be exactly what is needed at any moment. Bobby’s favorite quote is from Little Feat’s Lowell George, who noted on an album cover, “Do not underestimate, nor take lightly, this thing we call ‘bass.’” Bobby understands his role as a bassist very well, which is why he now relies on Camelot Pro to give him all of the tools he needs (beyond his fingers and instrument, of course) to be the player Lowell George was talking about. Judging from his packed gig schedule, no one is underestimating Bobby, or taking him lightly.


Camelot Supercharges MIDI Processing


 

While much attention has been focused on the new audio input features in Camelot 2.1, this new version adds considerable heft to the program’s MIDI processing, as well, particularly in providing new ways of converting or remapping MIDI data. Some of these capabilities are creative in nature, while others are aimed at strengthening Camelot’s ability to integrate different hardware and software that may use MIDI data in slightly (or very) different ways.

 

Camelot is meant to be the center of your music technology when you perform, and these new features add more power to your ability to control all of your virtual and physical instruments in coordinated fashion, using your set of physical controls. In this article, we are going to take a deep dive into this highly potent set of MIDI processing functions.

 

Some Basics

 

As with the audio input features, the new MIDI features reside in the Audio & MIDI Settings of items or the MIDI Settings of layers. Clicking on an item or layer brings up its settings, but you often need to click the legend at the bottom of the window to get to the MIDI settings. In Audio & MIDI Settings, you will want to click the MIDI Transformers tab to access these new features.

 

Figure 1 – The MIDI Transformers are accessed through this screen.

 

Each of these new MIDI features needs to be made active before it can be used, which is accomplished very simply by clicking the Enabled switch at the top of the screen for the feature.

 

Once you have spent the time to set up filters or a remapping curve, you can preserve your work by saving it as a preset. The “three dots” button to the right of the Enabled switch at the top of each feature’s window will drop down a menu that will allow you to save or export a preset. As one would expect, the same menu allows you to load or import a preset.

 

Figure 2 – The Presets drop down menu

 

Message Transformer

 

Message Transformers convert incoming MIDI Control Change, Channel Pressure (often referred to as mono aftertouch), Pitch Bend, or Key Switch messages into different Control Change, Channel Pressure, or Pitch Bend messages.

 

Figure 3 – In this example, Control Change 7 (Volume) is being converted to CC 11 (Expression), and CC 2 is converted to CC 3. The button at the bottom adds additional transformers.

(To be clear, there is no such thing as a MIDI Key Switch message; key switching is simply a system in which designated MIDI note messages – usually from the lowest octaves on the keyboard, which are rarely used to play notes – are interpreted to invoke specific instrument articulations on sampled instruments.)

 

This is very useful when, for instance, you need a foot controller to generate Expression messages (CC11) for a SWAM instrument, but your foot controller is hard-wired to generate MIDI Volume (CC7). A MIDI message transformer can easily convert CC7 messages to CC11 messages. Conversely, perhaps you want to control the brightness of a sound on both a SWAM instrument that uses Expression (CC11), and another synth that uses CC74 (Sound Brightness). Your controller can send out CC11, and a MIDI message transformer on the synth item can convert those messages to CC74 messages.

 

To get to the message transformers, go to Audio & MIDI Settings and click the MIDI Transformers tab. Click the Message Transformer item and then the Enabled button to activate the feature. Any number of transformations can be defined. Simply click the Add New Message Transformer button at the bottom, then click each field in the entry that appears to define the input and output messages.
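Conceptually, a Message Transformer is a small rewrite rule applied to the incoming MIDI stream. The sketch below is an illustrative model, not Camelot’s implementation; it handles only 3-byte channel messages and remaps the controller number of matching Control Change messages, passing everything else through unchanged:

```python
# Illustrative model (not Camelot's code) of a Message Transformer:
# rewrite matching Control Change messages, pass everything else through.

def transform_cc(msg: bytes, mapping: dict[int, int]) -> bytes:
    """Remap the controller number of a 3-byte CC message per `mapping`."""
    status, data1, data2 = msg
    if status & 0xF0 == 0xB0 and data1 in mapping:   # Control Change?
        return bytes([status, mapping[data1], data2])
    return msg

# CC 7 (Volume) becomes CC 11 (Expression); CC 2 becomes CC 3,
# matching the transformations shown in Figure 3.
rules = {7: 11, 2: 3}
print(transform_cc(bytes([0xB0, 7, 100]), rules).hex())  # -> 'b00b64'
```

Note that the data value (100 here) is untouched; only the controller number changes, which is exactly what you want when retargeting a hard-wired pedal.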

 

Filters

 

The MIDI Filter blocks specified MIDI messages. As with MIDI message transformers, you have to click the appropriate item (Filters) in the Audio & MIDI Settings list, then enable the feature.

 

Figure 4 – Here, Active Sense, Transport, and Program Change messages are being filtered out. Note the Enabled switch at the top.

After that, simply click the message type you want to block. A handful of the most commonly encountered Control Change messages are called out specifically, but the Custom Control Change item allows you to select exactly which CC messages will be blocked and which allowed through.

 

It may be easier just to show Camelot which messages to block, which you can do by clicking the Learn button at the bottom of the window, then sending the message you want blocked to Camelot (usually by operating a control). Camelot will identify the message and block it. The Invert Filter button flips all of your message selections, so that all of the messages selected for blocking now become the only messages that are not blocked, while the messages allowed through before become blocked.
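In pseudocode terms, the filter is just a membership test, and Invert Filter flips its sense. Here is a hedged sketch, with message types simplified to plain strings rather than Camelot's internal representation:

```python
# Illustrative sketch of MIDI filtering with an "invert" option.
# The set below mirrors Figure 4: Active Sense, Transport, and
# Program Change messages are blocked.
blocked = {"active_sense", "transport", "program_change"}

def passes(msg_type, blocked, inverted=False):
    """Return True if the message gets through the filter."""
    is_blocked = msg_type in blocked
    # Invert Filter: only the formerly blocked types get through.
    return is_blocked if inverted else not is_blocked

print(passes("note", blocked))                   # notes get through
print(passes("transport", blocked))              # transport is filtered out
print(passes("note", blocked, inverted=True))    # after Invert, notes are blocked
```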

 

Remapping Table

 

MIDI messages such as velocity and Control Change use values that, by default, are generated along a linear curve; that is, the output value always equals the input value, so setting a control halfway through its travel produces a value in the middle of the available range. But sound parameters sometimes respond more musically when message values are shaped by an exponential curve, an “S” curve, or something in between. In those cases, setting a control halfway through its travel may result in a value above or below the middle of the available range: the same input produces a different output than it would with a linear curve.

 

Figure 5 – The only difference between these two remapping tables is that Bipolar has been enabled for the table on the right, yet they produce very different output values for an input value around the middle of the range (along the X axis).

The MIDI Remapping Table lets you reshape the curve onto which any CC, Channel Pressure, Aftertouch, velocity (of a note message), or Pitch Bend message is mapped. (The Aftertouch message is generated per note, leading it to be known as “poly aftertouch,” whereas Channel Pressure is global for an instrument, causing it often to be referred to as “mono aftertouch.”) The output value of any of these messages only matches the input with a linear curve; any other curve shape reinterprets the input values. This is useful in massaging the values to get the change in sound you want for a given action.

 

For example, the sound you are using for a virtual instrument may not sound good at very low velocity values. To deal with that, you might raise the Min Output value in the remapping table, so that even the most softly played notes drive the VI with enough velocity for the note to sound good. Or perhaps you play a weighted keyboard controller but want a lead sound to respond more like it would from an unweighted synthesizer keyboard; reshaping the velocity curve can get you there. Another example might be changing the shape of the curve to make a filter cutoff respond the way you want to a mod wheel or foot controller. Or you might lower the Max Output value of a control assigned to the wet/dry mix of a reverb, so that the sound never gets too washy, even when the physical control is on full.

Figure 6 – The main Remapping Table screen gives an overview of all of the remapping curves that have been constructed.

Put more generally, remapping is very useful in matching the action of a physical control to the sonic response you want from a parameter.

 

Once remapping has been enabled at the top of the window, select the Message Type. If you select Control Change, the Message Number setting becomes active, so that you can select the CC number. In Figure 5, CC 3 has been selected.


The graphic just below the Message Number shows the current curve shape. By default, the Bipolar setting is Off. Switching it on divides the range of values in two, so that the specified curve applies from the minimum value to the mid-point value, and then again from the mid-point to the maximum value. This is most useful for parameters like pitch bend, where the center represents a value of zero, or no pitch bend, and you may choose to bend either up or down. However, Bipolar can also be used simply to create a more complex curve, as shown in Figure 5.

 

The next four settings specify the range of values affected: the minimum and maximum input values, and the minimum and maximum output values. The Output Min and Output Max settings determine the smallest and largest values that outgoing messages can have, while the Input Min and Input Max settings set the range of input values affected by the Output Min and Max.

 

For example, say that Message Type is set to velocity. If Output Min is set to 50 and Input Min is set to 32, then any note played with a value of 32 or less will be given a velocity value of 50. If Output Max is set to 118 and Input Max is set to 100, then every note played with a velocity of 100 or more will end up with a velocity value of 118. Note that changing the min and max settings alters the curve shape, so values between the min and max are also affected.
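The arithmetic in that example can be sketched as a simple clamp-and-interpolate function. This assumes a purely linear curve between the endpoints; Camelot's actual output also depends on the Shape and Symmetry settings.

```python
def remap(value, in_min=32, in_max=100, out_min=50, out_max=118):
    """Clamp to the input range, then map it linearly onto the output range."""
    value = max(in_min, min(in_max, value))
    span_in, span_out = in_max - in_min, out_max - out_min
    return round(out_min + (value - in_min) * span_out / span_in)

print(remap(20))   # below Input Min, so it pins to Output Min: 50
print(remap(110))  # above Input Max, so it pins to Output Max: 118
print(remap(66))   # halfway through the input range lands at 84
```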

 

Finally, the last two settings, Symmetry and Shape, change the shape of the curve. Shape varies from a linear shape at its leftmost point to instantaneous (vertical) at its rightmost point, while Symmetry varies from an exponential curve at its leftmost extreme to a logarithmic curve at its rightmost point. Used in combination, a wide variety of curves, including “S” shapes, can be created.

 

Remapping curves is a very powerful technique that can allow you to really dial in how a sound changes when you use a control, but applying it to a number of different parameters on several instruments can create a situation that is rather complex to understand and use intuitively in performance, so be careful.

 

A remapping curve applied to a layer or an item in a particular scene is applied only when that scene is active. However, it is also possible to globally remap data for a given input, that is, to have a remapping always in effect on the input from a specified physical MIDI controller. To do this, click the Settings button in the toolbar at the bottom of the Camelot window, then select the MIDI section and the MIDI Inputs page. Find the controller you want to remap, click the “three dots” menu (“…”), and choose MIDI Input Remappings from the drop-down menu that appears. A remapping you make here will apply regardless of the setlist, song, or scene currently in use.

Figure 7 – A remapping table can be applied directly to a MIDI input to make it active on that input all of the time.

 

Advanced MIDI channel routing

 

Advanced MIDI channel routing simplifies using a single physical controller to affect multiple instruments, or even multiple sounds in a fully polyphonic instrument. The idea is quite simple: each MIDI channel coming into Camelot can be assigned to play out over any combination of MIDI channels. A mod wheel coming in on channel 1 can be assigned to play out over channels 1, 3, 6, and 12. When we say “play out,” however, that does not mean it can only be sent out a MIDI interface to play an external instrument, oh, certainly not! It can be sent anywhere in Camelot, to any virtual instrument or effect plugin…as well as to external instruments.
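The fan-out logic is easy to picture in code. Here is an illustrative Python sketch using the mod wheel example above; the data shapes and names are ours, not Camelot's.

```python
# One input channel can fan out to any combination of output channels.
routing = {1: [1, 3, 6, 12]}   # channel 1 plays out over 1, 3, 6, and 12

def route(channel, message):
    """Duplicate a message onto every output channel mapped to its input channel."""
    return [(out_ch, message) for out_ch in routing.get(channel, [channel])]

print(route(1, "mod_wheel=64"))  # four copies, one per destination channel
print(route(2, "mod_wheel=64"))  # unrouted channels pass straight through
```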

 

Setting it up is just as simple as it sounds. Enable the feature, click the MIDI input channel you want to remap, then click every output channel to which you want it routed. Click the next input channel you want to remap and do it again, and so on for as many input channels as you need to reroute. That’s all there is to it!

 

Figure 8 – In this example, MIDI input channel 3 is being routed to channels 3, 4, and 11.

 

Note to Chord

 

Once again, the name says it all; this new feature enables you to sound a whole chord by playing a single note. This means that, for example, a guitarist could use a small set of MIDI footpedals to play a chord progression on the fly, rather than using backing tracks.

 

Figure 9 – Note mappings are shown in the bottom half of this screen, while the top half shows enhancement parameters that apply to all mappings.

Once the feature has been enabled, there are three options found above the note maps. When enabled, Latch Mode acts sort of like a sustain pedal. When you play a note that has been mapped to a chord, it will cause the chord to latch and continue playing until the next note is played. Note that if you play the same note twice, the first time causes the chord to play, and the second time stops it; it does not play the same chord a second time.

 

Velocity Humanize randomizes note velocities to add variation. A random number that does not exceed the specified percentage of the velocity as played is added to or subtracted from the velocity. For example, if a note is played with a velocity of 110 and Velocity Humanize is set to 10 percent, then the velocity will end up being some randomly chosen value between 99 (110 minus 10 percent) and 121 (110 plus 10 percent). For most purposes, small values of Velocity Humanize will be most useful, but, of course, nothing stops you from using a large value if it sounds good.
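The 110-at-10-percent example can be sketched like this; the function name is ours, and the clamping to the 1–127 MIDI range is an assumption, since the article does not spell it out.

```python
import random

def humanize_velocity(velocity, percent):
    """Offset velocity by a random amount within +/- percent of its value."""
    limit = velocity * percent / 100
    offset = random.uniform(-limit, limit)
    return max(1, min(127, round(velocity + offset)))

# With velocity 110 and 10 percent, results always land between 99 and 121.
print(humanize_velocity(110, 10))
```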

 

Strumming Time introduces a small delay between playing each note of the specified chord, to simulate a chord being strummed on a stringed instrument like a guitar or mandolin, or even “rolled” (arpeggiated) on a keyboard.
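Putting Note to Chord and Strumming Time together, the underlying behavior looks something like the following toy sketch. The mapping uses the G3-to-G-minor-7 example from Figure 10; the names and the 20 ms strum value are ours, for illustration only.

```python
# G3 (MIDI 55, with middle C = C4 = 60) triggers G, Bb, D, F: a G minor 7 chord.
CHORD_MAP = {55: [55, 58, 62, 65]}

def expand(trigger_note, strum_ms=20):
    """Return (delay_ms, note) pairs for the chord mapped to a trigger note."""
    notes = CHORD_MAP.get(trigger_note, [trigger_note])
    return [(i * strum_ms, n) for i, n in enumerate(notes)]

print(expand(55))  # each chord note starts 20 ms after the previous one
print(expand(60))  # unmapped notes play as themselves, immediately
```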

 

Finally, we get to the maps! Click on an entry in the Trigger Note column to set which note will sound a chord when it is played. You can click on the note to set it, or click the Learn button and play the note on a MIDI device you have selected as a MIDI Input in the Settings screen.

 

Figure 10 – This screen shows the second mapping seen in Fig 9, which is G3 being mapped to a G minor 7 chord.

When the input note is set, click the Back button in the upper left, and then click in the Output Notes area and specify the chord notes you want to sound, either by clicking the notes on the keyboard illustration shown, or by clicking the Learn button and playing them on your MIDI controller. It doesn’t matter if you play the notes one at a time or all at once. The selected notes will be shown on the keyboard diagram.

 

You can have as many note mappings as you like. To add additional mappings, simply click the Add New Note Mapping button at the bottom of the main Note to Chord screen.

 

Musical Scale

 

Musical Scale forces notes that are played to be in a specified scale. If you specify a C Major scale, no note you play will produce an F#, because it is not in the C Major scale. This means that notes that are in the specified scale are unchanged, while notes not in the scale are changed, most often to the next lowest note that is in the scale. So, in our C Major example, playing an F# will sound an F natural, which is in the C Major scale.
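In code terms, that snap-down behavior is a short loop. This Python sketch is our simplification of the feature, using pitch classes for a C Major scale:

```python
# Pitch classes (note number mod 12) that belong to C Major.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def force_to_scale(note, scale=C_MAJOR):
    """Step non-scale notes down to the next lowest note in the scale."""
    while note % 12 not in scale:
        note -= 1
    return note

print(force_to_scale(66))  # F#4 (66) becomes F4 (65)
print(force_to_scale(64))  # E4 is already in C Major, so it is unchanged
```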

 

Figure 11 – Musical Scale allows you to specify a chord scale, including its starting note, and even transpose it.

 

After enabling the feature, choose one of the two dozen available scale types. All of the common church modes are included, which means that Major and Ionian are actually the same thing, as are Minor and Aeolian. If you know your modes, there’s a lot you can do. The Scale Root sets the note from which the scale starts, so, if you set up for a C Major scale and then play an F Major scale, what you will hear will be an F Lydian scale, just as if you had set the feature for an F Lydian scale in the first place.

 

 

Figure 12 – The scales available range from the ordinary to the exotic.

 

Scale Shift is a transpose feature. In our now well-worn C Major example, a scale shift of +3 shifts everything up a minor third to Eb Major. If you then play a C Major scale, what you will hear will be a C Minor scale.

 

Note Range Min and Max designate the range of notes that will sound, regardless of where they are played. If you set Note Range Min to C2 and Note Range Max to B2 and then play from C4 to C5, you will hear C2 to B2, and then the C5 will again sound C2, starting the specified range over again.
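One plausible reading of that wrap-around is a modulo fold onto the specified range, sketched below. The formula is our interpretation of the description, not Audio Modeling's published algorithm.

```python
def fold(note, range_min=36, range_max=47):
    """Fold any note into the C2..B2 range (MIDI, with middle C = C4 = 60)."""
    size = range_max - range_min + 1
    return range_min + (note - range_min) % size

print(fold(60))  # C4 sounds as C2 (36)
print(fold(71))  # B4 sounds as B2 (47)
print(fold(72))  # C5 wraps back around to C2 (36)
```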

 

Humanizer

The Item Humanizer panel contains parameters that define the amounts of randomization that will be applied to MIDI Note On and Pitch Bend events to “loosen up” performances. When using multiple instances of SWAM instruments to create a section, Item Humanizer settings are particularly useful in making each instance sound like a different player.

 

Figure 13 – The Humanizer introduces variation to simulate the slight imperfections found in the playing of even the best musicians.

Average Delay – This is a base amount of delay added to Note On, Note Off, and Pitch Bend events. Average Delay works in conjunction with Note Timing Rate to determine the actual amount of delay applied to any Note or Pitch Bend message. The available range is 0-100 milliseconds.

Note Timing Rate – This sets a range of randomization that is applied to scale the Average Delay time before the resulting value is applied to Note and Pitch Bend messages. Each Note On event causes a Note Timing Rate percentage within the specified range to be chosen and applied, either as a positive value (increasing delay time) or a negative value (decreasing delay time).

Velocity Rate – This sets a range of randomization that is applied to scale the velocity value of a Note event. A Velocity Rate percentage within the specified range is chosen and applied for each Note event, either as a positive value (increasing the velocity value) or a negative value (decreasing the velocity value).

 

Pitch Rate – This parameter is intended to provide random detuning for monophonic instruments, for example, to help make multiple instances of a SWAM instrument sound more like a section. It sets a range of randomization that is applied to scale the Pitch Bend messages sent to the instrument. Each Note On event causes a value to be chosen and applied, either as a positive value (increasing the pitch) or a negative value (decreasing the pitch) to all Pitch Bend messages until the next Note On triggers a new value. Small values of Pitch Rate are the most effective for realism.
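One plausible reading of how Average Delay and Note Timing Rate combine, sketched in Python (the function name and formula are ours; Audio Modeling has not published the actual math):

```python
import random

def humanized_delay(average_ms, timing_rate_pct):
    """Scale the base delay by a random +/- percentage chosen per Note On."""
    factor = random.uniform(-timing_rate_pct, timing_rate_pct) / 100
    return max(0.0, average_ms * (1 + factor))

# With a 40 ms Average Delay and a 25% Note Timing Rate,
# delays fall somewhere between 30 and 50 ms on each Note On.
print(humanized_delay(40, 25))
```

Velocity Rate and Pitch Rate would follow the same pattern, applied to velocity values and Pitch Bend amounts respectively.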

 

 

Audio In, Out, and All About in Camelot 2.1

 

Perhaps the most exciting improvement in Camelot 2.1 is the addition of new capabilities for getting audio into Camelot and moving it around within a Scene. Layers, hardware device items, and sidechain or aux inputs for items can all now accept external audio through their Audio & MIDI Settings. Let’s take a closer look.

 

Layer Audio Input

 

Each Layer can now be fed with a selected audio input. This means you can bring vocals into Camelot, process them through your best compressor and delay or reverb plugins, then mix the result with all of your software and hardware instruments. You can even create effects loops that send audio from Camelot to an external processor and bring the processor returns back into layer audio inputs. The fact is, Camelot now can be the mixer for your entire rig.

 

Using a Layer audio input takes only three steps: first, be sure that your microphone or external processor is connected to inputs on your audio interface.

 

Figure 1 – Connect mics and instruments to interface

 

Second, go to the Audio>Audio Input pane of the System Settings view. Click the plus sign at the bottom of the Audio Inputs pane to add additional Camelot audio inputs, if you need them. Click on the name of one of the Camelot audio inputs shown there, and choose your interface’s audio inputs from the list that is displayed.

 

Figure 2 – System Settings>Audio Inputs

 

Click on the name of a Camelot audio input to edit its mapping to your interface. Note the toggle field in the upper right that lets you make any input be mono or stereo. It is possible to change from one to the other after an input has been assigned, but, obviously, doing that necessarily changes the assignment, so be careful.

 

Figure 3 – Map interface inputs to Camelot inputs

 

 

If you want to delete an audio input, click the “three dots” icon in the upper right and select Delete from the menu that drops down.

 

Figure 4 – Delete input

 

Finally, open the Audio & MIDI Settings>Audio Input pane on the layer and click the name of the Camelot audio input you want to feed that layer. When the Settings view opens, the whole panel will be displaying MIDI Settings. Just click the arrow in the upper right corner to collapse those and you will see the Audio settings and be able to click the Audio Input pane.

 

Figure 5 – Map audio input to layer

 

Click Done in the upper right corner, and your external audio is now flowing into that layer and any processors in the layer. When you look at the layer, you’ll see a triangle icon on the left side with the name of the Camelot audio input mapped to that layer.

 

Figure 6 – Layer audio input indicator

 

The layer volume fader in the Mixer view lets you balance your external input with everything else. The meter next to the fader shows you the level in the layer, but you might want to check meters in any plugins you’re using to monitor levels through the signal chain.

 

MIDI Device Item Audio Input

 

Up until now, you could control a hardware MIDI synthesizer or other device from Camelot, but you had to mix its audio externally. Now, Camelot’s Hardware Devices items (whether Smart Maps or custom maps) can accept audio inputs, so Camelot can both control an external synth and mix its audio.

 

Setting up an audio input for an external device is pretty much the same process as for a Layer audio input, except that you want to access the Hardware Device item’s Audio & MIDI Settings, instead of the layer’s settings. Click on the name of the item and its settings will show up. If preset selection is being displayed, click the arrow where it says Audio & MIDI Settings at the bottom to collapse the preset display and show the audio settings. Click Audio Inputs and select the input(s) you want to use.

 

Figure 7 – Showing Audio & MIDI Settings

 

Beneath the Audio Input setting is the Item Main Knob and Pan setting. When this is set to MIDI, the big knob that appears on an Item on a Layer will send out MIDI CC 11 messages by default, and the associated Pan control sends out CC 10 (which is the CC dedicated to Pan). But the main knob can be changed to send out any MIDI CC number you wish by expanding the Item and clicking on the CC11 legend at the top of the fader.

Figure 8 – The Item Main Knob and Pan setting (shown on the left) lets you assign an Item’s main knob and pan control to either be sent as MIDI messages or controlled in Camelot’s audio engine. If MIDI is chosen, the main knob can be assigned to send any MIDI CC number, as shown on the right.

You can mix several external devices in a single layer by selecting audio inputs for each item, and then mix in a microphone as a layer audio input. With the ability to accept audio inputs to items and layers, you can get quite sophisticated with mixing in Camelot. Your biggest limitation is likely to be the number of audio inputs on your interface.

 

Plugin Sidechain Inputs

 

Some plugins have the ability to accept auxiliary audio inputs. That could mean inputs to a vocoder or ring modulator, or a sidechain input for a compressor. Sidechain inputs are accessed exactly as they are for external devices: click the item name to bring up Audio & MIDI Settings and then go to the Audio Input pane, where you will see two settings. SideChain Bus displays the inputs available in the plugin, while SideChain Input allows you to select the Camelot audio input that will feed the plugin input selected in the SideChain Bus setting.

 

Figure 9 – Sidechain inputs

 

Routing audio between layers

 

In addition to accepting external audio, Camelot 2.1 also adds the ability to route audio between layers. Simply insert an Audio Layer Connector item in the source layer, choose the destination layer, and set the level to be sent to the destination. This feature makes it easy to set up a layer as a dedicated processing chain and then send audio from other layers to it, with a separate send level for each layer. In addition, the send can be placed anywhere in the source layer, so it can be tapped from any point in the layer’s signal path.

 

Figure 10 – Layer Audio Send

 

To insert an audio layer connector, add a new item, choose Post-Processors > Audio Layer Connector, click Audio Send, and select the destination layer from the list that’s shown.

 

Figure 11 – Choosing layer Audio Send destination

 

Glitchless switching

 

One of Camelot’s most important capabilities is being able to switch scenes, which lets you completely redefine the sonic world in an instant. That’s so easy that it hides how difficult it is to actually make such a big switch seamlessly. Camelot allows no glitching or interruption in the audio when scenes are changed, and MIDI changes are handled appropriately to avoid problems, as well.

 

This means, for example, that harmony vocal microphones can be enabled for song choruses, with compression, reverb, or any other processing, then muted for the quiet verses where there is only lead vocal.

 

Figure 12 – Full band mix

 


Camelot for iPad: Mark Basile’s iOS Virtual Instrument Faves

 

As Apple’s iPad has grown in computing power, musicians have increasingly turned to it as a platform for music-making, both live and for recording. The raw capability of today’s iPads can’t be denied, and, even though the iPad is still not as powerful or flexible in some respects as a laptop or desktop computer, the stability of its OS and the familiarity of its interface have been key factors in the growth of the iPad as a music machine. iOS apps have grown to be numerous and sophisticated, and we at Audio Modeling are often asked how truly viable an iPad is as a standalone music platform. So we decided to take a shot at an answer.

 

Unsurprisingly, we’ll start with this: Audio Modeling’s Camelot live performance environment runs on the iPad, as well as under macOS and Windows, which makes all of those great instruments and effects usable for live performance. Camelot can host the full range of virtual instrument and effects plugins available for iOS, and provide control for them from a MIDI controller plugged into an iOS MIDI interface. USB controllers can be used, too, but require a USB adapter.

 

To get a bigger picture of just what is out there for iOS, Audio Modeling turned to Mark Basile, vocalist for long-established progressive metal band DGM, and Musical Director and Vocal Coach for the Echoes Music Academy in Naples, Italy. We asked Mark to share some of his favorite iOS instruments, which, by a funny coincidence, he just happens to run within Camelot. By no means is this a comprehensive survey of iOS virtual instruments, as a prowl through Apple’s AppStore will quickly show. It should be more properly thought of as a curated collection from the musical mind of Mark Basile.

 

Camelot Pro

 

Camelot is a complete live performance environment, with virtual instrument and effects plugin hosting, MIDI controller management and processing, multitrack playback, music score display, and much more. In addition to MIDI control of iOS instruments, Camelot can also accept external audio inputs from an audio interface, so you can even use Camelot as a mixing and processing environment for vocals or acoustic instruments.

 

Camelot lets you construct entire VI (Virtual Instrument) and audio rigs, store them, and recall them, enabling you to call up an entirely different sound and, in fact, complete system, for each song or section of a song. Then you can build setlists out of the songs. Camelot is a powerful way of putting iOS plugins to work on a gig.

https://apps.apple.com/us/app/camelot-pro/id1326331127

 

SWAM Solo Instruments

 

SWAM Solo Strings, Solo Woodwinds, and Solo Brass are modeled instruments, not sample instruments, which means that, rather than playing recordings of acoustic instruments, they emulate the behaviors of the sound-producing mechanisms that make acoustic instruments distinctive. This makes modeled instruments far better and more intuitive for playing live and imparting instrumental style. Modeled instruments also need far less memory than sample instruments.

 

Audio Modeling’s SWAM instruments provide a collection of models of many of the instruments that dominate classical, jazz, and other styles of music, from violin, cello, and double bass, to trumpet, trombone, and tuba, to saxophones, oboe, and flutes. But the SWAM instrument bundles also go further afield to include instruments like piccolo trumpet, euphonium, and bass clarinet.

 

“SWAM instruments are very important collections of Solo Strings, saxophones, Solo Woodwinds (which includes the saxophones), flutes (also in the Solo Woodwinds family)…in short, a lot of things,” enthuses Basile. He finds the SWAM instruments easy to program and play. “We always have a graphic interface that is extremely clear, which is very helpful for programming and during interaction. The graphics allow you to have everything under control: vibrato, velocity, and with the expression pedal.”

 

“The legatos are beautiful, and the breath noise….amazing!”

https://audiomodeling.com/iosproducts/

 

Acoustic Piano

 

“For acoustic piano, I’m using a Ravenscroft app,” Basile reveals. “I’ve used this app for quite some time and it gives me great satisfaction.” Ravenscroft Pianos are ultra-high end, hand-built pianos, and Ravenscroft 275 for iOS is a virtual instrument constructed from exacting recordings of an original Ravenscroft Model 275 Titanium Grand Piano.

 

Careful scripting brings out fine detail in hammer attacks, natural resonances, and other low-level properties that make for a rich, convincing piano sound.

 

Ravenscroft 275 Piano by UVItouch

https://apps.apple.com/us/app/ravenscroft-275-piano/id966586407

 

Strings and Pads for layering

 

Basile frequently layers the VI Labs Ravenscroft 275 piano with strings and pads from Korg’s Module Pro. Module Pro is a sound library player that comes with a core library that includes keyboard, strings, brass, and synth sounds, but it really comes into its own with additions from the large selection of expansion sounds available for it. The expansion libraries add more keyboard instruments, sounds from (of course) the Korg Triton, orchestral and choir sounds, cinematic sound effects, house music sounds, and so on and so forth. Module Pro is an all-arounder.

 

KORG Module Pro

https://apps.apple.com/us/app/korg-module-pro/id932191687

 

 

Expansion Sound Libraries

https://www.korg.com/us/products/software/korg_module/modules.php#expansion

 

Electric Piano

 

Neo-Soul Keys Studio 2 is focused on electric piano, and, most particularly–though not exclusively–Rhodes sounds. Gospel Musicians is fond of telling how the late, great George Duke bought Neo-Soul Keys Studio 2 because he felt it had more grit and funk than other Rhodes emulators. Another plus for the app is its onboard effects, which are licensed from Overloud.

 

Gospel Musicians/MIDIculous LLC Neo-Soul Keys Studio 2

https://apps.apple.com/us/app/neo-soul-keys-studio-2/id1482448438

 

B3 Organ

 

Guido Scognamiglio is an Italian countryman of ours, and his company, Genuine Soundware & Instruments, is renowned for their VB3 emulation of the Hammond B3 organ. VB3m is the iOS version of this instrument, and boasts lots of authentic detail, from the drawbars to tube overdrive to the Leslie cabinet emulation, as well as various well-known B3 features like percussion and vibrato. Like Audio Modeling SWAM instruments, VB3m is a physical model, not a sample instrument. VB3m also includes flexible MIDI features.

 

GSi VB3m

https://apps.apple.com/us/app/vb3m/id1560880479

 

Synth Brass

 

discoDSP makes a number of virtual instruments, but the OB-Xd captures the magic of one of the most-loved synths of the 1980s, the Oberheim OB-X. Possibly the greatest fame of the original OB-X was in producing the keyboard riff that anchored Van Halen’s “Jump,” and, yes, the OB-Xd can resurrect that sound very well, indeed.

 

The OB-Xd starts with an OB-X emulation, but then adds improvements, since, well, things have advanced some in the last 40 years! Basile pronounces the OB-Xd to be “one of the best Oberheim emulations,” though he then adds “The only issue with this app is its lack of internal effects. But I can absolutely address that with Overloud TH-U effects, like chorus, delay, and reverb, that are part of the equipment to use with the Oberheim (sound) to get more fatness, richness, presence, and a wider sound palette, and also stronger stereo positioning.”

 

discoDSP OB-Xd

https://apps.apple.com/app/id1465262829#?platform=ipad

 

Delay, Reverb, Effects

 

Basile uses Overloud’s TH-U as an outboard effects device following a virtual instrument in Camelot. TH-U iOS can share presets with the desktop version of TH-U. Oh, and by the way, TH-U is an amazing guitar amp simulator, too. The free version has a limited set of amp sims and effects, but many more are available as inexpensive paid add-ons.

 

Overloud TH-U iOS

https://apps.apple.com/us/app/th-u/id1478394489?ls=1

 

Synth Lead, Pads, Arpeggiators

 

KV331 Audio’s SynthMaster One is a wavetable synthesizer with a straightforward interface and lots of features like scales, filters that emulate a few famous analog designs, microtuning, and lots and lots of presets.

 

The voice structure of SynthMaster One is quite powerful, with two stereo oscillators, two sub oscillators, two filters, four envelopes, two LFOs, and a 16-step sequencer. Eleven different effect types round out the package.

“An amazing app. I really do everything with it,” says Basile, “not just my signature lead, but there are synth basses, cinematic atmospheres and soundscapes…I can program so many things.”

 

KV331 Audio SynthMaster One

https://apps.apple.com/us/app/synthmaster-one/id1254353266

 

Pads/Juno 60

Roland’s beloved Juno 60 arrived in 1982 and became a favorite instrument for making pads, synth brass sounds, and for its gritty stereo chorus. The TAL-U-No-LX captures the warmth and funk of the Juno 60’s sound without its noisiness. The TAL-U-No-LX goes the Juno 60 one better in that it has 12 voices, as opposed to the original’s six.

 

The Juno 60 was the first of the Juno series, which included the Juno 106, one of Basile’s favorites. TAL-U-No-LX, says Basile, “is available both on desktop and iPad, and it is faboo, with a layout that is just a wonder, extremely clear in its emulative intention. It has an incredible sound.”

 

TAL Software GmbH TAL-U-No-LX

https://apps.apple.com/us/app/tal-u-no-lx/id1505608326

 

Sample Player/Rompler

 

Pure Synth Platinum is a rompler synthesizer, despite having no actual ROM (since it is all software). Nevertheless, it has a ton of tones, plus four layers per voice and effects licensed from Overloud. If your iPad has limited storage, samples can be stored on an external SSD or thumb drive.

 

Basile explains that Pure Synth Platinum 2 “has a sample library of FM sounds, like the DX7 and beyond,” but he appreciates the range of sounds available: “I have a sound we will call ‘Stratovarius-ish’ or ‘Malmsteen-ish,’ created using Pure Synth Platinum 2, that combines sounds from multiple internal parts.”

 

Gospel Musicians LLC Pure Synth Platinum

https://apps.apple.com/us/app/pure-synth-platinum/id1459688500?ls=1

 

Synth Bass

 

It’s the Minimoog. From Moog Music, no less. Do we really need to say anything more about it?

 

Moog Music Inc. Minimoog Model D

https://apps.apple.com/us/app/pure-synth-platinum/id1459688500?ls=1

 


Midi Talk Ep. 10 – Whynot Jansveld

MIDI Talk 10: Whynot Jansveld Takes Tech On Tour

Making a living in music is a hustle, but diversification strengthens your ability to ride the winds of change that so often sweep through the industry. Whynot Jansveld is a case in point. As a bassist, Jansveld has toured with far more artists–well-known to obscure–than there is space in this article to name, but we’ll throw in a few: The Wallflowers, Richard Marx, Butch Walker, Natasha Bedingfield, Gavin DeGraw, Sara Bareilles…OK, we had better stop there if we want to get to his words. You can hear the entire conversation here.

He has appeared on both daytime and late night talk shows, and worked with numerous producers of note. But his bass career is not his only one.

Jansveld has also built himself secondary careers as a composer, particularly writing a lot for Founder Music, a stock music library, and as a mastering engineer, many of his clients being the same folks for whom he plays bass.

A native of The Netherlands, Jansveld emigrated to the US in his 20s to attend Berklee College of Music in Boston. “I grew up on a lot of rock and pop music on the radio, and a lot of that came from (the US). Ultimately, that’s why I wanted to explore it and became a professional musician here,” recalls Jansveld.

After coming to Berklee for one semester and instead staying three years to graduate, Jansveld slipped down the road to New York, where he worked for 16 years before relocating to Los Angeles 10 years ago.

 

Tech On Tour

 

Jansveld’s introduction to music technology came at age 18, when he traveled for a year playing bass with Up With People, a US nonprofit organization founded in 1968 to foster cross-cultural understanding among young adults through travel, performing arts, and community service. The troupe traveled with early digital music technology: a Roland MC Series sequencer and a Yamaha RX5 drum machine. Jansveld took the bait. “I just started messing around (with the sequencer and drum machine) in our free time. I wanted to learn how that all worked and I thought I could somehow make something out of it. And I did.”

At Berklee, Jansveld took the next step when a bandmate sold him a Macintosh SE computer loaded with Opcode Systems Vision, a very early (and, in fact, visionary) MIDI sequencer program.

Today, Jansveld’s primary gig remains touring as a bass player, which means much of his music technology use centers on mobile systems. Powerful devices in small, robust packages are valuable to him. His heaviest use of music technology on the road is for composing and mastering, but some of it naturally finds its way onstage, as well.

Jansveld’s mobile technology use is mostly behind the scenes. At his level of touring, musicians are expected to arrive at rehearsals for a tour already knowing all of the material, so Jansveld often needs to create song charts while he is traveling.

“It used to be that you’d have a stack of papers (of charts), but now, I write all my charts on my iPad with an Apple Pencil.” This works for Jansveld because he doesn’t need to be hands-on with his instrument in order to transcribe. “I think I’m a little bit different than most musicians, in that I almost never touch a bass guitar while I’m preparing for any of this, even up to the point where I get to the rehearsal. I do it all in my head. I get a big kick out of being on a five-hour flight and transcribing a whole set of tunes. I have my in-ears (monitors) plugged into the iPad, and three-quarters of my (iPad) screen is where I write my chart, and the little quarter of a screen is my music program. I’m playing the music, rewinding a little bit, and writing as I go along.”

Jansveld also carries a mobile production rig for composing on the road. “I have a laptop and a little bag with an Apogee Jam (USB guitar interface for iPad), an Apogee mic, some hard drives, and a couple of cables, and that’s it. I can work in my studio at home, disconnect everything, close the laptop, put it in a bag and take it on the road. (I) open it up in a coffee shop, and everything that I was working on is there and I can keep working on stuff.”

Mastering is more of a challenge. “(That), obviously, is a little hard to do on the road because you are on headphones. I try to avoid doing it on the road unless somebody needs something quickly.”

 

The Show Must Go On

 

Jansveld sees the impact of technology in support infrastructure for performances, as well, especially in the areas of monitoring and show control.

“Having in-ears, you can have a beautiful mix, but it does feel like you’re cut off from the world,” he laments. “Sensaphonics have this technology where the in-ears also have microphones on them and you can mix the sound of the microphones with the board feed coming from the monitor side of things.

“It’s a little bit of a hassle to deal with, but at the time (I got them), it was worth it to me to do, because (I could) feel the room. I can’t overstate how important that is to me, because I’m not just there to play the notes, I’m there to perform for sometimes an incredibly huge room with a lot of people in it, and I want a (strong sense of) how that feels to me and how that makes me play and perform.” Hearing the “outside world” also improves Jansveld’s onstage experience, because “even just walking towards the drums, the drums get louder.” Onstage communication is improved, as well. “If the singer walks up to you and says something while you have normal in-ears, you just nod and hope it wasn’t something important.

“I now use ACS in-ear monitors instead of Sensaphonics, because they came up with a system that also lets in ambient sound from around you, but instead of using microphones and a second belt pack, it simply uses a vent with a filter (like a custom earplug) that keeps the seal intact.”

Digital mixers can store in-ear monitor mixes as files that can be ported to another mixer of the same type. “That is super, super helpful if you’re doing a run of a few weeks, because every time you get to sound check, it’s already been mixed from the night before. It’s an incredible timesaver because, (without that,) you spend a lot of time on, ‘OK, kick…more kick. OK, snare…more snare,’ and it seems incredibly repetitive for no reason.”

Show automation is another area where Jansveld sees the effect of technology on touring. “Currently, I’m touring with the Wallflowers, and before that was Butch Walker, and both of those were really rock shows,” he points out. “Changing something up was just a matter of changing up the setlist.

“(On) some tours, everything is run by Ableton Live including the lights and all of that kind of stuff, and it takes a lot more to change things up because somebody has to go in and reprogram stuff. But for a band that doesn’t have a lot of money to spend on the tour and wants to make an impact, it looks incredible, because everything is timed. The chorus hits and the lights go crazy and come back down on the verse.”

Jansveld also sees greater reliability than ever before in playing backing tracks, another highly automated task.

 

Mastering Kept Simple

 

“Never in a million years growing up did I think that mastering was something I wanted to do, or that I had the skills or ears to do,” Jansveld muses. “I ended up doing it four or five years ago simply because friends of mine had some new tracks mastered and were pretty unhappy with how they sounded. I had started composing music, mostly for commercial stuff, and had used those (same) tools to make my stuff sound as good as I thought it could be, so I just told him, ‘Why don’t you send me your tracks and I’ll see if I can beat (the other mastering efforts).’ That led to a whole bunch of other stuff with him and people that he recommended me to, and it took off from there.

“I don’t do anything special and I don’t have any tools that you or I or anybody else don’t have access to. But I take the time, and I pay attention, and I trust my ears enough and my experience in music enough to know what I want to hear. I don’t start altering things just because someone is paying me money to master it. If the mix sounds great, I make it louder but I keep it dynamic. It’s a simple concept, but apparently it goes wrong often enough that there’s room for me to do this.”

Jansveld tries not to overcomplicate things in mastering. “If I had to put it very, very simply: ‘nice and loud’ still works all of the time. It kind of works everywhere: on CD, on vinyl, it works for MP3, it works coming out of my phone. It sounds dumb, and it can’t always be that simple, but that’s my experience. You’re definitely pushing up against 0 dB (peak level) or a little bit less, you’re definitely compressing and limiting things, but with the tools we have these days, you can get things nice and loud, still have them be dynamic, and not really experience a feeling that things are squashed or pumping or just made worse.”
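For readers curious what “nice and loud” means in signal terms, here is a deliberately toy sketch in Python/NumPy. It is our illustration only, not Jansveld’s actual chain or tools: the `nice_and_loud` function name, the `tanh` soft limiter, and the test signal are all assumptions made for demonstration. The idea is simply gain pushed into gentle limiting, with peaks normalized to sit just under 0 dBFS.

```python
import numpy as np

def nice_and_loud(mix, ceiling_db=-0.3, drive=1.5):
    """Toy 'nice and loud' master: soft-limit, then peak-normalize.

    ceiling_db: target peak level in dBFS (just under 0 dB)
    drive: gain pushed into the limiter (more drive = louder, flatter)
    """
    driven = np.tanh(mix * drive)          # gentle soft-knee limiting
    ceiling = 10.0 ** (ceiling_db / 20.0)  # dBFS ceiling -> linear gain
    peak = np.max(np.abs(driven))
    return driven * (ceiling / peak) if peak > 0 else driven

# A fake one-second "mix": a quiet sine with one loud transient
t = np.linspace(0.0, 1.0, 44100, endpoint=False)
mix = 0.3 * np.sin(2 * np.pi * 110 * t)
mix[1000] = 0.9  # the transient the limiter has to tame
master = nice_and_loud(mix)
```

Because the limiter is a smooth curve rather than a hard clip, quiet material keeps most of its dynamics while the transient is rounded off, which loosely matches the “loud but still dynamic” goal described above.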

 

What Do You Want From Life?

 

When asked what he would ask from music manufacturers, Whynot Jansveld’s request is to harness more powerful technology for his bread-and-butter needs: “I would love a floor box with a bunch of switches on it that can load Audio Units plugins. Plain and simple, just a big old hefty processor, a really amazing CPU, a bunch of RAM. I have incredible stuff I can use on my computer, and I’d love to use all of it live.”

As we prepare to take our leave of Jansveld, he raises one more point on which to comment: “We haven’t talked at all about what Audio Modeling does, but it’s certainly exciting technology for me. What you guys do, where you basically create these sounds out of nothing, is pure magic for me, and there’s no limit as to how far that can go. I can’t wait to see what’s next and I’m having a lot of fun playing your instruments. I’m going to be using them a lot.”


Midi Talk Ep. 9 – Don Lewis

MIDI Talk 09 – Don Lewis: Living At the Corner of Humanity and Technology

 

It would be an understatement to say that Don Lewis is a legend of music technology because that would not begin to cover his innovations and their impact. Hailing from Dayton, Ohio, in the heartland of the US, Lewis became interested in music as a child watching a church organist, and in high school was introduced to electronics. When things would go wrong with the electropneumatic organ at his church, he would crawl up to the organ chamber and try to fix it. “I was very inquisitive,” Lewis states. “I think that is why we are here as human beings: to be inquisitive and explore.”

Lewis’s curious nature led him to study Electronic Engineering at Alabama’s historic Tuskegee Institute (now Tuskegee University), during which time he played at rallies led by Dr. Martin Luther King. From there, Lewis became a nuclear weapons specialist in the US Air Force, then did technical and musical jobs in Denver, Colorado, until he was able to become a musician full-time.

After moving to Los Angeles, Lewis worked with the likes of Quincy Jones, Michael Jackson, and Sérgio Mendes, toured opening for the Beach Boys, and played the Newport Jazz Festival at Carnegie Hall.

In the late 1970s, most of a decade before the advent of MIDI, Lewis designed and built LEO (Live Electronic Orchestra), a system that integrated more than 16 synthesizers and processors in a live performance rig that took the phrase “one man band” to an entirely new level. So new, in fact, that it frightened the Musicians Union, which declared Lewis a “National Enemy” in the 1980s and worked to stop him from being able to perform. (Lewis notes that Michael Iceberg, a white musician also doing a solo electronic orchestra act at the time, never faced the opposition that Lewis, who is black, encountered.)

But Lewis kept innovating, digging into an entirely different role contributing to the design of new electronic musical instruments. A long, fruitful collaboration with Roland’s Ikutaro Kakehashi brought Lewis’s influence to the design of numerous drum machines and rhythm units, including the iconic TR-808, as well as a number of synthesizer keyboards.

A conversation with Lewis, however, only occasionally touches on all of these accomplishments. Mostly, he weaves together philosophy, humanism, spiritual and cosmic aspects of music-making, and the application of all of those to technology to create an emotional experience for listeners. Audio Modeling’s Simone Capitani took the ride and followed Lewis to every corner of his universe in a discussion so wide-ranging we can only touch on a very limited portion of it here. But you can hear more by watching the video.

And now, ladies and gentlemen, the one and only Don Lewis.

 

The Wiggle and the Two Songs

 

Lewis views music as a fundamental ingredient of the human universe. “If we look at quantum physics, in string theory, there is a wiggle,” he begins. “Even if you can’t see (something) move, it is wiggling on some scale that is beyond nano. Working with sound and music is a very macro perspective of this nano thing that’s happening, but we can control the wiggle. That’s one way to look at why music and sound are so pervasive and so innate to our being – because we are working with something on a level that everything else is made out of.”

The art of music, then, lies in how we control the wiggle. Lewis poses a simple answer to how humans accomplish this, once again going to the foundations of existence.

“I have this feeling that every human being is musical. When you came into this world, you could sing two songs. What is a song? It is a sound and a language that someone else can understand.

“Before we learned our language – Italian, English – we sang two songs: the first, crying, the second song, laughter. We sang those songs before we could say ‘mama’ or ‘papa.’ We live our lives between those two songs: one of need, one of joy. We need each other to be joyful. When we are against each other, we are not joyful.”

How can electronic instruments express such profound abstractions? A good first step, suggests Lewis, is to relate the properties of acoustic musical instruments to those of electronic instruments.

“Conventional musical instruments are mechanical synthesizers. The voice is made up of three components of subtractive synthesis: the vocal cord is the oscillator, the mouth cavity is the filter, the envelope generator is the movement of the mouth,” describes Lewis.

“Looking at those ingredients tells you that, for the most part, this is why we have extended the range of the human voice through technology. Musical instruments extended the range: the bass is lower than any bass singer could sing, and the highs are higher than any soprano could sing. So, what we have done with mechanical synthesizers, we are now (doing with electronic instruments,) creating another palette of sound and color that extends it even more, and also articulates much more.

“But the articulation is the main ingredient of how you create emotion: that crying and that laughter. Those are the two emotions that get us – the crying and the laughter. How do you get those two emotions to be represented in the envelopes and the pitches and the filtering? How do you get Louis Armstrong’s and Joe Cocker’s and Janis Joplin’s and Aretha Franklin’s and Ray Charles’s voices, that emotion?“
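Lewis’s mapping of the voice onto subtractive synthesis — vocal cord as oscillator, mouth cavity as filter, mouth movement as envelope — can be made concrete in a few lines of code. The sketch below is purely illustrative (it is not drawn from any instrument discussed in this article, and all function names are ours): a sawtooth oscillator feeds a one-pole low-pass filter, and an ADSR envelope shapes the result.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def sawtooth(freq, dur):
    """Oscillator: a naive sawtooth, the 'vocal cord' of the patch."""
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * (t * freq % 1.0) - 1.0

def one_pole_lowpass(signal, cutoff):
    """Filter: a one-pole low-pass, the 'mouth cavity' shaping the tone."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / SR)  # smoothing coefficient
    out = np.empty_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)  # each sample moves y toward the input
        out[i] = y
    return out

def adsr(n, attack=0.01, decay=0.1, sustain=0.6, release=0.2):
    """Envelope: attack/decay/sustain/release, the 'movement of the mouth'."""
    a, d, r = (int(SR * seconds) for seconds in (attack, decay, release))
    s_len = max(n - a - d - r, 0)
    return np.concatenate([
        np.linspace(0.0, 1.0, a, endpoint=False),      # attack ramp up
        np.linspace(1.0, sustain, d, endpoint=False),  # decay to sustain
        np.full(s_len, sustain),                       # held sustain
        np.linspace(sustain, 0.0, r),                  # release to silence
    ])[:n]

raw = sawtooth(220.0, 1.0)           # oscillator ("vocal cord")
tone = one_pole_lowpass(raw, 800.0)  # filter ("mouth cavity")
note = tone * adsr(len(tone))        # envelope ("movement of the mouth")
```

As Lewis points out, the envelope is where most of the articulation — and hence the emotion — lives: changing only the `attack` and `release` times turns the same oscillator-plus-filter tone from a percussive pluck into a swelling, voice-like phrase.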

 

The Delicate Sound of Thunder

 

Lewis does not limit himself to acoustic instruments and the voice in trying to generate emotion. “When we compose and when we sound design, we can pull from nature, the things external to us that still make sound: the wind, the rain, the thunder, the lightning.”

Lewis has long been captivated by thunder. “One of the most exciting things that would happen to me when I was a kid in Dayton, Ohio, especially in the summertime, there would be these afternoon thunderstorms, and everybody would go hide and get scared. Me, I’m sitting there looking at the lightning and going ‘Ooh, this is so cool.’ One Sunday, my grandfather and I were walking to church from our home, going to Sunday School. In the middle of us walking, a big thunderstorm started. There was this paper factory that had this really tall chimney all the way from the ground, it must have been 100 feet high. Lightning struck that chimney, and some bricks from that chimney fell. My grandfather went ‘whooo!’ and I went ‘Wow!’ These are things that prompted me to want to make this thunder (on a synthesizer),” relates Lewis.

“But the other thing was that, on Wendy Carlos’s Sonic Seasonings album, the ‘Spring’ episode had thunderstorms going on while this music was going on. I thought the Moog synthesizer was doing that. I was working with ARP, and I said, ‘Oh, I have to learn how to do this on the ARP.’ But I come to find out, it wasn’t the Moog, somebody recorded this and integrated it and mixed it in. So I did something for the first time: ARP thunder!”

 

The Importance of Intention

 

Don Lewis is a thinking man who has seen a lot, played a lot, and done a lot. When asked what advice he would give to young musicians working with technology, he pauses, then comes back with the wisdom he has gathered over his decades making music.

“What are you going to do with the music you produce with these tools? What is your intention? Are you going to be inspiring? Is there something you want to express? If it’s going to make a mark on anybody else’s life, do you have an intention here? If you can figure out what’s going inside of you that you think needs to be expressed, then use that tool for that. Otherwise, you will be distracted forever, because there are so many other ideas going on in the world.

“I look at it this way: the first 20 years of your life (is spent) ingesting everything everybody else thinks you should have. Your education and the whole bit. The next 20 years, you try to put all of that stuff to work. Then you find out at the end of that 20 years – this is working or this is not working. And then the next 20 years, you try to erase all the things (from) the first 20 years that didn’t work for you. So, those first years are very susceptible to being ones of confusion, especially now.

“You have to be not only about you, (but) about others, and when you become about others, you are actually being more about you. How would this make a difference not only in my life, but in others’ lives? Is my protest song actually going to help the movement? Or is it going to stop the movement? In the days of civil rights, the protest songs were songs, they were things that people could sing and march to. They weren’t chanting, they were singing. The difference between chanting and singing is that chanting only takes in the left side of the brain, which is only speech. Singing takes in the musical side and the language side, the creative side and the logic side. And then you get more power and you get more people participating.

“I know I would not be here if it had not been for my ancestors, who sang their way through slavery. They sang those work songs, they sang those spirituals, and that’s what helped them to survive. What helped our civil rights movement in the United States was the singing, the marching, Martin Luther King, John Lewis, and others. I met both of these people, I knew them. So I understand those rudiments, and I hope that the ability to access the creative efforts of (Audio Modeling) and others in making music accessible, can create this atmosphere; this is the atmosphere we need.

“If music disappeared from the earth, the earth would continue, but our existence may not, if we don’t become in harmony, first, with ourselves. And that’s what music is all about.”

Visit www.DonLewisLEO.com for more information about Don Lewis and the upcoming documentary about his life and career.

MIDI Talk is sponsored by Zoom, the maker of Q2n-4k, the camera used to produce this show.