
Award-Winning Film Composer John Powell Talks About How He Uses SWAM Instruments, Music, and Creativity

With over fifty scored motion pictures under his belt, John Powell is, to say the least, a prolific and influential film composer. His credits include Shrek (2001), Robots (2005), the Ice Age sequels (2006-2012), the Happy Feet films (2006-2011), and the How To Train Your Dragon trilogy (2010-2019), amongst many others.

He’s earned three Grammy nominations for his work on Happy Feet, Ferdinand, and Solo: A Star Wars Story, and his score for the first How To Train Your Dragon earned him an Academy Award nomination.

So when Emanuele Parravicini, Audio Modeling’s CTO and co-developer of the SWAM engine, received an email from Mr. John Powell’s office, the whole team got excited. It’s not every day an award-winning film composer takes the time to write to us to express his appreciation of our products!

What started as a friendly email exchange developed into an hour-long interview where we had the chance to talk to the man himself not only about SWAM but also about his perspective on music, the creative process, and the music industry.

 

Meeting the Man Behind Those Iconic Film Music Scores

 

We logged into Zoom at the appointed time. John Powell warmly greeted us, and we introduced ourselves and the company. The excitement seemed mutual as Mr. Powell settled in to answer our first and most obvious question — one we were dying to know the answer to — how did he discover our SWAM instruments?

“Honestly, I’m not sure how I found you guys. It was definitely through the internet, maybe something I read on Apple News or some other tech-related news platform. No one told me about SWAM, that I remember. Then, I saw one of your demonstration videos on your YouTube channel.”

“That video caught my attention because, at the moment, I’m re-writing an opera I wrote 28 years ago with composer Gavin Greenaway and librettist Michael Petry. We didn’t have a big budget back then, so it was written only for fourteen instruments. I tried to program it the usual way I write but since it’s written for solo instruments, I couldn’t find good samples to work with.”

“With SWAM instruments, I can perform from the score. They’ve been particularly useful in this case because all the instruments in this piece are solo instruments.”

 

Discussing John Powell’s Use of SWAM Instruments

 

We listened to John Powell explain why he enjoys working with SWAM.

“When you’re film scoring, you’re not sketching something out on paper and handing it over to someone to have it orchestrated. As composers, we need to write every note played by every single instrument. Film composers need to become masters of every instrument they write for, or at least of the keyboard version of it. That’s something I understood working with Hans Zimmer back in ’93-94. Because of that, I’m drawn to any sound that is very playable.”

“I hate sample libraries that are endless patches. You have to load up a patch to do this thing and then load up a patch to do that thing… Then you end up with ten different tracks. Or when you’re trying to get a result that sounds natural and performable from an articulation set and you end up cross-fading with MIDI from one type of sound to another to get the slides, portamentos, the right vibrato… It’s annoying.”

“I always loved sounds that allow me to do it all. That’s what drew me to your instruments — you seemed to have gotten everything inside one performable instrument and that’s not something really possible with samples at the moment.”

“When I’m working with samples, I need to shift between a technical and a creative mindset all the time. To counter that, I set up huge auto-loads on my systems, just because I don’t want to have to go through that technical mindset when I’m in the process of creating. But with SWAM, I can stay in this creative state longer because I can play the instruments as I go.”

After the praises came the time for some constructive feedback.

“There’s one thing I’m lacking in your instruments and that’s the relationship between the instrument and the room.”

At that moment, we understood why Mr. Powell asked us for a meeting. He had some questions of his own about our technology, questions Emanuele Parravicini eagerly answered. What followed was an enthusiastic conversation between audio software experts on ideas about how to not only model the sound of an instrument but also the sound of the room it’s recorded in and even the sound of specific microphones used to record them.

Audio Modeling has been aware of this issue for quite some time and is actively conducting research to understand what would be the best approach to achieve this kind of result.

“Whatever you do, I think you shouldn’t include the sound of speakers,” John said. “That’s the big problem with acoustic modeling at the moment, it always includes the sound of speakers and I think that’s a disaster. We already have speakers at the end of the chain, so it’s like having them twice.”

“Regardless of that aspect, it’s fascinating to me how you’ve achieved a level of quality I haven’t seen before. We are so used to working with samples, and we know the problems that come with using samples, but this is such a different approach.”

 

The Importance of Aligning the Use of Technology With a Human Connection in Music Interpretation

 

We were curious to know if working with SWAM would influence the way he writes music in the future.

“Admittedly, I’m not approaching these instruments from a creative perspective. I’m looking for accuracy so that I can create a performance-based audio representation of the score.”

“It’s true that when using technology, one thing that’s interesting is exploring possibilities that go away from realism. But if we decide to move away from what real instruments can do, we need to keep sight of the fundamental reasons we love these instruments in the first place and why they work for us.”

“Let me give you an example. In the original recording of ‘An American in Paris’, there’s a specific scene, a very sexy dance between the main characters. There’s one trumpet note there, just a single note that slowly does a crescendo. Many people played that same piece beautifully afterward, but no one has played it quite like Uan Rasey, the trumpet player in the original recording.”

“One day, I arranged for Uan to come to my studio and sort of ‘bless’ my trumpet section since he had taught many of them. I talked to him about that note in ‘An American in Paris’. I told him that for me, I hear everything in that note — everything I ever felt about love, sex, life, death… Everything! Just in that one note. Something about the way it changes from one thing to another and how it blossoms. Every time I hear it, I see the Universe open, and I see all human experience. I told him all this and he kind of looked at me and said ‘You know, it was just a gig that day.’ But when I asked him what happened in the studio and how he got to playing and recording it this way, he said they did the first take and then the director came to him and said ‘Listen, this needs to be the sexiest note you ever played.’”

“Now, ‘sexy’ is a difficult word to use in music, and in the end, what I got from that note is not sex, it’s much more than that. But his response was a very deep and human contact with this word and he made it blossom because he was a master of his instrument.”

“It’s not just the note, it’s also the arrangement and where it goes, but that note itself, how the timbre changes, always struck me as the epitome of what musical expression is. Singers can do it, great players can do it. Gershwin wrote that single note with a crescendo and a slur over it and like I said, others have played it magnificently but no one has played it quite as magnificently as him, in my opinion.”

“That’s the musical and human connection I will always say is required for everything you do in music. So if you’re taking an instrument away from reality, you need to try and hold on to that. If there’s a synth note that doesn’t sound at all real but it blossoms in some way, or it changes in the timbre — the timbre meaning the change in the note means something — that’s what I’ll always be looking for, even if the sound is unrealistic.”

How to Develop Your Own Unique Sound

 

So what gives a musician this kind of unique and recognizable sound? And how can music composers and producers achieve this kind of sound quality while using technologies like MIDI and sample libraries?

“You can make your own sounds. Hans Zimmer has always created new libraries of sounds and I’ve done this also in the past: recording instruments, then sampling and processing them to make new sounds. For example, one reason the Bourne films sound the way they do is that I was using a particular TC Fireworx box as my processor, and I chose to use a lot of bass and very specific guitars, playing them a certain way with different tunings. But then the most important part came afterward when editing.”

“It’s the choices you make that create your sound and the technique you have when writing music. People ask me which sample libraries I use. Honestly, I use the same ones as everybody else! But it’s the choices I make when I’m working with these libraries that create my sound. When you work with samples, or even when you work with something like SWAM, you have the possibility to change everything. You can change the acoustic and see what happens if you go with a drier or a wetter sound. You can place instruments differently in the space and see what happens if you have instruments far away from each other that are normally close or the other way around.”

“For example, I always loved ensembles, especially taking solo instruments and making ensembles with them. In The Call of the Wild, I had an ensemble of fourteen banjos. It’s not a sound you usually hear. That’s one way of developing your own sound, to just think and do things differently in some way. It sounds cliché, but it’s difficult to do and for me, it comes down to my fascination with other people’s music.”

“I have many musical obsessions I always come back to. There’s a four-bar phrase in a Vaughan Williams piece I always remember, there’s a record from Judie Tzuke that has a string arrangement I always remember, and then there’s my own experience of playing certain pieces that I always remember. You go through your whole life, you hear music, and it does certain things to you, depending on what is happening in your life at that moment. Or you simply remember that music because it sparks something in you.”

“All these connections music makes, they are like emotional diamonds buried inside us that we carry around. Then when you write music, really all you do is start pulling them out and using them. I think artists and musicians with very unique sounds simply carry around slightly weirder diamonds or they pick diamonds that are very different.”

“If you can remember all of Star Wars’ music and write like that, it’s great, but it’s not very useful to anybody else in the world. We need people to write like Star Wars but not. We need something new that doesn’t sound exactly like Star Wars. A person might try to write like Star Wars but can’t, so the result comes out as something different but equally wonderful.”

“That missing accuracy is important. You need the memory of these emotional diamonds, but you also need to forget the details of what you heard so it can become whatever feels right at the moment. Some people are very accurate in recording those emotional moments and it just comes out exactly like the thing they are remembering. That’s fine, but it doesn’t move anything forward; people won’t see it as anything new.”

“In my case, at my best, I’m remembering the strengths of the emotions but completely forgetting the details of how the piece was played or written. When you fight to achieve the same kind of emotion while forgetting the details of how it was done originally, the result becomes something else. Then it has a chance of being unique.”

“That’s because in the end, if I’m remembering Ravel, I can’t forget I also love Esquivel or Timbaland beats but Vaughan Williams and Ravel never heard these things. It makes no sense to me to leave out some of these influences when I’m trying to reach that same emotion. Why would I do that? Because they are different? They’re not different, they feel the same to me. I get the same emotion from one as from the other so why not use both? Then, if I’m lucky, it becomes something different but with a strong feeling to it that people can recognize.”

 

Finding joy in the process of music creation

 

The art of music composition is one thing, but breaking into the industry is something else entirely. We asked Mr. Powell for his advice to anyone who dreams of one day being where he is.

“In many ways, I had as much pleasure working on my first advert many years ago back in the UK as I had working on How To Train Your Dragon, even though that advert was terrible. I probably made something like $150 for the demo and $450 for the final. It wasn’t anything great at all, but what I liked was pursuing the idea of making it work, of making it right, and I enjoyed the act of creation more than the effect my work had on people. If you don’t enjoy the creation process, it’s very hard to balance that with the amount of rejection you’ll get.”

“For some, it’s worse. Take actors for example. They can get rejected from the moment they walk into a room, just because of their looks. At least, for composers, looks are not that important… Even though, admittedly, we all look like this,” pointing at himself, “we’re all white men. It’s embarrassing and I really hope this will change soon.”

“If you enjoy writing, you’ll do it again. And if you get rejected, you’ll figure out why and how you can write better. Tenacity is the key to that.”

“Making money in this business is hell, and it’s worse now than it ever was because so many people can do it. Technology has made it possible for anyone to write and record music. I squeezed in at the end of a period when it was very much about people’s talents. I don’t think I had as much talent as many of the people surrounding me, but I had a talent for looking into technology to find different ways to do things. It’s not much of a vision, it’s not a Philip Glass kind of vision, but it was enough for me to keep pursuing technology and its use along with a musical understanding of things.”

“I got through in the industry because I was lucky and tenacious, and I was tenacious because I enjoyed the process. Not the process of people liking my work, even though it’s wonderful when they do, but the process of me liking my work was more important.”

“If I had anything to say to anybody, it’s that if you don’t enjoy the process all the time, and if it’s not all about the moment of creation rather than the moment of presentation, you shouldn’t do it. It’s going to be too painful. It’s difficult, you’ll get rejected a lot, and the stress is huge. Obviously, compared to many other jobs, real jobs, it’s much easier. It’s the hardest ‘easy job’ in the world in my opinion.”

 

What’s coming next

 

We didn’t want to end this interview without hearing a bit more about the opera John is working on at the moment.

“The opera is written for four soloists, a women’s chorus, a small orchestra, and also a larger orchestra for the recording. It’s going to be a bit on the shorter side, about 65 minutes long.”

Grinning, he said, “I like to think of it as an ‘operatainment’ because it’s perhaps too much fun to be an opera.”

“We are planning to record it in September and release a CD by the end of the year. But as far as performing it goes, classical music has a very long process. So even though we’ll use the recording to promote the opera, find places to perform it, and approach different opera groups, and even if people love it, we probably won’t be able to perform it before 2023.”

His last words to us were “Long live the nerds!” to which we enthusiastically cheered. Yes, Mr. Powell, we are indeed going proudly towards the future, and we are all looking forward to seeing how you’ll use our technology and to hearing all the incredible music you’ll create in the years to come.

 

Written by: Mynah Marie
Photo credits: Rebecca Morellato

Thoughts on Succeeding in the Music Industry with Claudio Passavanti — MIDI Talk Ep. 1

 

MIDI Talk is a new podcast and video series about music creation and, specifically, the relationship between music and technology.

In this series, Simone Capitani, UX/UI designer of Camelot Pro and marketing expert for Audio Modeling, chats with creators, artists, and technicians from all around the world about their unique musical experiences and the role technology has played in their careers.

 

Introducing Our First Guest — Claudio Passavanti

 

Claudio Passavanti is a British Italian pianist, music producer, and digital entrepreneur, known as Sunlightsquare and Doctor Mix.

In his early twenties, Claudio moved to Los Angeles, California to study Orchestral Composing and Arranging at The Grove School Of Music.

Back in Italy after completing his studies and collaborating with many prestigious musicians in the L.A. music scene, he played keyboards, produced albums, and conducted orchestras for artists such as Zucchero, Lucio Dalla, Pino Daniele, Alex Britti, Niccolò Fabi, Gabin, Bryan Adams (Italian leg of the tour), and Ambra Angiolini, as well as many others.

“My interest in music started when I was really young. My mom says that, apparently, I was already playing with my brother’s melodica at the age of 3. Then, around the age of 8 or 9, I was already experimenting with record players. I think I was around 8 when I had my first ‘multi-track’ experience using my dad’s cassette tape recorder.”

“When I was a kid, I was always fascinated with computers”, explains Claudio. “I remember when I got my first computer, a Commodore 64, that’s what really got me started in technology.”

 

Clash of cultures leading to a unique sound

 

“In Italy, I hit a point in my career when I got tired of being put in this box of a pop musician. The music wasn’t fulfilling me anymore, I wanted to discover my own sound. So I moved to London when I turned 30.”

Claudio goes on to explain the difficulties of reinventing himself in a new country, one where he didn’t have the professional recognition he already had in Italy and where he had everything to learn.

“I remember going to jam sessions and feeling so out of place because I didn’t know the songs everyone else knew.”

There’s an incredible opportunity for growth and learning when someone is able to approach the unknown with humility, an open mind, and the drive to learn and work hard. Claudio shares how the musicians he met in the London music scene completely blew his mind with their way of creating electronic music. By discovering new styles, genres, and production techniques, he developed a unique sound for which he later became recognized.

He became passionate about creating electronic music people would want to dance to and that became his inspiration to create Sunlightsquare.

 

Advice from the experts on making your place in the music business

 

Reminiscing on the challenges of adapting to a new culture and carving out a place in a new industry brought up interesting and constructive thoughts on the topic of “making it” in the music business.

“Too many artists think that success comes from ‘being discovered’ when in fact, most success stories happen because someone put in the energy to put themselves out there, meet people, and learn from them.”

“One of the major keys to success is being able to listen to the needs of an industry, and then apply your knowledge to a different area so that you can create a unique combination, something truly ‘you’, that the industry wants and needs.”

The full episode of MIDI Talk with Claudio Passavanti is available as a podcast or as a video on our YouTube channel. Don’t forget to subscribe and hit the bell button to get notified of future episodes! Upcoming guests include Dom Sigalas, Martin Ivenson, and Peter Major.

MIDI Talk is sponsored by Zoom, maker of the Q2n-4K, the camera used to produce this show.

Showcasing an Unusual Way to Play With SWAM

When we created the SWAM engine, we wanted to provide musicians, music producers, and composers with the most expressive instruments on the market. Our customers normally want to play the instruments directly, whether in the studio or live, as they would with real traditional instruments. But once in a while, an artist surprises us with an interesting use case for our products.

We never suspected the possibility of someone trying to use code to have their computer play a SWAM instrument for them. Oddly enough, this is the experiment attempted by Mynah Marie a.k.a. Earth to Abigail.

 

Using code as a performance tool to create music in real-time

 

In the last decade, a new underground trend has emerged in the electronic music scene. Live coding is a form of performance where artists use a computer programming language to create music in front of an audience in real time. The idea is to show the code on a screen so that everyone can witness it develop as the artist types it to create the music. Live coding demystifies programming by allowing the audience to see inside the programmer’s mind through their screen while emphasizing the expressive and artistic powers of code.

Mynah Marie is a live coder going by the name Earth to Abigail. Her live coding environment of choice is Sonic Pi, open-source software developed by Dr. Sam Aaron from the UK.

Before becoming Earth to Abigail, Mynah worked as an accordionist and singer with bands and artists from all around the world such as Soan (France), Casa Verde Colectivo (Mexico), Din Din Aviv (Israel), Pritam, and Arijit Singh (India).

Five years ago, she discovered computer programming. From that moment on, she’s been looking for ways to mix her passion for code with her music knowledge. Discovering live coding and Sonic Pi is what inspired her to create Earth to Abigail.

“I’m interested in using a medium that most people consider ‘cold’ or ‘emotionless’ such as programming to create something full of emotions and expressivity, in my case music. I’m a bit obsessed with this alignment between rational analysis and emotions. To me, live coding is this perfect junction between the logical mind of science and the beautiful expressivity of art.”

“While many live coders come primarily from the programming or engineering side, I come as a musician first and I think this influences the way I use code to express myself.”

 

Using SWAM in an unusual way

 

“When I discovered SWAM instruments, I just knew I had to see what I could do with them through live coding. The question I’m trying to answer is ‘Can I find underlying algorithms that would allow a SWAM instrument to improvise in a human way?’ Basically, I don’t want to play the instrument myself, I want to see if I can instruct my computer to improvise the way a human would. Because SWAM instruments are so expressive and realistic, I thought it would make for an interesting experiment.”

Earth to Abigail performed for the first time using SWAM at an online event on May 12, 2021. While the experiment is still in its infancy, it’s interesting to see SWAM used in a way we haven’t thought of before and with interesting questions in mind.

“I’ve barely started to scratch the surface of the possibilities with SWAM. This first performance for me was just to see if I could realistically change the expression by coding MIDI messages in a way that sounds more or less realistic. There are still so many parameters to explore! Right now, I’m basically using only Sonic Pi’s built-in randomness functions to add some movement in the expression but eventually, I intend to develop my own set of functions and methods to allow me to ‘code improvisations’ that are complex but human-sounding.”
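The kind of randomized expressive movement she describes can be sketched in a few lines. The following Python sketch is an illustration only, not Earth to Abigail's actual Sonic Pi code; the function name, value ranges, and step size are arbitrary assumptions. It generates a drifting stream of values that could be sent to a SWAM instrument as MIDI CC 11 (expression) messages, one per clock tick.

```python
import random

def humanized_expression(n_steps, start=80, lo=40, hi=110, max_step=6):
    """Generate a sequence of MIDI CC values (0-127) that drifts
    randomly, simulating the small expressive swells a player adds."""
    value = start
    values = []
    for _ in range(n_steps):
        value += random.randint(-max_step, max_step)  # small random drift
        value = max(lo, min(hi, value))               # clamp to a musical range
        values.append(value)
    return values

# Each value could be sent as a CC 11 (expression) message to a
# SWAM instrument, one per clock tick.
print(humanized_expression(8))
```

Replacing the uniform drift with something shaped (a slow sine plus noise, say) would get closer to the deliberate "blossoming" of a phrase rather than pure randomness.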

It will be interesting for us to see how this experiment develops in the future.

You can view Earth to Abigail’s performance here.

The Making of a Great UX/UI Music Software Designer

Innovation doesn’t simply manifest out of thin air. Though it might seem that the most innovative ideas result from someone’s eureka moment, they often arise from carrying the knowledge of one industry over to another. Many brilliant inventions were created by someone casting new light on a concept that already existed.

This is exactly the kind of mindset that led Simone Capitani, UX/UI designer, to create Camelot Pro. “I wasn’t born as a musical instrument designer, I started with designing business software, websites, etc…”

 

From business apps to audio software

 

From an early age, Simone was fascinated by technology while also being a fervent audiophile.

“Moved by my passion for technology, I decided to study electronic engineering and apply that knowledge to music. After completing my engineering degree, I tried to apply to some audio software companies, but I had no connections in the industry so my CVs were getting lost in all the applications these companies were receiving.”

“For a while, I gave up, thinking it was impossible to enter such a small industry. So I started to work as a UX designer at a business software company.”

One day while he was still working in that business company, Simone bought a flagship synthesizer that, to his perception, had a terrible UI. That’s when he realized there was a huge gap between everyday digital products tailored for the average consumer and the more specialized software in the digital audio niche.

“It was a great product, but the UI required a 300-page manual. You needed to go to forums to learn how it worked.”

From then on, Simone dreamed of creating an audio application so well-designed and intuitive that reading a manual to understand how it worked would be unnecessary.

A few years later, the iPad came out. The concept of a touch surface was new and exciting, so Simone started designing music apps for iOS in his free time. One day, he stumbled on an app that he thought he could improve. He wrote to the developer and shared some of his ideas, and they started collaborating on the development of the app.

This took Simone all the way to the MUSIKMESSE in Frankfurt, where he met Jordan Rudess, the legendary keyboardist of Dream Theater. Jordan had a company building music software for iOS and was one of the pioneers interested in developing applications for touch surfaces. By collaborating on various projects in Jordan’s company, Simone got his foot in the door of the music software industry.

In 2015, after being introduced by Jordan, ROLI offered Simone a job in London. “I started to work on the ROLI Seaboard project and I worked on the Equator synth, the main synth capable of managing the multidimensional interface of the ROLI Seaboard”.

After that, offers started rolling in. Many companies tried to entice him to join their ranks. But in 2016, Simone met Emanuele Parravicini and Stefano Lucato. They spoke and exchanged ideas. The company’s vision seemed to fit the direction Simone wanted to go in and, more importantly, there was room to work on Simone’s initial dream application.

They soon joined forces. The company FATAR helped them bring the vision to life by providing financial backing as well as connections in the industry to put a team together, start the development process, and launch the project. In 2018, the first version of Camelot Pro was released to the public.

 

Don’t wait for opportunities, make them happen

 

If there’s any takeaway from Simone’s story, it’s that going out to meet your dreams is the way forward. Especially among artists, there’s a long-lasting myth that you need to “be discovered” to achieve success. In fact, the key to success is daring to talk to people and share your ideas, while having enough creativity to apply the knowledge you accumulate in one area to a completely different industry.

“When I was building my network, I started by having conversations with almost everyone. Out of a thousand people I talked to, maybe ten would lead me to more serious connections that could help me take a step towards my goal.”

Perseverance, mixed with a bold openness to meeting people, led him to create the Holy Grail for live music performers, Camelot Pro. Time will tell what new heights await Simone Capitani next.

The System Used to Develop Incredible Audio Software Revealed

Audio Modeling aims to create a complete ecosystem of virtual instruments and audio software for live and studio musicians, music producers, and composers.

But simultaneously developing software in multiple plug-in formats (VST, VST3, AAX, AU, and standalone) across multiple platforms (Windows, macOS, iOS) comes with its fair share of challenges:

  • Each platform and plug-in format has its own set of specifications that needs to be carefully taken into account during development.
  • Keeping a fast release pace for bug fixes and for new, improved versions of existing products is difficult with a small team of developers.
  • Staying in touch with the community takes time and energy, yet it is essential for spotting the emerging needs of musicians, composers, and producers.

A Winning Combination — JUCE and Continuous Integration

JUCE is a C++ development framework that enables developers to create a variety of cross-platform audio applications, from DAWs to plug-ins.

Projucer is the project-management tool that makes working with JUCE intuitive and easy. It provides a code editor as well as a GUI for configuring project variables and the platform-specific settings needed to build cross-platform audio applications from a single codebase.

Continuous Integration (CI), as used by Audio Modeling, is a software development methodology in which builds are triggered automatically whenever our development team pushes code to version control.

Every time one of our developers commits code to a dedicated repository, a series of automated scripts is triggered: the system first creates a new, clean virtual container, then compiles the code and builds the application inside it.

If the latest committed code breaks the build in any way, it is reported by the system. This ensures that bugs and errors are detected quickly, and makes the process of identifying the source of the issue much more efficient.

For Audio Modeling, adopting a CI methodology is a huge time saver because of how fast it allows us to narrow down issues and fix them. Since we operate with a small team, identifying bugs and errors quickly is crucial to keeping a relatively fast development pace.

CI is normally used as a way to automate testing. But in our case at Audio Modeling, the reasons for applying CI are somewhat different: we use it primarily to automate the build process for several platforms simultaneously and to detect problems quickly as the development of a product unfolds.

But since this application of CI is not necessarily a traditional one, it introduced an issue that required special attention.

The Unique Solution Developed by PACE Allowing Us to Implement CI With Our Team

Here’s a step-by-step overview of our development process:

  1. Our developers work on a single codebase maintained on a dedicated repository located on our own GitLab server. Maintaining a single codebase is possible, as we saw, thanks to JUCE and Projucer.
  2. Once a team member is ready to commit changes, they push their new code to the main repository using Git, a version control tool.
  3. The commit triggers the relevant Azure Pipelines (separate pipelines exist for Audio Modeling's various products and builds), and the process of creating a new virtual container in which the software is automatically built begins.
  4. The automatic build completes and any errors found by the system are reported. This process includes the generation of the installer and the Apple notarization for macOS products.
  5. For iOS apps, the new build is then pushed to Apple's App Store Connect platform, where it awaits approval.
  6. Desktop builds are also occasionally pushed to a dedicated Dropbox location where beta testers can access them and start testing. iOS apps are available for testing through TestFlight.
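The fail-fast logic behind these steps can be sketched in a few lines of Python. This is only an illustrative outline (the step names and the `run_pipeline` helper are hypothetical, not Audio Modeling's actual pipeline scripts):

```python
def run_pipeline(steps):
    """Run build steps in order and stop at the first failure.

    `steps` is a list of (name, step_fn) pairs where step_fn returns
    True on success. Returns (succeeded, name_of_failed_step).
    """
    for name, step in steps:
        if not step():
            # A broken build is reported immediately, which makes it
            # easy to pinpoint which commit and which step introduced it.
            return False, name
    return True, None

# Illustrative step names; the real pipeline compiles the JUCE project,
# generates installers, notarizes macOS builds, and so on.
steps = [
    ("create clean container", lambda: True),
    ("compile code",           lambda: True),
    ("build installer",        lambda: False),  # simulate a failing step
    ("notarize (macOS)",       lambda: True),
]
ok, failed_step = run_pipeline(steps)  # ok == False, failed_step == "build installer"
```

Because the pipeline stops at the first failing step, the offending commit and step are identified without wading through the output of later stages.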


Audio Modeling’s Continuous Integration System Schema

For this system to work, every step of the build needs to happen directly on the server through command line scripts and applications. The problem we faced was that one plug-in format required the use of a GUI-based application.

AAX (Avid Audio eXtension) is a plug-in format developed by AVID specifically for plug-ins used in its flagship product, Pro Tools. AAX plug-ins must be signed with an application called wraptool, developed by PACE. The signature provides cryptographically secure proof that the plug-in was produced by Audio Modeling (or whichever company signed it) and has not been modified since signing.

Normally, digital signing can be done in two ways:

  • With a physical iLok USB key
  • With iLok Cloud, a feature that lets users store licenses in the cloud and manage them through a graphical interface over the internet

PACE didn’t provide a tool that could be run on a server from the command line, which made it impossible to automate the signing process for AAX plug-ins.

In August 2020, a conversation started between the Audio Modeling team and the developers at PACE. They took the time to listen to us and understand the problem we faced in our development process.

PACE acquired JUCE in April 2020. The fact that the same team was now in charge of developing both applications crucial to Audio Modeling’s plug-in development made the communication easy and straightforward. A few months later, and after a thorough evaluation of the problem, the team at PACE came up with the perfect solution.

They created a command-line utility that allows us to start an iLok Cloud session, bypassing the need for their GUI-based application. Thanks to this tailor-made solution, we can now automate the build of AAX plug-ins and integrate the signing step into Audio Modeling’s Continuous Integration system.

This screenshot shows an excerpt of one pipeline used to build Audio Modeling / SWAM products

This shows that PACE is indeed a developer-centric company. Their recent acquisition of JUCE, an essential tool for audio software development, promises a bright future for the audio software industry. That they were able to come up with a flexible solution in such a short time demonstrates the company’s commitment to constantly improving its products and listening to the needs not only of the developers in its community but also of the companies using its products.

We are grateful and proud to count them as our partners, and we look forward to contributing future innovations to the world of audio software together.


How to Create a String Quartet Mockup With SWAM Solo Strings, MuseScore3, and Garageband

One of the biggest challenges music composers face is expressing their ideas clearly to the musicians performing their music. No matter how accurate and detailed a music score is, ultimately music notation is always subject to the interpretation of the person reading it. Sometimes, the most effective way to quickly and easily communicate ideas is to create a digital mockup of what we would like the final piece to sound like.

While there are very good sample libraries out there, it’s impossible to achieve true expressivity using samples. This limitation comes from the simple fact that samples are pre-recorded. No matter how good they are, they were not created with the specific needs of your original piece in mind.

In comes the magic of physically modeled instruments. With physical modeling, the sound is not pre-recorded; physically modeled instruments do not work with samples at all. Instead, they use complex equations and algorithms to model the mechanical system of an instrument in response to external input.

The SWAM plug-ins are virtual instruments that go even further: in addition to physical modeling, they use special techniques to model not only the way an instrument reacts but also the unconscious behaviors of a real professional musician.

Because these virtual instruments can express a full range of emotions, they are perfect for creating mockups of ensemble and chamber music compositions.

In this tutorial, we’ll have a look at how to create a mockup of a string quartet using MIDI files created from a score.

Here are the exact versions of my system and the applications I’ll be using:

  • GarageBand v10.3.2
  • SWAM Solo Strings Audio Unit plug-ins v3.0.2
  • MuseScore3 v3.6.2
  • macOS Catalina 10.15.3

Step 1 — Getting ready to use SWAM virtual instruments in GarageBand

Purchase, download, and install SWAM Solo Strings v3

After purchasing the SWAM instruments you want, you’ll need to create an account in Audio Modeling’s Customer Portal and register your license keys. After that, all your license keys and the installers will be available in your account.

If you need more details on how to open an account, register, or install SWAM instruments, visit our Knowledge Base.

Enable Audio Unit plug-ins

Open GarageBand and go to Preferences > Audio/MIDI. Make sure the box next to Enable Audio Units is checked.

Step 2 — Create MIDI files from your instrument parts in MuseScore3

Open MuseScore3. If you only have a full score of your composition, start by creating individual parts for each instrument. If you don’t know how to do this, you can refer to this post in MuseScore’s handbook.

Once you have files for each individual instrument part, open one of them in MuseScore3. In the top menu, click on File > Export… and select MIDI in the Export To dropdown menu. Click on Export.

Repeat this step for each instrument part.

Step 3 — Create new tracks in GarageBand by importing each MIDI file and choosing the right SWAM plug-ins

Open GarageBand and create a new empty project. When prompted for the kind of track you’d like to create, choose Software Instrument.

In Finder, navigate to the folder where you saved the MIDI files created with MuseScore3. Drag and drop a MIDI file onto the track to import it into your project. Select Import Tempo when prompted if you want to use the tempo information written in your original score.

Click on the knob symbol (next to the scissors) in the top left corner of the GarageBand interface. At the bottom of the screen, you should now see the Controls and EQ window. Make sure Track is selected. Click on the arrow next to where it says Plug-in and select the SWAM instrument you want to use.

Repeat this process by creating new tracks for each instrument you’ll be using.

In this tutorial, since we are creating a string quartet mockup, we’ll have two tracks with SWAM Solo Violin (violin 1 and violin 2), one track with SWAM Solo Viola, and one track with SWAM Solo Cello, all of them in mono.

When prompted, choose Software Instrument

Import the tempo information from your score

Drag and drop a MIDI file in the track you created to import it into your project

Choose the SWAM instrument you want to use

Step 4 — Clean up the MIDI tracks if needed

You are now ready to listen to your composition played by a SWAM Solo Strings quartet (or any other formation you chose to create). Of course, it still won’t have the level of expression you are looking for, because we haven’t drawn the Expression curve yet, but you can already get an idea of how your composition sounds using the default settings.

More importantly, you can clean up any unwanted notes that MuseScore3 might have created when exporting your file to MIDI (for example, extra repeated notes where tied notes should actually sustain).

Editing MIDI is very easy in GarageBand: simply click on the scissors icon in the top left corner of the interface and make sure Piano Roll is selected in the bottom window to make the MIDI editor appear. From there you can change, move, and delete any MIDI note. Make sure you have the correct track selected before editing anything.

The Piano Roll view allows you to easily edit MIDI files.

Step 5 — Adjust Note On velocity

SWAM virtual instruments can’t be accurately controlled with Note On velocity alone, but velocity is still important for your composition to be played as expected.

For example, consecutive connected notes played with a velocity below 25 will have a slur between them; the bigger the difference between the velocity values, the faster the slur. Velocity also affects how strong the attack is at the beginning of a legato phrase: if you want a note played with a softer attack, use a lower velocity value.

You can get a lot more details about how note on velocity affects the way the instrument plays by referring to the SWAM Solo Strings user manual.

With that in mind, you can play around with the velocity of each note to make sure you get the result you want. To do that in GarageBand, simply select a MIDI note (or a group of notes to edit in bulk) in the Piano Roll view and adjust the velocity value with the slider on the left.

Select one or more notes in the Piano Roll view and adjust the velocity using the Velocity slider on the left.
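As a mental model of the velocity rules described above, here is a toy sketch in Python. The threshold of 25 comes from the behavior described in this section; the function itself is purely illustrative and is not SWAM's actual engine logic:

```python
SLUR_THRESHOLD = 25  # connected notes below this velocity are slurred

def connected_note_behavior(prev_velocity, velocity):
    """Toy model of how Note On velocity shapes a connected note."""
    if velocity < SLUR_THRESHOLD:
        # The bigger the difference between the two velocities,
        # the faster the slur transition.
        return ("slur", abs(prev_velocity - velocity))
    # Otherwise the note is attacked; a lower velocity gives a softer attack.
    return ("attack", velocity)

print(connected_note_behavior(60, 20))   # ('slur', 40)
print(connected_note_behavior(60, 100))  # ('attack', 100)
```

Consult the SWAM Solo Strings user manual for the authoritative velocity mapping; the point here is only that a single velocity value carries both articulation and attack-strength information.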

Step 6 — Draw the Expression curve

The Expression parameter is where all the SWAM magic happens. When SWAM instruments are played in real time, we need to control the Expression with a controller that has this capability, such as a sustain pedal, a breath controller, a knob, or a wheel.

But since we are not playing the instruments in real-time and we are importing MIDI tracks instead, we can manually draw the Expression curve without the need for an external controller.

To do this in GarageBand, click on the knob icon in the top left corner of the interface. Next, in the application menu bar, select Mix > Show Automation. You should see a new dropdown menu appear below the main controls of each track. The parameter shown by default is Volume. Click on the arrows, navigate to the name of the SWAM plug-in (in the picture below, it’s the Violin), then Expressivity Group > Expression, and click to select it.

Next, click anywhere inside your track to see a straight line representing the selected parameter appear.

Click on Show Automation

Select the Expression parameter from the dropdown menu

Click anywhere in the area where the MIDI track is to see the straight line representing the Expression parameter appear

The Expression parameter is where all the emotion of a player is simulated. So drawing the Expression curve properly means you are putting yourself in the shoes of the musician and literally playing the instrument!

This is where dynamics will shine. The more precisely you draw the curve, and the more accurately you know how the instrument is played in real life, the more emotion and realism you’ll be able to inject into the way your composition sounds.

To draw the Expression curve (or to draw any other parameter), click on the straight line to create dots, then move them around and adjust the curve to your liking.

Example of what a drawn Expression curve can look like
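If you prefer to think in numbers, the dots-and-lines mechanism can be illustrated with a small interpolation sketch. The `expression_curve` helper is hypothetical (GarageBand performs this interpolation for you when you drag the dots); it just shows how a handful of drawn points become a smooth ramp of values:

```python
def expression_curve(dots, steps_per_segment=4):
    """Linearly interpolate automation 'dots' (time, value) into a curve,
    roughly the way GarageBand connects the points you draw.
    Values are on a MIDI-style 0-127 scale."""
    points = []
    for (t0, v0), (t1, v1) in zip(dots, dots[1:]):
        for i in range(steps_per_segment):
            f = i / steps_per_segment
            points.append((t0 + (t1 - t0) * f, round(v0 + (v1 - v0) * f)))
    points.append(dots[-1])  # keep the final drawn dot exactly
    return points

# A crescendo over four beats followed by a two-beat decay.
curve = expression_curve([(0.0, 10), (4.0, 110), (6.0, 60)])
```

Three dots are enough to describe a whole dynamic arc; the interpolated values in between are what give the phrase its gradual swell and release.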

Step 7 — Add some Early Reflection for additional realism

You can play around with the Early Reflection parameter to increase the overall realism of the piece. Early Reflection simulates the first reflections generated by the sound wave bouncing off the walls, floor, and ceiling of a room. The reflections are different for each musician in the room, since no two players can sit in exactly the same position.

To obtain even more realism, you should also adjust the starting point of each note since, in real life, it’s impossible for musicians to start at exactly the same time. Some DAWs allow you to apply small amounts of randomization to the notes, but unfortunately this is not possible in GarageBand. Then again, since GarageBand is a free application, let’s not complain too much.

You can find this parameter in the automation dropdown menu (the same one you used to select the Expression parameter), then navigate to the name of the SWAM plug-in, then FX Group > Early Reflection Gain.

Play with the Early Reflection Gain parameter to add some extra realism

Step 8 — Finish up by adjusting the panning and adding reverb

Finally, it’s time to add the final touches to your composition mockup.

In the automation dropdown menu, select Pan Pot to adjust the panning of each instrument to place them in space (name of SWAM plugin > Main Group > Pan Pot).

Adjust the panning of each track by adjusting the Pan Pot parameter.

Last but not least, add some reverb. Because these software instruments are physically modeled, they sound quite dry by default. We recommend adding an external reverb to make sure you get exactly the effect you want for the specific project you are working on.

There are some built-in reverbs in GarageBand you can play with without needing to buy any additional plug-ins. In the example below, the reverb is added on the Master track to create a more realistic effect of a string quartet playing in a medium-sized hall.

Using the built-in Space Designer reverb on the Master track.

Bonus — More useful parameters to play with

SWAM plug-ins are built to give you control over every single aspect of the instrument’s sound. Depending on the specific needs of your composition, you might want to go beyond simply drawing the Expression curve.

Here are a few common parameters you might want to tweak:

  • Bow Pressure: Does your composition include pianissimo passages that would require a real musician to apply less than the normal amount of bow pressure? Or fortissimo passages that would require much more pressure than average? Adjusting the Bow Pressure parameter allows you to get those effects.
  • Bow Polyphony: By default, SWAM Solo Strings are monophonic, but if parts of your composition include double stops, you can draw the Bow Polyphony automation in GarageBand to specify when the instrument is allowed to play on two strings simultaneously. It’s best to enable this only in the sections where it’s required; otherwise, you might not get the exact effect you’re looking for where the instrument is supposed to play on a single string.
  • Vibrato Depth: This controls how much vibrato we hear. It can be useful if you have sections you’d like played without any vibrato at all, for example to achieve a certain effect. You can draw the automation for the vibrato depth at your convenience.
  • Vibrato Rate: You don’t need a single vibrato speed throughout your piece. You can draw automation to change the vibrato speed gradually on long sustained notes, for example, or simply adjust the default vibrato speed to your liking.

Express your creativity easily with SWAM instruments

Creating mockups of your pieces to pass on to musicians is the quickest and most efficient way to make sure your ideas come across correctly. Besides the SWAM instruments themselves, all the software used in this tutorial is free. And if you don’t own any SWAM Solo Strings yet but are unsure about making the investment, contact us at support@audiomodeling.com to request a Trial license! Let us know which products you’d like to try and we will provide you with a temporary license, no strings attached (pun intended!).

We also have a supportive SWAM community on Facebook where you can ask questions, find support if you run into issues, and get inspired by seeing what other musicians, composers, and producers are doing with SWAM! The possibilities are endless and we are excited to see (and hear) the incredible music you’ll create with our products.

Don’t forget to share the work you’re proud of! Share it on your social networks, tag us, and we’ll happily share your post with our audience.

Now go make some music!


Can Software Instruments Express the Extent of Your Musicality?

To become a great musician, it takes more than fantastic technique that lets you play a gazillion notes per minute. Of course, some level of technique is required to have the freedom to express a variety of emotions on the instrument, but the key words are these: expressing emotions.

There’s a magical element in good music playing, something that’s difficult to pinpoint and explain in words but that we can all recognize when we hear it. 

What does being musical mean?

In college, it seemed music students could be separated into two categories: the technical ones and the musical ones. Teachers sitting in to judge students’ performance exams would say things like “You are very musical, but you should work on your technique” or “What great technique you have, but your musicality doesn’t really come through”.

It seems like musicality is the magical ingredient that makes the listener experience something from the way a musician plays, but what is musicality exactly? We could say musicality is the ability to express something beyond simply reproducing a series of notes, rhythms, and dynamics written down by a composer, or improvising complex solos. 

We can go even further by saying that musicality is the element that makes someone’s playing unique, that gives a musician an authentic sound or style that we, as listeners, can recognize. Jazz musicians like Chick Corea or Victor Wooten have an incredible amount of technique, of course, but more than that, they have a particular sound that makes their playing recognizable anywhere.

In the professional world, musicality is an important part of becoming a successful musician, whether in the studio or on stage. Producers and composers hiring session musicians don’t just want someone who can play something technically challenging on the fly; they also look for players who can make their ideas come alive, musicians who can infuse their music with precisely the thing computers can’t: emotions.

The Limitations of Hardware and Software Instruments

But in this day and age of technology, musicians are faced with an additional challenge. We don’t always play acoustic instruments anymore. 

Keyboard players, drummers, and all other kinds of instrumentalists now play more and more on electric instruments or MIDI controllers with sounds selected on their computer. In some cases, it can be frustrating to find ways to express the full range of our musicality while using digital sounds; it never quite feels the same as playing a real acoustic instrument.

So is there a solution or are musicians using technology bound to play music without the same level of musicality as acoustic music? Is it necessary to have the budget to hire a full orchestra if we want our orchestral piece to be truly expressive?

Expressive and realistic VST plug-ins and software instruments

Digital audio has come a long way in the last decade, and the good news is that we are no longer limited to traditional technologies. With the development of physically modeled instruments, musicians, composers, and producers now have access to a variety of virtual instruments and VST plug-ins with a level of realism unmatched by sample libraries.

Physical modeling refers to sound synthesis methods in which the waveform of the sound we want to generate is computed using a mathematical model, a set of equations, and algorithms to simulate a physical source of sound like a musical instrument. 

Unlike sampling, which uses recordings to replicate the sound of an instrument, physical modeling doesn’t generate a sound as such but creates the conditions for the system to create it according to an external input (like someone playing on a keyboard).
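SWAM's models are proprietary, but the general idea can be illustrated with the classic Karplus-Strong plucked-string algorithm, one of the simplest physical models. Nothing below is SWAM code; it only shows a waveform being computed from a model of a vibrating string rather than played back from a recording:

```python
import random
from collections import deque

def karplus_strong(frequency, sample_rate=44100, duration=0.5):
    """Minimal Karplus-Strong plucked-string model.

    A delay line seeded with noise (the 'pluck') is repeatedly averaged,
    a simple low-pass filter that mimics the string losing energy, so the
    waveform is computed on the fly instead of being read from a sample.
    """
    period = int(sample_rate / frequency)  # delay length = one period of the pitch
    string = deque(random.uniform(-1.0, 1.0) for _ in range(period))
    samples = []
    for _ in range(int(sample_rate * duration)):
        samples.append(string[0])
        # Average two adjacent samples and damp slightly: high frequencies
        # die out first, just as they do on a real vibrating string.
        new_sample = 0.996 * 0.5 * (string[0] + string[1])
        string.popleft()
        string.append(new_sample)
    return samples

note = karplus_strong(220.0, duration=0.1)  # 0.1 s of a plucked A3
```

Because the "input" here is just the initial pluck, every rendered note is slightly different, which hints at why modeled instruments respond to a player in a way fixed recordings cannot.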

Realism is one thing, and definitely something important. But what about musicality?

The SWAM engine was created precisely to address the need for not only realistic software instruments and VST plug-ins but also expressive ones.

SWAM (Synchronous Waves Acoustic Modeling), developed by Stefano Lucato and Emanuele Parravicini, is a proprietary technology based on physical modeling. It models the mechanical system of an acoustic instrument in real time, giving it that sought-after realism, and it also adds elements that model the unconscious behaviors of a real professional musician.

If we think about it, musicality is not something applied consciously. It’s a musician’s way of feeling the music that influences the way they touch the keys, breathe in their instrument, pluck the strings, or move the bow. The emotions felt are conscious but the way these emotions are communicated through the instrument is not. 

In this sense, what the SWAM engine accomplishes is spectacular because it can perceive and model all these unconscious behaviors that professional musicians perform at their instrument when they are expressing the full range of their musicality.

Expressing Yourself From the Studio to the Stage

One of the main issues with traditional sample libraries is their size. Since the programs include a ton of files and recordings (samples), they are usually very heavy. This makes them complicated to use in a live setting because they normally take a long time to load and they are not very practical to use on a portable system with limited storage space. 

But since physically modeled instruments like the SWAM instruments don’t need to store any samples, they are extremely lightweight. Not only can they run on regular laptops; some SWAM instruments (and soon all of them) are also available for iPad.

And these software instruments are made to be played in real time. They come in different formats, including a standalone version and VST, VST3, AAX, and Audio Unit plug-ins. For live performances, they can be loaded inside a VST host such as Camelot Pro.

Playing these software instruments live means you can extend your musical expression beyond the one instrument you already master: you can use your skills to play any acoustic instrument you like. Use your keyboard technique to express yourself accurately with a clarinet or saxophone sound, or use your breath controller to play an emotional violin solo. The sky is the limit.

Computers Are Not Replacing Musicians

When digital audio came around with the invention of MIDI, artists and musicians were afraid that computers would replace them, that soon we wouldn’t need musicians to play the music anymore, and that the job of a musician would become obsolete. But decades later, we see how false this idea was.

No matter how great the technology, we still need musicians to play the music. The only difference is that the software and virtual instruments we now have to express ourselves are better.

Now we can take the skills we have on one instrument and apply them to another. Beyond recording and performing, this has a variety of applications: it can be used to learn new instruments, orchestration, and music theory, and to teach these things as well.

Nothing can replace the power of human emotions. Audio Modeling’s mission is simply to create the tools allowing the full range of these emotions to shine.


Audio Modeling – CAMELOT 2.0 a “Must Have” for Live performance

On stage or at home (Home Studio), many musicians wonder how to manage Keyboards, Sound Modules (hardware and software), MIDI Controllers…

Until now, musicians using Apple computers have had MainStage, which not only allows you to manage all these peripherals but also embeds the sound modules delivered with Logic Pro X. This is great, and it comes at a very affordable price considering the product’s possibilities.

Yes, but what about PC users, I hear you say? Other applications have been developed, more or less well known and more or less widely distributed, but they’ve remained in the shadow of MainStage, which is widely used by professionals all around the world.

So the Italian company Audio Modeling (which also develops a series of virtual instruments using its SWAM engine) has made a piece of software that seems to address the desire of a growing number of musicians looking to integrate various hardware components (synthesizers, sound modules, controller keyboards, MIDI controllers…) with software applications (virtual synthesizers, virtual effects…). This software is CAMELOT Pro 2.0.

I don’t know if the name chosen by Audio Modeling refers to the Arthurian legend, but it could be that Audio Modeling is looking for the holy grail for all musicians on the planet.

I admit that before CAMELOT was released, I was using Apple's MainStage. But when CAMELOT version 2.0 came out, I downloaded the trial version, started using it, and discovered not only a very simple and intuitive piece of software but also a very powerful tool.

At first glance, I wasn’t particularly thrilled by the looks of the graphical interface. Compared to other current software, we could say CAMELOT is minimalist (graphically speaking; this is a personal opinion). But behind this misleading first impression, CAMELOT is much more than it appears. In fact, as soon as you scratch the surface, you discover a monster of power and ergonomics. Everything in this software was designed precisely to let users perform complex tasks with disconcerting simplicity (to be honest, I only opened CAMELOT’s online “Quick Guide” to dig into the more sophisticated features after using it for a week, only to find out they were much easier to implement than I thought).

Let’s see what CAMELOT is capable of:

First, CAMELOT is available on Windows, macOS, and iOS (this already differentiates it from MainStage, which is only for macOS users).

CAMELOT is mainly used to create routines that keep musicians focused on what they love to do (music) while CAMELOT takes care of all the grunt work.

The Quick User Guide on the Audio Modeling site is very well done. I won’t go over all the features in detail, but in future articles I will try to cover, as much as possible, the features most suitable for wind controller users.

So, what do we need to make music on stage?
First, you need something to generate musical notes (a keyboard, a workstation, a wind controller, a MIDI controller…), one or more sound modules (synthesizer, expander, hardware or software), and a PA system (or equipment that lets you connect to such a system: a mixing console, amplified speakers, an audio interface, etc.).

During a concert or stage performance, you’ll want to play a set of pieces one after the other in a pre-established order, but you also need the ability to easily modify this order at any time. This is called a Setlist in CAMELOT. Depending on the nature of your concerts, you can create several Setlists (in the example below, “Show 2019”, “Work In Progress”, and “Show 2020 Part 1” are Setlists, and the song titles each one contains are displayed on the right side of the screen). The pieces that make up a Setlist can be reordered at any time by simply clicking and dragging (hard to make it any simpler).

For each of the pieces you are going to play, you’ll need one or more controllers (master keyboard, wind controller, MIDI pedal, etc.) to control and play one or more sounds (more or less complex, with effects, etc.), as well as sheet music and, if you play alone, possibly one or more backing tracks. CAMELOT acts as your conductor by proxy, organizing all these elements in a simple way.

First, tell the program which hardware you want to use. To do this, go to “System Settings”.

With these parameters, you can tell CAMELOT which equipment you’re using. (As a first step, I advise you to configure at least the Audio and MIDI sections to establish the connections with your equipment, and then to try a simple configuration to familiarize yourself with the different tools.)

  • Audio: This allows you to configure your audio card, select its ASIO driver to limit latency, and define the audio outputs you want to use.
  • MIDI: This allows you to define which MIDI inputs and outputs you’ll use. For “Class Compliant” equipment (devices you connect directly to your computer via USB and which are recognized without installing a specific driver), the name will be displayed directly in the list. If, on the other hand, your equipment is connected via DIN MIDI, you’ll need to select the MIDI port it is connected to.
  • Remote Control: This allows you to define which component to use for switching from one song to another, starting playback, etc. (I’ll come back to these features later).
  • Plugin Manager: This allows you to scan folders on your computer where your virtual synths, virtual
    effects, MIDI processors, etc. are installed, and to manage them efficiently.
  • Backup & Snapshots: This is where you can export or import your backups, as well as create and
    restore your Snapshots.
  • Accounts & License: In this section, you can visit the Audio Modeling site to download the new
    versions of CAMELOT or, if you are using the evaluation version, convert it to a full version.
  • Options: Various options which we will come back to later.
  • About: Gives you information such as which version of CAMELOT you are currently using.

Note: When a red exclamation mark (!) appears, it means CAMELOT has detected a problem in one of its sections. It’s advisable to go and see what the problem is (e.g. a missing MIDI connection).

Then everything is simple. On almost all of CAMELOT’s pages, when you see a dotted rectangle with a “+” sign in the center, you can create something new (e.g. on the first page you can create a new Setlist, on the second page a new Song, in each song new Scenes, and in each scene you can install new sound modules, Effects, Layers, etc.). A window will appear where you can name the object and save it by clicking “Done”.

To be followed … closely.

Other important information: Audio Modeling is also bringing their SWAM instruments, built on the SWAM engine, to iOS. The CAMELOT / SWAM combination on iOS should give musicians playing wind controllers a light and efficient alternative to racks loaded with sound modules that become harder to move around over time. However, since these instruments will be priced more economically, the iOS versions will be limited (fewer adjustments possible). If you have the full version on Mac or PC, though, you will be able to import presets, including all the settings of the Mac or PC version, into the iOS version in order to get the complete settings.

Freely translated from SaxFred Blog (in French)

Audio Modeling – CAMELOT 2, a “must have” for live performance

Audio Modeling – Camelot 2.0 – Application for “LIVE”

Camelot, by Audio Modeling, is the direct competitor of Apple’s MainStage. The two do not yet provide exactly the same functionality, but Camelot has caught up with, and even overtaken, its competitor in many areas. Above all, it has the big advantage of being multi-platform (Windows, macOS, and iOS), which lets you create highly complex configurations and, most importantly, lets musicians on PC benefit from the same environment as on Mac and iOS.

For a few weeks now I have been using this program, for several reasons: on the one hand, I use the Mac as well as the PC and iOS for music; above all, Camelot is much easier to use than its direct competitor, much more intuitive and yet extremely powerful. In Camelot you get the impression that everything has been designed for efficiency: no fancy graphics, just the essentials needed to achieve the desired result, to make music without having to be a computer scientist. No frills, just simplicity; once you understand Camelot’s logic, everything is extremely easy to implement.

Camelot allows incredible sound combinations, mixing synthesizers, hardware and software effects, and MIDI processors, with seamless transitions between sounds within a song. You can even automate these transitions with the help of an audio (playback) track, as well as display the score (sound changes happen without you having to take your eyes off the score). Even the chaining of songs can be done automatically. Likewise, if you play a transposing instrument (e.g. Eb alto saxophone, Bb clarinet) and your score is written in the key of your instrument, Camelot can transpose the MIDI part coming from your wind controller. You no longer have to worry about changing the transposition on your MIDI controller between songs; Camelot takes care of it for you.
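To illustrate what such a transposition does under the hood, here is a minimal sketch in generic MIDI arithmetic (not Camelot’s actual code; the offsets are the standard concert-pitch transpositions for these instruments):

```python
def transpose_for_written_pitch(midi_note, instrument="alto_sax_eb"):
    """Shift a written (score) MIDI note to the concert pitch a transposing
    instrument actually sounds. Offsets are standard transpositions."""
    offsets = {
        "alto_sax_eb": -9,   # Eb alto sax sounds a major sixth below written
        "clarinet_bb": -2,   # Bb clarinet sounds a major second below written
    }
    return midi_note + offsets[instrument]

# A written A4 (MIDI 69) on alto sax sounds as C4 (MIDI 60)
print(transpose_for_written_pitch(69, "alto_sax_eb"))
```

Camelot applies this kind of offset per song automatically, which is exactly why you no longer need to reconfigure the wind controller between tunes.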

I advise you to refer to the article on Camelot that I did in the Blog section for additional information.

Freely translated from SaxFred Applications’ section (in French)
https://saxfred.1ere-page.fr/applications-mac/


Get Ready For Your Next Show With Camelot Pro

Right time to look for new solutions to make your live performance great

As musicians and performers, we all took a severe blow during 2020. No more live shows, no more masterclasses in person, and barely any work in the studio. Many of us had to reinvent ourselves to be able to survive. Many musicians, composers, and producers took this isolation time as an opportunity to learn and experiment with new instruments and new hardware or software tools.

Now, we are starting to see the light at the end of the tunnel. In some places, life is getting back to normal—events and gigs are starting again which means that soon, you’ll be able to walk up on stage once more. But while you are still waiting, why not use this last stretch to learn how to use an application that could dramatically help you improve your live performances?

More Than a VST Host, Camelot Pro Is a Musician’s Swiss Army Knife

What if you could control all your MIDI controllers, keyboards, and pedals all in one centralized application? What if this app didn’t only allow you to control all your devices on stage but also provided you with a music score reader, a MIDI patchbay router, and a setlist manager? 

Camelot Pro is the application every musician with a complex live setup dreams of. The idea behind Camelot is to make the lives of every stage musician easier and more convenient so that their time is better spent focusing on the music, rather than focusing on technical details. 

Camelot provides one central place to manage every aspect of your live performances:

    • Manage your setlists and songs. Keep all your songs organized by events and projects. Share songs and setlists with bandmates and across devices.

    • Attach audio backing tracks. With Camelot’s Timeline feature, you can set backing tracks and automate transitions between sound, MIDI, and attachment configurations simultaneously.

    • Add auto-play and auto-stop markers, and use the separate output option to send the click to your bandmate’s in-ear monitors, separated from the main backing track and the band mix.

    • Manage all your hardware instruments in one place (sounds, presets, transitions, everything).

    • Use it as a host for your software instruments and FX.

    • Read music with the integrated score reader. Attach PDF or JPG music scores or chord charts and add your own sticky notes on top of them.

    • Use it as a MIDI patchbay router for advanced channel routing, MIDI layer connections, MIDI monitoring, Key-Range and Velocity splits, and MIDI FX plug-in support.

    • Switch seamlessly between sections of a song without Audio or MIDI interruptions.

    • Available on iPad, macOS, and Windows.
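To make the Key-Range and Velocity split idea concrete, here is a minimal sketch of the concept (the layer names and split points are invented for illustration; in Camelot you configure this graphically rather than in code):

```python
def route_note(note, velocity):
    """Route an incoming MIDI note to sound layers by key range and velocity.

    Hypothetical layout: piano on the lower half of the keyboard, strings on
    the upper half, with a brass layer added only on hard hits.
    """
    layers = []
    if note <= 59:                 # key-range split at B3
        layers.append("piano")
    else:
        layers.append("strings")
    if velocity >= 100:            # velocity split: hard hits add brass
        layers.append("brass")
    return layers

print(route_note(48, 64))    # soft note in the lower range: piano only
print(route_note(72, 120))   # hard note in the upper range: strings + brass
```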

Camelot Pro is great for any type of instrumentalist, not only keyboard players who have multiple keyboards on stage! 

Here are a few use cases for different types of musicians.

The One-Man-Band

John is a singer and guitar player. He plays his own compositions alone on stage with the support of backing tracks. 

John has Camelot Pro installed on his iPad and uses it to manage all the moving parts of his live performance.

  • All his backing tracks are loaded in Camelot and organized with automated start and stop markers. His songs are then neatly organized in a setlist.
  • His pedalboard is connected through USB to his iPad and he can use it with Camelot Pro to switch between patches and snapshots.

For a more in-depth explanation on how to accomplish this, check out this video by Nick Trapassi.

Drummer Managing Complex Tempo Changes and Sound Settings

Dave is a drummer who plays with different bands. In one of these bands, he needs to manage the show’s backing tracks, while using dedicated click tracks for each song and sending the click to another auxiliary channel so that his bandmates can hear it in their in-ear monitors.

With Camelot Pro, Dave can load all the backing tracks and all the click tracks on his laptop, organize the setlist in Camelot, automate the click track and backing track changes, and easily make last minute changes to the setlist with a simple drag & drop gesture. 

Try It Before Buying

Learning new tools can be intimidating, especially if you have little experience with this kind of software. That’s why we think it’s important that you get the chance to try Camelot before buying it.

A free version of Camelot is available on our website for download. Simply fill in the form with your details and we will send you a link to download the Camelot Free version installer! 

Camelot Free has nearly all the features of the full Camelot Pro version because we want you to experience the power of this software before committing to buying it. You can see a comparison of the features between the free and paid versions here.

That’s how sure we are that you’ll love it.

What are you waiting for? Have a go with Camelot Free by signing up here and let us know your thoughts! Join our Facebook group to exchange impressions and ideas, or to receive help and feedback from our team and our community. 


We Are the Future of Digital Music Making

It seems like only a few decades ago that the thought of replacing real instruments with virtual ones was laughable. We never expected sampling technologies to come as far as they have, and yet, it happened.

As computers become more powerful every day, we are no longer satisfied with what traditional technologies can give us; we want more. More expressivity, more realism, more flexibility, and the ability to play virtual instruments live as well as program them in the studio. The “close enough” realism that sampling gave us is now outdated.

Time to innovate.

What Makes SWAM Instruments and VST Plug-Ins So Special

Physical modeling refers to sound synthesis methods in which the waveform of the sound is computed using a mathematical model, a set of equations, and algorithms to simulate a physical source of sound like a musical instrument. 

Unlike sampling, which uses recordings to replicate the sound of an instrument, physical modeling doesn’t generate a sound as such but creates the conditions for the system to create it according to an external input (like someone playing on a keyboard). Modeled virtual instruments can reproduce the feel and behaviour of the instrument, reacting and interacting with the player the same way a real instrument does.
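Physical modeling is easiest to grasp with a toy example. The classic Karplus-Strong algorithm, a simple textbook method far less sophisticated than the SWAM engine, models a plucked string as a noise-filled delay line whose samples are repeatedly averaged, mimicking the energy loss of a vibrating string:

```python
import random

def karplus_strong(frequency, duration, sample_rate=44100, damping=0.996):
    """Synthesize a plucked-string tone by physical modeling (Karplus-Strong).

    A delay line the length of one period is filled with noise (the "pluck");
    repeatedly averaging adjacent samples acts as a low-pass filter that
    simulates energy loss in the string, so the tone decays naturally.
    """
    period = int(sample_rate / frequency)       # delay-line length in samples
    buffer = [random.uniform(-1.0, 1.0) for _ in range(period)]
    samples = []
    for i in range(int(duration * sample_rate)):
        current = buffer[i % period]
        nxt = buffer[(i + 1) % period]
        samples.append(current)
        buffer[i % period] = damping * 0.5 * (current + nxt)
    return samples

tone = karplus_strong(440.0, 1.0)   # one second of an A4 "string"
```

Played back at the given sample rate, the result is a surprisingly string-like decaying tone. Richer models add excitation sources such as a bow or breath, body resonances, and player behavior, which is where engines like SWAM go much further.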

Audio Modeling has always been at the cutting edge of technology for creating the most realistic virtual instruments and VST plug-ins on the market. To achieve this, we developed a proprietary technology called SWAM (Synchronous Waves Acoustic Modeling) based on physical modeling.

The SWAM engine combines pure physical modeling synthesis methods with some additional elements allowing us to go much further in terms of flexibility and realism. The topic is huge, but in a nutshell: not only do we model the mechanical system of an instrument, we also model the unconscious behaviors of a real professional musician.

This is why musicians and producers are able to inject into SWAM instruments their own style of playing. This kind of realism and expressivity is impossible to achieve with sampling.

We Aim to Provide a Complete Ecosystem to Musicians, Producers, and Composers

Our mission is not only to create single products, but to create a full musical ecosystem to provide musicians, composers, and producers with a complete toolset to create incredible music. 

  • Each SWAM instrument is available as a standalone virtual instrument and in VST, VST3, AAX and Audio Unit plug-in formats.
  • All our instruments are available for macOS and Windows, with some also available on iPad (soon, all our instruments will be available for iPad and iPhone as well).
  • We have a range of SWAM-powered products, previously with ROLI Noise and more recently with GeoShred. GeoSWAM instruments are optimized for playing on GeoShred’s unique playing surface and on MPE controllers.

We developed our very own performance management platform, Camelot Pro. We call Camelot Pro the “Live Performance Swiss Army Knife” because it connects hardware and software instruments together with the best MIDI patchbay and processor ever created for macOS, Windows and iOS. Besides that, it also has a ton of features making it essential for any live performer.

Reaching a New Milestone With the Release of SWAM Solo Strings v3

SWAM Solo Strings v2 were already considered the best physically modeled string instruments and VST plug-ins on the market today. Yet, we knew we still had the capacity to improve. 

In the last week of March 2021, we officially released SWAM Solo Strings v3 for macOS and Windows. Now, SWAM Solo Strings v3 is available on iPad as well. This new release comes with major changes and improvements to the previous version.

In particular, the new SWAM framework and interface have some interesting features:

  • A wide range of popular controller profiles. Making user-friendly products is extremely important to us, and as much as we can, we are always aiming for Plug & Play solutions. That’s why we included configurations for a wide range of popular controllers, including keyboards, breath controllers, wind controllers, MPE, and more (see our YouTube video: https://youtu.be/jT3H8LFluL8).

List of built-in controller profiles

  • We also provide the option to go much deeper. The MIDI Mapping engine allows you to create custom configurations for your specific controller, use MIDI Learn, and remap curves to adjust the response and playability of the instrument according to your needs and style.

The MIDI Controller Mapping Interface

Remapping curves

Four types of messages are supported: AfterTouch, CC, CC Hi-Res and NRPN.

List of MIDI mapping parameters
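For readers unfamiliar with the last two message types, here is a rough sketch (illustrative only, not SWAM’s actual code) of how a 14-bit “CC Hi-Res” value and an NRPN parameter change are encoded as standard MIDI Control Change byte sequences:

```python
def cc_hires(channel, cc_msb, value14):
    """Encode a 14-bit control value as a pair of CC messages.

    By MIDI convention, CCs 0-31 carry the MSB and CCs 32-63 the matching LSB.
    """
    status = 0xB0 | (channel & 0x0F)            # Control Change status byte
    msb, lsb = (value14 >> 7) & 0x7F, value14 & 0x7F
    return [(status, cc_msb, msb), (status, cc_msb + 32, lsb)]

def nrpn(channel, param14, value14):
    """Encode an NRPN change: select the parameter with CC 99/98, then send
    the 14-bit value with Data Entry MSB/LSB (CC 6/38)."""
    status = 0xB0 | (channel & 0x0F)
    return [
        (status, 99, (param14 >> 7) & 0x7F),    # NRPN parameter MSB
        (status, 98, param14 & 0x7F),           # NRPN parameter LSB
        (status, 6,  (value14 >> 7) & 0x7F),    # Data Entry MSB
        (status, 38, value14 & 0x7F),           # Data Entry LSB
    ]

print(cc_hires(0, 11, 10000))   # high-resolution expression (CC 11 + CC 43)
```

The practical benefit is resolution: a plain CC gives 128 steps, while CC Hi-Res and NRPN give 16,384, which matters for smooth dynamics and vibrato control.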

We implemented microtuning and integrated it with MAQAM and SysEx.

Pitch control and micro tuning window
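As background on the arithmetic behind microtuning (the general 12-tone equal temperament formula, not a description of SWAM’s internals): offsetting a note by a number of cents scales its frequency by 2^(cents/1200):

```python
def midi_to_freq(note, cents_offset=0.0, a4=440.0):
    """Frequency of a MIDI note in 12-TET, shifted by a microtuning offset
    in cents (100 cents = one semitone, 1200 cents = one octave)."""
    return a4 * 2.0 ** ((note - 69) / 12.0 + cents_offset / 1200.0)

print(round(midi_to_freq(69), 2))         # A4 at concert pitch
print(round(midi_to_freq(69, 50.0), 2))   # A4 raised by a quarter tone
```

Per-note offsets like this are what make non-Western scale systems such as maqam playable on a standard MIDI controller.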

SWAM Solo Strings for iPad

  • The iPad version has an innovative control surface that allows you to play the instrument in real-time directly from your device
  • You can also read and sync presets from your Desktop to your iPad since the instruments are built using the same SWAM engine.

The only difference with the iOS version is that some advanced editing parameters are locked on iPad. 

According to Your Feedback, We Are Doing It Right

Since the release of SWAM Solo Strings v3, our inbox has been flooded with positive feedback. This means a lot to us since we are working tirelessly to constantly improve our technology. Here are just a few examples of what some of you had to say. 

“Thank you very much, I already bought an upgrade from Solo Strings v2 to v3. I use all of your physically modeled plug-ins: Flutes, Trumpets, Solo Horns, Violins etc. Your physically modeled software sounds absolutely fabulous! About 5 years ago I switched to physical software whenever possible. A big compliment for your enormous and fantastic work!” – Andreas

* * *

“Just want to say how much I love your instruments. I’ve worked with sample libraries a lot and the modeled instruments just allow more human feel by the person playing it, not by the person who played it. It has been the most inspiring software I’ve used in a very long time. Instead of doing the same thing differently, these instruments are truly something new for me.” – William

* * *

“I just had an hour to test two of the plug-ins and while I knew from your videos that they would all sound very good, I didn’t know how incredible they would *feel* when actively played! This took me quite by surprise.” – Karl

* * *

Our technology might hold the future of digital music making, but without our precious community of musicians, composers, and producers our work would be meaningless. Thank you for your meaningful feedback and input; you help us improve our products and technology every day.

We hope to become an essential part of your musical journey.