
Showcasing an Unusual Way to Play With SWAM

When we created the SWAM engine, we wanted to provide musicians, music producers, and composers with the most expressive instruments on the market. Our customers normally play the instruments directly, whether in the studio or live, just as they would a traditional acoustic instrument. But once in a while, an artist surprises us with an interesting use case for our products.

We never imagined that someone might use code to have their computer play a SWAM instrument for them. Yet that is exactly the experiment attempted by Mynah Marie, a.k.a. Earth to Abigail.

Using code as a performance tool to create music in real-time

In the last decade, a new underground trend has emerged in the electronic music scene. Live coding is a form of performance where artists use a computer programming language to create music in front of an audience in real-time. The idea is to show the code on a screen so that everyone can witness it develop as the artist types it to create the music. Live coding demystifies programming by allowing the audience to see inside the programmer’s mind through their screen while emphasizing the expressive and artistic powers of code.

Mynah Marie is a live coder who goes by the name Earth to Abigail. Her live coding environment of choice is Sonic Pi, open-source software developed by Dr. Sam Aaron in the UK.

Before becoming Earth to Abigail, Mynah worked as an accordionist and singer with bands and artists from around the world, such as Soan (France), Casa Verde Colectivo (Mexico), Din Din Aviv (Israel), Pritam, and Arijit Singh (India).

Five years ago, she discovered computer programming. From that moment on, she’s been looking for ways to mix her passion for code with her music knowledge. Discovering live coding and Sonic Pi is what inspired her to create Earth to Abigail.

“I’m interested in using a medium that most people consider ‘cold’ or ‘emotionless’ such as programming to create something full of emotions and expressivity, in my case music. I’m a bit obsessed with this alignment between rational analysis and emotions. To me, live coding is this perfect junction between the logical mind of science and the beautiful expressivity of art.”

“While many live coders come primarily from the programming or engineering side, I come as a musician first and I think this influences the way I use code to express myself.”

Using SWAM in an unusual way

“When I discovered SWAM instruments, I just knew I had to see what I could do with them through live coding. The question I’m trying to answer is ‘Can I find underlying algorithms that would allow a SWAM instrument to improvise in a human way?’ Basically, I don’t want to play the instrument myself, I want to see if I can instruct my computer to improvise the way a human would. Because SWAM instruments are so expressive and realistic, I thought it would make for an interesting experiment.”

Earth to Abigail performed with SWAM for the first time at an online event on May 12, 2021. While the experiment is still in its infancy, it is fascinating to see SWAM used in a way we had never thought of, and with such interesting questions in mind.

“I’ve barely started to scratch the surface of the possibilities with SWAM. This first performance for me was just to see if I could realistically change the expression by coding MIDI messages in a way that sounds more or less realistic. There are still so many parameters to explore! Right now, I’m basically using only Sonic Pi’s built-in randomness functions to add some movement in the expression but eventually, I intend to develop my own set of functions and methods to allow me to ‘code improvisations’ that are complex but human-sounding.”
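
To make the quote concrete, here is a rough sketch of the idea in Python rather than in Sonic Pi’s Ruby-based language: it plays notes on a SWAM instrument over MIDI while a small random walk on CC 11 (a common mapping for expression) keeps the dynamics moving. The mido library, the port name “SWAM”, and the CC mapping are illustrative assumptions on our part, not details of Earth to Abigail’s actual setup.

    import random
    import time

    import mido  # assumes the python-rtmidi backend is installed

    port = mido.open_output("SWAM")   # hypothetical virtual MIDI port name
    scale = [60, 62, 63, 65, 67, 70]  # a C-minor-flavored pitch pool

    expression = 80
    for _ in range(16):
        note = random.choice(scale)
        port.send(mido.Message("note_on", note=note, velocity=90))
        # Drift the expression value with a small random walk, the same
        # trick as Sonic Pi's built-in randomness functions, so the
        # sustained tone never sounds static.
        for _ in range(8):
            expression = max(20, min(127, expression + random.randint(-6, 6)))
            port.send(mido.Message("control_change", control=11, value=expression))
            time.sleep(0.05)
        port.send(mido.Message("note_off", note=note))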

It will be interesting for us to see how this experiment develops in the future.

You can view Earth to Abigail’s performance here.


The Making of a Great UX/UI Music Software Designer

Innovation doesn’t simply manifest out of thin air. Though some of the most innovative ideas may seem to spring from a eureka moment, many of them arise from carrying the knowledge of one industry into another. Time and again, brilliant inventions have been created by someone casting new light on a concept that already existed.

This is exactly the kind of mindset that led Simone Capitani, UX/UI designer, to create Camelot Pro. “I wasn’t born a musical instrument designer; I started out designing business software, websites, etc.”

From business apps to audio software

From an early age, Simone was fascinated by technology and was also a fervent audiophile.

“Moved by my passion for technology, I decided to study electronic engineering and apply that knowledge to music. After completing my engineering degree, I tried to apply to some audio software companies, but I had no connections in the industry so my CVs were getting lost in all the applications these companies were receiving.”

“For a while, I gave up thinking it was impossible to enter such a small industry. So I started to work as a UX designer in a business software company.”

One day, while still working at that company, Simone bought a flagship synthesizer that, to his eye, had a terrible UI. That’s when he realized there was a huge gap between everyday digital products tailored to the average consumer and the more specialized software of the digital audio niche.

“It was a great product, but the UI required a 300-page manual. You needed to go to forums to learn how it worked.”

From then on, Simone dreamed of creating an audio application so well designed and intuitive that reading a manual to understand it would be unnecessary.

A few years later, the iPad came out. The concept of a touch surface was new and exciting, so Simone started designing music apps for iOS in his free time. One day, he stumbled upon an app he thought he could improve. He wrote to the developer, shared some of his ideas, and they began developing the app together.

This took Simone all the way to the MUSIKMESSE in Frankfurt, where he met Jordan Rudess, the legendary keyboardist of Dream Theater. Jordan had a company building music software for iOS and was one of the pioneers interested in developing applications for touch surfaces. By collaborating on various projects in Jordan’s company, Simone got his foot in the door of the music software industry.

In 2015, after an introduction from Jordan, ROLI offered Simone a job in London. “I started to work on the ROLI Seaboard project, and I worked on Equator, the main synth capable of managing the multidimensional interface of the ROLI Seaboard.”

After that, offers started rolling in, and many companies tried to entice him to join their ranks. But in 2016, Simone met Emanuele Parravicini and Stefano Lucato. They spoke and exchanged ideas. The company’s vision fit the direction Simone wanted to take and, more importantly, there was room to work on Simone’s original dream application.

They soon joined forces. The company FATAR helped them bring the vision to life by providing financial backing as well as connections in the industry to put a team together, start the development process, and launch the project. In 2018, the first version of Camelot Pro was released to the public.

Don’t wait for opportunities, make them happen

If there’s one takeaway from Simone’s story, it’s that you have to go out and meet your dreams halfway. Among artists especially, there’s a long-standing myth that you need to “be discovered” to achieve success. In reality, the key to success is daring to talk to people and share your ideas, while having enough creativity to apply the knowledge accumulated in one field to a completely different industry.

“When I was building my network, I started by having conversations with almost everyone. Out of a thousand people I talked to, maybe ten would lead me to more serious connections that could help me take a step toward my goal.”

Perseverance, combined with a bold openness to meeting people, led him to create the holy grail for live music performers, Camelot Pro. Time will tell what new heights await Simone Capitani next.


The System Used to Develop Incredible Audio Software Revealed

Audio Modeling aims to create a complete ecosystem of virtual instruments and audio software for live and studio musicians, music producers, and composers.

But developing software simultaneously in several plug-in formats (VST, VST3, AAX, AU, standalone) across multiple platforms (Windows, macOS, iOS) comes with its fair share of challenges:

  • Each platform and plug-in format has its own set of specifications that need to be carefully taken into account during development.
  • Keeping a fast release pace for bug fixes and for new, improved versions of existing products is difficult while operating with a small team of developers.
  • Finding enough time and energy to stay in touch with the community, so we are constantly aware of the new needs emerging among musicians, composers, and producers in the music industry.

A Winning Combination — JUCE and Continuous Integration

JUCE is a development framework that specializes in letting developers create a variety of cross-platform audio applications, from DAWs to plug-ins.

Projucer is the project-management environment that makes working with JUCE intuitive and easy. It provides everything developers need, from a code editor to a GUI for configuring environment variables and the platform-specific settings required to build cross-platform audio applications from a single codebase.

Continuous Integration (CI), as practiced by Audio Modeling, is a software development methodology in which builds are triggered automatically whenever our development team commits code through version control.

Every time one of our developers commits code to a dedicated repository, a series of automated scripts is triggered: the system first creates a new, clean virtual container, then compiles the code and builds the application inside it.

If the latest committed code breaks the build in any way, the system reports it. This ensures that bugs and errors are detected quickly, and it makes identifying the source of an issue much more efficient.
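
As a toy illustration of that loop (this is not Audio Modeling’s actual pipeline, and the build commands are placeholders), the core of such a CI job can be reduced to a script that runs each build step on the committed code and reports the first step that fails:

    import subprocess
    import sys

    def run_build(commit_sha: str) -> bool:
        """Check out a commit, run the build steps, and report the first failure."""
        steps = [
            ["git", "checkout", commit_sha],
            ["cmake", "-B", "build", "-S", "."],  # configure (placeholder command)
            ["cmake", "--build", "build"],        # compile and link (placeholder)
        ]
        for cmd in steps:
            if subprocess.run(cmd).returncode != 0:
                print("Build broken at step: " + " ".join(cmd))
                return False
        return True

    if __name__ == "__main__":
        # Usage: python ci_build.py <commit-sha>
        sys.exit(0 if run_build(sys.argv[1]) else 1)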

For Audio Modeling, adopting a CI methodology is a huge time saver because of how fast it allows us to narrow down issues and fix them. Since we operate with a small team, identifying bugs and errors quickly is crucial to keeping a relatively fast development pace.

CI methodologies originate in web development, where they are normally used to automate testing. Our reasons for applying CI at Audio Modeling are quite different: we use it primarily to automate builds for several platforms simultaneously and to detect problems quickly as the development of a product unfolds.

But since this application of CI is not necessarily a traditional one, it introduced an issue that required special attention.

The Unique Solution Developed by PACE Allowing Us to Implement CI With Our Team

Here’s a step-by-step overview of our development process:

  1. Our developers work on a single codebase maintained in a dedicated repository on our own GitLab server. As we saw, maintaining a single codebase is possible thanks to JUCE and Projucer.
  2. Once a team member is ready to commit changes, they push their new code to the main repository using Git, a version control tool.
  3. The commit triggers Azure Pipelines (separate pipelines exist for Audio Modeling’s various products and builds), which creates a new virtual container in which the software is automatically built.
  4. The automatic build completes, and any errors found by the system are reported. This step also generates the installer and handles Apple notarization for macOS products.
  5. For iOS apps, the new build is then pushed to Apple’s App Store Connect platform, where it awaits approval.
  6. Desktop builds are also occasionally pushed to a dedicated Dropbox location, where beta testers can access them and start testing. iOS apps are available for testing through TestFlight.

Audio Modeling’s Continuous Integration System Schema

For this system to work, every step of the build must happen directly on the server through command-line scripts and applications. The problem we faced was that one plug-in format required a GUI-based application.

AAX (Avid Audio eXtension) is a plug-in format developed by the company AVID specifically for plug-ins used by one of its flagship products, Pro Tools. AAX plug-ins must be signed by an application called wraptool, developed by PACE. The signature provides cryptographically secure proof that a plug-in was produced by Audio Modeling (or whichever company signed it) and has not been modified since signing.

Normally, digital signing can be done in two ways:

  • With a physical iLok USB key
  • With iLok Cloud, a feature permitting users to store licenses in the cloud and manage them through a GUI interface over the internet

PACE didn’t provide a tool that could be used on a server from the command line, which made it impossible to automate the signing of AAX plug-ins.

In August 2020, a conversation started between the Audio Modeling team and the developers at PACE. They took the time to listen to us and understand the problem we faced in our development process.

PACE had acquired JUCE in April 2020, so the same team was now in charge of both tools crucial to Audio Modeling’s plug-in development, which made communication easy and straightforward. A few months later, after a thorough evaluation of the problem, the team at PACE came up with the perfect solution.

They created a command-line utility that lets us start an iLok Cloud session, bypassing the need for their GUI-based application. Thanks to this tailor-made solution, we can now automate the build of AAX plug-ins and integrate the signing process into Audio Modeling’s Continuous Integration system, as sketched below.
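
A minimal sketch of what that signing step can look like, under loudly stated assumptions: “ilok-cloud-session” is an illustrative name, not PACE’s real utility, and the wraptool flags shown are illustrative too; consult PACE’s documentation for the actual invocations.

    import subprocess

    def sign_aax(bundle_path: str, account: str) -> None:
        # 1. Open an iLok Cloud session from the command line, with no GUI
        #    involved ("ilok-cloud-session" is a hypothetical tool name).
        subprocess.run(["ilok-cloud-session", "open", "--account", account], check=True)
        try:
            # 2. Sign the AAX bundle with PACE's wraptool (flag names are
            #    illustrative, not guaranteed to match the real tool).
            subprocess.run(
                ["wraptool", "sign", "--account", account,
                 "--in", bundle_path, "--out", bundle_path],
                check=True,
            )
        finally:
            # 3. Close the session so the build agent leaves no state behind.
            subprocess.run(["ilok-cloud-session", "close"], check=True)

    sign_aax("Example.aaxplugin", "build-account")  # hypothetical values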

This screenshot shows an excerpt of one pipeline used to build Audio Modeling / SWAM products

This shows that PACE is truly a developer-centric company. Their recent acquisition of JUCE, an essential tool for audio software development, promises a bright future for the audio software industry. That they delivered such a flexible solution in so short a time is proof of the company’s drive to keep improving its products and to listen to the needs not only of the developers in its community but also of the companies using its products.

We are grateful and proud to count them among our partners, and we look forward to contributing future innovations to the world of audio software together.