Creating Soundtracks for Lives in Real-Time: A Journey into Generative and Adaptive Music with LifeScore
Co-Founder and CEO Chris Walch, Chief Audio Officer Mary Lockwood, and Kaleidoscope Label head Dani Sawyerr shared their insights, offering a glimpse into the future of music and AI.
Welcome to Musicworks, where we're all about equipping music creatives and professionals with the knowledge they need to thrive in an industry constantly in flux. After launching our Music AI Archive, we wanted to talk to some of the people behind the scenes at the companies making these breakthroughs to gain deeper insights into their innovative contributions.
Generative and adaptive music hold a special fascination for me due to their transformative potential for personalising musical experiences, seamlessly blending technology with artistic expression. Stemming from my creative roots, I firmly believe in the ethical coexistence of AI and the music industry. That's why I am particularly grateful to the team at LifeScore for their work and our conversation. They are trailblazers in the fields of generative and adaptive music, simultaneously setting standards for ethical AI in the music industry.
For those unfamiliar, here is some background information on LifeScore.
“LifeScore transforms artists’ original recorded music into endless unique variations - meeting audiences where they’re at, engaging with them personally, and putting music creation in their hands, in real time and at scale. LifeScore focuses on creating high quality content that retains artistic integrity and maintaining clear IP provenance. Its core offerings are: short form remixes, long form remixes, remixes for activity, remixes by genre, remixes by location, user-generated remixes, and adaptive remixes. Each of these products creates new ways for audiences to engage and interact with their favorite artists, and opens up new distribution and monetization opportunities for artists and rightsholders.
At LifeScore we work with artists and rightsholders to further capitalize their existing musical assets by (i) increasing quality content, (ii) deepening audience engagement and (iii) creating additional monetization opportunities outside of the traditional channels. LifeScore was founded in 2016 and is backed by Warner Music Group and Octopus Ventures. Its technology team is led by Tom Gruber, Co-Founder of Siri and Humanistic AI pioneer. Its audio team is led by Mary Lockwood, who has over a decade of experience with interactive music.”
Before we dive in, could you tell us a bit about your background and about the company?
MARY: Hi, I'm Mary, the Chief Audio Officer at LifeScore, overseeing the audio department. With a background primarily in the video game industry, specialising in interactive music for the past decade, I joined LifeScore a couple of years ago.
DANI: Hi, I'm Dani. I've been immersed in the music industry for 15 years, predominantly at Universal Music Publishing, where I served as A&R Director working with artists such as Dua Lipa, Florence and the Machine, Little Simz and Dermot Kennedy. I’ve also managed artists and released records over the years. I recently joined LifeScore and it’s been a great ride so far. I lead the Kaleidoscope Label and A&R, overseeing our release strategy, artistic direction, rights management and artist/brand partnerships.
CHRIS: Hi, I’m Chris, I’m our CEO and one of the co-founders. When we first started, our focus was primarily on adaptive music—music that changes in real time in response to live data inputs. In the last six months, we've explored other possibilities of what our generative technology can achieve. Our main focus now, and where we've gained the most traction, is with artists and rights holders. What we aim to do for them is to capitalise on their existing assets by (i) increasing their high quality content without significant additional effort, (ii) deepening their audience engagement, enabling them to interact with the music for longer periods, and (iii) creating additional monetization opportunities outside of the traditional channels.
When we say high quality content, we mean that the music we generate is the same studio quality as the original sources that our technology remixes. In addition, we maintain the identity and integrity of the original artist content. The music we deliver is not created in a black box. We know every cell of music that goes in as an input and every cell of music that comes out as an output and are able to track its journey through our technology. This is why the music we create maintains the high quality and artistic integrity of the original materials. We then work collaboratively with our clients to ensure we have their creative approvals.
We also do not train on the world's data. We only work with the materials provided to us, or include our own materials if requested. This ensures that the music we create for artists and rights holders can be released without concerns about copyright issues or risk of potential litigation.
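The provenance guarantee described here — knowing every cell of music that goes in as an input and comes out as an output, and being able to track its journey — can be pictured as lineage metadata attached to each unit of audio. Below is a minimal, purely illustrative Python sketch of that idea; all names (`Cell`, `remix`, `lineage`) are hypothetical and do not represent LifeScore's actual system, and the audio itself is elided.

```python
from dataclasses import dataclass

# Illustrative sketch of per-cell provenance tracking: every musical "cell"
# records the licensed source it came from, and every output cell records
# which input cells it was derived from, so lineage is always recoverable.

@dataclass(frozen=True)
class Cell:
    cell_id: str
    source_track: str          # the licensed recording this cell came from
    derived_from: tuple = ()   # ids of the input cells used to produce it

def remix(cells, label):
    """Combine input cells into one output cell, preserving lineage."""
    return Cell(
        cell_id=label,
        source_track="+".join(sorted({c.source_track for c in cells})),
        derived_from=tuple(c.cell_id for c in cells),
    )

def lineage(cell, index):
    """Walk back through derived_from links to the original source tracks."""
    if not cell.derived_from:
        return {cell.source_track}
    return set().union(*(lineage(index[i], index) for i in cell.derived_from))
```

Because every output cell carries its full derivation chain, a remix can always be audited back to the cleared recordings it was built from — which is the property that lets releases go out without copyright uncertainty.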
We continue our work on adaptive music as we believe this will create opportunities for broader distribution and monetization. An example of this is our work with Audi and Bentley to develop adaptive music soundtracks for your drive—an untapped distribution channel for artists today. Thus, we're striving to devise new avenues for them to leverage their existing assets.
How did you get started in the music AI space, and how did you assemble your team?
MARY: I was working on Detroit: Become Human, a game created by Quantic Dream. That's where I met Chris and Philip Sheppard, two of the company's co-founders, who were also involved in that project. Shortly after, we embarked on an amazing project for Twitch. I believe it was one of the first of its kind: an interactive live drama where the audience could react in real time, influencing the music with their inputs. That's where we began, and from there the audio team expanded, bringing in more composers.
DANI: I was working at Universal Music Publishing, but I had a strong interest in the AI music space. I connected with an executive named Vickie Nauman. She recommended a few of her favorite AI music platforms, and LifeScore was one of them. After that, I got in touch with Chris, and we discussed the motivations and ambitions for LifeScore and ideas around growing the label and connecting with artist and industry partners. The role of a traditional A&R is changing, and I was looking to find new, innovative ways of using my experience to help talent grow meaningful careers. At LifeScore we have in-house composers, and my role adds another layer to this process. I aim to help bridge the gap between AI powered music and traditional A&R processes. Our approach positions us at the forefront of innovation while maintaining a tangible, human connection. This balance between technological advancement and human touch is something that personally excites me.
Through conversations with artists, I've found that this approach resonates with them as well. By emphasizing our human presence in the creative process, we can create music that not only pushes boundaries but also resonates deeply with listeners.
CHRIS: I am one of the four co-founders of LifeScore. My background is in commercial law; I spent six years as a finance attorney at Latham and Watkins. I met the other founders at an entrepreneurship conference. I started working with Philip Sheppard, who was a composer at the time. We were creating film and video game scores, including for Mary on Detroit: Become Human at Quantic Dream. And then the idea was, "Wow, it takes us so long to create these scores. It consumes so much time and money, and you only really have so much material to work with."
So what could we do now that the world has so much data and access to it? What if we could create soundtracks for lives in real-time? That was the initial idea of the company. The other co-founders of the company are Tom Gruber, our CTO, and co-inventor of Siri, and Ian Drew, a serial entrepreneur. He recently sold one of his companies to Qualcomm.
Initially, we started with the concept of adaptive music technology because we really thought there was something there. However, it was like having a really cool product without a clear problem to solve; I think we were a bit ahead of our time in terms of what we aimed to achieve. We had a fantastic concept, which we incubated at Abbey Road Red. Through Abbey Road Red we met the team at Bentley and developed numerous prototypes for them. From those prototypes, we began working with Audi in 2021 and have since been engaged in two-plus years of R&D with them, creating a production vehicle that can soundtrack your drive in real time.
With adaptive music technology, it's somewhat challenging because you need to identify the right B2B customer who truly desires to enhance an engaging experience in an adaptive manner, and B2B sales cycles can be quite long. So we went back to the drawing board last year.
It's one of those classic stories where you assess your internal tools to see what you've got.
We have built a suite of tools that allow our composers to experiment, simulate, and craft musical experiences that go beyond manual capabilities. They combine the skills of a composer with technology to generate new behaviours, enabling adaptability and the creation of unexpected tracks. With numerous customization options, we can generate a wide array of personalised outputs.
We eventually realised we could apply our learnings not just to music we composed ourselves but to existing catalogues. We could also do a lot more than just make it adaptive; we could make it long form, short form, genre-bend, interactive, and adaptive, among other things. This greatly expanded our use cases while still using the same tools and maintaining the same product.
We focused on refining our use cases, our target customers, and our value proposition. This effort began in Q3 of last year, and we've since seen significant traction.
Can you describe some use cases for LifeScore?
CHRIS: We've collaborated with both Sony, for David Gilmour and the Orb, and Warner, for a German Christmas album, to create live, fan-generated remixes of the original artist music. These fan-remix experiences foster deeper engagement and interaction with the artists’ releases. Another example involves Sleeping At Last, who sought to extend the reach of his 2017 Atlas album. After only two phone calls, we created an eight-hour Sleep album that garnered over 4 million streams on Spotify.
We've been partnering with Warner to release new Latin lo-fi tracks every week, capitalising on existing assets and introducing them to new audiences. This strategy has been highly successful, with our releases achieving five times the performance in one-sixth the time compared to previous efforts.
One particularly innovative project involved collaborating with The Orchard in transforming the Shredderz heavy metal album into eight different genres, which are now being sold as NFTs for the band. This approach showcases our ability to find additional distribution channels and monetization opportunities for artists.
We have partnerships with Audi on their Active Coach app to create meditation remixes and are working with them on a vehicle that composes an adaptive soundtrack in real time. This collaboration demonstrates how we're expanding into new areas for artists to reach new audiences and explore new revenue streams.
What capabilities must your AI models have to perform tasks well?
CHRIS: Our generative models are not like those trained on large catalogs of music. Our technology transforms a fixed source of original music – a track, an album, or a mashup of a few tracks – into a form that can generate endless remixes automatically. You can think of it as a remix producer. As with a human remix producer, we prefer to start from high-quality stems. We have used AI-generated stems, but depending on their quality and the number of stem splits, they can restrict the range of genres we can create. When we have high quality inputs, we generate equally high quality outputs. I imagine that as AI stem separation advances, so will our ability to work with more tracks that haven't been separated before.
MARY: We have a team of composers reworking the source tracks and producing them in different styles. From there, we can create templates to make the process more automated. So we can start with one genre, and the more we do in that genre, the more we can automate it and apply it to other tracks.
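The template idea sketched here — capture the recurring decisions composers make when reworking tracks into a genre, then reuse them on new material — could look something like the following. This is a hypothetical illustration only: the template fields and lo-fi values are invented for the example, and the actual audio processing is elided.

```python
# Illustrative "genre template": once composers have reworked a few tracks
# in a genre by hand, the recurring choices (tempo, which stems to keep,
# which layers to add) are captured as data and reapplied to new tracks.

LOFI_TEMPLATE = {
    "tempo_bpm": 75,
    "keep_stems": ["vocals", "keys"],
    "add_layers": ["vinyl_crackle", "soft_drums"],
}

def apply_template(stems, template):
    """Return a remix plan for a track's stems under a genre template."""
    kept = [s for s in stems if s in template["keep_stems"]]
    return {
        "tempo_bpm": template["tempo_bpm"],
        "stems": kept + template["add_layers"],
    }
```

The point of the data-driven shape is exactly what Mary describes: the more tracks composers finish in one genre, the more of that genre's treatment can be encoded once and applied automatically to the next track's stems.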
What are the primary challenges or risks encountered when developing AI models for music, and what potential outcomes do they entail? How can these challenges be mitigated or avoided?
CHRIS: I think one of the reasons we focus on IP provenance, and we only work with materials that we've got consent rights to, or artists are happy with, is because we do think IP issues are going to remain a challenge. Commercialising an asset that's been trained with material that you can't track or that people aren't getting remuneration for, will result in a host of issues.
Commercialising any output will be one of the biggest issues until there's a commercial framework and legal regulation in place for how you can remunerate folks whose material is used in training data.
It is important for us as a music technology company to engage and collaborate with artists and rights holders to understand their concerns and the specific problems and use cases that our technology can solve. We want to create something together with the music industry rather than lobbing technology over the wall and seeing what the aftermath will be. I think when you have something potentially game-changing, you have a responsibility to take into account those people who are going to be most positively, but also most negatively, affected.
DANI: It’s extremely important that talent can trust that we have the right motivation for working with their treasured songs and recordings. That we’re here to help. As Chris mentioned, our focus on IP provenance will hopefully help talent build trust in what we’re offering at LifeScore.
What's the most overrated narrative about how attitudes towards AI music creation will evolve over time? Please explain your alternative perspective.
CHRIS: I believe people are adaptable, and we adjust to whatever our new situation or context is. AI and generative music are now in the marketplace, and they're here to stay. I do hope they become an augmentative tool rather than a replacement. I don't think they will become a replacement, because if they do we end up in a race to the bottom, and that's not a great scenario.
A landscape or a world where artists have more time to explore different avenues, where they can create more with less effort, or have ideas they never would have had, I think those are great use cases for AI. There are also some instances where purely generative music might be preferred, such as when you need something for a short personal video and licensing a track isn't feasible.
So there will be different tiers where generative music makes sense. For models where you're working with artists' music as the initial source of inspiration, ideally there will be a regulatory and commercial framework in place so that individuals can be remunerated. However, I do believe there will always be a space for music of very high, studio quality that artists approve and create, that they want to put their verified checkmark on and say, "Hey, this is my vision and my creation" - and this is the space we want to work with artists in.
DANI: I've been speaking to lots of artists and managers over the last six months about our tech and AI music on a broader level. The feedback has been amazing, and it’s clear that there is excitement around how AI will integrate itself into the music industry - if approached with care and consideration for the creator community. Many producers and artists are already using AI tools to some degree.
I do not believe the majority of talent think there is an impending wave of disruption in which human creativity will be replaced by AI.
What are your thoughts on the US Copyright Office’s position that generative AI work falls into the public domain?
CHRIS: Our perspective is that we stay on the side of respecting intellectual property (IP) rights. Copyright exists for a reason, and we fully respect that. We only use artists' materials that are provided to us so that we can assure them, "Hey, this is your material, and we know every single component that went into this." The copyright office is talking about music generated without any human involvement or provenance.
Could you share any details about the next project you are shipping or working on?
CHRIS: We're constantly releasing Lo-Fi remixes with Warner. We hope to release more of these remixes with other labels within Warner. We just released the Shredderz 6,666 remixes in April in collaboration with The Orchard. We are still working with Audi on a vehicle that will soundtrack your drive in real time and we hope to see that one within the next year.
DANI: We have some really cool artist and label partnership projects in the works. We’re thinking big and what’s exciting is that our tech can be used for a wide variety of experiences; for immersive events at museum/gallery spaces, for music within TV, film, theater and gaming, and for brands looking to leverage the power of music.
Is there anything else you'd like to share?
DANI: I think it's really important for us to connect with the wider music industry, as I mentioned before, by speaking with publishers, writers, artists, managers, agents, labels, music schools, charities, trade bodies, and ultimately, listening to what each area of the industry needs and how our tech can add value. To achieve that, we need to deeply connect and engage with the industry. That's something which we're working on.
MARY: To add to that, in our approach to audio, what we're striving to achieve is to immerse people in experiences where they can re-experience music in new ways. It's not just a matter of pressing a button to generate music; it's about using technology to enhance human talent and create something unique. Additionally, we're seeing a lot of composers and individuals coming from the game industry, and I believe this is where we can offer a distinct experience by merging these talents together.
Browse the Music AI Archive to find AI Tools for your Music
Find out more about LifeScore and their artist & producer Kaleidoscope
LifeScore Links: Website | Instagram | LinkedIn | Facebook
Kaleidoscope Links: Website | Instagram | TikTok | Facebook | YouTube