We’ve all got those, you know, dusty boxes somewhere.

Forgotten memories.

But imagine finding old analog tapes in one.

From a band that just, well, vanished nearly 30 years ago.

Okay, yeah.

So today we’re diving into this really fascinating story.

It’s about how one musician used not just careful restoration but also AI, you know, cutting-edge AI, to bring his band’s sound back.

That sounds like a great topic for a deep dive.

Especially think about what that means for, like, keeping creative stuff alive and even reimagining it.

Exactly.

So our mission today, unpack the journey of David Marsden.

He was the guitarist and kind of the tech guy for this 90s band from Grimsby, Hovercraft.

I remember that.

Hovercraft.

Yeah.

And we’ll follow his path from these, like, really degraded cassette tapes and basic free audio tools all the way to using AI, not just to restore the sound, but to properly reimagine it.

Right.

And along the way, we’ll kind of ask, what does this whole creative resurrection tell us about music and memory and maybe even who owns the art in this digital age?

It’s a really compelling example, isn’t it, how tech can, you know, genuinely breathe new life into forgotten art, but also maybe challenge how we think about what’s possible.

Okay, so let’s get into it.

First off, Hovercraft, UK.

Who were they?

This wasn’t some made-up band.

They were real, a live band playing gigs.

Right.

But only for about nine months between 1995 and 96.

Pretty brief stint.

Nine months, wow.

But they did have this one notable achievement.

They actually got to number one in the South Bank demo charts.

It was in the Grimsby Evening Telegraph.

Might have been the first and only time that chart even ran.

Ah, okay.

So quite the flash in the pan, but they made a mark.

A bit of a Cinderella story, even if it was short.

Definitely.

They certainly, you know, had their moment before, as David apparently put it, sinking without a trace in under a year.

Yeah.

And for anyone wondering about a reunion, well, David gives some pretty funny reasons why that’s unlikely.

Yeah.

It really paints a picture.

Well, the drummer moved to Australia.

The singer-songwriter, lead guitarist guy, totally off the grid, last seen somewhere between India and Bosco, apparently.

Wow.

Then the bass player, Ron Nasty, great name.

He’s now Ronnie Nice and plays in a semi-pro covers band.

Ronnie Nice, I like it.

And then there’s David Marsden himself, the guitarist slash tech guy who started this whole thing.

So yeah, the odds of a, let’s say, traditional reunion are slim.

Very slim.

Which David himself kind of says might be a relief to some people.

Huh, probably.

But, okay, the story doesn’t stop there.

The initial mission, David’s mission, was this big analog challenge, taking these, quote, lo-fi degraded analog recordings preserved on mangled old cassette tapes.

Oof, mangled tapes.

Yeah, and they were digitized using, like, the absolute cheapest USB tape deck converter thing you could get years ago.

Yeah.

So fidelity-wise, it was a nightmare start.

It really sounds like it.

And the goal initially wasn’t just cleaning them up, right?

It was trying to make them actually sound like the band sounded?

Exactly.

In the rehearsal room, the studio, down the pub, and, he says, most importantly, in our own imaginations.

Hmm, that last part’s interesting.

It’s not just restoration, then.

It’s about grabbing hold of a sonic memory, making it real again.

Totally.

And the first tools were, well, pretty basic.

Audacity at first, then those free online mastering tools from SoundCloud and BandLab.

Right, the usual suspects for bedroom producers and stuff.

Yeah, and it was this cycle, you know, add some preset effects, probably reverb and compression, just trying to make the track sound louder, bigger, less rough.

Sounds like a real labor of love.

He called it going down a nostalgia rabbit hole, didn’t he?

He did.

But was there a point where those sort of traditional tools just hit a wall, couldn’t quite get past that mixed bag sound?

Well, that seems to be where the rabbit hole got tricky, because the challenge wasn’t just fixing bad sound, it was deciding how much of the original character, the noise, the lo-fi-ness, you wanted to keep.

So, yeah, the reverb and compression definitely made things sound better, maybe louder, more atmospheric, but they could also add more noise back in.

Oh, okay.

Especially when, you know, noise was kind of part of the original recording’s degraded state anyway.

So it’s this delicate balance, trying to clean it up without scrubbing away the essence of it.

So a bit of a paradox there with the old tools.

Yeah.

But then AI comes into the picture, an unforeseen collaborator, as you might say.

Right.

And the trigger for this was reconnecting with the old bass player, Ron Nasty, sorry, Ronnie Nice.

Ronnie Nice, yeah.

And their first step with AI wasn’t even music.

They used Claude.ai, this language model.

For what?

To generate the band’s backstory, like a history that was, let’s say, not entirely factual and more than a little embellished, and also feeding it song lyrics just to see what amusing results it came up with.

Okay, that’s unexpected.

Using AI for the band’s lore first.

It’s fascinating, isn’t it?

They even called Claude like the fifth Beatle because it could offer critiques.

A fifth Beatle AI.

Yeah, an AI giving feedback on narratives, on lyrics, shaping the band’s story, all without ever hearing a single note.

It really makes you think about how AI can fit into creative work in different ways.

And it led David to ask that key question, can AI listen to music?

And the answer, it seems, was a big yes, because that led them to Suno.

David calls it an online AI music creation beast.

Right, Suno.

And the first way they used it back in March was super simple.

Ronnie had painstakingly transcribed the lyrics.

Okay.

And they just pasted those lyrics into Suno, and bam, in seconds, it could apparently generate three minutes of pretty sophisticated sounding rock or pop.

Just from lyrics, that’s pretty impressive.

Yeah, and you could customize it too, right from the start.

Add style tags like folk, blues, punk, whatever, or even give it more general direction, like an Indian-sounding funeral dirge.

Okay, that’s specific.

Right, and this whole process led to what they called Hovercraft Reimagined, completely new takes on their old songs.

It just shows how accessible that kind of creative experimentation is becoming, doesn’t it?

Anyone can try these wild ideas.

It sounds like it just blew the doors open creatively for them.

But Suno itself evolved too, didn’t it?

It didn’t just stay lyric-based.

No, exactly.

It progressed pretty quickly.

From just lyrics, it moved to letting you upload actual audio, initially just two minutes, but now, with Suno V4.5, it’s apparently up to eight minutes of original audio you can feed it.

Eight minutes?

That’s a decent chunk of a song.

It is.

And they had fun with it, apparently doing a whole album of Hovercraft songs as if covered by a female singer in a more chill-out style.

Wow, okay.

So you could really play with different vibes for your old material.

What kind of doors does that open up creatively?

Oh, huge doors.

It lets you revisit your own stuff with completely fresh perspectives, like infinite remix possibilities almost.

But it’s not perfect yet.

There are still limitations.

Like what?

Well, David mentioned one song, Mr. Tooting Brown.

They just couldn’t get the AI to recreate faithfully.

It just wouldn’t work.

And generally, the AI still tends to miss things like intros, outros, middle sections, solos, specific lead guitar parts, you know, the detailed stuff.

Right, the human touch bits.

Exactly.

Though he did say it’s getting a lot closer to matching the original sound and style.

Yeah.

So it highlights that AI is this powerful tool, a collaborator maybe, but it’s not quite a full replacement yet, especially for those really specific, nuanced bits of a performance.

Okay, so AI for the sound.

Getting better, but still evolving.

But they didn’t stop there, did they?

They used AI for visuals too.

Yeah, that’s right.

They used ChatGPT to generate song covers and album covers for these new AI-assisted tracks.

So it became this whole AI-powered production pipeline, basically.

Totally, a complete workflow.

Most of the later tracks, apart from the cleaned-up originals, were made using Suno.

Then they’d download the stems.

The individual tracks.

Right, the separate instrument parts, upload those to BandLab for proper remixing, mastering touches, and then finally stick them up on Bandcamp for people to hear.

And David Marsden even started using the same techniques to create some completely new songs of his own, so it shows this whole process, concept to distribution, all doable with these tools.

Which brings us to the big question, maybe for you listening too, what’s the so what in all this?

Because, let’s be real, some people are pretty skeptical about AI art, and maybe rightly so.

Absolutely, and that’s a really important conversation to have.

But David’s main takeaway, he says, is that the whole project has just been tremendous fun.

Okay, fun is good.

It is.

But it’s maybe more than just fun.

It kind of points to a shift, doesn’t it?

It let David and Ron and anyone else who remembered the band actually recreate, restore, produce, and hear the old songs again in a way that doesn’t hurt the ears so