Resurrecting Old Tech with AI: Fighting E-Waste (and Streaming Fatigue)

AI makes protocol archaeology and lost tooling less painful, which opens the door to extending the life of genuinely good hardware.

  • ai
  • retro
  • hardware
  • sustainability
  • kanora
  • offline-first
  • ownership

Most tech doesn’t die because it stops working; it dies because software abandons it. Drivers stop being maintained, companion apps disappear, cloud services get turned off, and suddenly a perfectly capable device is treated as landfill.

That’s a software failure with physical consequences, and it’s one of the few areas where I think AI can be genuinely useful without needing any hype.

The Reality of “Obsolete” Hardware

There’s a huge amount of perfectly good hardware out there that is only “obsolete” in the sense that nobody wants to support it anymore:

  • MiniDisc players with NetMD support
  • Old Hi-Fi systems with early network protocols
  • The iPod Classic and similar devices
  • Countless “smart” devices built on now-forgotten standards

None of this hardware is broken; it’s just that the software running it has become obsolete. I have loads of examples of this:

  • My Panasonic AllPlay HiFi
  • My old Pebble Watch
  • MiniDisc players
  • Even my car, an 11-year-old Nissan Leaf: the clock no longer works properly because it relies on a 2G network to update the firmware, and 2G has been shut off in the UK

APIs disappear. Networks are turned off. Drivers stop being maintained. Documentation gets lost.

The default outcome is that it becomes e-waste, not because the hardware can’t do the job, but because the software path to using it has been allowed to rot while people moved on to the next shiny thing.

This Is a Software Problem

Most of these devices are still capable. What’s missing is the layer that makes them accessible again:

  • Tooling
  • Documentation
  • Accessible APIs
  • Modern integrations

Historically, bringing these things back meant doing protocol archaeology by hand:

  • Reverse engineering protocols manually
  • Digging through obscure forums
  • Writing low-level code from scratch

It’s slow, and it’s slow in a very specific way. You don’t just need time, you need sustained attention across a messy set of sources, and you need enough confidence to ship something that won’t brick a device or corrupt someone’s library.

AI Changes That

AI doesn’t magically solve everything, and it doesn’t remove the need to verify on real hardware. What it does is reduce the cost of getting from “I have a pile of fragments” to “I have a workable understanding and a first draft of code”.

In practice, it helps with the unglamorous bits. You can now:

  • Analyse old protocols and packet captures faster
  • Reconstruct missing documentation into something coherent
  • Generate working code from fragmented sources (old READMEs, abandoned repos, random PDFs)
  • Iterate on integrations without spending a week getting to “hello world”
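To show the shape of that first pass, here’s the kind of parser you typically end up drafting for a length-prefixed binary protocol. Everything about this layout (the magic bytes, the command ID, the XOR checksum) is invented for illustration; a real device’s framing has to come from captures and whatever documentation survives:

```python
import struct

def xor_checksum(payload: bytes) -> int:
    """XOR all payload bytes together (a common cheap integrity check)."""
    c = 0
    for b in payload:
        c ^= b
    return c

def parse_frame(data: bytes):
    """Parse one frame of a hypothetical length-prefixed protocol.

    Layout (invented for illustration):
      2 bytes  magic (0xA5 0x5A)
      1 byte   command ID
      2 bytes  payload length, big-endian
      N bytes  payload
      1 byte   XOR checksum of the payload
    """
    if len(data) < 6:
        raise ValueError("frame too short")
    magic, cmd, length = struct.unpack(">HBH", data[:5])
    if magic != 0xA55A:
        raise ValueError("bad magic")
    payload = data[5:5 + length]
    checksum = data[5 + length]
    if checksum != xor_checksum(payload):
        raise ValueError("checksum mismatch")
    return cmd, payload

# Example: command 0x10 carrying payload b"abc"
frame = b"\xa5\x5a\x10\x00\x03abc\x60"
print(parse_frame(frame))  # (16, b'abc')
```

The point isn’t this specific code; it’s that getting from a hex dump to a draft like this used to take an afternoon of squinting, and now it takes minutes, leaving your attention for verifying it against real traffic.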

That doesn’t mean the work is trivial; it just means the slope is less brutal. The effort shifts from “can I even get started?” to “can I make this reliable and pleasant to use?”, which is the part I actually care about.

Real Examples I’m Working On

NetMD Revival

MiniDisc is one of those formats that refuses to die, partly because it’s a genuinely good piece of hardware, and partly because the whole experience is refreshingly offline.

NetMD support exists, but it’s fragmented and awkward. There are open-source efforts, there’s partial tooling, and then there’s the reality of trying to make any of it feel modern.

Using AI, the goal is to:

  • Understand existing open-source efforts
  • Fill in gaps in tooling
  • Build cleaner, modern interfaces

Not reinvent everything, just make it usable. The bar I care about isn’t “it works if you follow a ten-step README”, it’s “this can be part of a normal workflow without feeling like a hobbyist science project”.
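As one small example of the “fill in gaps in tooling” step, any NetMD tool needs device-identification logic. Sony’s USB vendor ID (0x054C) is real; the product IDs below are illustrative placeholders only, since real projects like linux-minidisc maintain a proper table of known recorders:

```python
SONY_VENDOR_ID = 0x054C  # Sony's registered USB vendor ID

def is_netmd_candidate(vid: int, pid: int, known_products: set) -> bool:
    """Return True if a USB (vendor, product) pair looks like a NetMD recorder.

    Matching on vendor ID alone would catch every Sony device, so we
    also require the product ID to be in a known-recorder table.
    """
    return vid == SONY_VENDOR_ID and pid in known_products

# Illustrative IDs only; use the device table from an established
# project (e.g. linux-minidisc) in real tooling.
example_products = {0x0075, 0x0084}

print(is_netmd_candidate(0x054C, 0x0075, example_products))  # True
print(is_netmd_candidate(0x046D, 0x0075, example_products))  # False (wrong vendor)
```

In a real tool this filter would sit behind a USB enumeration library such as pyusb, but the boring lookup-table half is exactly the sort of thing that used to be scattered across forum posts.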


AllJoyn (and Other Forgotten Protocols)

There’s a graveyard of “almost great” standards: good ideas, decent capabilities for their time, and then abandonment. AllJoyn is one of them.

The idea:

  • Local device communication
  • No cloud dependency
  • Decent capabilities for its time

The reality:

  • Abandoned
  • Poor tooling
  • Hard to integrate today

AI makes it viable to revisit these without spending weeks just getting oriented:

  • Extract useful concepts
  • Rebuild lightweight integrations
  • Make them usable in modern apps
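To make “rebuild lightweight integrations” concrete, here’s a minimal sketch of the announce/parse half of local discovery. It’s loosely inspired by the idea behind AllJoyn’s About announcements (a device advertises what it can do, no cloud involved) rather than its actual wire format; the message shape here is invented:

```python
import json

def build_announcement(device_id: str, capabilities: list) -> bytes:
    """Serialise a local-network service announcement.

    A real integration would broadcast this over UDP multicast and
    listen on the same group; this sketch covers only the message.
    """
    return json.dumps({
        "v": 1,                        # announcement format version
        "device": device_id,           # stable device identifier
        "caps": sorted(capabilities),  # what the device can do
    }).encode("utf-8")

def parse_announcement(raw: bytes) -> dict:
    """Decode and validate an announcement received off the network."""
    msg = json.loads(raw.decode("utf-8"))
    if msg.get("v") != 1:
        raise ValueError("unsupported announcement version")
    return msg

wire = build_announcement("hifi-01", ["audio/stream", "volume"])
print(parse_announcement(wire)["caps"])  # ['audio/stream', 'volume']
```

That’s the whole appeal of these protocols in miniature: a device announces itself on the LAN, and nothing outside your house needs to exist for it to work.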

Even when you don’t revive a protocol directly, there’s value in re-learning what worked. A lot of “modern” smart home thinking is just re-branded versions of problems people were already solving fifteen years ago, sometimes with better privacy defaults than today’s cloud-first approach.


iPod Classic + Kanora

This one’s more personal, because I still think the iPod Classic is one of the best music devices ever made.

  • Physical controls
  • Offline-first
  • No distractions
  • Built for ownership, not streaming rentals

But it’s locked into an old ecosystem, which makes it awkward to keep a modern library in sync without falling back to an old toolchain.

Kanora is an attempt to bridge that gap:

  • Modern library management
  • Local-first architecture
  • Potential integration with legacy devices

Getting music back onto these devices — cleanly — is part of that story.
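As a sketch of what “cleanly” means, the core of any library-to-device bridge is a diff step. This is a hypothetical helper rather than Kanora’s actual API: both sides are represented as a map from relative track path to content hash, and the plan is whatever it takes to make the device match the library:

```python
def plan_sync(library: dict, device: dict):
    """Decide what to copy and remove to bring a device in line with the library.

    Both arguments map a relative track path to a content hash, so a
    re-encoded or re-tagged file (same path, new hash) is re-copied.
    """
    to_copy = sorted(p for p, h in library.items() if device.get(p) != h)
    to_remove = sorted(p for p in device if p not in library)
    return to_copy, to_remove

copy, remove = plan_sync(
    {"a.mp3": "h1", "b.mp3": "h2"},   # what the library holds
    {"a.mp3": "h1", "c.mp3": "h3"},   # what's currently on the device
)
print(copy, remove)  # ['b.mp3'] ['c.mp3']
```

Keying on content hashes rather than filenames or timestamps is what makes the sync local-first and deterministic: the plan depends only on the two snapshots, not on clocks or a cloud-side source of truth.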

Why Now

There’s a shift happening, and you don’t need statistics to notice it. People are getting tired of their libraries being conditional, their favourite albums disappearing, and their listening habits being mined as a business model.

At the same time, there’s a growing interest in physical media again (vinyl, CDs, even MiniDisc), and not just as nostalgia. Ownership is a different relationship to your collection, and it changes what you value in software.

Streaming solved convenience, but it also made your library conditional, and once you notice that trade-off it’s hard to unsee it. People are starting to feel the gap between “I can access anything” and “I actually own anything”.

The Opportunity

There’s a huge gap between modern software and legacy hardware. Bridging that gap creates:

  • Better user experiences
  • Less e-waste
  • More control for users (this is probably the bit the big names do not want. Rent everything, own nothing, right?)

The thing is, with the help of AI, it’s now finally realistic to attack these problems as a solo developer. It still isn’t easy, just easier.

  • Documentation is still messy
  • Hardware quirks still exist
  • Some things simply won’t work

AI helps, but it doesn’t remove the need for persistence. You still have to test on real devices, you still have to respect the limits of old hardware, and you still have to be careful about what you claim will work.

The Goal

Not everything needs to be saved. But a lot of it can be. If we can:

  • Extend the life of good hardware
  • Reduce unnecessary waste
  • Bring back genuinely great user experiences

Then it’s worth doing.

Where This Is Going

This is still early, but the direction is clear:

  • More tooling for legacy hardware
  • Better integration into modern apps
  • A growing ecosystem around ownership and offline-first tech

Streaming isn’t going away, and these features are going to be very niche, but I’d love to think there’s enough of a movement to bring some of this stuff back.