
How I Build Persistent Music Players in React & Next.js
A technical deep-dive into building persistent music players that keep playing across route changes, using React, Next.js, Zustand, and Web Audio API with canvas visualizations.
Introduction
For as long as I've been building for the web, I've wanted one specific thing: a music player that keeps playing while you move around the site. No awkward restarts, no janky reloads — just a smooth, app-like audio experience inside a browser. For years that goal fought against the limits of older stacks. Then React and Next.js came along, and suddenly the thing that felt nearly impossible became a clean, reliable pattern.
In this article, I'll walk you through how I build persistent music players in React & Next.js: the architecture, the tradeoffs, and the real-world lessons from pushing this pattern way beyond a simple "play/pause" component. I'll reference the exact approach I used on my own portfolio — a global Zustand store, a shared layout that never unmounts, and a canvas-based waveform visualiser tied to the audio frequencies. If you're an entrepreneur, tech lead, or dev planning a custom app with rich media, this is how I make it feel native, reliable, and fun.
Background: The WordPress & AJAX Era (A.K.A. The Pain)
Trying to Force a Persistent Player into WordPress
Long before React took over, I tried to build the first version of SoundVent on WordPress. I wanted exactly what my current portfolio has now: hit play, browse around, keep listening. In that ecosystem, the only mildly feasible approach was to go all-in on AJAX navigation. Instead of letting the browser do a full page load, you intercept links, fetch new content over AJAX, and manually swap parts of the DOM while trying to keep the audio element alive.
On very small sites, it kind of worked. But as soon as I tried to scale it to something more ambitious — a full social network with feeds, profiles, messaging, notifications — the whole thing became fragile. Plugins assumed full reloads. Themes assumed page-level scripts. Caching and SEO collided with AJAX hacks. It was like bolting a spaceship engine onto a family sedan.
The Core Problem: Fighting the Platform Instead of Working With It
To keep audio persistent in that environment, I had to:
- Intercept every navigation instead of using normal links.
- Issue AJAX requests for every page change.
- Parse the HTML response and surgically replace only specific DOM nodes.
- Re-bind scripts in exactly the right order so plugins wouldn't break.
- Hope updates didn't quietly blow everything up.
It was brittle by design. WordPress was built around full page loads and PHP templates, not a single long-lived UI tree. I spent more time babysitting the illusion than building the actual product experience.
Why Next.js Changed Everything
React and Next.js flipped the entire architecture. Instead of hacking around full reloads, I could treat the app as one persistent React tree where navigation simply swaps out children. The audio player lives at the layout level — above the individual pages — so it never unmounts when the route changes. No DOM surgery. No plugin roulette. Just a stable parent component and client-side routing optimized for this use case.
For the first time, a persistent player felt natural, not hacked on. That's the mindset I use now for my portfolio, SoundVent, and any custom app that needs continuous audio.
What a "Persistent Music Player" Really Is
When I say "persistent music player," I don't just mean a play/pause button glued to the bottom of the page. In practice, it needs to:
- Keep playing while users navigate between different routes.
- Remember the current track, position, and queue.
- Accept commands from anywhere in the app (play this track, skip, seek, etc.).
- Feel native: no awkward resets or flicker when the route changes.
- Optionally drive visualisations, theming, and UI states across the app.
In other words, a persistent player is a combination of:
- Global audio state (stored centrally, not per-page).
- A shared layout that keeps the player mounted.
- A real audio implementation (via `HTMLAudioElement` and/or the Web Audio API).
- Integration points from other components to control it.
Once you see it as shared state + shared UI + stable layout, the architecture becomes straightforward to design.
The Core Stack: React, Next.js, Zustand & Canvas
React for Components, Next.js for Routing
React gives me the component model: small, reusable pieces of UI with props and state. Next.js gives me structure: file-based routing, layouts, server rendering, and app-wide composition. Together, they let me treat the browser like a single application instead of disconnected pages.
Zustand for Global Audio State
For the player on my portfolio, I use a dedicated Zustand store called `usePlayerStore`. This store knows:
- The current queue of tracks.
- The currently active track id.
- Whether playback is active or paused.
- The current playback position in seconds.
- Helper actions like `playTrack`, `pause`, `resume`, `next`, `prev`, and `seek`.
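The full store isn't reproduced in this article, but its shape can be sketched in plain TypeScript. Everything below (the `Track` type, the index-based `next`/`prev` logic) is an assumption for illustration; in the real app this state and these actions live inside a Zustand `create()` call:

```typescript
// Hypothetical sketch of the player store's state and actions, written as
// framework-free TypeScript so the logic is easy to follow in isolation.
interface Track {
  id: string;
  title: string;
  src: string;
}

interface PlayerState {
  queue: Track[];
  currentTrackId: string | null;
  isPlaying: boolean;
  positionSeconds: number;
}

function createPlayerStore() {
  const state: PlayerState = {
    queue: [],
    currentTrackId: null,
    isPlaying: false,
    positionSeconds: 0,
  };

  const indexOfCurrent = () =>
    state.queue.findIndex((t) => t.id === state.currentTrackId);

  return {
    getState: () => state,
    setQueue(tracks: Track[]) {
      state.queue = tracks;
    },
    playTrack(id: string) {
      // Selecting a track resets position and starts playback.
      state.currentTrackId = id;
      state.isPlaying = true;
      state.positionSeconds = 0;
    },
    pause() {
      state.isPlaying = false;
    },
    resume() {
      state.isPlaying = true;
    },
    next() {
      const i = indexOfCurrent();
      if (i >= 0 && i < state.queue.length - 1) {
        this.playTrack(state.queue[i + 1].id);
      }
    },
    prev() {
      const i = indexOfCurrent();
      if (i > 0) {
        this.playTrack(state.queue[i - 1].id);
      }
    },
    seek(seconds: number) {
      state.positionSeconds = seconds;
    },
  };
}
```

The point of the sketch is the division of labour: these actions only describe intent; nothing here touches an audio API.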
Inside the player component, I destructure exactly what I need:

```tsx
const {
  queue,
  currentTrackId,
  isPlaying,
  positionSeconds,
  setQueue,
  playTrack,
  pause,
  resume,
  next,
  prev,
  seek,
} = usePlayerStore();
```

This store is the "brain" of the audio system. It doesn't play sound itself — it just describes what should be happening. The player component is responsible for making the browser's audio APIs line up with that state.
Canvas for the Waveform Visualiser
On top of standard playback, I use a separate component, `<WaveformVisualizer />`, that takes the current `HTMLAudioElement` and an `isPlaying` flag. Inside, it sets up an `AudioContext` and `AnalyserNode`, then draws a bar-style waveform to a canvas on every animation frame. That visual feedback is subtle, but it turns a basic player into something that feels intentionally designed.
Where the Player Lives: Next.js Layouts That Never Unmount
The secret to persistence isn't a trick inside the player — it's where the player is mounted in the tree.
In my portfolio, the `<AudioPlayer />` is rendered inside a shared layout under the App Router. That layout wraps all the "app" routes, so when you navigate between pages, the layout (and therefore the player) stays mounted the whole time.
A simplified version of the structure looks like:

```
app/
  layout.tsx        // Global shell (theme, base styles)
  (site)/           // Route group for the main site
    layout.tsx      // Site layout (includes AudioPlayer)
    page.tsx        // Home
    blog/
      page.tsx      // Blog index
    portfolio/
      page.tsx      // Portfolio index
    ...
```

And the layout that holds the player might look like this:
```tsx
// app/(site)/layout.tsx
import { AudioPlayer } from '@/components/audio-player';

export default function SiteLayout({ children }: { children: React.ReactNode }) {
  return (
    <div className="app-shell">
      <header>...site header...</header>
      <main>{children}</main>
      <AudioPlayer />
    </div>
  );
}
```

Every page under `(site)` can interact with the audio store. The player, however, is never torn down when the route changes — which means the music keeps playing.
Inside the Audio Player Component
Ref to the Audio Element
In the player component, I create a ref to the underlying `<audio>` element:

```tsx
const audioRef = useRef<HTMLAudioElement | null>(null);
```

That ref bridges the React world and the browser audio world. The audio element:
- Receives `src` updates when the current track changes.
- Listens to events like `timeupdate`, `loadedmetadata`, and `ended`.
- Feeds progress and duration back into the Zustand store.
Syncing Store State & Audio State
There are two main loops:
- Store → Audio: when `isPlaying` changes, the effect decides whether to `audio.play()` or `audio.pause()`.
- Audio → Store: as the track plays, a `requestAnimationFrame` loop reads `audio.currentTime` and pushes it into `positionSeconds` in the store.
That pattern keeps things predictable: the store describes intent, and the player makes sure the audio element matches it.
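To make the Store → Audio half concrete, here's a minimal, hypothetical reconciliation helper. In the real component this logic lives inside a `useEffect` keyed on `isPlaying`; the `AudioLike` interface is an assumption so the pattern can be shown without a browser:

```typescript
// Hypothetical helper: given the store's intent (isPlaying) and an
// audio-like object, make the audio element match that intent.
interface AudioLike {
  paused: boolean;
  play(): void;
  pause(): void;
}

function syncPlayback(audio: AudioLike, isPlaying: boolean): void {
  if (isPlaying && audio.paused) {
    audio.play(); // store says "playing", element is paused: start it
  } else if (!isPlaying && !audio.paused) {
    audio.pause(); // store says "paused", element is playing: stop it
  }
  // Otherwise the element already matches the store, so do nothing.
}
```

One real-world wrinkle the sketch omits: in the browser, `HTMLAudioElement.play()` returns a Promise that can reject (autoplay policies, interrupted loads), so the actual effect should catch that rejection rather than let it surface as an unhandled error.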
Letting Any Page Control the Player
One of the big wins of the global store approach is that any component can control the player without prop-drilling. For example, a track card in a list can do:
```tsx
const { playTrack, setQueue } = usePlayerStore();

const handlePlayClick = () => {
  setQueue(tracks); // maybe the full playlist
  playTrack(track.id);
};
```

That's exactly how my portfolio works: track data lives in a shared tracks module, and UI components simply call the store when they want something to play. The player listens and responds.
Waveform Visualiser: Turning Audio into Motion
The waveform visualiser lives in its own component:
```tsx
interface WaveformVisualizerProps {
  audioElement: HTMLAudioElement | null;
  isPlaying: boolean;
  className?: string;
}

export function WaveformVisualizer({
  audioElement,
  isPlaying,
  className = '',
}: WaveformVisualizerProps) {
  // ...
}
```

Inside, it:
- Creates or reuses an `AudioContext`.
- Uses `createMediaElementSource(audioElement)` to connect the audio element.
- Attaches an `AnalyserNode` to read frequency data.
- Draws a series of bars to a `<canvas>` on every animation frame while audio is playing.
It's intentionally defensive: if a source already exists, it reconnects; if something goes wrong, it logs a warning instead of breaking the app. The result is a visual layer that feels smooth and synchronised with the music but doesn't interfere with the core playback logic.
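The per-frame drawing step reduces to simple math: group the analyser's frequency bins, average each group, and scale the byte values (0 to 255, the range `getByteFrequencyData` produces) into bar heights. A canvas-free sketch of that mapping, with `barCount` and `maxHeight` as illustrative parameters:

```typescript
// Hypothetical helper: map raw analyser frequency data (bytes 0-255, as
// filled in by AnalyserNode.getByteFrequencyData) to bar heights in pixels.
function frequencyToBars(
  data: Uint8Array,
  barCount: number,
  maxHeight: number,
): number[] {
  const step = Math.floor(data.length / barCount);
  const bars: number[] = [];
  for (let i = 0; i < barCount; i++) {
    // Average each slice of bins so the bars stay stable frame-to-frame
    // instead of flickering with single-bin noise.
    let sum = 0;
    for (let j = 0; j < step; j++) {
      sum += data[i * step + j];
    }
    const avg = sum / step;
    bars.push((avg / 255) * maxHeight);
  }
  return bars;
}
```

The real component then just draws each height as a rectangle on the canvas; the interesting part is this mapping, which is why it's worth keeping it a pure function.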
Real UX Touches: Minimised State, Playlist Sheet & Motion
On top of the core tech, I layer in UX elements to make the player feel considered:
- A minimised state that tucks the player into a compact bar when you want it out of the way.
- A playlist sheet/modal that shows all available tracks using a `<Sheet>` component.
- Framer Motion animations for smooth enter/exit transitions.
- Theme-aware styling so the player feels native to the rest of the site.
These are built on top of the same foundation: Zustand for state, a persistent layout, and a single `<AudioPlayer />` tied to an `HTMLAudioElement`. The details vary per project, but the underlying pattern is stable.
Common Pitfalls & How I Avoid Them
Strict Mode Double Mounts
In development, React Strict Mode can cause effects to run twice, which can lead to duplicate `AudioContext` instances or repeated setup. I keep audio side effects inside guarded hooks, clean up thoroughly on unmount, and avoid doing heavy setup work directly in render.
Hydration & Client-Only APIs
Anything that touches `window`, `document`, `AudioContext`, or canvas needs to run on the client. The player and visualiser are declared as `'use client'` components, and all browser API code lives inside `useEffect` hooks to avoid hydration mismatches.
Overloading the Store
It's tempting to dump every bit of UI state into the audio store. I keep it focused on audio (queue, track, position, playback). UI concerns like "is the playlist sheet open?" or "is the player minimised?" live in component-level state or dedicated UI stores. This keeps things easier to reason about.
Quick Takeaways
- A persistent music player is really about shared layout + global state, not DOM tricks.
- Next.js App Router layouts make it natural to keep a player mounted across routes.
- Zustand (or similar) gives you a clean, global audio store that any component can use.
- Web Audio + canvas transforms a plain player into a branded, expressive experience.
- Careful handling of Strict Mode, hydration, and state boundaries keeps the system stable.
- This pattern scales from "simple portfolio flex" to "full media platform" without re-architecture.
Visual & Diagram Ideas
Here are a few visuals that would complement this article well:
- Architecture Diagram: "One Tree, One Player"
  Boxes showing:
  - Top-level `SiteLayout` with `AudioPlayer` and `AudioProvider`/`usePlayerStore`.
  - Nested pages (`Home`, `Blog`, `Portfolio`) swapping under the layout.
  - Arrows from pages calling `playTrack()` into the audio store.
- State Flow Diagram
  A simple flow:
  - User clicks a track → component calls `playTrack()`.
  - `usePlayerStore` updates `currentTrackId` and `isPlaying`.
  - `AudioPlayer` updates the `<audio>` element and starts playback.
  - Audio events update `positionSeconds` back into the store.
- Layout Structure Visual
  A diagram of the `app/(site)/layout.tsx` tree, highlighting the persistent player region and child routes.
  Alt text: "Next.js App Router layout structure showing a persistent music player component."
- Waveform Visualiser Screenshot
  A mock or screenshot of the actual player from my portfolio, showing the waveform bars moving as music plays.
  Alt text: "Custom React and Next.js music player with canvas waveform visualiser."
- Before/After UX Comparison
  Side-by-side:
  - Left: traditional site where audio restarts on every page load.
  - Right: persistent Next.js app where music continues across navigation.
Conclusion
Building a persistent music player used to feel like wrestling the platform. In the WordPress + AJAX era, I spent a lot of time trying to keep one fragile audio element alive while everything else reloaded around it. It was clever in theory, but it never felt stable enough for large, complex systems like a full social network.
With React and Next.js, especially the App Router and shared layouts, the entire problem looks different. Instead of fighting page reloads, I design a single React tree with a stable layout and a global audio store. The player on my portfolio — with its persistent playback, playlist sheet, waveform visualiser, and smooth navigation — is a direct result of that architecture.
For entrepreneurs, product teams, and anyone thinking about custom apps, this pattern means you can deliver app-like audio experiences in the browser without sacrificing maintainability or performance. And for me as a builder, it's satisfying to know that the player I always wanted years ago is not only possible now — it's something I can design, reason about, and extend with confidence.
If you're considering a product that needs continuous audio — a music platform, learning environment, content hub, or just a portfolio that quietly flexes your capabilities — React and Next.js give you the right foundation. And if you'd rather not stitch that architecture together alone, I'm always open to talking about what we could build together.
FAQs
Do I need Next.js for a persistent music player, or is React enough?
You can technically build a persistent player with plain React and a client-side router, as long as the player lives near the top of your component tree. Next.js just makes the overall app architecture cleaner: file-based routing, shared layouts, server rendering, and a solid pattern for keeping global UI elements (like a player) mounted across navigation.
Why did you choose Zustand instead of Redux or Context?
For this specific use case, I wanted something lightweight, ergonomic, and easy to scale without a lot of boilerplate. Zustand hits that sweet spot: the API is simple, the store can be colocated with related logic, and components can subscribe to just what they need. React Context can work for smaller setups, but as the audio state grows (queue, progress, controls), Zustand helps keep things maintainable.
Can I reuse your pattern without the waveform visualiser?
Absolutely. The waveform visualiser is completely optional. The core architecture — a persistent layout, a global player store, and a single audio component wired to that store — works perfectly well on its own. You can always add visualisation later if it fits your brand or product goals.
Does this kind of player hurt performance?
A well-designed persistent player should have minimal impact on performance. It's just one more component in the tree. The bigger performance wins (or losses) come from how you handle data fetching, rendering strategies, image optimisation, and bundle size. Using Next.js's features and being thoughtful about your architecture will keep your app fast even with continuous audio.
What if I want a similar experience in my product?
If you're thinking about a music platform, content site, or product that needs continuous audio, the pattern I've described here can be adapted to your requirements — different track sources, custom UI, access control, analytics, and more. The important part is the architecture: keeping the player persistent, the state predictable, and the app structure clean. From there, we can layer on whatever experience your audience needs.
Let's chat!
Thanks for diving into this breakdown of how I build persistent music players in React & Next.js. I'd love to hear your thoughts: if you could add a continuous audio experience to any project you're working on right now, what would it be? Share your ideas, send this to someone who's dreaming up a custom app, or reach out if you'd like to explore building something like this together.
