Why Your New Laptop Feels Slower Than Your Old One, And Why We Did It to Ourselves
March 22, 2026
- Software Engineering
- Developer Experience (DX)
- User Experience (UX)
You unbox a new machine - M3, 32 GB unified memory, NVMe everything - and somehow window-switching stutters, fans spin up at idle, and Activity Monitor shows 9 GB used before you’ve opened a terminal.
Meanwhile, a 2015 MacBook Air running native macOS apps feels snappier.
This isn’t nostalgia bias. It’s a measurable consequence of how we’ve collectively chosen to ship software.
The Architecture Shift Nobody Voted For
Over the past decade, a massive portion of “desktop” software migrated from native toolkits to web-technology stacks - primarily via Electron (Chromium + Node.js) and its derivatives.
The before:
- C++ / Qt - cross-platform, compiled, direct GPU access
- Cocoa / AppKit - macOS-native, tight OS integration
- Win32 / WPF - Windows-native rendering pipelines
- GTK - lightweight, Unix-native
The after:
- Electron - Chromium + Node.js, ships a full browser per app
- CEF (Chromium Embedded Framework) - similar story, different packaging
- React Native for Desktop / webview wrappers - thinner, but still JS-driven rendering
The key architectural difference: native apps talk almost directly to the OS compositor and GPU. Electron apps go through a full browser rendering pipeline - HTML parsing, style and layout in Blink, the JS event loop, compositing - before a single pixel hits your screen.
The Numbers: What Actually Happens When You Open Slack
Each Electron app instance spins up:
- A Chromium renderer process (or several - one per webview/window)
- A Node.js main process
- A GPU process for compositing
- Various utility processes (networking, audio, etc.)
A single Electron app at idle typically consumes:
- 150-400 MB RSS (resident set size)
- 2-5% CPU just maintaining the JS event loop + GC pressure
- Its own V8 heap, separate from every other app
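You can watch that RSS-vs-V8-heap split from inside any Node process. A minimal sketch (numbers will vary by machine and Node version):

```javascript
// Sketch: a Node.js process inspecting its own memory footprint.
// process.memoryUsage() reports the resident set size (RSS) and the
// V8 heap separately - the same split the per-app numbers above count.
const { rss, heapTotal, heapUsed } = process.memoryUsage();

const toMB = (bytes) => (bytes / 1024 / 1024).toFixed(1);

console.log(`RSS:     ${toMB(rss)} MB`);                           // whole process
console.log(`V8 heap: ${toMB(heapUsed)} / ${toMB(heapTotal)} MB`); // JS objects only
```

Even an empty Node process shows tens of megabytes of RSS before your code runs; add a full Chromium instance per app and the numbers below follow.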
Now stack a typical developer’s daily toolkit:
| App | Framework | Idle RAM (approx.) |
|---|---|---|
| Slack | Electron | ~250 MB |
| VS Code | Electron | ~400-900 MB |
| Discord | Electron | ~200 MB |
| Notion | Electron | ~350 MB |
| Microsoft Teams | Electron/Webview2 | ~300 MB |
| Figma Desktop | Electron + WASM | ~400 MB |
That’s ~1.5-2.5 GB of RAM consumed by what are essentially six independent Chromium instances, before you count your actual browser tabs.
Why It Feels Slow: It’s Not Just RAM
Startup latency
Native apps can cold-start in <100ms. Electron apps typically take 1-4 seconds because they must:
- Load and initialize Chromium
- Parse and execute large JS bundles (often 5-20 MB of minified code)
- Hydrate the React/Vue/Svelte component tree
- Resolve async data fetches before first meaningful paint
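The bundle-parsing step alone is easy to feel. A rough sketch - synthetic code standing in for a real minified bundle, so the timings are illustrative only:

```javascript
// Sketch: the cost of parsing and executing a multi-megabyte JS payload.
// We generate ~2 MB of trivial functions (a stand-in for a real bundle)
// and time how long V8 takes to parse and run it.
const lines = [];
for (let i = 0; i < 50000; i++) {
  lines.push(`function f${i}(x) { return x + ${i}; }`);
}
const source = lines.join('\n');
console.log(`Synthetic bundle: ${(source.length / 1024 / 1024).toFixed(2)} MB`);

const start = performance.now();
new Function(source)(); // parse + execute in one step
const elapsed = performance.now() - start;
console.log(`Parse + execute: ${elapsed.toFixed(1)} ms`);
```

Real bundles fare somewhat better thanks to V8's lazy parsing and bytecode cache, but the cost is paid again on every cold start.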
Input latency and jank
In a native app, a keypress → render can happen within one frame (~16ms at 60fps). In Electron:
- Event hits the Node.js main process
- Gets dispatched to the renderer process via IPC
- JS event handler fires, triggers a React reconciliation
- Virtual DOM diff → real DOM mutation → Blink layout → paint → composite
That’s 3-5 frames of latency in the best case. Under GC pressure or heavy JS execution, it balloons.
Thermal throttling cascade
Multiple Chromium instances competing for CPU cycles create sustained background load, which triggers thermal throttling - and then everything slows down, including your native apps. The Electron tax isn’t confined to Electron apps.
The IPC Tax
Electron’s process architecture means almost every user action involves inter-process communication:
Renderer (Chromium) ←→ Main (Node.js) ←→ Native APIs / OS
Each IPC hop serializes data (typically JSON), crosses process boundaries, and deserializes. For frequent operations like keystrokes in a text editor, real-time collaboration updates, and cursor tracking, this overhead is non-trivial.
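The serialize/deserialize cost is easy to measure in isolation. A sketch with a hypothetical keystroke payload (the field names are made up, and modern Electron uses the structured clone algorithm rather than literal JSON, but the shape of the per-message cost is similar):

```javascript
// Sketch: per-message cost of JSON-serializing a small IPC payload.
// The payload shape is hypothetical, not Electron's actual event format.
const payload = {
  type: 'keydown',
  key: 'a',
  modifiers: { shift: false, ctrl: false, alt: false, meta: false },
  target: { documentId: 'doc-42', cursor: { line: 120, column: 37 } },
};

const ITERATIONS = 100000;
let roundTripped;
const start = performance.now();
for (let i = 0; i < ITERATIONS; i++) {
  roundTripped = JSON.parse(JSON.stringify(payload)); // one simulated hop
}
const elapsed = performance.now() - start;
console.log(`${ITERATIONS} round trips: ${elapsed.toFixed(1)} ms`);
console.log(`Per message: ${((elapsed / ITERATIONS) * 1000).toFixed(2)} µs`);
```

A few microseconds per hop sounds small, but a single keystroke can cross the boundary several times - and that's before the actual process-switch and scheduling costs IPC adds on top.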
VS Code partially mitigates this by doing heavy lifting in extension host processes and using typed arrays / shared buffers where possible. Most Electron apps don’t.
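The difference typed arrays make is easy to demonstrate. A micro-benchmark sketch (illustrative only, not VS Code's actual code) shipping 100,000 floats both ways:

```javascript
// Sketch: JSON vs. typed-array copy for bulk numeric data.
// A Float64Array copy is a raw byte move; JSON must stringify and
// re-parse every number individually.
const N = 100000;
const floats = new Float64Array(N);
for (let i = 0; i < N; i++) floats[i] = Math.random();

let t = performance.now();
const viaJson = JSON.parse(JSON.stringify(Array.from(floats)));
const jsonMs = performance.now() - t;

t = performance.now();
const viaCopy = new Float64Array(floats); // byte-for-byte copy
const copyMs = performance.now() - t;

console.log(`JSON round trip:  ${jsonMs.toFixed(2)} ms`);
console.log(`Typed-array copy: ${copyMs.toFixed(3)} ms`);
```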
The Dependency Graph Problem
A native macOS app might link against a handful of system frameworks. A typical Electron app’s node_modules:
- 500-2000+ packages
- Multiple abstraction layers (React → React DOM → Scheduler → reconciler internals)
- Polyfills for web APIs that the bundled Chromium already supports
- Duplicated transitive dependencies
This translates directly to:
- Larger JS bundles → longer parse times
- More code paths → harder to optimize → more GC pressure
- Bigger attack surface (but that’s a different post)
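The duplication point is worth making concrete. A toy dependency graph (package names and edges are hypothetical) shows how a naive, fully nested install multiplies packages:

```javascript
// Sketch: unique packages vs. vendored copies in a toy dependency graph.
// The graph is hypothetical; real node_modules trees are deduplicated
// imperfectly, so popular packages still get installed multiple times.
const deps = {
  app: ['react', 'react-dom', 'lodash'],
  'react-dom': ['react', 'scheduler'], // react appears a second time here
  react: [],
  scheduler: ['loose-envify'],
  lodash: [],
  'loose-envify': [],
};

function walk(pkg, seen = new Set(), visits = { count: 0 }) {
  visits.count += 1; // each traversal = one copy in a fully nested tree
  seen.add(pkg);
  for (const dep of deps[pkg]) walk(dep, seen, visits);
  return { unique: seen.size, copies: visits.count };
}

const { unique, copies } = walk('app');
console.log(`${unique} unique packages, ${copies} nested copies`); // 6 unique, 7 copies
```

Scale the same effect to 1,500 packages with a handful of heavily shared transitive dependencies, and the bundle sizes above stop being surprising.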
Why We Chose This Anyway
Let’s be honest about the engineering economics:
- Cross-platform from day one. One JS/TS codebase for Mac, Windows, Linux. The alternative is 2-3x the engineering team, or C++/Qt (which has its own pain).
- Shared code with the web app. If your product is primarily a web app, Electron lets you wrap it with minimal delta. Slack, Notion, Discord - the desktop app is the web app with a different shell.
- Hiring. There are ~20M JavaScript developers. There are far fewer experienced native desktop developers, and the number shrinks every year.
- Iteration speed. Hot reload, web DevTools, React component model: the DX is genuinely better for UI-heavy apps.
- “Good enough.” For most users, the performance penalty is tolerable. The apps work. They just aren’t great.
The Counterarguments (And Where They Fall Short)
“Just buy more RAM.”
This treats symptoms. 32 GB shouldn’t be the baseline for running a chat app, a text editor, and a browser.
“Electron apps are getting better.”
True, VS Code is remarkably well-optimized for an Electron app. But it’s also maintained by a team that invests heavily in performance profiling and has access to Chromium internals. Most teams don’t have those resources.
“Tauri / Neutralinojs will fix this.”
Tauri uses the system webview (WebKit on macOS, WebView2 on Windows) instead of bundling Chromium. This dramatically reduces memory overhead (roughly 30-80 MB per app). But you still have the JS rendering pipeline, and you lose cross-platform rendering consistency. It’s a meaningful improvement, not a solution.
What Native-First Looks Like in 2026
Some teams are swimming against the current:
- Zed: Rust + GPU-accelerated rendering via GPUI. Sub-frame input latency, ~60 MB idle.
- Ghostty: Zig-based terminal, ~15 MB idle, native platform integration.
- Linear: technically a web app, but obsessively optimized with custom virtual scrolling, minimal re-renders, aggressive code splitting.
- Warp: Rust-based terminal with GPU rendering.
- Apple’s own apps: Safari, Xcode, Final Cut. The performance baseline that Electron apps are measured against.
The common pattern: compiled language + direct GPU access + minimal abstraction layers.
The Real Cost Function
Here’s the uncomfortable framing:
Total cost = (dev team size × native platforms) + maintenance burden
vs.
Total cost = (smaller team × Electron) + user performance tax + hardware inflation
We externalized the cost. Companies saved on engineering headcount. Users paid with RAM, battery life, and the vague feeling that computers aren’t getting faster.
The irony: developers - who chose Electron for their own productivity - are among the hardest hit, because they run the most Electron apps simultaneously.
Where This Goes
A few trends worth watching:
- WebAssembly on the desktop. Figma already uses WASM for its rendering engine inside Electron. If more apps move compute-heavy paths to WASM, the JS overhead shrinks (though the Chromium overhead remains).
- System webviews maturing. Apple’s WKWebView and Microsoft’s WebView2 are increasingly capable. Tauri-style apps could become the pragmatic middle ground.
- AI-assisted native development. If LLMs can generate and maintain platform-specific UI code, the “one codebase” argument for Electron weakens.
- User pushback. The HN thread about this post will probably have 200+ comments about how Electron is a plague. At some point, that sentiment reaches product decisions.
TL;DR
Your new laptop isn’t slow. Your software is running six copies of Chrome, each executing megabytes of JavaScript through multiple abstraction layers, fighting for thermals against each other.
The hardware got faster. The software got proportionally heavier, and in many cases, architecturally less efficient than what it replaced.
We chose developer convenience over user experience. Whether that trade-off was worth it is the real question.
And we’re the ones who made that choice.