Technology May 03, 2026 · 6 min read

Why I'm Moving from Go to C After Building a Load Balancer

DEV Community
by Dani Sam

This isn't an anti-Go post. Go is a great language. This is about what I want to understand.

I just finished building an L7 HTTP load balancer in Go.

It accepts connections. It parses HTTP headers. It forwards requests to backend servers using round-robin. It handles concurrent connections with goroutines. It has health checks. It works.

And somewhere in the middle of it working, I realized I didn't fully understand what it was doing.

Not the logic — I understood the logic. I'm talking about what's happening underneath. The goroutine scheduler, the net package, the garbage collector deciding when to free memory — all of it is running, and I can't see it. I'm directing traffic above a layer I've never touched.

That bothered me more than I expected.

Go is hiding things from you. On purpose.

That's not a criticism. That's the whole point of Go. It was designed to let you build reliable network services without managing memory, without dealing with file descriptors, without thinking about what socket() and bind() and accept() actually do. You call net.Listen() and it works.

But when you're trying to build a career around infrastructure and protocol design — when you want to eventually write things that operate at the wire, not above it — those hidden layers matter.

In Go, a TCP connection is a net.Conn. In C, a TCP connection is a file descriptor returned by socket(), configured by setsockopt(), bound by bind(), connected by connect() or accepted by accept(). You can inspect it, manipulate it, do things to it that Go will never let you do from inside its abstraction.
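Here's a minimal sketch of that sequence — roughly what Go's net.Listen("tcp", ...) does behind the abstraction. The function name and structure are mine, not from any real load balancer; it's just the syscalls laid bare:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Open a TCP listening socket on the loopback interface.
 * port 0 asks the kernel to pick one. Returns the listening
 * file descriptor, or -1 on error. */
int open_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* allocate the endpoint  */
    if (fd < 0)
        return -1;

    int yes = 1;                                /* allow fast restart     */
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port        = htons(port);         /* port in network order  */

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(fd, 128) < 0) {                  /* 128 = pending queue    */
        close(fd);
        return -1;
    }
    return fd;                                  /* accept() comes next    */
}
```

Every one of those calls is a decision point Go makes for you — address family, socket options, backlog size. In C they're all yours.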

When I build a load balancer in C, I'll write the socket calls myself. I'll choose between blocking and non-blocking I/O. I'll call select() or epoll_wait() directly and decide how to handle readability and writability on each file descriptor. I'll feel the event loop in my hands instead of trusting the runtime to manage it.

That's not harder for the sake of being harder. That's closer to what's actually happening.

Memory tells you the truth about your program.

Go has a garbage collector. It's a good one. You don't think about allocation — you create things, use them, and the runtime figures out when to free them. For most programs, this is exactly the right tradeoff.

But the GC also means you develop a certain blindness. You never ask: how long does this live? who owns it? when does it go away? You don't have to. Go answers those questions for you.

In C, nothing answers those questions for you. You call malloc(), you use the memory, you call free() — or you leak it, and valgrind will tell you exactly where. This forces a precision of thought that Go quietly removes.
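What that precision looks like in practice: ownership written into the API itself. A toy example — the struct and function names are hypothetical, not from any real codebase:

```c
#include <stdlib.h>
#include <string.h>

/* A tiny owned buffer: whoever calls conn_buf_new() owns the memory
 * and must call conn_buf_free() exactly once. In Go the GC answers
 * "who frees this?"; here the answer is part of the contract. */
struct conn_buf {
    char  *data;
    size_t len;
};

struct conn_buf *conn_buf_new(const char *src, size_t len)
{
    struct conn_buf *b = malloc(sizeof *b);
    if (!b)
        return NULL;
    b->data = malloc(len);
    if (!b->data) {
        free(b);            /* don't leak the header on partial failure */
        return NULL;
    }
    memcpy(b->data, src, len);
    b->len = len;
    return b;
}

void conn_buf_free(struct conn_buf *b)
{
    if (b) {
        free(b->data);      /* inner allocation first */
        free(b);            /* then the header itself */
    }
}
```

Note the partial-failure path: if the second malloc fails, the first one must be unwound by hand. That's the kind of question the GC never makes you ask.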

I don't think Go programmers are less rigorous. I think they're rigorous about different things. But I want to be rigorous about memory. I want to know, for every allocation in a network path, where it lives and when it dies. That instinct only comes from writing C.

Protocol design lives in C's territory.

Here's the thing that I care about most in the long run: protocol design. Not using HTTP — designing binary protocols. Wire formats. Message framing. Encoding length-prefixed fields. Handling byte order with htons() and ntohl().

In C, you write a packed struct, copy it into a buffer, and send it straight over the socket. You understand exactly how many bytes are on the wire, in what order, and why. When the receiver reads it, you know precisely what it sees.
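For instance, a hypothetical 8-byte frame header — the magic number and field layout are invented for illustration, not a real protocol:

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* 4-byte magic, 2-byte type, 2-byte payload length, all big-endian
 * on the wire. The packed attribute stops the compiler from inserting
 * padding, so the struct layout *is* the wire layout. */
struct __attribute__((packed)) frame_hdr {
    uint32_t magic;     /* protocol identifier, network byte order */
    uint16_t type;      /* message type, network byte order        */
    uint16_t len;       /* payload length, network byte order      */
};

/* Serialize a header into buf (must hold sizeof(struct frame_hdr) bytes). */
void frame_hdr_write(uint8_t *buf, uint16_t type, uint16_t len)
{
    struct frame_hdr h = {
        .magic = htonl(0xCAFEF00Du),  /* host -> network (big-endian) */
        .type  = htons(type),
        .len   = htons(len),
    };
    memcpy(buf, &h, sizeof h);        /* exactly 8 bytes on the wire  */
}
```

Dump those 8 bytes and you can read the protocol off the hexdump by eye: magic first, most significant byte first, no padding, no surprises.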

In Go, you reach for encoding/binary and the abstraction handles byte order for you. Useful. But it doesn't teach you why byte order matters, or what the machine is actually doing when it serializes that field.

If I ever want to write a protocol from scratch — not implement HTTP, but design something that runs on TCP and defines its own framing — I need to be comfortable at that level. C is where that comfort is built.

Go did exactly what it was supposed to do. And more.

Building the load balancer in Go was the right choice for the project. Go's concurrency model made handling connections straightforward. The standard library handled HTTP parsing cleanly. The resulting code is readable and correct.

But here's what I didn't expect: Go was great for me precisely because of the doubts it created.

Every time I hit a question I couldn't fully answer — why does this goroutine block here? what is the runtime actually scheduling? what does the kernel see when net.Dial() runs? — those weren't failures. Those were directions. Each doubt was a pointer to something real underneath that I hadn't looked at yet.

Go gave me a map of questions I didn't know I had.

And one of those questions led somewhere I didn't anticipate: embedded systems. The more I pulled on the thread of "what's actually running this code," the more I found myself staring at hardware. Microcontrollers. Registers. Interrupts. The point where software stops and physics begins.

That's when something clicked.

Electronics and computer science are not separate fields that happen to overlap. Electronics is the layer that computer science runs on. Every abstraction — the OS, the runtime, the network stack — eventually bottoms out at a circuit doing something physical. Without that layer, none of the rest exists.

A load balancer in Go is real. But it is running on silicon. And I want to understand the whole path, from the wire to the application, without a gap.

That's not Go's failure. That's Go being honest about what it is — and me being honest about where I want to go.

So what's next.

I'm starting C with a specific sequence: raw sockets first, then a TCP echo server, then a custom binary protocol over TCP, then — eventually — something that touches the kernel directly. No web servers, no frameworks, no shortcuts.
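The echo server step reduces to a loop like this one — a sketch of the core read/write cycle, with the accept() loop around it omitted:

```c
#include <sys/socket.h>
#include <unistd.h>

/* The heart of a TCP echo server: read whatever arrives on fd and
 * write it straight back until the peer closes. Handles short writes,
 * which the kernel is allowed to give you at any time. */
void echo_until_eof(int fd)
{
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        while (off < n) {                   /* write may be partial */
            ssize_t w = write(fd, buf + off, n - off);
            if (w <= 0)
                return;                     /* peer gone or error   */
            off += w;
        }
    }
}
```

Even this trivial server forces the questions Go hides: what does a short write mean, when does read return 0, what happens to the descriptor when the peer disappears.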

I want to feel what Go is abstracting. Not because abstraction is bad, but because you can't abstract something you don't understand. And I want to understand it.

And I'll be honest about something: humans change their minds. Maybe in six months I'm deep in embedded — writing firmware, talking to hardware over SPI, thinking in interrupts. Maybe I end up the other direction — implementing TCP itself, building network stacks, living inside the kernel's socket layer. I don't know yet.

That's not a weakness in the plan. That's just how engineers actually develop. You follow the questions. The questions take you somewhere. You follow those.

Right now the questions are pointing at C. So that's where I'm going.

The void meets the wire. That's the direction.
