The bits, bytes, and bad ideas that somehow work...

Hi, Paul here again, CEO of CreeperHost. Last time we talked about what's inside our servers. This time, let's talk about how those servers actually connect to the rest of the world. Because contrary to what most people think, the internet isn't flat, and two servers in the same city can perform completely differently.

Not All "New Yorks" Or "Sydneys" Are Equal

People often assume that if two providers both say they have a "Sydney" or "New York" location, the performance must be about the same. It isn't. The city name on the order form tells you roughly where the server lives, but it says nothing about how your traffic actually gets there or comes back.

Under the hood, the whole internet runs on BGP (Border Gateway Protocol). It's what decides which path your packets take between you and a server. Different networks make different choices. Some pick the cheapest route, others pick the fastest, and sometimes those decisions change minute by minute based on load, cost, or how many routers are having a bad day.

That's why you can have two servers sitting a few streets apart and one of them feels instant while the other feels like it's gone on holiday to Los Angeles before coming back. We've seen it happen. A few years ago we watched a "local" Australian route bounce across the Pacific twice just because an ISP thought that was cheaper. It wasn't faster, though.

The BGP Reality Check

BGP isn't evil, but it's built around cost and policy, not necessarily performance. Networks gossip routes to each other, and not all gossip is true.

By default, BGP prefers the path that crosses the fewest ISP networks, not the path with the lowest latency. It assumes fewer networks means faster, which sounds fine in theory until one of those networks happens to have a congested link halfway across the world. That's how you end up with traffic taking the "shortest" route on paper while actually being the slowest in practice.
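To make that concrete, here's a toy sketch of the default tie-break described above: among candidate routes, pick the one with the shortest AS path, regardless of what the latency actually is. The route names, path lengths, and latencies are all made up for illustration.

```python
# Toy illustration: BGP's default preference for the shortest AS path
# can pick a slower route. All values below are invented examples.

routes = [
    {"via": "AS-A -> AS-B",         "as_path_len": 2, "latency_ms": 180},  # short but congested
    {"via": "AS-C -> AS-D -> AS-E", "as_path_len": 3, "latency_ms": 12},   # longer but fast
]

# BGP's view: fewest networks crossed "must" be best.
bgp_choice = min(routes, key=lambda r: r["as_path_len"])

# What a latency-aware system would pick instead.
fastest = min(routes, key=lambda r: r["latency_ms"])

print(bgp_choice["via"])  # the 180 ms path wins on paper
print(fastest["via"])     # the 12 ms path is what players actually want
```

The "shortest" route on paper loses by more than an order of magnitude in practice, which is exactly the failure mode the paragraph above describes.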

Some ISPs also make things worse by classifying certain ports or traffic patterns as "peer to peer" or "file sharing" when they're actually just game servers. That means packets for common ports like 25565 or 27015 can get throttled or deprioritised for no good reason. We've literally seen a 1ms local path turn into 200ms because someone's traffic filter thought Minecraft was BitTorrent.
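As a rough sketch of how that misclassification happens, here's a hypothetical port-based shaper of the kind some networks run. This is not any real ISP's logic; the heuristic and threshold are invented to show how a crude "high port = file sharing" rule sweeps up well-known game ports.

```python
# Hypothetical naive traffic shaper: treat any high, non-standard port as
# "peer to peer" and deprioritise it. Real shapers are more complex, but
# crude heuristics like this are how game traffic gets caught by accident.

GAME_PORTS = {25565: "Minecraft", 27015: "Source engine"}

def naive_shaper(port: int) -> str:
    """Return the queue a packet on this port would land in."""
    if port >= 10000:          # invented threshold for illustration
        return "deprioritise"  # throttled "file sharing" queue
    return "normal"

for port, game in GAME_PORTS.items():
    print(f"{game} on {port}: {naive_shaper(port)}")
```

Both game ports land in the throttled queue, while ordinary web traffic on port 443 sails through untouched.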

It's most common with residential ISPs and university networks that do broad shaping to save bandwidth or "protect" their users. A few big providers used to be especially bad for it (looking at you, Shaw Cable). We're not sure who still does it, but it still happens often enough to matter.

We pick our transit providers and peers specifically to avoid that nonsense at our end. Our routing is tuned for reliability and low latency, not just cheap bandwidth. If a provider is known to shape or misclassify game traffic, we simply don't use them.

Bandwidth Isn't Everything

Everyone loves to brag about "how many gigabits" they have, but raw pipe size doesn't mean much if it's always full. What actually matters is utilisation. When a network runs near capacity, latency spikes and packets start queueing. That means lag, slower map downloads, and game updates that crawl.
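A rough way to see why utilisation matters so much is the classic M/M/1 queueing model, where waiting time blows up as a link approaches full capacity. This is a simplified model, not a measurement of any real network, but the shape of the curve is the point.

```python
# Illustration of why utilisation matters more than raw pipe size:
# in the M/M/1 queueing model, relative delay scales as 1 / (1 - utilisation),
# so a nearly-full link queues packets dramatically longer than a quiet one.

def mm1_delay_factor(utilisation: float) -> float:
    """Relative queueing delay for a link at the given utilisation (0..1)."""
    assert 0 <= utilisation < 1
    return 1.0 / (1.0 - utilisation)

for u in (0.10, 0.50, 0.90, 0.99):
    print(f"link {u:.0%} full -> {mm1_delay_factor(u):.0f}x baseline delay")
```

A link at 10% utilisation barely queues at all; the same link at 99% is roughly a hundred times worse, which is the "peak time slowdown" everyone else complains about.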

We keep utilisation low so there's always room to breathe. That's why things like Steam downloads, modpack updates, and backup restores stay fast even when everyone else is complaining about peak time slowdowns. The fewer people fighting for the same lane, the faster everything moves.

To give you an idea, our latest Gen7 servers each have dual 25 Gbps network links, but the average utilisation is only around 0.1%. Even when someone's downloading or updating a huge game, it might briefly hit 7% before dropping back down. We massively oversize our links so there's always more capacity than any single server or customer could ever saturate.
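Putting those figures into numbers (using only the percentages quoted above):

```python
# Back-of-envelope on the headroom described above, using the figures
# from the post: dual 25 Gbps links, ~0.1% average use, ~7% burst peaks.

link_gbps = 2 * 25            # dual 25 Gbps links per server
avg_gbps = link_gbps * 0.001  # ~0.1% average utilisation
burst_gbps = link_gbps * 0.07 # ~7% during a huge download

print(f"capacity: {link_gbps} Gbps")
print(f"average:  {avg_gbps * 1000:.0f} Mbps")
print(f"burst:    {burst_gbps:.1f} Gbps")
```

Even the burst case leaves more than 90% of the pipe idle, which is the headroom that keeps peak-time downloads fast.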

Smarter DDoS Protection

DDoS protection isn't just about how much traffic you can absorb. Sure, having hundreds of gigabits of capacity helps, but the real trick is knowing what to block and what to leave alone. Game servers use specific packet types and timing patterns that generic protection systems often mistake for attacks.

Our protection layer is game aware. It's built to recognise real game traffic and filter out the junk, so you get clean joins and no random disconnects every time someone decides to "test their booter."

Firewalls That Actually Help

We also run network firewalls tuned for our environment. They don't just sit there blocking ports; they actively monitor for brute force login attempts and block them automatically. You can see and control these blocks right inside CreeperPanel. It's an extra layer of safety that doesn't get in the way of normal use.
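For a sense of how automatic brute-force blocking can work, here's a hypothetical sketch (not CreeperHost's actual implementation): track failed logins per source IP in a sliding time window and block once a limit is exceeded. The window length and failure limit are assumed values for illustration.

```python
# Hypothetical sliding-window brute-force blocker: record failed logins
# per source IP and block any IP that fails too often, too fast.
# WINDOW_S and MAX_FAILS are invented thresholds for illustration.

import time
from collections import defaultdict, deque

WINDOW_S = 60   # look-back window in seconds (assumed)
MAX_FAILS = 5   # failures tolerated inside the window (assumed)

failures = defaultdict(deque)  # ip -> timestamps of recent failures
blocked = set()

def record_failed_login(ip, now=None):
    """Record a failed login attempt; return True if the IP is now blocked."""
    now = time.time() if now is None else now
    q = failures[ip]
    q.append(now)
    # Drop attempts that have aged out of the window.
    while q and now - q[0] > WINDOW_S:
        q.popleft()
    if len(q) > MAX_FAILS:
        blocked.add(ip)
    return ip in blocked
```

A real deployment would also expire blocks and push them into the network firewall, but the core idea is just this counter-in-a-window.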

Built Right From The Ground Up

Our core network in the UK runs on Juniper gear for reliability and low latency. Our Grantham site has diverse fibre entry points so a single digger can't take us offline, and multiple diverse routes back to Telehouse North in London where we connect to the wider internet. If a carrier has an issue, traffic automatically moves to another path. Uptime and routing flexibility are baked in, not bolted on.

Smarter Routing In Real Time

We also run an intelligent routing system that constantly watches live traffic between players and servers. It doesn't just look at the network as a whole, it tracks individual connections in real time.

Every few seconds, it tests the routes through all our transit providers to find the one with the lowest latency and zero packet loss. If one path starts acting up, that connection is quietly moved to a cleaner route on the fly, without anyone noticing.

It means every player, no matter where they're connecting from, always takes the best possible path to your server, not just whatever path the internet felt like using that minute.
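The selection logic described above can be sketched in a few lines: probe every transit provider, discard any path showing packet loss, and prefer the lowest latency among the clean ones. The provider names and numbers below are invented, and the real system obviously does far more than this.

```python
# Simplified sketch of the route-selection step: among probed transit
# paths, keep only those with zero packet loss, then take the fastest.
# Provider names and measurements are made up for illustration.

def pick_route(probes):
    """probes: list of {'provider', 'latency_ms', 'loss_pct'} dicts."""
    clean = [p for p in probes if p["loss_pct"] == 0]
    if not clean:
        return None  # nothing usable; fall back to default routing
    return min(clean, key=lambda p: p["latency_ms"])

probes = [
    {"provider": "transit-a", "latency_ms": 8.2,  "loss_pct": 0.0},
    {"provider": "transit-b", "latency_ms": 5.1,  "loss_pct": 2.5},  # fast but lossy
    {"provider": "transit-c", "latency_ms": 11.7, "loss_pct": 0.0},
]

best = pick_route(probes)
print(best["provider"])  # transit-a: lowest latency among the loss-free paths
```

Note that the raw fastest path loses here: a 5 ms route with 2.5% packet loss feels far worse in-game than a clean 8 ms one, which is why loss is treated as a hard filter rather than just another number.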

Why Grantham Works (Even If It's Not London)

Yes, being based in Grantham adds maybe two or three milliseconds for UK players compared to sitting inside London, but that tiny difference buys us something far more valuable: full control. Because it's our facility, we can swap hardware, add capacity, or fix problems within minutes instead of waiting on remote hands or colo staff.

We make up for that small latency with premium transit and direct exchange links across Europe. We also work with DataPacket, who make sure their own network peers directly with major residential ISPs so your traffic reaches players faster. Peering is cheaper for them and faster for you, so everyone wins.

Most latency doesn't come from geography anyway. It comes from bad routing. We'd rather run a perfectly tuned network that's a few milliseconds further away than one that's "local" but sending your packets halfway around the planet.

Putting It All Together

So next time you hear someone say "it doesn't matter, all servers in the same city are the same," you can smile and let them believe it. We'll be over here quietly keeping the packets flowing the right way while they're wondering why theirs suddenly stopped.

Everything we talk about here, the routing, the monitoring, the mitigation, the hardware, it's all handled by people inside CreeperHost. We don't sit around waiting for a third party to fix it. When something isn't right, we have the knowledge, access, and experience to dig in and make it right ourselves.

It's not about the city name. It's about the path. And we build, manage, and tune that path ourselves to keep it clean, fast, and game friendly.

- Paul (CEO, Founder, CreeperHost)

