A deep dive into what makes us tick and what we've learned over the last 13 years...

Over the past 10 years, we've been constantly hunting for the best cost-to-performance ratio for Minecraft server hosting. That is now changing.

Starting back when SSDs were so unbelievably expensive, and had such low write endurance, that dedicated hosting providers would not even consider them, we ran a very interesting, complicated and high-maintenance configuration:

Dual Intel Xeon L5420, 24GB DDR2 memory, 2x1TB WD Black, 1xKingston 60GB SSD

We ran the magnetic drives in software RAID 1 using mdadm, then layered Flashcache in writeback mode over the array. This was high maintenance, but it gave levels of performance in Minecraft hosting that few could compete with.
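For anyone curious what that stack looked like in practice, here's a rough sketch of the kind of provisioning involved, wrapped in Python purely for illustration. The device names, cache name and mount point are placeholders rather than our real layout, and the exact Flashcache options we used varied over the years.

```python
#!/usr/bin/env python3
"""Rough sketch of an mdadm + Flashcache (writeback) build.

All device names, the cache name and the mount point below are
placeholders for illustration only.
"""
import subprocess


def run(cmd):
    # Print and execute one provisioning step, failing loudly on error.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# 1. Mirror the two magnetic drives with mdadm (software RAID 1).
run(["mdadm", "--create", "/dev/md0", "--level=1",
     "--raid-devices=2", "/dev/sda", "/dev/sdb"])

# 2. Layer Flashcache over the mirror in writeback mode ("-p back"),
#    using the small SSD as the cache device.
run(["flashcache_create", "-p", "back", "mc_cache", "/dev/sdc", "/dev/md0"])

# 3. Format and mount the resulting device-mapper target for customer data.
run(["mkfs.ext4", "/dev/mapper/mc_cache"])
run(["mount", "/dev/mapper/mc_cache", "/srv/minecraft"])
```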

Over time, we migrated to:

Dual Intel Xeon L5520, 48GB DDR3, 2x1TB WD Black, 1xSamsung 120GB SSD

With the same Flashcache setup.

We then moved on to more modern, more recognisable CPU models:

Intel Xeon E3-1230v2, 32GB DDR3, 4x120GB SSD, Software RAID 10

and

Intel Xeon E5-1620v2, 64GB DDR3, 2x500GB Samsung SSD, Hardware RAID 1 with battery-backed cache.

And with this change, we dropped the use of block-level caching to SSDs.

We've stuck with this specification for quite a long time, the reason being the low cost to build combined with low contention (fewer customers per machine): there's less chance of noisy-neighbour problems, and should a hardware failure happen, fewer customers are impacted.

That is, until late last year when we began the massive undertaking mentioned previously.

We've been working with hardware vendors, funding providers and data centres to roll out new hardware specifications.

Back on the previous specifications, the average modpack would start and run in 2.5GB of RAM, and the average user count per machine was already quite low. But as modpacks started needing 3.5 - 4.5GB of RAM, the number of users per machine dropped dramatically, which gave us more breathing room for our target performance.
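To put rough, purely illustrative numbers on that density change (the allocatable figure below is an assumption for the example, not our actual allocation policy):

```python
# Back-of-envelope density estimate with illustrative assumptions only:
# pretend ~56GB of a 64GB machine is allocatable to customer servers.
ALLOCATABLE_GB = 56

for pack_gb in (2.5, 3.5, 4.5):
    servers = int(ALLOCATABLE_GB // pack_gb)
    print(f"{pack_gb}GB per modpack -> ~{servers} servers per machine")

# 2.5GB per modpack -> ~22 servers per machine
# 3.5GB per modpack -> ~16 servers per machine
# 4.5GB per modpack -> ~12 servers per machine
```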

As such, we're building machines to the following specifications:

Intel Xeon E-2288G/E-2276G/E-2176G [stock and deployment time dependent], 64GB DDR4, 2x1TB Samsung SSD, Hardware RAID 1 with DDR3 battery-backed cache (to increase queue depth)

And also

AMD Ryzen 9 3950X, 128GB DDR4 3600MHz, 2x1TB Corsair MP600 NVMe (PCI-E Gen 4), Software RAID 1 with round-robin read load balancing

We mostly deploy the Xeons to remote locations, and the Ryzens where we can maintain them with our own technicians.

As of the date this was posted, we have replaced our entire network in Buffalo, Miami, Dallas, Los Angeles, Seattle & Sydney.

Our hardware is with the couriers for Hong Kong.

Grantham is still in progress, as it is a large investment and will take time.

We remain committed to finding the best performance we can, even though there are more cost-effective hardware specifications out there.

A common one used by some of our largest competitors is:

Dual E5-2620v2, 512GB DDR3, SATA SSDs (configuration varies)

This specification allows for far more customer density than any of our solutions ever has, meaning much more profit per machine and lower overheads, which lets them reduce the cost to the end user. We fully support this model, but it has never been what we are about.

We want to be at the pinnacle of service control (a full container with root access for every user) and the pinnacle of service performance (chasing that 5GHz clock speed and 1M operations per second...).

Moving forward, we will be focusing solely on the highest performance in both hardware and network. As you can see from our new Network page, we now have connections to many Internet Exchange points around the world, giving us the shortest route possible (the fewest other ISPs between you and us).

So, I'd like to apologise to our users: if you've seen your service migrate and have not been sure why, this is why. We're giving you hardware from this decade, not leaving you behind.

On top of the hardware changes, we now own considerable portions of our IP network, no longer relying solely on connections provided by data centres. We have deployed our own Juniper routing hardware in Grantham, Buffalo, Dallas, Los Angeles, Sydney and Hong Kong, giving us the ability to engineer our traffic around congestion and other issues faced daily by gamers.

Comments?

Leave us your opinion.
