Inexpensive CPU Monster


Rather than using the commercial cloud, my group tends to run day-to-day jobs on a tiny cluster of machines in my office and then to use Emulab when a serious amount of compute power is required. Recently I upgraded some nodes and thought I’d share the specs for the new machines on the off chance this will save others some time:

| Component | Product | Price |
|---|---|---|
| processor | Intel Core i7-5820K | $380 |
| CPU cooler | Cooler Master Hyper 212 EVO | $35 |
| mobo | MSI X99S SLI Plus | $180 |
| RAM | CORSAIR Vengeance LPX 16GB (4 x 4GB) DDR4 SDRAM 2400 | $195 |
| SSD | SAMSUNG 850 Pro Series MZ-7KE256BW | $140 |
| video | PNY Commercial Series VCG84DMS1D3SXPB-CG GeForce 8400 | $49 |
| case | Corsair Carbide Series 200R | $60 |
| power supply | Antec BP550 Plus 550W | $60 |
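
A quick arithmetic check of the parts list, using nothing beyond the prices shown above:

```python
# Sum of the component prices listed above (USD).
parts = {
    "processor":    380,  # Intel Core i7-5820K
    "CPU cooler":    35,  # Cooler Master Hyper 212 EVO
    "mobo":         180,  # MSI X99S SLI Plus
    "RAM":          195,  # 16GB (4 x 4GB) DDR4 2400
    "SSD":          140,  # Samsung 850 Pro 256GB
    "video":         49,  # GeForce 8400
    "case":          60,  # Corsair Carbide 200R
    "power supply":  60,  # Antec BP550 Plus 550W
}
print(sum(parts.values()))  # 1099, i.e. the "about $1100" mentioned below
```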

These machines are well-balanced for the kind of work I do; obviously YMMV. The total cost is about $1100. They have been working very well with Ubuntu 14.04. They do a full build of LLVM in about 18 minutes, as opposed to 28 minutes for my previously fastest machines, which were based on the i7-4770. I'd be interested to hear where other research groups get their compute power: everything in the cloud? A mix of local and cloud resources? This is an area where I always feel like there's plenty of room for improvement.
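
For anyone who wants to reproduce that kind of timing, here is a minimal sketch of a timed parallel LLVM build using CMake and Ninja. The source and build paths are placeholders, and `-j 12` simply matches the 5820K's twelve hardware threads; exact times will depend on the LLVM version and build configuration.

```python
# Minimal sketch: configure LLVM with CMake + Ninja, then time a parallel
# Release build. Paths are hypothetical; adjust to your own checkout.
import subprocess, time
from pathlib import Path

SRC = Path.home() / "llvm"          # hypothetical LLVM source checkout
BUILD = Path.home() / "llvm-build"  # hypothetical build directory
BUILD.mkdir(exist_ok=True)

subprocess.run(
    ["cmake", "-G", "Ninja", "-DCMAKE_BUILD_TYPE=Release", str(SRC)],
    cwd=BUILD, check=True,
)

start = time.time()
subprocess.run(["ninja", "-j", "12"], cwd=BUILD, check=True)  # 12 threads on the 5820K
print(f"full build: {(time.time() - start) / 60:.1f} minutes")
```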


3 responses to “Inexpensive CPU Monster”

  1. I’m not a “research group” (just a programmer fooling around with stuff) but I do most of my computation on a Hetzner i7-3770 server that I also use for storage. Those things are ridiculously cheap:

    https://robot.your-server.de/order/market/country/US?hdsize=3000

    They also have some i7-3930s with 64 GB of RAM, which are almost as fast as your i7-5820 according to PassMark.

    I don’t think I’d want a cluster of compute servers in my office, making heat and noise. Remote servers are best.

    Regarding building LLVM, I haven’t tried this but

    https://computing.llnl.gov/code/pmake.html

    is a parallel version of make that lets you distribute a compilation across multiple machines (a minimal sketch of the general idea appears after these comments). There are other things like that around too. I also remember hearing, around 10 years ago, that the main Google web search application at that time was around 100 MLOC of C++ and was compiled on a 1000-node cluster. So distributed compilation is a known, useful approach.

  2. Hi Paul, one of my students uses pmake to build LLVM and it seems to work well. I haven't been inspired to try it yet; my view is that unless something like that is used by a large number of people, it's likely to have odd corner-case bugs due to clock skew, version skew, or some other kind of skew.

    My office runs cold and these machines are quiet!

    Hetzner looks interesting; I hadn't seen it before. I'll investigate it next time I need extra power.

  3. One thing about Hetzner is that it's not cloud computing: these are monthly rentals of bare-metal dedicated servers. You have to configure the OS yourself, they require 30 days' notice if you want to cancel, etc. So they're not really for occasional usage bursts. But I've had mine for about 6 months and, after getting some early snags fixed, it has worked great. If I needed more capacity on a regular basis I'd certainly add another one.

    Runabove.com has reserved-core cloud instances at fairly low hourly rates, if that's of interest. They also have some supposedly very fast machines based on the POWER8 architecture (22 cores, 176 threads, 48 GB of RAM) at around $1/hour, but I haven't tried those. Phoronix has some benchmarks at http://www.phoronix.com/scan.php?page=article&item=runabove_power8_cloud&num=2; judging from those, it sounds roughly comparable to an 8-core Haswell-E except on numeric workloads.
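
As a footnote to the distributed-compilation discussion in comment 1, below is a minimal sketch of the general idea: farm C compile jobs out to several machines over SSH and link locally. It assumes a source tree on a shared filesystem mounted at the same absolute path on every host, passwordless SSH, and identical toolchains everywhere; the hostnames are hypothetical, and real tools such as pmake or distcc handle scheduling and error recovery far more carefully.

```python
# Sketch of distributed compilation: compile each .c file on one of several
# remote machines (round-robin), then link locally. Assumes a shared
# filesystem mounted at the same absolute path on every host, passwordless
# SSH, and the same compiler everywhere. Hostnames are hypothetical.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

HOSTS = ["node1", "node2", "node3"]             # hypothetical build machines
SOURCES = sorted(Path("src").resolve().glob("*.c"))

def compile_on_remote(args):
    idx, src = args
    host = HOSTS[idx % len(HOSTS)]              # pick a host round-robin
    obj = src.with_suffix(".o")
    # The compile runs remotely but reads/writes the shared source tree.
    subprocess.run(["ssh", host, "cc", "-O2", "-c", str(src), "-o", str(obj)],
                   check=True)
    return obj

with ThreadPoolExecutor(max_workers=4 * len(HOSTS)) as pool:
    objects = list(pool.map(compile_on_remote, enumerate(SOURCES)))

# The final link runs locally.
subprocess.run(["cc", "-o", "app", *map(str, objects)], check=True)
```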