There's been some buzz about HP's new "Moonshot" server: http://h17007.www1.hp.com/us/en/enterprise/servers/products/moonshot/index.a...
The main page is kind of marketing-speaky, but if you poke around it becomes clear what it is: a 4.3U[1] enclosure that fits "up to 45 hot-pluggable, efficient, extreme low-energy servers." They're dual-core 2 GHz Atoms, each with a single memory slot (up to 8 GB) and a single 2.5" disk.
I had lunch yesterday with some people who happened to be pretty familiar with this, and with what's dubbed the "hyperscale" paradigm. The idea is that, instead of a few dense servers running VMs, you go for a massive number of very low-power servers. These Atoms seem to use something in the ballpark of 10W, but there's also a lot of interest in ARM chips, where you can get much lower usage. And apparently, 45 nodes in 4.3U is nothing. Look at HP's Redstone[2], for example, which fit 288 ARM chips into a 4U chassis.
So on the surface, this is wholly irrelevant to cloud computing. In fact, if anything, it seems like it undermines the whole premise of virtualization.
But it occurs to me that, if it's possible to have hundreds, even thousands, of servers in a single rack, the way people manage them is probably going to change. I think you're almost going to want to treat them like cloud instances -- boot them with some premade image, do your work on them, and then spin them down, treating them as stateless. Maybe it's a one-off compute job (using Hadoop or something), or maybe you're hosting some sites and want to just power up additional servers as load grows. I think you're going to want to stop treating them as always-on servers with a fixed role, and view them exactly the same way as someone views a cloud instance, but backed by hardware.
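To make that concrete, here's a minimal sketch of what that lifecycle could look like, assuming ipmitool-reachable BMCs and a stock pxelinux setup. All of the hostnames, credentials, paths, and helper names here are made up for illustration:

    # Sketch: treat a physical node like a cloud instance. Point its PXE
    # config at a premade image, power it on via IPMI, and power it off
    # when the work is done. Hosts/credentials/paths are placeholders.
    import subprocess

    TFTP_ROOT = "/var/lib/tftpboot"  # assumed pxelinux server layout

    def ipmi(bmc_host, *args):
        """Run an ipmitool command against a node's management controller."""
        subprocess.check_call(
            ["ipmitool", "-H", bmc_host, "-U", "admin", "-P", "secret"]
            + list(args))

    def assign_image(mac, kernel, initrd):
        """Write a per-node pxelinux config (the 01-<mac> convention) so
        the node network-boots the premade image on next power-on."""
        entry = "01-" + mac.lower().replace(":", "-")
        config = ("default netboot\n"
                  "label netboot\n"
                  "  kernel %s\n"
                  "  append initrd=%s\n" % (kernel, initrd))
        with open("%s/pxelinux.cfg/%s" % (TFTP_ROOT, entry), "w") as f:
            f.write(config)

    def spin_up(node):
        assign_image(node["mac"], node["kernel"], node["initrd"])
        ipmi(node["bmc"], "chassis", "power", "on")

    def spin_down(node):
        # Node is stateless: nothing to preserve, just cut power.
        ipmi(node["bmc"], "chassis", "power", "off")

    spin_up({"mac": "aa:bb:cc:dd:ee:ff",
             "bmc": "node01-bmc.example.com",
             "kernel": "images/compute/vmlinuz",
             "initrd": "images/compute/initrd.img"})

The point being that "create an instance" reduces to writing a boot config and flipping power -- which is exactly the shape of a cloud provisioning API.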
I think it's too soon to be able to do a lot, but I think this is something worth thinking about. Can we effectively bring what we do in the cloud to "hyperscale" physical nodes?
-- Matt
[1] Yes, 4.3U. And yes, the idea of fractional rack units really bugs me.
[2] http://www.theregister.co.uk/2011/11/01/hp_redstone_calxeda_servers/
I believe HP has been contributing to https://wiki.openstack.org/wiki/Baremetal already. Basically, it uses the same OpenStack API for bare metal instead of VMs, which allows all the same software to manage the instances whether they're physical or virtual. This hardware would fit that model very well.
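As a rough sketch of why that's appealing: the client code is identical whether a VM or a real machine comes back. The credentials and names below are placeholders, and "bm.small" is assumed to be a flavor the scheduler maps to physical nodes:

    # Boot an "instance" through Nova; whether a VM or a physical node
    # comes back depends entirely on which driver/flavor backs the request.
    from novaclient.v1_1 import client

    nova = client.Client("myuser", "mypassword", "myproject",
                         "http://keystone.example.com:5000/v2.0")

    image = nova.images.find(name="fedora-18")
    flavor = nova.flavors.find(name="bm.small")  # assumed bare-metal flavor

    server = nova.servers.create("worker-01", image, flavor)
    print(server.id, server.status)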
Kevin
On Wed, Apr 10, 2013 at 02:50:54PM -0400, Matt Wagner wrote:
> I had lunch yesterday with some people who happened to be pretty familiar with this, and with what's dubbed the "hyperscale" paradigm. The idea is that, instead of a few dense servers running VMs, you go for a massive number of very low-power servers. These Atoms seem to use something in the ballpark of 10W, but there's also a lot of interest in ARM chips, where you can get much lower usage. And apparently, 45 nodes in 4.3U is nothing. Look at HP's Redstone[2], for example, which fit 288 ARM chips into a 4U chassis.
FYI, HP promises that Moonshot will support other chip vendors (AMD, ARM, ...) by the end of 2013.
> I think it's too soon to be able to do a lot, but I think this is something worth thinking about. Can we effectively bring what we do in the cloud to "hyperscale" physical nodes?
There is already Foreman, which handles bare metal efficiently. Maybe it would be worth putting some more integration effort into Foreman.
On Wed Apr 17 11:06:53 2013, Lukas Zapletal wrote:
> On Wed, Apr 10, 2013 at 02:50:54PM -0400, Matt Wagner wrote:
>> I had lunch yesterday with some people who happened to be pretty familiar with this, and with what's dubbed the "hyperscale" paradigm. The idea is that, instead of a few dense servers running VMs, you go for a massive number of very low-power servers. These Atoms seem to use something in the ballpark of 10W, but there's also a lot of interest in ARM chips, where you can get much lower usage. And apparently, 45 nodes in 4.3U is nothing. Look at HP's Redstone[2], for example, which fit 288 ARM chips into a 4U chassis.
> FYI, HP promises that Moonshot will support other chip vendors (AMD, ARM, ...) by the end of 2013.
I've heard some really interesting anecdotes about the density that can be achieved with ARM chips. It would be interesting to see if things get even more dense going forward.
>> I think it's too soon to be able to do a lot, but I think this is something worth thinking about. Can we effectively bring what we do in the cloud to "hyperscale" physical nodes?
> There is already Foreman, which handles bare metal efficiently. Maybe it would be worth putting some more integration effort into Foreman.
I think that could be a really interesting thing to do... either by integrating with Foreman directly, or by trying to get a Foreman driver for Deltacloud, so that Deltacloud clients can think of it as a "bare metal cloud."
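To give a flavor of how thin such a driver might be: Foreman exposes a REST API, so the driver is mostly a mapping exercise. A toy sketch below -- GET /api/hosts is Foreman's real host listing, but the instance mapping, the power endpoint, and the exact JSON shape are assumptions that vary by Foreman version, and the URL and credentials are placeholders:

    # Toy "bare metal cloud" shim over Foreman's REST API: present
    # managed hosts as instance-like records.
    import requests

    FOREMAN = "https://foreman.example.com"
    AUTH = ("admin", "changeme")  # placeholder credentials

    def list_instances():
        """Expose Foreman hosts as cloud-style 'instances'."""
        resp = requests.get(FOREMAN + "/api/hosts", auth=AUTH, verify=False)
        resp.raise_for_status()
        # Assumes the v1 API shape, which wraps each record as {"host": {...}}.
        return [{"id": h["host"]["id"], "name": h["host"]["name"]}
                for h in resp.json()]

    def start_instance(host_id):
        # Power management endpoint -- availability depends on Foreman
        # version and on the host having a BMC; an assumption here.
        resp = requests.put(FOREMAN + "/api/hosts/%s/power" % host_id,
                            data={"power_action": "start"},
                            auth=AUTH, verify=False)
        resp.raise_for_status()

    for inst in list_instances():
        print(inst["id"], inst["name"])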
The Baremetal extension that Kevin Fox mentioned above is also really interesting -- making bare-metal servers appear behind the OpenStack API.
Between the two of those, it sounds like there is a lot we could do going forward.
-- Matt