Monday, October 3, 2011
RAM: Over the years I have watched server RAM prices go down. Even so, when I recently priced out a virtualization server with 192GB of RAM and found the RAM at half the price it was nine months ago, I thought: WOW! My purchasing power for virtualization has increased dramatically. A 2-socket server with 192GB of RAM holds more compute power than most companies need for all their tasks combined; virtualization of some kind is required for most enterprises to make full use of the hardware on the market.
There has never been a better time to implement virtualization.
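To put rough numbers on that purchasing-power jump, here is a back-of-the-envelope sketch. The per-GB prices are invented for illustration; only the 2:1 ratio comes from my experience above.

```python
# Illustrative only: hypothetical prices showing how a halving of RAM
# cost changes the budget for a 192GB virtualization host.
GB_PER_SERVER = 192
OLD_PRICE_PER_GB = 30.0   # assumed $/GB nine months ago (placeholder)
NEW_PRICE_PER_GB = 15.0   # assumed $/GB today, half the old price

old_cost = GB_PER_SERVER * OLD_PRICE_PER_GB
new_cost = GB_PER_SERVER * NEW_PRICE_PER_GB
print(f"RAM for one host: ${old_cost:,.0f} then, ${new_cost:,.0f} now")
print(f"Purchasing power: {old_cost / new_cost:.1f}x more RAM per dollar")
```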
Saturday, October 1, 2011
New ASHRAE standards, energy efficiency, and HPC
The new ASHRAE standards, where vendors support them, allow servers and storage to run continuously at inlet temperatures up to 113 degrees Fahrenheit. Your support contracts and warranties are not voided when this is supported.
Dell is at the forefront of supporting these standards and has made public statements backing them. Most of us have heard about the military running servers in the desert without air conditioning. Mil-spec gear is not for most IT organizations, but it is paving the way for energy efficiency for the rest of us. It means that in many places in the US, ambient air is cold enough 100% of the time. No more cooling server rooms, except as needed for the people working in them.
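As a quick sanity check of that claim for your own site, you could compare local temperature readings against the 113F limit. A minimal sketch, with invented sample readings:

```python
# Sketch: given a site's hourly dry-bulb temperatures, check how often
# ambient air alone stays under the ASHRAE-style 113F server inlet limit.
# The sample readings below are invented for illustration; use your own
# local weather data in practice.
INLET_LIMIT_F = 113.0

hourly_temps_f = [68, 75, 83, 91, 99, 104, 96, 88]  # hypothetical readings

hours_ok = sum(1 for t in hourly_temps_f if t <= INLET_LIMIT_F)
pct = 100.0 * hours_ok / len(hourly_temps_f)
print(f"Ambient air usable {pct:.0f}% of sampled hours")
```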
I have heard jokes in the past about hanging winter coats in server rooms in warm climates, since people get very cold working long periods in chilled rooms. Now the new joke is shorts, flip-flops, and tank tops on a rack by the server room door, as we may start running these rooms much hotter.
How does this affect HPC?
Air cooling at any temperature is only good for racks holding up to about 20 kW of dense, power-hungry equipment. Beyond that you simply cannot get the heat out fast enough without help from a denser medium than air, or without moving air so quickly that it stops being cost effective. If I can run data centers on ambient air cooling, then air-cooled racks keep getting cheaper. Highly dense racks requiring chilled water or other solutions are becoming more expensive by comparison.
The trend has been toward smaller and smaller server rooms packed more and more densely. The benefits now appear to peak at about 20 kW per rack and decline from there. Unless you truly need extreme HPC density for performance reasons, extra density is not cost effective.
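The arithmetic behind that ceiling is worth seeing. A common data center rule of thumb relates airflow to heat load: CFM is roughly 3.16 times the watts removed divided by the exhaust-minus-inlet temperature rise in Fahrenheit. A small sketch, assuming a 25F rise:

```python
# Why ~20 kW/rack is a practical ceiling for air cooling: the airflow
# needed to carry heat away grows linearly with load.
# Rule of thumb: CFM ~= 3.16 * watts / delta_T_F
def required_cfm(watts: float, delta_t_f: float) -> float:
    return 3.16 * watts / delta_t_f

for kw in (5, 10, 20, 30):
    cfm = required_cfm(kw * 1000, delta_t_f=25.0)  # assumed 25F rise
    print(f"{kw:>2} kW rack -> {cfm:,.0f} CFM")
# Past roughly 20 kW, the sheer airflow volume (and the fan power to
# move it) stops being economical compared to denser cooling media.
```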
Methods to retrofit ambient air cooling into existing data centers are critically needed. There is not nearly enough action in the market in this area.
The cloud: hype and reality
The cloud. Everyone in IT is abuzz about it. It means many things to many people, and sometimes seems to mean anything anyone wants it to mean.
First let me say, if it is not obvious already: this is not a one-size-fits-all world. As businesses tackle the next buzzword concept, it is very important to keep that in mind. One great example was outsourcing, which is great for lots of situations but definitely not all. Now it is the cloud.
I was asked to ignore all "business problems" and evaluate moving every server my department runs into the cloud. It was going to cost twice what we currently spend, counting costs like equipment replacement and support contracts. Running servers in the cloud is great if you are small and it lets you avoid hiring your first systems administration team, and it is great if you are huge and it helps you avoid buying equipment for rare capacity peaks, especially if you would otherwise have to buy equipment for a DR site to cover those peaks as well.
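The comparison itself is simple arithmetic. Here is a sketch of the kind of annualized comparison I did; all figures are placeholders rather than my real numbers, and the only thing taken from the post is that the cloud quote came out at roughly 2x.

```python
# Rough on-prem vs. cloud comparison. All dollar figures are invented
# placeholders; the 2x multiplier is the outcome described in the post.
onprem_annual = {
    "hardware_replacement": 100_000,  # assumed refresh cost, amortized per year
    "support_contracts": 40_000,
    "power_and_cooling": 25_000,
}
cloud_annual_quote = 2 * sum(onprem_annual.values())  # the ~2x quote

print(f"On-prem: ${sum(onprem_annual.values()):,}/yr")
print(f"Cloud:   ${cloud_annual_quote:,}/yr")
```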
As far as SaaS goes, it is a clear winner in many cases. If you do not need to run a local email system, sure, outsource it to a "cloud provider". Salesforce CRM and the like can be great alternatives to running things yourself. Examining every service you run and asking whether it could be run more effectively by others is a sensible thing to do. Sometimes software makers move to a hosted model that is simply more effective.
Running your own "local cloud" can be effective too, assuming you have appropriate control of and planning for your infrastructure. Moving into server or desktop virtualization when you may not have the network bandwidth or central storage IO to handle it is a disaster.
As always, if you do what makes sense, things go well. Doing things because "they are the hot thing to do" rarely goes well for anyone.
Saturday, October 9, 2010
Blade servers - your time may never come
I know many people use and like blade servers. However, I have repeatedly evaluated them and found they do not quite match our needs. Blade servers can provide high-density compute capability. Consider a common product: a 10U chassis holding 16 blades. That gives you up to 32 CPUs, 3TB of RAM, and up to 32 disks (2 per half-height blade), all in 10U.
The problem for most enterprises is that this is more compute power than they will ever need! If you are a Google/eBay/Yahoo/Amazon/Facebook kind of company, that of course is not true. But those companies often use alternative products for density, such as multi-server chassis that strip out the management bits of blade computing for a cheaper solution.
I run data center operations with about 200 virtual servers on about 15 VMware ESX hosts. But when I look at the details, and at the future, I can imagine running them all on one box within a year or two. Most of these ESX hosts run pre-Nehalem Intel CPUs, where 4 physical CPUs are easily outclassed by 2 Nehalem-based CPUs. With an 8-core Intel CPU (or a 12-core AMD CPU), perhaps we could consolidate 4-to-1 roughly two years later. My old 4U servers with 128GB of RAM are being replaced by 1U servers with 192GB. If this trend continues, I will have 1U servers with something like 512GB within a couple of years, maybe even 1TB of RAM. Large enterprises need this kind of power for very few purposes other than a virtual server infrastructure.
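The consolidation math is easy to project forward. A minimal sketch, assuming each hardware refresh roughly doubles per-host capacity (the doubling factor is my assumption, in line with the 4U-to-1U trend above):

```python
# Projecting the consolidation described above: 200 VMs on 15 older
# hosts, with per-host capacity assumed to double each refresh cycle.
vms, hosts = 200, 15
vms_per_host = vms / hosts  # ~13 VMs/host on pre-Nehalem gear

CAPACITY_MULTIPLIER = 2  # assumption: each generation doubles capacity
for generation in range(1, 4):
    vms_per_host *= CAPACITY_MULTIPLIER
    print(f"Gen +{generation}: ~{vms / vms_per_host:.0f} hosts "
          f"for the same {vms} VMs")
```

Within two or three generations the same workload fits on a handful of boxes, which is exactly why a 16-blade chassis is overkill for most shops.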
Single new high-powered systems are taking over the use cases of yesterday's blade chassis.
Saturday, June 27, 2009
VMware vSphere 4 (Enterprise) - is it a big deal?
One feature of vSphere 4 is a game-changer: EVC, or Enhanced vMotion Compatibility. It lets VMware use new CPU features for virtualization management while presenting lowest-common-denominator features to guest operating systems, enabling vMotion between different generations of modern Intel CPUs, or between modern AMD CPUs, but not between the two brands. Without this feature, I felt forced to keep buying older CPUs. Now I am free to buy new 6-core CPUs, for example. Once we test the new CPUs, I will post about it.
Sunday, March 15, 2009
New technologies benefiting the little guy - power efficiency in small server rooms
Small server room operators now have many options:
- hot and cold aisles
- plastic aisle separator curtains
- Liebert XDP/XDV and other in-row cooling solutions
Sunday, March 2, 2008
What is ITIL? What is a CMDB?
I was working with ITIL-type concepts before ITIL was well known. Pretty much all of us are, though many of us do not know it. I think of ITIL as a framework for not forgetting the things that should be considered as we manage IT and operations. If you get ITIL training, part of it is about using terminology consistently; we all know that people using the same word to mean different things leads to confusion, especially in the tech industry, so this matters. The other part is getting a list of everything you should pay attention to, and it is meant to be a complete list.
A CMDB is an inventory of stuff; a good inventory. For example, I have a list of all of my servers and what they are for, along with information about when they were deployed and what their service contract is, if any. Then, instead of checking all the servers, we check the CMDB. It is simply the database of record for whatever I choose to put there. It can live in several places; we use a separate database to track hardware problems, and in the grand scheme of things that is part of the CMDB too: the database of record of what is what.
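To make that concrete, here is a minimal sketch of the kind of record described above. The field names and sample entries are my own illustration, not from any particular CMDB product.

```python
# A minimal sketch of a CMDB record: servers, what they are for, when
# deployed, and their service contract if any. Names are illustrative.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ConfigurationItem:
    name: str                               # e.g. hostname
    purpose: str                            # what the server is for
    deployed: date                          # when it went into service
    service_contract: Optional[str] = None  # contract ID, if any

cmdb = [
    ConfigurationItem("web01", "public web server", date(2007, 5, 1), "DELL-1234"),
    ConfigurationItem("db01", "ERP database", date(2006, 11, 15)),
]

# Instead of checking every server, query the database of record:
uncovered = [ci.name for ci in cmdb if ci.service_contract is None]
print("No service contract:", uncovered)
```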