To address this demand, Azure was designed with as much automation as possible, using a strategy called lights-out operations. This strategy seeks to centralize and automate as much of the work as possible by reducing complexity and variability. The result is an administrator-to-server ratio of roughly 1:30,000 or higher.
Microsoft achieves this level of automation mostly by using its own off-the-shelf software; it is literally eating its own dog food. It uses System Center Operations Manager and the related products to oversee and automate the management of the underlying machines, and it has built custom automation scripts and profiles, much as any customer would.
One key strategy in effectively managing a massive number of servers is to provision them with identical hardware. In traditional data centers where we've worked, each year brought the latest and greatest server technology, resulting in a wide variety of hardware. We even gave each server a distinct name, such as Protoss, Patty, or Zelda. With this many servers, you can't name them; you have to number them, not just by server but by rack, room, and facility. Diversity is usually a great thing, but not when you're managing millions of boxes.
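The location-based numbering idea can be sketched as a simple ID scheme. The format below is purely illustrative (the book doesn't describe Microsoft's actual convention); the point is that an ID encoding facility, room, rack, and slot is sortable, unambiguous, and machine-generatable in a way pet names never are:

```python
# Hypothetical hierarchical server-ID scheme -- an illustration, not
# Microsoft's real convention. Each level of the physical hierarchy
# (facility > room > rack > slot) gets a fixed-width numeric field,
# so IDs sort naturally and can be generated by automation.

def server_id(facility: int, room: int, rack: int, slot: int) -> str:
    """Return a location-encoded server ID instead of a pet name."""
    return f"FAC{facility:02d}-RM{room:02d}-RK{rack:03d}-SRV{slot:04d}"

print(server_id(1, 2, 17, 423))  # FAC01-RM02-RK017-SRV0423
```

Because the ID is derived from physical location, an operator (or a script) can go from an alert to a specific slot in a specific rack without a lookup table.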
The hardware in each Azure server is optimized for power, cost, density, and management. The optimization process drives exactly which motherboard, chipset, and every other component needs to be in the server; this is truly bang for your buck in action. Then that server recipe is kept for a specific lifecycle, only moving to a new bill of materials when there are significant advantages to doing so.
Source: *Azure in Action* (Manning, 2010)