Facebook’s new Iowa data center goes modular to grow forever

By Stephen Lawson | Nov 14, 2014

The traffic inside Facebook’s data centers is growing so fast that the company is changing the basic architecture of its networks in order to keep up.

The new design, which Facebook calls a data center fabric, doesn’t use a new link speed to make the network run faster. Instead, it turns the whole system into a set of modules that are less expensive and more widely available than what the company’s using now. It’s also easier to deploy and manage, according to Facebook’s networking chief.

Unlike older hierarchical networks, the modular design can provide consistently fast links across the data center for any two servers to talk to each other. The new architecture was used in a 476,000-square-foot (44,000-square-meter) data center that goes online today in Altoona, Iowa. Facebook plans to use it in all newly built centers and retrofit older facilities as part of its regular upgrade cycle.

Facebook and other companies with sprawling Internet data centers have turned to homegrown or inexpensive white-box gear for networking as well as for computing. They add their own software on top of that hardware, which can mean they don’t buy dedicated products from networking specialists such as Cisco Systems. Though most enterprises don’t have the network scale or in-house expertise to do the same, software-defined technologies developed and shared by these trailblazers are changing some aspects of networking.

Facebook’s current data-center networks are based on clusters, each of which may have hundreds of racks of servers linked together through a massive switch with high-speed uplinks to handle all the traffic that the servers generate. That’s a traditional hierarchical design, which makes sense when most traffic goes on and off the Internet, said Najam Ahmad, vice president of network engineering.

The problem is, most of the communication in a Facebook data center now is just Facebook talking to itself. The applications that organize shared content, status updates and ads into the familiar news feed are highly distributed, so what the company calls “machine-to-machine” traffic is growing many times faster than the bits actually going out to the Internet.

Hundreds of racks per cluster meant hundreds of ports on the switch where all those racks link up. That’s an expensive and specialized need, and it was getting worse.

“We were already buying the largest box you can buy in the industry, and we were still hurting for more ports,” Ahmad said.

In addition, traffic between servers often has to get from one cluster to another, so the company had to constantly worry whether the links between those big clusters were fat enough.

What Facebook needed was a network that could keep carrying all those bits internally no matter how many there were or which servers they had to hit. So in place of those big clusters, it put together pods: much smaller groups of servers made up of just 48 racks.

Now Facebook just needs switches with 48 ports to link the racks in the pod and 48 more to connect with other switches that communicate with the rest of the pods. It’s much easier to buy those, and Facebook could even build them, Ahmad said.

With the new architecture, Facebook can supply 40 Gigabit Ethernet pipes from any rack in the data center to any other. Rather than oversubscribing an uplink between two switches and assuming that all the racks won't be sending data full-throttle all the time, it can equip the data center to handle maximum traffic all the time, a so-called non-blocking architecture.
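To make the port and bandwidth math concrete, here is a minimal sketch using the 48-rack pod size, 40 Gigabit Ethernet fabric links and 10 Gigabit Ethernet server links described in the article; the servers-per-rack and uplinks-per-rack-switch figures are illustrative assumptions, not Facebook's actual configuration.

```python
# Minimal sketch of the pod-and-fabric port math described above.
# Pod size, 40G fabric links and 10G server links come from the article;
# servers-per-rack and uplink counts are illustrative assumptions.

RACKS_PER_POD = 48          # from the article
FABRIC_UPLINKS_PER_TOR = 4  # assumption: one uplink to each of 4 fabric switches
UPLINK_GBPS = 40            # 40 Gigabit Ethernet fabric links (from the article)
SERVER_LINK_GBPS = 10       # 10 Gigabit Ethernet server links (from the article)
SERVERS_PER_RACK = 16       # illustrative assumption

def pod_oversubscription() -> float:
    """Compare server-facing bandwidth to fabric-facing bandwidth per rack."""
    downlink = SERVERS_PER_RACK * SERVER_LINK_GBPS   # Gbps toward the servers
    uplink = FABRIC_UPLINKS_PER_TOR * UPLINK_GBPS    # Gbps toward the fabric
    return downlink / uplink                         # 1.0 means non-blocking

def pod_fabric_bandwidth_gbps() -> int:
    """Total bandwidth a 48-rack pod presents to the rest of the fabric."""
    return RACKS_PER_POD * FABRIC_UPLINKS_PER_TOR * UPLINK_GBPS

if __name__ == "__main__":
    print(f"oversubscription ratio: {pod_oversubscription():.1f}:1")
    print(f"pod-to-fabric bandwidth: {pod_fabric_bandwidth_gbps() / 1000:.1f} Tbps")
```

With these assumed numbers each rack has as much uplink capacity as server-facing capacity, which is what makes the fabric non-blocking.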

The identical pods and standard fabric switches allow for easy expansion of both computing and network capacity, Facebook says.

“The architecture is such that you can continue to add pods until you run out of physical space or you run out of power,” Ahmad said. In Facebook’s case, the limiting factor is usually the amount of energy that’s available, he said.
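As a rough illustration of what running out of power means in practice, the back-of-the-envelope sketch below estimates how many 48-rack pods fit into a given power budget; the budget and per-rack draw are assumed values, not figures from Facebook.

```python
# Back-of-the-envelope sketch of power-limited scaling: pods are added until
# the facility's power budget, not its network, runs out.
# All numbers below are illustrative assumptions, not Facebook figures.

FACILITY_POWER_MW = 30   # assumed usable IT power budget for the site
KW_PER_RACK = 8          # assumed average draw per rack, incl. networking share
RACKS_PER_POD = 48       # from the article

pod_power_kw = RACKS_PER_POD * KW_PER_RACK                 # ~384 kW per pod
max_pods = int(FACILITY_POWER_MW * 1000 // pod_power_kw)   # pods before power runs out

print(f"each pod draws roughly {pod_power_kw} kW")
print(f"a {FACILITY_POWER_MW} MW budget supports about {max_pods} pods")
```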

The company has also developed software to automatically discover and configure new components and automate many management tasks. In fact, the fabric switches it uses have only standard, basic capabilities, with most other networking functions carried out by Facebook’s software.
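The sketch below is an illustrative example, not Facebook's actual tooling, of the idea that identical pods and simple switches let software derive most of a device's configuration from its position in the topology; the naming scheme and addressing plan are assumptions made for the example.

```python
# Illustrative sketch of topology-driven configuration: given a pod and rack
# index, a rack switch's name, subnet and uplinks can be computed rather than
# hand-configured. Naming and addressing here are assumed, not Facebook's.

from ipaddress import IPv4Network

FABRIC_SUPERNET = IPv4Network("10.0.0.0/8")   # assumed addressing plan
RACKS_PER_POD = 48

def tor_config(pod: int, rack: int) -> dict:
    """Derive a top-of-rack switch's basic config from its position alone."""
    subnets = list(FABRIC_SUPERNET.subnets(new_prefix=24))
    subnet = subnets[pod * RACKS_PER_POD + rack]
    return {
        "hostname": f"rsw-{pod:03d}-{rack:02d}",                 # assumed naming scheme
        "server_subnet": str(subnet),                            # /24 carved from the supernet
        "uplinks": [f"fsw-{pod:03d}-{n}" for n in range(4)],     # four fabric switches per pod
    }

if __name__ == "__main__":
    print(tor_config(pod=1, rack=7))
```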

Facebook has developed its own top-of-rack routing switch, called the Wedge, which wasn't ready for the initial deployment in Altoona but can be used in the network fabric in the future. The architecture calls for servers to connect to the top of the rack over 10 Gigabit Ethernet.

In Altoona, Facebook has been able to design the data center from the start for its new network architecture. It has deployed fiber throughout the facility, installed core network equipment in the center and used straight-line fiber runs that are as short and direct as possible. When needed, the network fabric can be upgraded from 40 Gigabit Ethernet to 100 Gigabit and beyond, Ahmad said.

In addition to sharing the core concepts of the network fabric, Facebook says it may later share designs and code with other companies. The company has made some of its data-center technologies available through the Open Compute Project that it founded in 2011.


Amerimar Enterprises Acquires 717 South Wells Carrier Data Center in Downtown Chicago

Amerimar Enterprises has announced the acquisition of 717 South Wells Street, a 100,000-sq-ft, fiber-rich building located in Chicago. Amerimar Enterprises is once again partnering with telecom industry veteran Hunter Newby to own and operate the property.

717 South Wells (originally constructed in 1923) was strategically redeveloped from a manufacturing facility to a telecom carrier hotel by the prior owner in the late 1990s. “With over a dozen networks, 717 South Wells has a solid foundation of carrier customers,” states Joshua Maes, vice president of Amerimar, “and we look forward to growing the network roster at the property.” Amerimar intends to undertake immediate redevelopment work at the property, including construction of a new “Meet Me Room” to draw additional network operators to the building. The first phase of Amerimar’s owner-operated, carrier-neutral “Meet Me Room” is scheduled to open in early 2015, affording new and existing customers the opportunity to interconnect with one another reliably and cost effectively.

The 10-story, 100,000-sq-ft property is one of the most fiber-dense, network-neutral facilities in Chicago. The building serves as a major hub for data and Internet traffic and provides reliable network interconnection infrastructure for carriers, service providers, and enterprise customers. The building’s location is of particular interest to telecom businesses, as it is a gateway to the local fiber backbone in Chicago as well as a primary access point for long-haul fiber in the region. The building also provides highly reliable data center operations with its robust floor loads, power and HVAC infrastructure.

“717 South Wells Street is an excellent addition to our growing carrier hotel platform,” adds Newby. “Chicago is not only a major junction point for the north-south and east-west domestic fiber routes in the Midwest, but it is also a nexus for multiple international networks, making it a global gateway and therefore strategic for us and our customers.”

Ropes & Gray LLP, with a team led by partner Walter R. McCabe III, represented the buyer on the acquisition. Five 9’s Digital and Romans Properties brokered the transaction.


Bitcoin ASIC Hosting Strikes Deal with Dell for Data Center Miner Hosting

Bitcoin ASIC Hosting, based out of Seattle, Washington, has struck a key deal with Dell to host miners in Dell's Quincy, Washington, data center. Dell, with its recent move to accept Bitcoin payments, has been opening even more doors to the cryptocurrency world. When Bitcoin ASIC Hosting went looking for a facility to host in, it contacted the Dell data center in Quincy.


The move to a Dell Tier 3 data center is just the first step in the growth of the startup Bitcoin ASIC Hosting. The company is also building out a data center of its own in Washington that takes advantage of cheap hydroelectric green energy. Currently, Bitcoin ASIC Hosting is running between 130 TH/s and 160 TH/s in the Dell data center and can add up to an additional 4 MW of capacity there.
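For a sense of scale, the rough arithmetic below relates the quoted hashrate to the 4 MW of additional capacity; the miner efficiency figure is an assumption for 2014-era hardware, not a number from the article.

```python
# Rough sketch of what the quoted figures imply; the efficiency used here is an
# assumption for 2014-era mining ASICs, not a number from the article.

CURRENT_HASHRATE_THS = 160           # upper end of the 130-160 TH/s range (from the article)
EXTRA_CAPACITY_MW = 4                # additional capacity at the Dell site (from the article)
ASSUMED_EFFICIENCY_W_PER_THS = 700   # illustrative assumption for 2014-era hardware

current_power_kw = CURRENT_HASHRATE_THS * ASSUMED_EFFICIENCY_W_PER_THS / 1000
extra_hashrate_ths = EXTRA_CAPACITY_MW * 1_000_000 / ASSUMED_EFFICIENCY_W_PER_THS

print(f"~{current_power_kw:.0f} kW to run the current 160 TH/s")
print(f"another 4 MW could host roughly {extra_hashrate_ths:,.0f} TH/s at this efficiency")
```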

Bitcoin ASIC Hosting touts the following benefits of using a hosting center.

Challenge:
Bitcoin mining requires power-hungry, purpose-built servers running all out 24×7, so Bitcoin ASIC Hosting sought a co-location partner near low-cost energy sources to run its machines.

Solution:
Bitcoin ASIC Hosting avoided higher-cost power and gained redundant connectivity, cooling and backup emergency power by colocating its high-performance PCs at the Dell Western Technology Center in Washington state.

Benefits:
• Avoided much higher power costs
• Launched colocated operations in less than 10 days
• Gained redundant network connectivity
• Increased hosting reliability and resiliency via backup power
• Improved its carbon footprint

CCN will have an exclusive interview and more information on Bitcoin ASIC Hosting, its hosting deal with Dell, and the build-out of its own data center to expand its offerings.


InnoLight raises $38M to help servers communicate through fiber-optic cables

Google Capital, the growth-stage venture arm of Google, has just made its first investment in China.

The lucky company, InnoLight Technology, announced earlier today it has raised $38 million in a third round of funding.

InnoLight manufactures high-speed optical transceivers used by computer servers. They enable servers to communicate with each other through fiber-optic cables by transforming electrical signals into optical signals, and back to electrical signals.

“InnoLight’s technology is uniquely suited for next-generation data center environments,” said Google Capital general partner Gene Frantz in a statement.

Google’s investment in the company is a strategic one for both sides. Google operates some of the largest data center facilities in the world, and more than half of InnoLight’s revenue comes from the U.S. market, where its customers include cloud operators and communications equipment manufacturers.

Google Capital co-led this round together with Lightspeed Ventures. The company will use the new funding to purchase new production equipment, grow its team, fund research and development, and cover other production-related needs.

Google Capital launched its fund last summer, and has now invested in nine companies.

InnoLight was founded in 2008 by Osa Mok, Hsing Kung, and Sheng Liu, and is based in Suzhou, China. The company previously raised $20 million in funding from Suzhou Ventures and Acorn Ventures, among others.


Duke Pumps $500M Into Renewable Energy in N. Carolina

Duke Pumps $500M Into Renewable Energy in N. Carolina http://ow.ly/ByxUp


Study: Facebook Data Center in North Carolina Has Massive Economic Impact

Data Center Knowledge Article

Data centers have a huge economic impact on the local community, but what about Facebook’s mega-data centers? Facebook has provided economic impact studies in Sweden and Oregon, and the figures are staggering. A new study by RTI and Facebook examines the economic effects of Facebook’s data center in Forest City, North Carolina.

Facebook began construction of the Forest City facility in 2011, and it went live in April 2012. Over three years, the data center has resulted in the addition of 4,700 jobs across North Carolina, including the direct creation of 2,600 jobs, according to the report. The company contributed $526 million in capital spending statewide, generating $680 million in economic output. Facebook’s 2013 operations in North Carolina, and the economic activity they generated, are associated with more than $1 million in state and local taxes.

In total, between 2011 and 2013 the data center generated a total gross economic impact of $707 million and supported 5,000 jobs across the state.

For every $1 million of output resulting from direct capital expenditures, another $700,000 in output is generated elsewhere in the state. For every $1 million in value added, $1.1 million is generated elsewhere in the state. And for every 10 jobs created from direct capital expenditures, eight jobs are created elsewhere in the state.
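A small worked example of how those multipliers combine, using hypothetical round numbers rather than the study's actual inputs:

```python
# Illustrative use of the multipliers quoted above; the direct figures here are
# hypothetical round numbers, not the study's actual inputs.

OUTPUT_MULTIPLIER = 0.7   # extra output elsewhere per $1M of direct output
JOBS_MULTIPLIER = 0.8     # extra jobs elsewhere per 10 direct jobs (8/10)

direct_output_millions = 100   # hypothetical direct output from capital spending
direct_jobs = 1000             # hypothetical direct jobs

total_output = direct_output_millions * (1 + OUTPUT_MULTIPLIER)
total_jobs = direct_jobs * (1 + JOBS_MULTIPLIER)

print(f"${direct_output_millions}M direct output -> ${total_output:.0f}M total output statewide")
print(f"{direct_jobs} direct jobs -> {total_jobs:.0f} total jobs statewide")
```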

Facebook directly supported approximately $337,000 in state personal income tax collections in 2013 through the incomes of its employees. It also added $198,000 in local property tax, paid $336,000 in gross receipts tax on electricity usage, and paid $194,000 in franchise taxes to the state.

There are also several other impacts, such as donations to local schools and a partnership with Forest City to provide free local Wi-Fi. Facebook has distributed around $450,000 in community action grants and other community assistance in Rutherford County to support local non-profits and organizations.

“As Facebook continues its mission of connecting the world, we are proud of our role as a community partner in strengthening the Forest City region and adding to its long-term success,” wrote Kevin McCammon, site data center manager in Forest City.

The company previously released an economic impact study of its Luleå, Sweden data center. In Sweden, Facebook directly created nearly 1,000 new jobs and generated local economic impact that amounts to hundreds of millions of dollars.

Facebook had a similar study done for its data center site in Prineville, Oregon, by economic consultants ECONorthwest. That study, announced in May, concluded the company’s data center construction over five years had created about 650 jobs in Central Oregon and about 3,600 jobs in the state overall.


Level 3’s Brazil Data Centers Are Now ISO 9001 Certified

Level 3 has made its Latin America-based data centers even more valuable by earning the international ISO 9001 certification for quality management for its data centers in Sao Paulo, Rio de Janeiro and Curitiba, Brazil.

ISO 9001:2008 is an international standard that sets out the criteria for a quality management system. It reviews a number of quality indicators, including customer orientation, executive management support, a focus on process management and management systems, and a commitment to continual improvement.

After TÜV Rheinland and the Argentine Accreditation Body (OAA) jointly conducted an internal and external audit process that comprised an evaluation of Level 3’s data center operations, infrastructure and related processes at its facilities in Brazil, certification was granted to three of its data centers.

The service provider said the new certification, which is valid for three years, is part of an ongoing improvement and measurement process that enables Level 3 to optimize its quality standards and continue offering high-quality service to its customers.

“With this third-party seal of approval, customers have the assurance Level 3 is focused on providing quality products and continually improving its services,” said Leonardo Barbero, senior vice president and chief marketing officer for Level 3 in Latin America, in a release.

Besides Brazil, four of Level 3’s data centers in Argentina are also ISO 9001 certified.

Data center expansion also has been a major priority for Level 3. Already operating 350 data centers worldwide, the service provider recently opened its latest center in Herndon, Va., in response to the burgeoning demand for cloud and enterprise services.
