Data Center For Sale – New Price!

For Sale – Data Center in Raleigh, North Carolina 

October 19, 2016

A data center in the Raleigh-Durham market is now for sale at a new price. The 30,000 SF vacant facility is now available for $4.7M. Features and specifics include:

  • Robust data center market in Raleigh-Durham NC area
  • Excellent condition
  • 2 MW Generator and 12,000 gallon fuel storage
  • Secured equipment yard with high-impact concrete bollards surrounding the facility
  • Card access and security cameras interior and exterior
  • Mechanical and electrical in place
  • Pre-action dry pipe fire suppression system
  • AT&T, Level 3, Verizon and Time Warner
  • Dual access, diverse fiber entrances
  • Low cost, robust power by Duke Energy


For more information about this data center, please visit Raleigh Data Center For Sale.


Windstream to Sell Data Center Business for $575M

October 19, 2015


Windstream (WIN), a leading provider of advanced network communications, today announced that it has entered into a definitive agreement with TierPoint, a leading national provider of cloud, colocation and managed services, to sell Windstream’s data center business in an all-cash transaction for $575 million.

As part of the transaction, Windstream will establish an ongoing reciprocal strategic partnership with TierPoint, allowing both companies to sell their respective products and services to each other’s prospective customers through referrals. This structure will allow Windstream to focus capital on its core telecom offerings while continuing to offer traditional data center services to enterprise customers across a broader data center footprint.

“Data center services will remain an integral component of our enterprise service offering,” said Tony Thomas, president and CEO. “We expect the divested data center business to continue its significant growth under the leadership of TierPoint, and we look forward to partnering closely with them to provide advanced data center services to our enterprise customers.”

“This is a great strategic fit for TierPoint and our customers,” said Jerry Kent, Chairman and CEO for TierPoint. “Windstream Hosted Solutions and its employees have earned a reputation for providing excellent customer service and innovative enterprise-class solutions. We value these team members as a key asset in the acquisition and their expertise adds to our strength and focus on providing a superior level of customer care. We’re also very pleased to enter into a long-term strategic partnership with Windstream, allowing both companies to leverage the expertise and respective strengths of our organizations.”

The boards of both companies have approved the transaction, which is expected to close within the next 2-4 months, subject to customary conditions and approvals.

The data centers being divested generated the following financial results:
(Dollars in millions)   3 Months Ended June 30, 2015   Second Quarter 2015 Annualized
Revenue                 $30.5                          $122.0
Adjusted OIBDA          $10.2                          $40.8

Non-GAAP Financial Measures

A reconciliation of this measure to the most directly comparable GAAP measure is presented below:

(Dollars in millions)           3 Months Ended June 30, 2015   Second Quarter 2015 Annualized
Operating loss under GAAP       $(2.0)                         $(8.0)
Depreciation and amortization   12.0                           48.0
Stock-based compensation        0.2                            0.8
Adjusted OIBDA                  $10.2                          $40.8

About Windstream

Windstream, a FORTUNE 500 company, is a leading provider of advanced network communications and technology solutions, including cloud computing and managed services, to businesses nationwide. The company also offers broadband, phone and digital TV services to consumers primarily in rural areas. For more information, visit the company’s online newsroom or follow on Twitter at @WindstreamNews.

About TierPoint

TierPoint is a leading national provider of cloud, colocation and managed services designed to help organizations improve business performance and manage risk. With corporate headquarters in St. Louis, MO, TierPoint operates highly redundant, carrier-neutral data centers in the states of Washington, Texas, Oklahoma, Pennsylvania, Maryland, New York, Massachusetts, Connecticut and Florida.


Cloud Enables Next-Gen IT Operating Model

Transforming IT operations from traditional infrastructure to cloud computing can deliver notable financial and business benefits. But leading CIOs are using cloud solutions as the pivot point to a new IT operating model. The enterprise cloud market is experiencing rapid growth and is expected to top $250 billion by 2017, up from $70 billion in 2015.¹ A maturing array of cloud infrastructure, platform, and application services is now available for enterprise adoption, enabling improved utilization, faster time to market, and a more favorable return on IT investments. These advancements are prompting CIOs to rely more heavily on new, cloud-based delivery models in response to business demands for increased agility, cost transparency, improved service quality, and better risk management. For example, Netflix Inc. recently announced the closure of its last data center this summer in a move toward infrastructure that is “fully reliant on Amazon Web Services.”

Yet, notwithstanding the business case for cloud, aggressive adoption by nimble organizations such as Netflix, and forecasted growth in enterprise cloud investment, the journey to the cloud remains much more staggered and deliberate for many organizations. Moving mainstream technology workloads to the cloud frequently entails re-evaluating long-standing IT infrastructure and organizational practices. Medium and large enterprises in particular need time to properly prepare to integrate, aggregate, and orchestrate cloud and on-premises assets while offering internal development teams an attractive price point and an easy-to-consume service.

Perhaps more significant, contemporary CIOs see their cloud strategies as part and parcel of their efforts to build the capabilities for a next-generation IT operating model. As such, cloud migration calls for careful planning.

The process begins with examining existing IT capabilities: first, evaluate every IT-supported business function for cloud compatibility by answering three questions:

  1. Which business functions rely on legacy technology that limits their effectiveness or efficiency, and would benefit from a change to cloud-based systems? These functions are ripe for immediate migration. Typical examples include CRM and human resources.
  2. Which business systems, due to their size and complexity, risk destabilization if their IT footprint were replaced outright? These systems can be migrated to the cloud over time as technology matures, security/audit controls catch up, and internal emotional barriers subside. Typical examples include core financials and processing functions.
  3. Which systems require as-is operations, at least for the time being, for regulatory compliance purposes? Migration of these systems to the cloud can be deferred. Typical examples include trading systems and health care information systems containing personally identifiable patient information.
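
The three-question triage above can be sketched as a simple decision function. This is an illustrative sketch of the framework as described, not an actual assessment tool; the bucket names and the mapping of the example workloads are invented for the example.

```python
# A minimal sketch of the three-question cloud-compatibility triage.
# Bucket names ("migrate-now", etc.) are made up for illustration.

def triage_workload(legacy_limited: bool, large_and_complex: bool,
                    regulatory_hold: bool) -> str:
    """Map the three assessment questions to a migration bucket."""
    if regulatory_hold:
        return "defer"              # Q3: keep as-is for compliance
    if large_and_complex:
        return "migrate-over-time"  # Q2: move gradually as controls mature
    if legacy_limited:
        return "migrate-now"        # Q1: ripe for immediate migration
    return "evaluate-later"

# Illustrative examples mirroring those in the text
print(triage_workload(True, False, False))   # CRM / HR
print(triage_workload(False, True, False))   # core financials
print(triage_workload(False, False, True))   # trading systems
```

Note that the compliance question is checked first: a regulatory hold overrides the other two answers, matching the "as-is operations, at least for the time being" framing above.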

Subsequent evaluation of existing IT infrastructure for its compatibility with cloud models is a must.  IT organizations must determine which legacy systems can be improved through the use of cloud computing, and which systems would benefit from DevOps, Agile, or other modern-day practices that can speed time to market for new applications.

Based on the results, CIOs and their teams can label some IT operations as “brownfields” that will simultaneously support legacy and new systems, and others as “greenfields” that will rely on the cloud exclusively as a way to respond more quickly to pressing business needs.

IT organizations will need to create the conditions for both simultaneously. There are three paths to achieving this goal:

  • Create parallel IT organizations to run greenfield and brownfield environments, increasing the footprint of the greenfield operation as legacy systems are retired.
  • Operate the two in parallel, as above, but with a much slower transition to greenfield systems, with the understanding that the business requires the brownfield systems in place indefinitely.
  • Use a “big bang” approach, switching from brownfield systems to greenfield all at once.

The big bang approach is typically an option for smaller firms that lack a large legacy IT environment, but it can also work for certain larger companies. For example, a global online retailer can pursue this approach because its big IT implementations are of recent vintage, making them easier to switch to the cloud.  Larger organizations may opt to pursue a version of the parallel approach.

The typical underpinning platform and transformation requirements to achieve this target state include:

  • Foundational release of a new Infrastructure-as-a-Service (IaaS) public or private cloud platform
  • Deployment of a self-service portal to provide a consistent experience across the customer base
  • Adoption of continuous development and continuous integration capabilities (e.g., Agile, DevOps)
  • Shift to a cloud-optimized application portfolio, providing flexibility to outsource applications and infrastructure as appropriate and to clone environments for test and development activities

In situations where an organization owns the underlying data center real estate, the cloud option still exists and can decrease the cost of a migration, since the data center asset can be sold via a sale-leaseback transaction.

This can free up capital from the sale of the asset while still allowing the company to operate the data center just as it always has. “Larger corporations have started to use this strategy to transition away from the traditional model of owning their data centers, while also addressing the issue of stranded capacity from overbuilt facilities and/or the ongoing capital expenditures needed to support the operations,” noted Stephen Bollier of Five 9s Digital.

Many options exist when considering how best to execute a migration and the impact it will have on a company’s operating model. With these options come greater efficiencies and potentially significant long-term cost savings.

  1. Jagdish Rebelo, “Enterprise Cloud Computing: Future Market Size, Growth and Competitive Landscape.” IHS Quarterly, Q2 2014

The Time Is Ripe for Data Center Consolidation

During the past 10 to 15 years, multinational organizations amassed dozens of data centers around the world to support business growth and IT demand. Mergers and acquisitions, regional and federated data center ownership models, and regulatory requirements mandating certain data remain in-country have all contributed to complex, sprawling data center real estate assets at many companies.

Now those companies are stuck with legacy infrastructure, hundreds if not thousands of servers and application platforms, and high fixed costs. Given that the average annual cost to operate a single data center runs between $10 million and $25 million, and that some large companies may run up to 60 of them, the average annual cost of data center operations can easily exceed $500 million.
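
The arithmetic behind that estimate is easy to verify with the figures quoted above (this is just a check of the claim, using the text's own numbers):

```python
# Rough check of the operating-cost claim: at $10M-$25M per data center
# per year, a company running up to 60 sites crosses $500M easily.
low_per_dc, high_per_dc = 10e6, 25e6   # annual cost per data center (USD)
num_centers = 60                        # upper bound cited in the text

low_total = low_per_dc * num_centers
high_total = high_per_dc * num_centers
print(f"${low_total/1e6:.0f}M - ${high_total/1e6:.0f}M per year")
```

Even at the bottom of the per-site range, 60 centers cost $600 million a year, above the $500 million threshold cited above.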

In the financial services industry, several tier-one banks are coping with data center sprawl by embarking on billion-dollar programs aimed at dramatically reducing their footprints. Their actions include shutting down older facilities, standardizing hardware, rationalizing applications, and modernizing them to run in the cloud or on shared infrastructure. Some large companies in other industries are undertaking similar initiatives: A transportation equipment manufacturer wants to consolidate more than 40 data centers down to five, while a global media company hopes to save more than $100 million by shutting down nearly 60 data centers.

We have seen these “spend-to-save” data center programs cost anywhere from $500 million to $1 billion over multiple years.  They’re often mandated by corporate boards and driven by a need to reduce annual IT operating expenses in order to reinvest those funds in other parts of the business.  The other goal of these programs is to transform monolithic data center estates into more nimble, scalable infrastructure services that allow IT organizations to react more quickly to business needs, whether for new products or services, or compressed time to market.

CIOs and CTOs initiating these data center consolidation programs are taking advantage of a number of factors, including the maturity of cloud technology, increasing options for turning fixed costs into variable costs, and the ability to “pay by the drink” for infrastructure and applications.

CIOs who are thinking ahead two to three years don’t want to be saddled with legacy infrastructure or investments they can’t redirect.  Data center consolidation and application modernization allow them to essentially ‘future-proof’ their technology footprints.

Plan the Undertaking

Obviously, a massive data center consolidation and application modernization effort is no small task. With that in mind, it is good to consider the following:

Survey your current data center estate. Take stock of data center assets and identify the total cost to run your data centers. Consider the cost of hardware, software, disaster recovery, networking, power, real estate, taxes, and labor at each site. And as you tally the number of distinct servers and application platforms, look for opportunities to standardize and rationalize.
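
The survey step above amounts to a per-site cost roll-up across those categories. A minimal sketch, with made-up site names and dollar figures (the cost categories are the ones listed in the text; everything else is illustrative):

```python
# Tally annual total cost of ownership per data center site across the
# categories named above. Site names and figures are hypothetical.
COST_CATEGORIES = ["hardware", "software", "disaster_recovery",
                   "networking", "power", "real_estate", "taxes", "labor"]

sites = {
    "dc-east": {"hardware": 4.0, "software": 2.5, "disaster_recovery": 1.0,
                "networking": 0.8, "power": 1.5, "real_estate": 1.2,
                "taxes": 0.4, "labor": 3.0},   # $ millions per year
    "dc-west": {"hardware": 2.0, "software": 1.5, "disaster_recovery": 0.5,
                "networking": 0.4, "power": 0.9, "real_estate": 0.7,
                "taxes": 0.2, "labor": 1.8},
}

def annual_cost(site_costs: dict) -> float:
    """Sum a site's costs over the standard categories."""
    return sum(site_costs.get(c, 0.0) for c in COST_CATEGORIES)

for name, costs in sorted(sites.items()):
    print(f"{name}: ${annual_cost(costs):.1f}M / year")
```

A roll-up like this also surfaces the standardization opportunities mentioned above: once every site reports against the same categories, outliers in hardware or labor spend become easy to spot.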

Assess your objectives. Why are you undertaking such an initiative? To reduce cost and complexity? Free up capital? Improve IT responsiveness? Has the board given you a number by which you need to reduce IT costs? Clarifying your objectives can help you identify areas to target.

Envision the end state. Gather requirements from stakeholders and then—with financial and strategic goals in mind—either determine what it will take to reduce costs by the target amount (if you’ve been given a specific goal), or calculate potential savings gleaned from shutting down facilities, standardization, and virtualization. Aim to develop a target state data center strategy that addresses business requirements for service levels, security, and regulatory compliance, while driving toward enhanced technology innovation, greater cost efficiency, and increased agility.

Think long-term and big picture. To the extent possible, take a three- to five-year view of business needs and the technology landscape as you plan a new data center strategy. Focus on core architectural principles rather than specific technologies and vendors. Paying too much attention to specific products may limit your ability to adapt your data center strategy as technology evolves.


A Peek Inside United Airlines’ Greenfield Data Center Project

The trip from nine data centers to a single greenfield facility for one of the country’s legacy airlines involved a little midflight turbulence but resulted in a safe landing.

United Airlines’ data center consolidation project is one of the most energy efficient in the U.S. and even makes use of a Kyoto cooling wheel.

It was complex to combine nine data centers: some were colocated, one was in a Houston high-rise and some were barely rated Uptime Tier II. All of them, however, were combined into one Uptime Tier IV data center just outside Chicago, said Tom Songaila, director of IT critical facilities and data center engineering at United Airlines.

Construction of a greenfield data center has become less common over the past five years with the growth of cloud computing and colocation providers, according to Jason dePreaux, a data center analyst with IHS, in Austin, Texas.

However, the consolidation of multiple data centers remains common, whether that results in a greenfield data center, colocation or a move to the cloud, dePreaux said.

Part of the reason United had so many data centers is its merger with Continental Airlines; the Continental data centers were included in the consolidation.

The United data center project was commissioned in 2013 and went live in 2014. Its area is 166,000 square feet on 16 acres, with enough spare room to double in size. The building is rated to withstand an EF4 tornado and seismic activity stronger than the Chicago area has ever seen before. And the data center’s available power is 4 MW — expandable to 6 MW.


Energy efficiency was a major project goal, and a big part of that was the Kyoto cooling wheel. Songaila estimates the 20-foot KyotoCooling wheel saves $1 million annually in operating costs and eliminates 19,544 metric tons of carbon dioxide output each year by forgoing chiller-based computer room air conditioners (CRACs).

The facility has an average power usage effectiveness (PUE) of 1.09. Over the next 10 years, compared with a less efficient operation, the United Airlines data center is expected to save 420 million kWh of electricity, 115 million gallons of water, $35 million in operational costs and 250,000 tons of carbon dioxide.
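
Power usage effectiveness is total facility energy divided by the energy delivered to IT equipment, so an average of 1.09 means roughly 9% overhead on top of the IT load. A quick sketch of the calculation; the 1,000 kW load below is a made-up illustration, not United's actual figure:

```python
# PUE = total facility power / power delivered to IT equipment.
# At PUE 1.09, cooling, lighting and electrical losses add ~9%
# on top of the IT load. Figures below are hypothetical.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness for a given facility and IT load."""
    return total_facility_kw / it_load_kw

it_load_kw = 1000.0
overhead_kw = 90.0          # non-IT overhead consistent with PUE 1.09
print(round(pue(it_load_kw + overhead_kw, it_load_kw), 2))   # -> 1.09
```

For context, an ideal facility with zero overhead would score exactly 1.0, which is why free-cooling designs like the Kyoto wheel push PUE so close to that floor.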

The use of free cooling is a trend in greenfield data center construction that reduces energy consumption compared with CRAC units and other power-intensive options.

Most greenfield data center projects still use OEM cooling systems from major makers such as Schneider Electric, Emerson Network Power and Siemens, but the Kyoto wheel is a great way to use free cooling.

A Liebert DS CRAC from Emerson is used for the electrical areas of United’s data center not cooled by the Kyoto wheel. There is also raised floor throughout the data center, which was chosen to make cabling easier.

United also used Emerson Network Power’s Trellis data center infrastructure management (DCIM) tool — which took more work to get up and running than Songaila had thought.  “We do expect to see the fruits of our labor,” he added.

United’s greenfield data center also has 2N backup, using two power supplies from two different substations. In addition, there is onsite diesel backup that can last for at least 48 hours. The uninterruptible power supply system also has ground-fault protection.

The integration of United’s data center and IT teams won recognition at this year’s Brill Awards for Efficient IT from the Uptime Institute. The awards highlight projects that “improve the industry’s ability to sustainably deliver IT services to end users while minimizing cost and other resources.” United Airlines was one of two winners of the Global Leadership Award; the other was the Boeing Company.


Google Pulls Back Curtain On Its Data Center Networking Setup

Posted by Frederic Lardinois – Techcrunch


While companies like Facebook have been relatively open about their data center networking infrastructure, Google has generally kept pretty quiet about how it connects the thousands of servers inside its data centers to each other (with a few exceptions). Today, however, the company revealed a bit more about the technology that lets its servers talk to each other.

It’s no secret that Google often builds its own custom hardware for its data centers, but what’s probably less known is that Google uses custom networking protocols that have been tweaked for use in its data centers instead of relying on standard Internet protocols to power its networks.

Google says its current ‘Jupiter’ networking setup — which represents the fifth generation of the company’s efforts in this area — offers 100x the capacity of its first in-house data center network. The current generation delivers 1 petabit per second of bisection bandwidth (that is, the bandwidth between two halves of the network). That’s enough to allow 100,000 servers to talk to each other at 10 Gb/s each.
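
Those figures are internally consistent, as a one-line check confirms (this is just arithmetic on the numbers quoted above):

```python
# 100,000 servers at 10 Gb/s each should equal 1 Pb/s of bisection
# bandwidth: 1 Pb/s = 10^15 bits/second.
servers = 100_000
per_server_bps = 10e9               # 10 Gb/s per server
total_bps = servers * per_server_bps
print(total_bps)                    # bits per second across the bisection
```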

Google’s technical lead for networking, Amin Vahdat, notes that the overall network control stack “has more in common with Google’s distributed computing architectures than traditional router-centric Internet protocols.”

Here is how he describes the three key principles behind the design of Google’s data center networks:

  • We arrange our network around a Clos topology, a network configuration where a collection of smaller (cheaper) switches are arranged to provide the properties of a much larger logical switch.
  • We use a centralized software control stack to manage thousands of switches within the data center, making them effectively act as one large fabric.
  • We build our own software and hardware using silicon from vendors, relying less on standard Internet protocols and more on custom protocols tailored to the data center.
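
The first bullet's Clos idea can be made concrete with the standard port-count math for a two-tier folded Clos (leaf-spine) fabric. This is a textbook sketch of how small switches compose into one large logical switch, not a description of Google's actual Jupiter topology:

```python
# Port math for a nonblocking two-tier folded Clos (leaf-spine) fabric
# built from identical k-port switches: half of each leaf's ports face
# hosts, half are uplinks (one to each spine), and each k-port spine
# reaches k leaves, so the fabric supports k * (k/2) hosts in total.

def clos_capacity(ports_per_switch: int) -> int:
    """Max host count at 1:1 oversubscription for k-port switches."""
    k = ports_per_switch
    hosts_per_leaf = k // 2   # leaf ports facing hosts
    max_leaves = k            # limited by spine port count
    return max_leaves * hosts_per_leaf

print(clos_capacity(64))   # 64-port switches -> a 2048-host logical switch
```

The quadratic scaling (k²/2 hosts from k-port switches) is exactly the "properties of a much larger logical switch" the bullet describes: capacity grows much faster than the size of any individual switch.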

Sadly, there isn’t all that much detail here — especially compared to some of the information Facebook has shared in the past. Hopefully Google will release a bit more in the months to come. It would be especially interesting to see how its own networking protocols work and hopefully the company will publish a paper or two about this at some point.


Google Expands Its Data Centers In Asia As Millions Come Online For First Time

by Jon Russell TechCrunch


Here’s good news for those of us based in Asia: Google is increasing the capacity of its two data centers in the region. That means better-performing sites and services for those of us near its facilities in Singapore and Taiwan.

The U.S. company today announced that its data center in Singapore, which opened its doors just 18 months ago, will be extended with the addition of “a second, larger, multilevel data center” right next door. This expansion will take Google’s spending on its Singapore site to $500 million, which, combined with the $600 million it allocated to Taiwan, takes it past $1 billion for Asia.

The new center in Singapore is due to be completed and online in two years. For now, Google has shared a colorful but “not final” rendering of what it could look like — the most important part, however, is that it will help Google websites and services load faster across Southeast Asia, India and the wider Asian region.


Google also appears to be planning a further expansion to its other Asia-based data center, which is in Taiwan and opened in 2013.

Media reports last year speculated that the company would invest $66 million to increase its capacity — following an earlier $100 million expansion. A Google representative declined to comment on plans for the site in Taiwan when we asked, but the company did openly say it would expand the site — which is located in Changhua County — when it was announced in December 2013.

These expansions are particularly interesting given that Google abandoned plans for a third data center, located in Hong Kong, back in 2013. (That was a busy year for Google data centers in Asia.) Given real estate prices in Asia, expanding its existing sites over time may have been preferable to establishing an entirely new one — Google declined to comment on that, however.

Google’s data centers don’t exclusively serve customers in their immediate proximity: centers in the U.S. and Europe also serve Asia, and those in Asia can serve the U.S. too. But users located close to a data center do enjoy faster-running services. So, while Google’s Asia-based users aren’t solely reliant on these expansions, they are most definitely good news.

The U.S. company is beefing up its server capacity across the world — a $600 million center just went online in Oregon — but the rise of mobile internet in Asia has made expansions in the region particularly important. India alone is thought to have added 40 million new internet users during the first half of this year, which gives an indication of the increased load that Google and other companies are dealing with.
