Data Center For Sale – New Price!

For Sale – Data Center in Raleigh, North Carolina 

October 19, 2016

A data center in the Raleigh-Durham market is now for sale at a new price. The 30,000 SF vacant facility is available for $4.7M. Features and specifics include:

  • Robust data center market in Raleigh-Durham NC area
  • Excellent condition
  • 2 MW Generator and 12,000 gallon fuel storage
  • Secured equipment yard; high-impact concrete bollards surround the facility
  • Card access and security cameras interior and exterior
  • Mechanical and electrical in place
  • Pre-action dry pipe fire suppression system
  • AT&T, Level 3, Verizon and Time Warner
  • Dual access, diverse fiber entrances
  • Low cost, robust power by Duke Energy


For more information about this data center, please visit Raleigh Data Center For Sale.


Windstream to Sell Data Center Business for $575M

October 19, 2015

LITTLE ROCK, Ark., Oct 19, 2015 (GLOBE NEWSWIRE via COMTEX) —

Windstream (NASDAQ: WIN), a leading provider of advanced network communications, today announced that it has entered into a definitive agreement with TierPoint, a leading national provider of cloud, colocation and managed services, to sell Windstream’s data center business in an all-cash transaction for $575 million.

As part of the transaction, Windstream will establish an ongoing reciprocal strategic partnership with TierPoint, allowing both companies to sell their respective products and services to each other’s prospective customers through referrals. This structure will allow Windstream to focus capital on its core telecom offerings while continuing to offer traditional data center services to enterprise customers across a broader data center footprint.

“Data center services will remain an integral component of our enterprise service offering,” said Tony Thomas, president and CEO. “We expect the divested data center business to continue its significant growth under the leadership of TierPoint, and we look forward to partnering closely with them to provide advanced data center services to our enterprise customers.”

“This is a great strategic fit for TierPoint and our customers,” said Jerry Kent, Chairman and CEO for TierPoint. “Windstream Hosted Solutions and its employees have earned a reputation for providing excellent customer service and innovative enterprise-class solutions. We value these team members as a key asset in the acquisition and their expertise adds to our strength and focus on providing a superior level of customer care. We’re also very pleased to enter into a long-term strategic partnership with Windstream, allowing both companies to leverage the expertise and respective strengths of our organizations.”

The boards of both companies have approved the transaction, which is expected to close within the next 2-4 months, subject to customary conditions and approvals.

The data centers being divested generated the following financial results:
(Dollars in millions)            3 Months Ended June 30, 2015    Second Quarter 2015 Annualized
Revenue                          $30.5                           $122.0
Adjusted OIBDA                   $10.2                           $40.8

Non-GAAP Financial Measures

A reconciliation of this measure to the most directly comparable GAAP measure is presented below:

(Dollars in millions)            3 Months Ended June 30, 2015    Second Quarter 2015 Annualized
Operating loss under GAAP        $(2.0)                          $(8.0)
Depreciation and amortization    12.0                            48.0
Stock-based compensation         0.2                             0.8
Adjusted OIBDA                   $10.2                           $40.8

About Windstream

Windstream, a FORTUNE 500 company, is a leading provider of advanced network communications and technology solutions, including cloud computing and managed services, to businesses nationwide. The company also offers broadband, phone and digital TV services to consumers primarily in rural areas. For more information, visit the company’s online newsroom at news.windstream.com or follow on Twitter at @WindstreamNews.

About TierPoint

TierPoint is a leading national provider of cloud, colocation and managed services designed to help organizations improve business performance and manage risk. With corporate headquarters in St. Louis, MO, TierPoint operates highly-redundant, carrier-neutral data centers in the states of Washington, Texas, Oklahoma, Pennsylvania, Maryland, New York, Massachusetts, Connecticut and Florida.


Cloud Enables Next-Gen IT Operating Model

Transforming IT operations from traditional infrastructure to cloud computing can deliver notable financial and business benefits. But leading CIOs are using cloud solutions as the pivot point to a new IT operating model. The enterprise cloud market is experiencing rapid growth and is expected to top $250 billion by 2017, up from $70 billion in 2015.¹ A maturing array of cloud infrastructure, platform, and application services is now available for enterprise adoption, enabling increased utilization, faster time to market, and a more favorable return on IT investments. These advancements are prompting CIOs to rely more heavily on new, cloud-based delivery models in response to business demands for increased agility, cost transparency, improved service quality, and better risk management. For example, Netflix Inc. recently announced the closure of its last data center this summer in a move toward infrastructure that is “fully reliant on Amazon Web Services.”

Yet, notwithstanding the business case for cloud, aggressive adoption by nimble organizations such as Netflix, and forecasted growth in enterprise cloud investment, the journey to the cloud remains much more staggered and deliberate for many organizations. Moving mainstream technology workloads to the cloud frequently entails re-evaluating long-standing IT infrastructure and organization practices. Medium to large enterprises, in particular, need time to properly prepare to integrate, aggregate, and orchestrate cloud and on-premise assets while providing internal development teams with easy-to-consume services at an attractive price point.

Perhaps more significant, contemporary CIOs see their cloud strategies as part and parcel of their efforts to build the capabilities for a next-generation IT operating model. As such, cloud migration calls for careful planning.

The process begins with an examination of existing IT capabilities: first, evaluate all IT-supported business functions to assess their compatibility with the cloud by answering three questions (a brief classification sketch follows the list):

  1. Which business functions rely on legacy technology that limits their effectiveness or efficiency, and would benefit from a change to cloud-based systems? These functions are ripe for immediate migration. Typical examples include CRM and human resources.
  2. Which business systems are so large and complex that replacing their IT footprint outright would risk destabilizing them? These systems can be migrated to the cloud over time as technology matures, security/audit controls catch up, and internal emotional barriers subside. Typical examples include core financials and processing functions.
  3. Which systems require as-is operations, at least for the time being, for regulatory compliance purposes? Migration of these systems to the cloud can be deferred. Typical examples include trading systems and health care information systems containing personally identifiable patient information.
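
To make the triage concrete, here is a minimal Python sketch of the three-question classification described above; the attribute names, wave labels, and example portfolio are illustrative assumptions, not a prescribed methodology:

    from dataclasses import dataclass

    @dataclass
    class BusinessFunction:
        name: str
        legacy_constrained: bool   # Q1: does legacy tech limit effectiveness?
        large_and_complex: bool    # Q2: would replacement risk destabilization?
        regulated_as_is: bool      # Q3: does compliance require as-is operation?

    def migration_wave(fn: BusinessFunction) -> str:
        if fn.regulated_as_is:
            return "defer"               # e.g. trading, patient-data systems
        if fn.large_and_complex:
            return "migrate over time"   # e.g. core financials, processing
        if fn.legacy_constrained:
            return "migrate now"         # e.g. CRM, human resources
        return "re-evaluate later"

    portfolio = [
        BusinessFunction("CRM", True, False, False),
        BusinessFunction("Core financials", False, True, False),
        BusinessFunction("Trading platform", False, True, True),
    ]
    for fn in portfolio:
        print(f"{fn.name}: {migration_wave(fn)}")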

Next, evaluate existing IT infrastructure for its compatibility with cloud models. IT organizations must determine which legacy systems can be improved through the use of cloud computing, and which systems would benefit from DevOps, Agile, or other modern practices that can speed time to market for new applications.

Based on the results, CIOs and their teams can label some IT operations as “brownfields” that will simultaneously support legacy and new systems, and others as “greenfields” that will rely on the cloud exclusively as a way to respond more quickly to pressing business needs.

IT organizations will need to create the conditions for both simultaneously. There are three paths for achieving this goal:

  • Create parallel IT organizations to run greenfield and brownfield environments, increasing the footprint of the greenfield operation as legacy systems are retired.
  • Operate the two in parallel, as above, but with a much slower transition to greenfield systems, with the understanding that the business requires the brownfield systems in place indefinitely.
  • Use a “big bang” approach, switching from brownfield systems to greenfield all at once.

The big bang approach is typically an option for smaller firms that lack a large legacy IT environment, but it can also work for certain larger companies. For example, a global online retailer can pursue this approach because its big IT implementations are of recent vintage, making them easier to switch to the cloud.  Larger organizations may opt to pursue a version of the parallel approach.

The typical underpinning platform and transformation requirements to achieve this target state include:

  • Foundational release of a new Infrastructure-as-a-Service (IaaS) public or private cloud platform
  • Deployment of a self-service portal to provide a consistent experience across customer base
  • Adoption of continuous development and continuous integration capabilities (e.g. Agile, DevOps)
  • Shift to a cloud-optimized application portfolio, providing the flexibility to outsource applications and infrastructure as appropriate and to clone environments for test and development activities

In situations where an organization owns the underlying data center real estate, the cloud option still exists and can decrease the cost of a migration, since the data center asset can be sold via a sale-leaseback transaction.

This can free capital from the sale of the asset while still allowing the company to operate the data center just as it always has. “Larger corporations have started to use this strategy to transition away from the traditional model of owning their data centers, while also addressing the issue of stranded capacity from overbuilt facilities and/or the ongoing capital expenditures needed to support the operations,” noted Stephen Bollier of Five 9s Digital.
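
As a rough illustration of the economics, here is a back-of-the-envelope sale-leaseback sketch in Python; the sale price and cap rate are assumed figures for illustration, not numbers from the article:

    # All figures are assumptions; actual pricing depends on the asset,
    # market cap rates, and negotiated lease terms.
    sale_price = 40_000_000               # hypothetical proceeds from the sale
    cap_rate = 0.08                       # hypothetical market capitalization rate
    annual_lease = sale_price * cap_rate  # implied annual lease payment

    print(f"Capital freed at close: ${sale_price:,}")
    print(f"Implied annual lease:   ${annual_lease:,.0f}")
    # The operator trades a fixed real-estate asset for liquidity plus an
    # operating expense, while running the facility exactly as before.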

Many options exist when considering how best to execute a migration and the impact it will have on a company’s operating model. With these options come greater efficiencies and potentially significant long-term cost savings.

  1. Jagdish Rebelo, “Enterprise Cloud Computing: Future Market Size, Growth and Competitive Landscape.” IHS Quarterly, Q2 2014

The Time Is Ripe for Data Center Consolidation

During the past 10 to 15 years, multinational organizations amassed dozens of data centers around the world to support business growth and IT demand. Mergers and acquisitions, regional and federated data center ownership models, and regulatory requirements mandating certain data remain in-country have all contributed to complex, sprawling data center real estate assets at many companies.

Now those companies are stuck with legacy infrastructure, hundreds if not thousands of servers and application platforms, and high fixed costs. Given that the average annual cost to operate a single data center runs between $10 million and $25 million, and that some large companies may run up to 60 of them, the average annual cost of data center operations can easily exceed $500 million.
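
The arithmetic behind that claim is straightforward; a quick sanity check in Python using the article’s own figures:

    # Annual fleet cost from the per-site range and fleet size cited above.
    low, high = 10_000_000, 25_000_000   # annual cost per data center
    fleet = 60                           # sites at some large companies

    print(f"Low end:  ${low * fleet / 1e6:,.0f}M per year")   # $600M
    print(f"High end: ${high * fleet / 1e6:,.0f}M per year")  # $1,500M
    # Even at the bottom of the per-site range, 60 sites clear $500M a year.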

In the financial services industry, several tier one banks are coping with data center sprawl by embarking on billion dollar programs aimed at dramatically reducing their footprints. Their actions include shutting down older facilities, standardizing hardware, rationalizing applications, and modernizing them to run in the cloud or on shared infrastructures. Some large companies in other industries are undertaking similar initiatives: A transportation equipment manufacturer wants to consolidate more than 40 data centers down to five, while a global media company hopes to save more than $100 million by shutting down nearly 60 data centers.

We have seen these “spend-to-save” data center programs cost anywhere from $500 million to $1 billion over multiple years.  They’re often mandated by corporate boards and driven by a need to reduce annual IT operating expenses in order to reinvest those funds in other parts of the business.  The other goal of these programs is to transform monolithic data center estates into more nimble, scalable infrastructure services that allow IT organizations to react more quickly to business needs, whether for new products or services, or compressed time to market.

CIOs and CTOs initiating these data center consolidation programs are taking advantage of a number of factors, including the maturity of cloud technology, increasing options for turning fixed costs into variable costs, and the ability to “pay by the drink” for infrastructure and applications.

CIOs who are thinking ahead two to three years don’t want to be saddled with legacy infrastructure or investments they can’t redirect.  Data center consolidation and application modernization allow them to essentially ‘future-proof’ their technology footprints.

Plan the Undertaking

Obviously, a massive data center consolidation and application modernization effort is no small task. With that in mind, it is good to consider the following:

Survey your current data center estate. Take stock of data center assets and identify the total cost to run your data centers. Consider the cost of hardware, software, disaster recovery, networking, power, real estate, taxes, and labor at each site. And as you tally the number of distinct servers and application platforms, look for opportunities to standardize and rationalize.
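
A simple cost inventory can make this survey concrete. The sketch below mirrors the cost categories listed above; the dollar figures are placeholders, not benchmarks:

    # Per-site annual run cost, broken down by the categories above.
    site_costs = {
        "hardware": 2_400_000, "software": 1_800_000,
        "disaster_recovery": 900_000, "networking": 600_000,
        "power": 1_200_000, "real_estate": 800_000,
        "taxes": 300_000, "labor": 2_000_000,
    }

    total = sum(site_costs.values())
    print(f"Annual run cost for this site: ${total:,}")
    for item, cost in sorted(site_costs.items(), key=lambda kv: -kv[1]):
        print(f"  {item:>17}: {cost / total:5.1%}")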

Assess your objectives. Why are you undertaking such an initiative? To reduce cost and complexity? Free up capital? Improve IT responsiveness? Has the board given you a number by which you need to reduce IT costs? Clarifying your objectives can help you identify areas to target.

Envision the end state. Gather requirements from stakeholders and then—with financial and strategic goals in mind—either determine what it will take to reduce costs by the target amount (if you’ve been given a specific goal), or calculate potential savings gleaned from shutting down facilities, standardization, and virtualization. Aim to develop a target state data center strategy that addresses business requirements for service levels, security, and regulatory compliance, while driving toward enhanced technology innovation, greater cost efficiency, and increased agility.

Think long-term and big picture. To the extent possible, take a three- to five-year view of business needs and the technology landscape as you plan a new data center strategy. Focus on core architectural principles rather than specific technologies and vendors. Paying too much attention to specific products may limit your ability to adapt your data center strategy as technology evolves.


A peek inside United Airlines Greenfield Data Center Project

The trip from nine data centers to a single greenfield facility for one of the country’s legacy airlines involved a little midflight turbulence but resulted in a safe landing.

United Airlines’ data center consolidation project is one of the most energy-efficient in the U.S. and even makes use of a Kyoto cooling wheel.

Combining nine data centers was complex — some were colocated, one was in a Houston high-rise, and some were barely rated Uptime Tier II — but all of them were consolidated into one Uptime Tier IV data center just outside Chicago, said Tom Songaila, director of IT critical facilities and data center engineering at United Airlines.

Construction of a greenfield data center has become less common over the past five years with the growth of cloud computing and colocation providers, according to Jason dePreaux, a data center analyst with IHS, in Austin, Texas.

However, the consolidation of multiple data centers remains common, whether that results in a greenfield data center, colocation or a move to the cloud, dePreaux said.

Part of the reason United had so many data centers is its merger with Continental Airlines — the Continental data centers were included in the consolidation.

The United data center project was commissioned in 2013 and went live in 2014. Its area is 166,000 square feet on 16 acres, with enough spare room to double in size. The building is rated to withstand an EF4 tornado and seismic activity stronger than the Chicago area has ever seen before. And the data center’s available power is 4 MW — expandable to 6 MW.


Energy efficiency was a major project goal, and a big part of that was the Kyoto cooling wheel. Songaila estimates the 20-foot KyotoCooling wheel saves $1 million annually in operating costs and eliminates 19,544 metric tons of carbon dioxide output each year by forgoing chiller-based computer room air conditioners (CRACs).

The facility has an average power usage effectiveness (PUE) of 1.09. Over the next 10 years, compared to a less efficient operation, the United Airlines data center is expected to save 420 million kWh of electricity, 115 million gallons of water, $35 million in operational costs and 250,000 tons of carbon dioxide.
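
For context, PUE is total facility energy divided by IT equipment energy, so a 1.09 average means roughly 9 percent overhead beyond the IT load. The sketch below shows the arithmetic; the IT load and baseline PUE are assumptions for illustration, and only the 1.09 figure comes from United:

    # Annual energy at a given PUE. Only 1.09 is from the article; the
    # 4 MW IT load and 1.50 baseline are assumed for comparison.
    it_load_kw = 4_000
    hours_per_year = 8_760

    def annual_kwh(pue: float) -> float:
        # PUE = total facility energy / IT energy, so total = IT load * PUE
        return it_load_kw * pue * hours_per_year

    efficient = annual_kwh(1.09)   # United's reported average
    baseline = annual_kwh(1.50)    # assumed less efficient operation

    print(f"Annual savings: {(baseline - efficient) / 1e6:.1f} GWh")
    # United's 420 million kWh decade figure implies a higher baseline
    # PUE and/or load than assumed here.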

The use of free cooling is a trend in greenfield data center construction that reduces the energy consumed by CRACs and other power-intensive options.

Most greenfield data center projects still use OEM cooling systems from major makers such as Schneider Electric, Emerson Network Power and Siemens, but the Kyoto wheel is a great way to use free cooling.

A Liebert DS CRAC from Emerson is used for the electrical areas of United’s data center not cooled by the Kyoto wheel. There is also raised floor throughout the data center, which was chosen to make cabling easier.

United also used Emerson Network Power’s Trellis data center infrastructure management (DCIM) tool — which took more work to get up and running than Songaila had thought.  “We do expect to see the fruits of our labor,” he added.

United’s greenfield data center also has 2N backup, using two power feeds from two different substations. In addition, there is onsite diesel backup that can last for at least 48 hours. The uninterruptible power supply system also has ground-fault protection.

The integration of United’s data center and IT teams won recognition at this year’s Brill Awards for Efficient IT from the Uptime Institute. The awards highlight projects that “improve the industry’s ability to sustainably deliver IT services to end users while minimizing cost and other resources.” United Airlines was one of two winners of the Global Leadership Award; the other was the Boeing Company.


Google Pulls Back Curtain On Its Data Center Networking Setup

Posted by Frederic Lardinois – Techcrunch


While companies like Facebook have been relatively open about their data center networking infrastructure, Google has generally kept pretty quiet about how it connects the thousands of servers inside its data centers to each other (with a few exceptions). Today, however, the company revealed a bit more about the technology that lets its servers talk to each other.

It’s no secret that Google often builds its own custom hardware for its data centers, but what’s probably less known is that Google uses custom networking protocols that have been tweaked for use in its data centers instead of relying on standard Internet protocols to power its networks.

Google says its current ‘Jupiter’ networking setup — which represents the fifth generation of the company’s efforts in this area — offers 100x the capacity of its first in-house data center network. The current generation delivers 1 petabit per second of bisection bandwidth (that is, the bandwidth between two halves of the network). That’s enough to allow 100,000 servers to talk to each other at 10 Gb/s each.
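
That per-server figure follows directly from the bisection number; a quick sanity check:

    # 1 Pb/s of bisection bandwidth shared across 100,000 servers.
    bisection_bps = 1e15     # 1 petabit per second
    servers = 100_000

    per_server_gbps = bisection_bps / servers / 1e9
    print(f"{per_server_gbps:.0f} Gb/s per server")  # -> 10 Gb/s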

Google’s technical lead for networking, Amin Vahdat, notes that the overall network control stack “has more in common with Google’s distributed computing architectures than traditional router-centric Internet protocols.”

Here is how he describes the three key principles behind the design of Google’s data center networks (a generic capacity sketch follows the list):

  • We arrange our network around a Clos topology, a network configuration where a collection of smaller (cheaper) switches are arranged to provide the properties of a much larger logical switch.
  • We use a centralized software control stack to manage thousands of switches within the data center, making them effectively act as one large fabric.
  • We build our own software and hardware using silicon from vendors, relying less on standard Internet protocols and more on custom protocols tailored to the data center.
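
To see why the Clos arrangement scales, consider the standard two-tier (leaf-spine) arithmetic. This sketch is generic Clos math, not a description of Jupiter’s actual topology:

    # Many k-port switches combine into one large logical switch.
    def leaf_spine_capacity(k: int):
        leaves = k                  # each spine port feeds a distinct leaf
        spines = k // 2             # each leaf splits ports half up, half down
        hosts = leaves * (k // 2)   # non-blocking host count = k^2 / 2
        return leaves, spines, hosts

    for k in (32, 64, 128):
        leaves, spines, hosts = leaf_spine_capacity(k)
        print(f"{k}-port switches: {leaves} leaves, {spines} spines, {hosts} hosts")
    # 64-port switches already yield a 2,048-host non-blocking fabric;
    # adding a third tier multiplies capacity again.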

Sadly, there isn’t all that much detail here — especially compared to some of the information Facebook has shared in the past. Hopefully Google will release a bit more in the months to come. It would be especially interesting to see how its own networking protocols work and hopefully the company will publish a paper or two about this at some point.


Google Expands Its Data Centers In Asia As Millions Come Online For First Time

by Jon Russell TechCrunch

Screen Shot 2015-06-02 at 1.11.15 PM

Here’s good news for those of us based in Asia: Google is increasing the capacity of its two data centers in the region. That means better-performing sites and services for those of us near its facilities in Singapore and Taiwan.

The U.S. company today announced that its data center in Singapore, which opened its doors just 18 months ago, will be extended with the addition of “a second, larger, multilevel data center” right next door. This expansion will take Google’s spending on its Singapore site to $500 million, which, combined with the $600 million it allocated to Taiwan, takes it past $1 billion for Asia.

The new center in Singapore is due to be completed and online in two years. For now, Google has shared a colorful but “not final” rendering of what it could look like — the most important part, however, is that it will help Google websites and services load faster across Southeast Asia, India and the wider Asian region.


Google also appears to be planning a further expansion to its other Asia-based data center, which is in Taiwan and opened in 2013.

Media reports last year speculated that the company would invest $66 million to increase its capacity — following an earlier $100 million expansion. A Google representative declined to comment on plans for the site in Taiwan when we asked, but the company did openly say it would expand the site — which is located in Changhua County — when it was announced in December 2013.

These expansions are particularly interesting given that Google abandoned plans for a third data center, located in Hong Kong, back in 2013. (That was a busy year for Google data centers in Asia.) Given real estate prices in Asia, expanding its existing sites over time may have been preferable to establishing an entirely new one — Google declined to comment on that, however.

Google’s data centers don’t exclusively serve customers in their immediate proximity — centers in the U.S. and Europe also serve Asia, and those in Asia can serve the U.S. — but users who are located close to a data center do enjoy faster-running services. So, while Google’s Asia-based users aren’t solely reliant on these expansions, they are most definitely good news.

The U.S. company is beefing up its server capacity across the world — a $600 million center just went online in Oregon — but the rise of mobile internet in Asia has made expansions in the region particularly important. India alone is thought to have added 40 million new internet users during the first half of this year, which gives an indication of the increased load that Google and other companies are dealing with.


QTS Realty Trust to Acquire Carpathia Hosting for $326M


OVERLAND PARK, Kan., May 6, 2015 /PRNewswire/ — QTS Realty Trust (NYSE: QTS) announced today that it has reached an agreement to acquire Virginia-based colocation, cloud and managed services provider, Carpathia Hosting, Inc. for approximately $326 million. Carpathia is a leading hybrid cloud services and Infrastructure-as-a-Service (IaaS) provider offering a high level of security and compliance solutions to sophisticated enterprise customers and federal agencies.

The transaction strengthens QTS’ existing, fully integrated real estate and technology services platform, known as QTS’ 3C’s: custom data center (C1), colocation (C2) and cloud and managed services (C3). The combined companies will service over 1,000 customers in North America, Europe and Asia Pacific.

“Carpathia will be a great complement to our existing platform, enhancing and expanding our premium product and service offering, while delivering added value to customers, shareholders and employees,” said Chad Williams, Chief Executive Officer – QTS. “The addition of Carpathia, with its seasoned management team, will accelerate QTS’ already industry-leading performance, and further strengthen QTS’ unique integrated technology services platform that enterprise customers increasingly require.”

The transaction will provide a number of strategic benefits to QTS, including:

  • Enhances QTS’ proven and well-established 3C integrated services platform with additional scale and product mix in C2 and C3
  • Accelerates QTS’ federal business through Carpathia’s strong momentum with federal agencies and the ability to leverage this business with QTS’ Richmond mega data center
  • Adds access to government business through numerous federal Authorizations to Operate (ATOs) with federal civilian and DoD agencies, and a Provisional ATO from the FedRAMP Joint Authorization Board (JAB)
  • Provides highly complementary capabilities in security and compliance solutions designed for sophisticated enterprise and federal customers
  • Deepens the QTS customer base with approximately 230 customers and provides the ability to cross-sell QTS’ infrastructure and Carpathia’s product mix to a combined group of over 1,000 customers
  • Diversifies and expands QTS’ geographic footprint, adding strategic international capabilities
  • Promotes strategic partnership and offering with VMware vCloud® Government Service provided by Carpathia™ (vCGS)
  • Fosters near-term and long-term financial benefits including growth, scale, and immediate accretion
  • Expands QTS’ executive ranks to include the Carpathia leadership team

“We admire the leadership position QTS has taken in understanding the evolving needs of the industry and building dynamic solutions to meet customer demand,” said Peter Weber, Chief Executive Officer – Carpathia. “Joining QTS means leveraging common strengths and continuing the development of innovative hybrid cloud solutions for enterprise and public sector customers. Furthermore, the existing Carpathia customer base will benefit significantly from QTS’ world-class mega data center infrastructure for their integrated data center needs.”

Peter Weber will join the QTS executive team as Chief Product Officer. The remainder of the Carpathia executive team will continue to play key roles in further developing and delivering products and services that customers seek and value.

Financial Impact
The transaction is expected to be immediately accretive to QTS, adding an estimated $0.01 per share to OFFO and an estimated $0.10 per share to AFFO in 2015, and adding an estimated $0.10 per share to OFFO and an estimated $0.25 per share to AFFO in 2016. The terms of the agreement call for QTS to purchase Carpathia for approximately $290m and to assume approximately $36m of capital lease obligations, for a total enterprise value of approximately $326m. Carpathia is expected to contribute on a second-half 2015 annualized basis approximately $90m in revenue and approximately $32m in annualized Adjusted EBITDA. In addition, QTS anticipates an additional $2m in synergies to begin to ramp in 2016, resulting in an anticipated purchase price multiple of approximately 9.6x 2015 annualized Adjusted EBITDA, pro forma for synergies. In addition, QTS is expecting to incur approximately $7m in fees and expenses associated with the transaction and approximately $15m in one-time integration costs to be incurred through 2016.
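
The purchase-multiple math can be reconstructed directly from the figures in the release:

    # Enterprise value and multiple, pro forma for synergies.
    cash_price = 290_000_000
    capital_leases = 36_000_000
    enterprise_value = cash_price + capital_leases   # ~$326M

    annualized_ebitda = 32_000_000
    synergies = 2_000_000

    multiple = enterprise_value / (annualized_ebitda + synergies)
    print(f"{multiple:.1f}x adjusted EBITDA")   # -> 9.6x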

QTS expects to finance the transaction through available capacity under the Company’s credit facilities, and will ultimately look to a blend of equity and debt securities to finance the acquisition on a leverage-neutral basis.

The transaction is expected to close mid-year 2015, subject to the fulfillment of customary closing conditions.

Management will host a call Wednesday, May 6, 2015 at 4:00 p.m. Central / 5:00 p.m. Eastern with additional details surrounding the transaction. The dial-in number for the conference call is (877) 883-0383 (U.S.) or (412) 902-6506 (International). The participant entry number is 3726112 and callers are asked to dial in ten minutes prior to start time. A link to the live broadcast and the replay will be available on the Company’s website (www.qtsdatacenters.com) under the Investors tab.

Financial and Legal Advisors
Deutsche Bank Securities Inc. acted as lead financial advisor and TD Securities acted as financial advisor to QTS. Hogan Lovells acted as legal advisor to QTS. Credit Suisse acted as exclusive financial advisor to Carpathia’s controlling shareholder, Spire Capital Partners, and Dentons acted as legal advisor to Carpathia and Spire Capital Partners.

About QTS
QTS Realty Trust, Inc. (NYSE: QTS) is a leading national provider of data center solutions and fully managed services and a leader in security and compliance. The company offers a complete, unique portfolio of core data center products, including custom data center (C1), colocation (C2) and cloud and managed services (C3), providing the flexibility, scale and security needed to support the rapidly evolving hybrid infrastructure demands of web and IT applications. With 12 data centers in eight states, QTS owns, operates and manages approximately 4.7 million square feet of secure, state-of-the-art data center infrastructure and supports more than 850 customers. QTS’ Critical Facility Management (CFM) can provide increased efficiency and greater performance for third-party data center owners and operators.


Amazon Web Services test driving Tesla batteries in data centers

By Rachel King for Between the Lines | May 1, 2015

Tesla turned up the spotlight earlier this week with a portfolio of new power-fueling (and hopefully power-saving) innovations for homes and businesses.

The California car maker often garners attention through the cutting-edge designs and breakthroughs demonstrated predominantly through its lineup of electric vehicles.

Now the premium motor company founded by Elon Musk is channeling that energy into somewhere a little less flashy but all the more needed these days: data centers.

Among those already sampling Tesla’s battery innovations is none other than one of the largest data center operators worldwide: Amazon Web Services, which company founder and CEO Jeff Bezos recently boasted is a $5 billion business.

Describing itself as “not just an automotive company” but rather “an energy innovation company,” Tesla touted how it is taking some of the same architectures and components from its electric vehicles and bringing them to energy storage systems.

Namely, Tesla is experimenting with integrating batteries with power electronics, thermal management, and controls, wrangling them together into a turnkey system.

“Tesla’s energy storage allows businesses to capture the full potential of their facility’s solar arrays by storing excess generation for later use and delivering solar power at all times,” the Palo Alto, Calif.-headquartered business asserted. “Business Storage anticipates and discharges stored power during a facility’s times of highest usage, reducing the demand charge component of energy bills.”

James Hamilton, a distinguished engineer at AWS, revealed that AWS has already been testing applications running on Tesla’s high-capacity battery technology over the last year.

The hope, Hamilton explained, is that such energy efficient measures could encourage “widespread adoption of renewables in the grid.”

“Batteries are important for both data center reliability and as enablers for the efficient application of renewable power,” Hamilton wrote in prepared remarks. “They help bridge the gap between intermittent production, from sources like wind, and the data center’s constant power demands.”

AWS plans to roll out a 4.8-megawatt-hour pilot of Tesla’s energy storage batteries, starting with its US West (Northern California) Region. AWS has four regions stateside (including one dedicated to government applications) with half a dozen more scattered around the globe.

Hamilton said the soft launch fits in with Amazon’s long-term strategy to eventually power its global infrastructure entirely with renewable energy.


Facebook Investing Heavily in Webscale Data Centers – CIO Journal

By STEVE ROSENBUSH WSJ

Facebook Inc. (NASDAQ: FB) is investing heavily, no, make that massively, in cloud-based data centers that are capable of delivering video and other complex digital services to its billion-plus users. Total costs and expenses rose 83% from the year-earlier period, far outpacing a 42% increase in revenue, and research and development spending more than doubled, to $1.06 billion, the WSJ’s Alistair Barr and Deepa Seetharaman report.

A technician works on a computer while testing servers at the Facebook Inc. hardware labs in Menlo Park, Calif., April 7, 2014.

“Lurking in the background, though, is Facebook’s heavy spending on data centers to deliver services and long-term projects such as virtual reality and Internet access beamed from solar-powered drones. Facebook needs lots of computers, places to run them, and the means to connect them to end users around the world,” they say.

Such technology is steadily making its way into the corporate mainstream. Bank of America Corp. is considering the ideas behind Facebook’s data center in Prineville, Ore., as it remakes its own IT infrastructure, as CIO Journal reported. Open standards, commodity hardware, and compute, storage and networking systems that scale out to accommodate wild levels of user demand are the order of the day.

Other CIO Journal News:

Addressing the California drought requires access to accurate data. California, now in its fourth year of drought, is collecting more data than ever from utilities, municipalities and other water providers about just how much water flows through their pipes. But some say the data-collection process, built on monthly self-reporting and spreadsheets, could be better, allowing for more effective, fine-tuned management of water. “More data and better data will allow for more nuanced approaches and potentially allow the water system to function more efficiently,” Ted Grantham, a research scientist with the U.S. Geological Survey, tells CIO Journal.

EMC CEO says IT in midst of big secular shift. History wasn’t particularly kind to EMC Corp. during the latest quarter, but CEO Joe Tucci says medium- and long-term investments in areas such as an open, cloud-based software development platform, security and software-defined everything will pay off.
