How Amazon Stays On Top in the Cloud Wars
Amazon Web Services was the first major player in cloud computing, and has maintained its clear leadership position as rivals like Microsoft and Google have poured billions of dollars into competing platforms.
How has Amazon, a company with its roots in online retailing, managed to dominate one of the major tech battlegrounds of our era?
The answer? Amazon has done many things right:
The company’s early adoption among developers gave it an unparalleled ecosystem of experimenters.
Requests for new features allowed AWS to maintain its lead in functionality.
Its focus on scale and data center automation has created an extremely efficient infrastructure, making it resistant to pricing pressures.
Through education and certification services, Amazon has cultivated a talent pool skilled in AWS and its services, expanding beyond its initial developer-focused base into all market segments.
The competition, however, is growing. New clouds are rising and prices are being aggressively cut. In order to understand Amazon’s ability to maintain its leadership, it is important to look at how it built its cloud empire.
The Early Days: A New Business Model
Many companies say they’ve been doing cloud computing for years. However, Amazon was the first to offer a true utility compute service, in 2006, and that allowed it to gain an early lead.
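What “utility compute” means in practice is easiest to see in code. The following is a minimal sketch using the boto3 SDK (a later tool than the original 2006 API); the AMI ID is a placeholder, not a real image.

```python
# A minimal sketch of the "utility compute" model using the boto3 SDK.
# The AMI ID below is a placeholder, not a real image.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Rent capacity on demand: launch a single small instance...
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}; billing starts now.")

# ...and hand it back when the work is done, so billing stops.
ec2.terminate_instances(InstanceIds=[instance_id])
```

No purchase orders, no racking servers: capacity is rented by the hour and released when it is no longer needed, much like electricity.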
Rackspace soon followed into the market, acquiring online storage provider Jungle Disk and VPS host Slicehost in 2008 to form the basis of its cloud offering. The business models of the two companies were very different; Rackspace focused on providing the best customer experience, positioning itself as a premium cloud product. The same tactic of “fanatical support” served Rackspace well during the mass-market hosting pricing wars, when it refused to join the race to the bottom on price. Rackspace has built itself a nice cloud business, but it still hasn’t challenged Amazon.
In another corner there was SoftLayer (now part of IBM), which focused on dedicated servers, later rebranded as “bare-metal” servers. SoftLayer innovated on automation, shrinking provisioning times and automating the purchase of dedicated servers to the point where they behaved like a utility, just like cloud. Under IBM, it now has the scale to reach new heights. IBM/SoftLayer is building a nice cloud business, but it also hasn’t challenged Amazon.
Of these three pillars of early cloud, Amazon offered the bare-bones utility compute and storage, the cloud that most resembled a utility like electricity. It was a playground for developers and startups who wanted to experiment, which led to applications and businesses being built atop AWS. By 2007, Amazon already had 180,000 developers using AWS. It was this early foundation, along with continued evolution, that kept the company on top.
Building A Cloud Ecosystem
The playground-like atmosphere, coupled with the fact that AWS was “bare bones,” meant a lot of developers were able to innovate and scale without the traditional hurdle of upfront cost. Amazon essentially left the platform open, letting others build functionality atop the raw infrastructure. Since these were the ‘Wild West’ days of cloud, several companies built businesses that provided functionality atop AWS.
One of the poster children of this movement was – and continues to be – RightScale. Letting techies go wild building businesses atop AWS in the early days meant that Amazon was the platform where many new technologies were rolled out, either through these third parties or by Amazon itself.
Companies like Netflix, which relies 100 percent on AWS to this day, kicked the tires for larger purposes, creating their own tools like Chaos Monkey (which was open sourced in 2012). Chaos Monkey works against Auto Scaling groups, randomly terminating instances to flush out weaknesses before they can cause real outages. This is just one example of how AWS was as resilient as you wanted it to be, depending on the hands handling it.
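To make the failure-injection idea concrete, here is a minimal sketch of Chaos Monkey-style behavior using boto3. It is not Netflix’s actual implementation (that is open source on GitHub), and the Auto Scaling group name is a placeholder.

```python
# A minimal sketch of Chaos Monkey-style failure injection with boto3.
# Not Netflix's actual tool; the Auto Scaling group name is a placeholder.
import random
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

GROUP_NAME = "my-web-tier"  # placeholder Auto Scaling group

# Find the instances currently running in the group.
groups = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[GROUP_NAME]
)["AutoScalingGroups"]
instances = [i["InstanceId"] for g in groups for i in g["Instances"]]

# Terminate one at random; Auto Scaling should replace it, and the
# application should keep serving traffic while it does.
if instances:
    victim = random.choice(instances)
    print(f"Terminating {victim} to test resilience")
    ec2.terminate_instances(InstanceIds=[victim])
```

The point is cultural as much as technical: because instances can be killed at any moment, teams are forced to design services that recover automatically.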
Other clouds had to catch up, either building or partnering for similar functionality and toolsets. The big push for cloud agnosticism meant that many third-party cloud-management businesses (think scaling, cost control, etc.) began to offer their services on other clouds. This was one factor that allowed Amazon to build out functionality without necessarily threatening its ecosystem; in order to survive, its ecosystem had to either do it better or extend across different clouds. AWS adding services forced these companies to expand the breadth of their offerings or perish.
Leading By Adding Its Own Features
The cloud world still seems to be following Amazon’s lead. Amazon realized it had a hit on its hands and began adding features at a staggering pace, adding 82 new services and features in 2007. In 2012 there were 159 new services and features added. In 2013 that number grew to 280, and 2014 is on pace to beat 2013.
These new services and features are customer driven.
“Our general approach is to go out and ask customers where their pain points are, and we take that feedback to continually iterate on the services,” said Matt Wood, General Manager of Data Science for AWS. “The approach is pretty straightforward. We build out a bare bones service that fills the core of customer needs. We get it operationally stable and get it out to customers.”
But what about the ecosystem of companies that have built services atop of AWS? What happens when Amazon adds a feature that someone has built their business around? “Our approach is very much to focus in on what customers are asking for,” said Wood. “We add where it makes sense. It leaves a remarkably large area for partners to build on.”
Across an eight-year track record, there have been too many successes built on Amazon’s cloud platform for third parties to ignore the appeal of building atop AWS. The potential audience these third parties can reach is larger on AWS than on any other cloud, so they continue to build.
AWS Evolving With The Times
Amazon Web Services is no longer solely a developer playground, though these roots inevitably helped the company get a running start. It aggressively adds features, functions and regions to address all needs.
“When we started AWS eight years ago, we assumed that the benefits of on-demand computing were most useful to startup companies,” said Wood. “But what we’ve seen over the last four years is the major benefit that comes from nimbleness. That is just as valuable to startups as to large organizations.”
A really good example of AWS adding a service and widening its appeal is Redshift, which addressed a data warehousing problem that was troubling many customers. “We asked larger enterprises what challenges they’re facing,” said Wood. “One was that data warehousing was far too expensive. It’s incredibly painful to spin out (data) warehouses.”
The company works with customer feedback in an interesting fashion. “What we did, as we do with all our services, is we wrote a press release, a mock release, that outlines to customers what the benefits were going to be,” said Wood. The other document the company prepares is essentially an FAQ (Frequently Asked Questions).
A mock press release and customer FAQ are the initial blueprints for a service. “Those two documents guide the development process,” said Wood. “We know we were on the right track when the service reaches the high operational bar we set for ourselves.” The primary goal for any service is to remove or reduce complexity as much as possible. In the case of data warehousing, the initial goal was to offer it for less than $1,000 per terabyte per year, according to Wood.
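The contrast with a traditional warehouse procurement cycle shows up in how little is needed to get started. Below is a hedged sketch of provisioning a Redshift cluster with boto3; the cluster identifier, node type, and credentials are placeholders, not values from AWS or this article.

```python
# A sketch of provisioning a Redshift data warehouse on demand with boto3.
# Cluster identifier, node type, and credentials are placeholders.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

redshift.create_cluster(
    ClusterIdentifier="example-warehouse",   # placeholder name
    ClusterType="multi-node",
    NodeType="dc1.large",                    # placeholder node type
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="Example-Passw0rd",   # placeholder credential
)

# Minutes later the cluster can be queried over standard SQL clients,
# instead of waiting on a months-long hardware procurement cycle.
```

The service grows or shrinks by resizing the cluster, and billing follows usage, which is what makes the sub-$1,000-per-terabyte-per-year target plausible for customers who previously over-provisioned hardware.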
Keeping the messaging simple and the service accessible is what makes these services successful out of the gate.
Surviving the Pricing Wars
AWS’ scale also allows it to cut prices consistently. “If you look back over the past 8 years, AWS has had a pretty strong track record of consistently reducing prices,” said Wood. “The more customers we have, the more they use our services, the more infrastructure we go buy and put into our data centers. We get to the point where we reach economies of scale. At the scale that we operate, we’re able to negotiate our own discounts. We’re able to operate more efficiently. We could choose to pocket the difference, but we pass those savings on.”
One of the first things Google did upon entering General Availability was slash prices. Amazon did the same at its own event the next day.
Since Amazon has always run AWS like an open utility, price-cutting doesn’t hurt it as much as it hurts competitors, thanks to its scale. It doesn’t mind turning cloud into a commodity because it has the early lead, the momentum, and the scale to compete on price.
In terms of a cloud war, the 800-pound gorilla in the room has always been Google, which can and will compete with AWS. It can trade blows with the giant because it has the money and resources to fight a long, drawn-out war.
However, Google was basically absent from the battlefield in the early years. While Google has long had App Engine, its Platform as a Service product, its infrastructure cloud only recently went into General Availability (GA). It got a late start, missing out on the early cloud adopters that made AWS king.
Google wants to create the same type of atmosphere that AWS captured early: an open party, where developers and tech geeks participate in making a service great, as well as getting rich in the process. Both Google and AWS are canvases to paint on (albeit canvases that come with an increasing amount of frills). Google focused on the developer when it recently went into GA. Amazon focuses on everyone, because it already has the developer base and a track record of treating this base well. AWS is in its next stage, touting its cloud for all purposes.
Automation At The Data Center
The key to making this work is Amazon’s data center infrastructure and automation. “We’re continually innovating at the data center space,” said Wood. “We’re continually adding the sort of features and benefits at high scale.”
Amazon doesn’t say much about the scope of its data center operations. It maintains major data centers in eight geographic regions – northern Virginia, northern California and Oregon in the U.S., and international regions in Dublin, Singapore, Tokyo, Sydney and Sao Paulo. AWS also maintains a dedicated GovCloud region in Oregon to support government workloads. Amazon also has 42 “edge” locations around the world for content distribution.
Some of these regions have multiple data centers. The US-East region in northern Virginia, for example, is spread across 10 data centers.
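This region-and-zone layout is exposed directly to customers through the API. The sketch below, using boto3, simply lists what is visible from the outside; each Availability Zone maps onto one or more of the physical data centers described above.

```python
# A sketch of how the region/Availability Zone layout is visible to
# customers through the EC2 API, using boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Each region is a separate geographic deployment of AWS...
for region in ec2.describe_regions()["Regions"]:
    name = region["RegionName"]
    # ...and each region contains multiple Availability Zones, which map
    # onto one or more physical data centers.
    zones = boto3.client("ec2", region_name=name).describe_availability_zones()
    zone_names = [z["ZoneName"] for z in zones["AvailabilityZones"]]
    print(name, zone_names)
```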
In April 2013 Amazon disclosed that its S3 storage system holds 2 trillion objects, up from 1 trillion in June 2012. The service now sees peak traffic of more than 1.5 million requests per second, the company said in November. Estimates of server counts have been left to third parties. A 2012 estimate placed the number of servers at 450,000. Netcraft reported a lower figure of 158,000 physical servers, but noted a growth rate of more than 30 percent over an eight-month period.
What About Support?
In reaction to competitors arguing that AWS can’t offer the support of a traditional hosting provider, the company has attacked the problem in a two-pronged fashion: nurturing the talent pool and expanding its own support offerings. It gained its early talent pool lead through its head start in the cloud computing market. There is a sizable workforce that makes using the AWS cloud possible in any company, because AWS has the largest pool of talent to pull from among cloud offerings. Demand continues to grow, as seen in the continued rise in job listings for staffers with AWS skills.
While other cloud providers and traditional hosting providers are capable of one-to-one support, the breadth of AWS expertise means companies can hire in-house employees who can support the cloud themselves. Amazon also now offers a professional services organization for those that do need an extra layer of support, another feature customers demanded and received.
This talent pool advantage has been nurtured through coaching and training programs, along with certifications that validate skills against a defined standard. AWS has evolved beyond a playground, and the company is adding support offerings for the companies that need them.
“We have a large solutions architecture team, a free resource,” said Wood. “They do architecture reviews for you.” This is in addition to the formal support organization. The company also offers Trusted Advisor, which looks at your infrastructure and makes recommendations on how to improve security. “We run 20-25-30…a growing number of reports, and we proactively run those for customers.”
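For a sense of how those proactive reports surface to customers, here is a hedged sketch of pulling Trusted Advisor checks through the AWS Support API with boto3. The Support API requires a Business or Enterprise support plan, and the exact set of checks returned depends on the account.

```python
# A sketch of listing Trusted Advisor checks via the AWS Support API.
# Requires a Business or Enterprise support plan on the account.
import boto3

# The Support API is served out of us-east-1.
support = boto3.client("support", region_name="us-east-1")

# List the available Trusted Advisor checks (cost optimization, security,
# fault tolerance, performance) and fetch the current result for each.
checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    result = support.describe_trusted_advisor_check_result(checkId=check["id"])
    status = result["result"]["status"]
    print(f'{check["category"]}: {check["name"]} -> {status}')
```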
When businesses evaluate potential cloud strategies, AWS is always at least part of the conversation. For this reason, big Systems Integrators (SIs) like Capgemini and CA are building out substantial cloud computing offerings in conjunction with AWS. These SIs are helping Amazon reach deeper into the enterprise than ever before.
Going Forward: Can AWS Stay on Top?
There will likely be more than one winner in the cloud wars. We’ve already begun to see others pull away from the pure utility-style approach to cloud. Companies have differentiated themselves by addressing niches and offering a different cloud.
Will anyone dethrone Amazon? Given the momentum of AWS, it seems unlikely that it will be easily surpassed. Instead, AWS should be seen as something that has opened up opportunity across the data center, hosting, and cloud industries. It has the entire world looking at outsourcing infrastructure so businesses can focus on core competencies. The mass-market mentality has changed. People are loosening their grips on their servers because cloud has captured the zeitgeist. A new audience has been born for colocation providers, hosting providers, and the service provider world, because AWS has exposed the advantages of letting go of the servers companies once hugged so tightly.
“It’s still day one for the Internet, and cloud computing,” said Wood. “Our approach is going to stay the same. We keep an eye on the changing competitive landscape, but we’re not influenced by it. We work backwards from customer requirements.”
It continues to broaden its appeal without destroying the underlying utility nature of the platform. “It’s a utility platform – what type of customer couldn’t take advantage of electricity?” said Wood. “We see a lot of opportunity in every direction. The breadth of usage that we see is as broad and diverse as you could possibly imagine.”
Do customers outgrow AWS? Do they graduate from AWS? “The tide is moving so more and more workloads are moving on AWS,” said Wood. “It’s a broad platform that continues to move at a fast pace.”
While some leave AWS for a managed service provider or colocation provider, it seems like two more customers come in to replace each departure. Most companies that were born on AWS and do leave continue to use AWS in the mix. The biggest companies in the world don’t outgrow AWS; they extend it. Netflix didn’t outgrow AWS. Bristol-Myers Squibb, a company big enough to run its own data centers comfortably, has been increasing its AWS usage of late.
An early lead gave the company the foundation that would propel its cloud for many years to come. A customer-driven approach to adding building-block services, broadening appeal across all types of businesses, and the scale to compete at the thinnest of margins are how the company has managed to stay on top all these years.