Cloud And Software Pricing Chaos

The notion of an individual computer disappears with the infrastructure as a service cloud computing model, as does the decades-old approach linking software sales to physical computers.  The early attempts at software pricing in a cloud environment do not look promising.  Amazon EC2 scales the fee for Windows in direct proportion to the instance price in an arrangement that gets very expensive very quickly.  The Windows option for the Amazon Extra Large High CPU instance costs $350 per month, or $12,600 over the three-year useful life of an on-premise server.
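
The $12,600 figure is just the quoted monthly surcharge multiplied over a three-year on-premise server life. A minimal sketch of that arithmetic, using only the figures from the paragraph above:

```python
# Back-of-the-envelope check of the Windows surcharge quoted above.
windows_surcharge_per_month = 350   # Extra Large High CPU instance, Windows option
useful_life_months = 36             # three-year on-premise server life

lifetime_cost = windows_surcharge_per_month * useful_life_months
print(f"Windows surcharge over {useful_life_months} months: ${lifetime_cost:,}")
# -> Windows surcharge over 36 months: $12,600
```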

Software pricing always followed the usual economic rules of supply and demand, but the sale of a one-time license on a computer-by-computer basis proved a successful distribution model.  Software prices did not necessarily decline along with hardware prices, but the cost relative to processing power declined as hardware capabilities improved.  Software offered more features and functionality over time even given a relatively fixed price.  This cycle of improvement triggered an upgrade cycle that produced recurring revenues for software companies.

Microsoft always offered enterprise and consumer versions of licenses for the same basic functionality given differences in willingness and ability to pay, and maximizing sales always meant preventing software from getting loaded on more than one computer.  The per-computer model started to show its age with the emergence of multi-core chips and the expanding share of multi-processor computers.  Linking licenses to the physical computer gets problematic when the capabilities of individual computers diverge.  Eliminating the one-time license tied to an on-premise computer demands an entirely different approach.

Microsoft adapted to changes in the number of processors per computer by assigning licenses as a function of processor count.  Some companies have linked licenses to core count.  These changes do not necessarily reflect an increasing utility of software from an end user perspective, so the escalating costs contribute to the growing interest in open source offers in a number of categories.  The lack of a standard unit of compute resources noted previously contributes to the chaos, but settling on a standard compute unit does not solve the software pricing problem.  Software pricing needs to establish a direct relationship with the associated value proposition.

Salesforce charges license fees as a function of the number of users.  The fees may go up or down as a function of volume or contract duration, but the fees do not have a direct relation to the underlying compute resources consumed.  The cost of implementation almost certainly goes down over time for each Salesforce user, but the pricing reflects the existence of competitive offers more than direct costs.  The same applies to Windows or any other software offer: the need to demonstrate value independent of the cost performance improvements of the underlying hardware.

VMWare pricing links to the number of virtual machines rather than the underlying hardware.  Oracle licenses remain tied to hardware.  The Oracle approach means deployments get less expensive with each subsequent generation of processors from Intel and AMD.  The absence of debate about pricing represents additional evidence of the relatively modest degree of deployment momentum, but a cloud deployment model charging $12,600 for a Windows license seems unlikely to displace on-premise options offering the same functionality for 5% or so of the cost.


Where are the Cloud Critics?

I am troubled by the failure to find a single cloud critic during a six month immersion in the cloud ecosystem. I have yet to hear of a single large scale cloud deployment disaster. The several and growing number of cloud conferences have an eerie, Jim Jones “everyone loves cloud” vibe. This seems odd given the supposed transformative nature of cloud computing, but it is consistent with a cloud industry that remains 99% theory and 1% practice.

The issue is not even a matter of defining what cloud means. Cloud has become synonymous with “future of computing”, so there should at least be serious discussions weighing the future of computing. A host of issues need to get addressed before everyone can live happily ever after in the cloud computing future. The absence of a consensus measure of compute resources noted previously needs to get resolved, but other issues come to mind. For example, how does the cloud model affect the legacy compute tasks running (not always happily) via an on-premise compute model?  Let’s admit the cost of moving legacy apps to the cloud means re-architecting and re-working the code.

Where are the debates about whether servers designed for standalone deployment will meet the unique needs of virtualization? Where is the discussion about virtualization overhead costs or the fact that server virtualization undermines motherboard-level processor optimizations? Where is the discussion about the performance hits that make virtualization a problem for real-time applications like VoIP?

Amazon’s EC2 represents the poster child of cloud computing, but Amazon’s revenues represent less than 1% of the overall IT spend.  1% does not make for an IT industry transformation.  No one can dispute the hourly model makes Amazon EC2 an excellent sandbox for software development, but does EC2 host any large scale mission critical applications? Amazon’s internal needs, even given their large scale, do not rank high among the full spectrum of enterprise compute needs. The fact Amazon does not feel compelled to reduce prices over four years reflects a model more like an electric utility than the IT industry.

The promised cloud cost benefits rely heavily on the theory that cloud deployments can run at high levels of utilization and more closely follow the demand curve. There is very little discussion of how to turn this theory into practice. Amazon may benefit from the higher utilization of a diverse user load, but the failure to reduce prices leaves its customers without the benefits. A customer can create and destroy instances to match a variable load, but turning this from theory to practice requires implementing automation that would not be free.
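
A minimal sketch of the kind of automation that paragraph has in mind, assuming a simple threshold-based policy; the thresholds and the demand metric are illustrative, and the launch/terminate steps are placeholders rather than calls into any vendor's API.

```python
# Threshold-based scaling sketch; thresholds and the demand metric are
# illustrative, and the launch/terminate actions are placeholders rather
# than calls into any real cloud API.
SCALE_UP = 0.75     # utilization above which another instance gets created
SCALE_DOWN = 0.30   # utilization below which an instance gets destroyed

def rebalance(instance_count, demand):
    """Return the new instance count; demand is measured in units of one instance's capacity."""
    utilization = demand / instance_count
    if utilization > SCALE_UP:
        return instance_count + 1    # placeholder for a provisioning call
    if utilization < SCALE_DOWN and instance_count > 1:
        return instance_count - 1    # placeholder for a teardown call
    return instance_count

print(rebalance(4, 3.5))   # 0.875 utilization -> 5 instances
print(rebalance(4, 1.0))   # 0.25 utilization  -> 3 instances
```

Even this toy policy needs a reliable load metric, image management, and handling of instance warm-up time before it saves anyone money, which is the point about the automation not being free.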

Let’s go ahead and stipulate that the “theory” of cloud computing looks great, start focusing resources on converting the theory into practice, and admit the transformation will not be painless.


More on Embracing the ECU

Some argue a consensus measure of compute resources represents a long standing and unfulfilled utopian dream.  The “MIPS” seems like an existence proof and served this purpose in the 1980s and early 1990s.  Processors eventually got more complicated with multiple cores and various performance enhancement strategies.  The issue is not only not utopian, it is a fundamental obstacle to progress.  The cloud industry will not move beyond experimentation without a consensus measure of what is being sold.  The electric utility industry could not exist if utilities did not provide customers a measure of what they are receiving.
 
Would people be willing to buy gasoline from a gas station that does not give them a reliable metric to know what they are buying?  Does it make sense for different gas stations to use different metrics?  Does it seem likely customers would be happy if the cloud computing industry decided to replace GB as the measure of memory, TB as the measure of storage, or GB as the measure of bandwidth with some vague abstraction, as Amazon did with the ECU for compute resources?
 
Consider a small sample of the ways companies are characterizing compute resources: Amazon (ECU), Softlayer (Cores), OpSource (CPU Hours), Rackspace (No compute options), Terremark (VPU), Linode (Linodes), ThePlanet (vCPU), Liquidweb (CPU’s), and EngineYard (small, medium, large).  Customers can buy on-premise computer equipment without a compute metric because the unit of compute is the processor itself. This is not the case with cloud computing where processor resources get sliced and diced.
 
The bulk of the present momentum in infrastructure as a service cloud adoption involves exploratory projects.  There is no uncertainty about the utility of Amazon’s hourly pricing when it comes to playing around.  Amazon EC2 nonetheless remains expensive relative to traditional dedicated servers and well-executed premise based implementations.  The 98% of compute resources that address the non-exploratory activities benefit very little from the hourly pricing.  Converting exploratory projects into a mass migration toward the cloud requires the cloud to be cost competitive with on-premise options.  This means enabling and exposing cloud offers to price performance competition.

The Case for a Cloud Computing Price War

The 10x difference in pricing between the least and most expensive cloud computing offers listed on the Cloud Price Calculator home page reflects an industry in pricing chaos. The long list of issues associated with sessions at the various cloud conferences, from security to picking a hypervisor, remain moot until the industry becomes more price competitive with on-premise computing options. The ability to leave EC2 instance prices unchanged for years at a time may seem like good news for Amazon, but high costs keep the mass migration of computing to the cloud on hold.

A price war forces discipline among the companies contributing to the cloud computing value chain at the same time that lower prices increase end user demand. Intel does not need to chase chip cost reductions to the extent the cloud computing companies don’t compete on price. AMD can’t pressure Intel’s 90% data center share if there isn’t sensitivity to price. The companies delivering the software for the virtualization layer can set prices arbitrarily high in the absence of price competition. The absence of price competition does not represent good news for anyone; it means there is not enough deployment activity for anyone to care.

VMWare’s nearly 100% share of server virtualization represents a cautionary tale. The VMWare monoculture leaves server virtualization an expensive proposition available only to deep-pocketed Fortune 500 IT departments. There exists very little upside in the virtualization ecosystem for anyone except VMWare. This weighs against innovation and presently makes VMWare vulnerable to cloud-based (multi-server virtualization) developments. Large enterprises, with the support of Intel, announced the Open Data Center Alliance (opendatacenteralliance.org) in order to avoid repeating the mistakes that produced the captive customer leverage achieved by VMWare.

All prosperous infotech sectors reflect a healthy degree of price competition. Linux takes the edge off Microsoft’s hegemony. AMD takes the edge off Intel’s market power. SugarCRM keeps Salesforce.com from getting too comfortable. The growth of Amazon EC2 in spite of a failure to provide customers price performance improvements represents bad news for the mid to long term prospects of cloud computing. Microsoft’s Azure and Google’s AppEngine represent some competition, but neither addresses Amazon EC2 directly. The cloud computing industry will never convert theory to practice or hype to reality without a basis for comparing offers from different companies and without price competition.


Cloud Price Normalization – CPN Index

The CloudPriceCalculator uses a simple index to compare infrastructure as a service cloud computing offers.  The Cloud Price Normalization (CPN) index adds compute, memory, storage, and bandwidth, and divides by price.  CPN reflects the quantity of cloud resources that one can buy for $1000 USD.  The CPN table includes offers from six different vendors with CPNs that range from roughly 10 to 50.  This 5x differential should start to narrow once price competition takes hold.
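
A minimal sketch of that arithmetic, assuming every offer is expressed in the same units; the post does not spell out the units or normalization behind the published 10 to 50 range, so the function below captures the shape of the calculation rather than the table's absolute scale.

```python
# Cloud Price Normalization sketch: add compute, memory, storage, and
# bandwidth, divide by price, and express the result per $1000 of spend.
# Units and normalization are assumptions; only ratios between offers matter.
def cpn(compute, memory, storage, bandwidth, price_usd):
    """Resource units obtainable per $1000, per the description above."""
    return (compute + memory + storage + bandwidth) * 1000.0 / price_usd
```

Because every vendor gets measured the same way, the ratio between two CPN values carries the comparison; the 5x spread quoted above is that ratio between the cheapest and most expensive offers in the table.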

The CPN results for Amazon EC2, which vary from 13 (Standard Small) to 50 (XXXX Large High Memory), reflect the relative ages of the offers.  Amazon EC2 introduced the original standard small offer in August 2006.  The High Memory instances were introduced in October 2009 and benefit from more than three years of cost performance improvements in between.  The question remains whether and when Amazon EC2 will feel inclined to improve the value proposition of its standard small instance.

goCipher created a cloud offering matching several of the Amazon instances with at least double the CPN rating.  Customers can obtain at least double the price performance value with these DomainGuru offers.  CPN enables ranking any cloud offers in terms of relative value proposition.  The CloudPriceCalculator home page shows six examples with a Softlayer instance on top and a GoGrid instance on the bottom.  CPN makes it clear GoGrid needs to offer significantly more for significantly less.

The CPN unwinds some of the confusion about total cost that arises from the many a la carte components of cloud offers.  The cost of bandwidth tends to represent a significant monthly cost, with vendors charging anywhere from $0.12 to $0.22 per GB.  Most of the vendors charge only for outbound bandwidth, but Rackspace charges for both inbound and outbound bandwidth.  There exists a range of other a la carte charges, for IP addresses or persistent storage, that can affect costs.  All the vendors charge extra for Windows support, with Amazon EC2, for example, charging significantly more via a percentage calculation rather than a flat fee.
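
To make the bandwidth line item concrete, here is a quick sketch using the per-GB range quoted above; the 500 GB monthly traffic figure is a hypothetical workload, not a measurement.

```python
# Bandwidth charges at the quoted per-GB range for a hypothetical workload.
outbound_gb = 500                    # assumed monthly outbound traffic
low_rate, high_rate = 0.12, 0.22     # $/GB range quoted across vendors

print(f"Outbound at $0.12/GB: ${outbound_gb * low_rate:.2f}/month")    # $60.00
print(f"Outbound at $0.22/GB: ${outbound_gb * high_rate:.2f}/month")   # $110.00
# A vendor that also bills inbound traffic (as Rackspace does) roughly
# doubles the figure for a symmetric workload.
```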

There exist plenty of dimensions for competition even given relatively similar CPN results.  There are significant differences in the strength and cost of support offers.  Amazon is among the most expensive, with a $400 per month minimum and a 20% fee on top of total monthly costs for Gold Support.  The basic reliability of cloud offers will vary significantly between cloud suppliers.  No one seems to offer an SLA of better than 99.99% uptime.  The basic functionality of a cloud offer relative to commodity dedicated servers also represents a consideration.  Not all applications work well in a virtualized environment.  Amazon EC2, for example, tends to perform poorly relative to the real-time needs of VoIP applications.
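
One reading of those Gold Support figures is sketched below; the post does not spell out how the $400 minimum and the 20% fee combine, so the assumption here is that the charge is whichever is greater.

```python
# Gold Support fee sketch; assumes the charge is the greater of the $400
# monthly minimum and 20% of the monthly bill (an assumption, not a quote
# of Amazon's fee schedule).
def gold_support_fee(monthly_bill_usd):
    return max(400.0, 0.20 * monthly_bill_usd)

print(gold_support_fee(1000))   # -> 400.0 (the minimum applies)
print(gold_support_fee(5000))   # -> 1000.0 (20% of the bill applies)
```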

Unified Theory of Cloud Computing – Avoid it if you can!

Cloud computing represents the latest in a series of concessions to the limits of scaling computer performance.  Multi-threading, multi-core, multi-processor, and now multi-server solutions arise as compromises in the search for ever more powerful compute platforms.  Multi-anything represents a compromise, because it forces partitioning compute tasks.  Decomposing compute tasks to take advantage of parallelism is not always possible and always adds cost.  The preferred way forward remains scaling single threaded compute capacity.
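
Amdahl's law, not named in the post, formalizes this point: if only a fraction of a task parallelizes, adding workers delivers sharply diminishing returns. A small illustration with an arbitrary 75% parallel fraction:

```python
# Amdahl's law: best-case speedup when only part of a task parallelizes.
# The 75% parallel fraction below is an arbitrary example, not a measurement.
def amdahl_speedup(parallel_fraction, n_workers):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_workers)

for n in (2, 8, 64):
    print(n, round(amdahl_speedup(0.75, n), 2))
# -> 2 1.6, 8 2.91, 64 3.82: even 64 workers deliver under 4x speedup
```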

Operating systems and software tend to take a long time to catch up with each subsequent generation of hardware parallelism.  Cloud computing will not represent an exception to the rule.  Anyone embracing cloud computing as the means to obtain mainframe-like functionality from a collection of commodity servers will be disappointed.  Realizing the promise of cloud computing requires purpose-built servers, reducing their number by incorporating ever more powerful processors, and the adoption of as-yet-unknown cloud-specific server performance benchmarks.

Virtualization imposes a degree of server-to-server communication and disk I/O intensity that makes existing server architectures entirely unsuitable.  The lowly mainframe represents a more useful reference point.  The “modern” mainframes incorporate low cost commodity components by opening multiple communication and disk I/O channels.  Cloud server architectures will need to implement some form of the same model.

The present dynamic differs from the 1980s embrace of “personal computers” over mainframe computing.  PCs represent the preferred solution for distributing compute capacity to different users.  Aggregating PCs into a “cloud” produces a suboptimal solution for tasks requiring large scale compute resources.  Virtualization represents a mere bandaid pending more powerful compute platforms.

In the meantime, the cloud ecosystem needs to explore alternative architectures and do a better job tracking progress with cloud-specific benchmarks.  The present virtualization offers fall far short of the price performance improvements necessary to break the grip of server huggers.  Dual Intel 5410s generate a GeekBench result of 6000 in standalone mode (see http://browse.geekbench.ca/).  The GeekBench score for the Amazon EC2 High Compute instance (which uses dual Intel 5410s) comes in at 5000.  As is obvious on the CloudPriceCalculator ranking, Amazon EC2 charges a significant premium for this roughly 17% loss of compute capacity.
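
The capacity gap follows directly from the two GeekBench figures quoted above:

```python
# Compute capacity lost to virtualization, per the GeekBench scores above.
standalone_score = 6000   # dual Intel 5410s, standalone mode
ec2_score = 5000          # Amazon EC2 High Compute instance, same processors

loss = (standalone_score - ec2_score) / standalone_score
print(f"Capacity lost: {loss:.0%}")   # -> Capacity lost: 17%
```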


What the heck is an ECU?

Amazon’s ability to leave the price of the original single ECU instance unchanged in the four years since the launch of EC2 suggests they missed the Moore’s Law memo.  Amazon’s success owes in part to the invention of the ECU, a new measure of compute capacity that clouds (pun intended) competitive comparisons.

Amazon’s definition of the ECU at http://aws.amazon.com/ec2/instance-types/: “We use several benchmarks and tests to manage the consistency and predictability of the performance of an EC2 Compute Unit. One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. This is also the equivalent to an early-2006 1.7 GHz Xeon processor referenced in our original documentation.”

Setting aside the fact that AMD did not sell a 1.0-1.2 GHz Opteron in 2007, Amazon’s definition falls short of making the ECU measurable.  A “metric” that defies measurement might have been Amazon’s intention, but it creates problems for everyone else.

Benchmarks like those run by Jason Read at cloudharmony.com show inconsistency in Amazon’s application of the ECU, with the 4 ECU High Memory instances performing better than the 5 ECU High Compute instance and, similarly, the 6.5 ECU High Memory performing better than the 8 ECU Standard Large instance.  Amazon may have internal benchmarks that do not show these discrepancies, but Amazon has so far decided not to let its customers in on the nature of these benchmarks.

goCipher created infrastructure as a service instances (http://www.domaingurus.com/ec2) replicating several of the Amazon instances.  goCipher’s adoption of Amazon instance types and ECUs follows the successful example of the competitive PC industry adopting the IBM PC architecture.  There exists no uncertainty about the meaning of a GB of memory, a TB of storage, or a TB of bandwidth.  Establishing a consensus measure of the ECU represents the final piece of the puzzle.

Gordon Moore identified the doubling of transistors in a processor every 18 months.  Moore’s Law does not necessarily apply to all the components that go into a computer, the computer itself, or a cloud computing offer.  For example, the relatively slower decline of DRAM prices, at approximately 30% per year, means memory consumes an increasing portion of total costs.  The price performance improvements of storage tend to exceed Moore’s Law.  There exists no Moore’s Law equivalent for operating systems or software.
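
Compounding the two rates quoted above over the three-year server life used earlier shows why memory claims a growing share of system cost; the starting costs are normalized, so only the relative decline matters.

```python
# Rough compounding of the rates above: cost per transistor halving every
# 18 months versus DRAM prices declining about 30% per year. Starting costs
# are normalized to 1.0; only the ratio between the two results matters.
years = 3
cpu_cost_factor = 0.5 ** (years / 1.5)   # halves every 18 months
dram_cost_factor = 0.7 ** years          # drops ~30% per year

print(f"Processor cost factor after {years} years: {cpu_cost_factor:.2f}")   # 0.25
print(f"DRAM cost factor after {years} years:      {dram_cost_factor:.2f}")  # 0.34
```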

Reasonable people can come to different conclusions about the expected pace of price performance improvements, but the immunity of Amazon instance types to Moore’s Law over a four year period is not business as usual.  A reliable means of comparing cloud computing offers needs to emerge for the nascent cloud computing industry to become less chaotic.  goCipher is the first company to directly offer Amazon instance types, but we believe everyone in the emerging ecosystem would be better served by quoting offers in terms of ECUs.

About Daniel Berninger

Daniel Berninger moved to goCipher after working as a Washington, DC based independent technology analyst. He has been active in VoIP since 1995. Daniel worked on the original assessment of VoIP at Bell Laboratories and led early gateway deployments at Verizon, HP, and NASA after joining VocalTec Communications. He won the VON Pioneer Award as co-founder of the VON Coalition and led the founding teams of ITXC and Vonage.