Friday, 29 July 2011

The history of cloud computing

It all started in 1972

Back then, IBM released VM/370, an operating system for its System/370 mainframes – the “VM” stood for Virtual Machine. It included what we’d today call a hypervisor: a layer between the hardware and the OS. But for many years the idea of virtualising machines really only lived in the IBM world.
IBM S/370 Model 138
It was obviously theoretically possible to do this on any CPU architecture, and in fact Bill Gates and Paul Allen sort of did it in 1975. They developed a version of the BASIC programming language for the Altair 8800, one of the first home-built, hobbyist micro-computers. They didn’t have an Altair 8800 on which to develop, but they did have access to a DEC PDP-10. So they wrote something that emulated the instruction set (the Intel 8080) and hardware of the Altair’s internals, and of course the rest is history.
The Altair 8800
 
DEC PDP-10
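At heart, an instruction-set emulator is just a loop that fetches each of the guest machine’s opcodes from memory, decodes it and reproduces its effect in software. Here’s a toy sketch in Python of that fetch-decode-execute idea, for an imaginary three-instruction machine (purely illustrative – the real Intel 8080 had roughly 250 opcodes):

# Toy fetch-decode-execute loop for an imaginary three-instruction
# machine (the real Intel 8080 had roughly 250 opcodes).
memory = [0x01, 0x05,   # LOAD 5 into the accumulator
          0x02, 0x03,   # ADD 3 to the accumulator
          0x00]         # HALT
acc, pc, running = 0, 0, True

while running:
    opcode = memory[pc]
    if opcode == 0x00:            # HALT
        running = False
    elif opcode == 0x01:          # LOAD immediate value
        acc = memory[pc + 1]
        pc += 2
    elif opcode == 0x02:          # ADD immediate value
        acc = (acc + memory[pc + 1]) & 0xFF   # 8-bit wrap-around
        pc += 2

print(acc)   # prints 8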

Virtualisation hits the x86

VMware was founded in 1998 and in 1999 released VMware Workstation, a virtualised environment for the x86 chip. Many people had previously thought such a chip not worth virtualising: the overhead would be too great and the effort would not be worth the results. VMware proved this wrong. A number of other vendors also offered virtualisation on the x86 chip, notably Connectix with their Virtual PC product, which was later acquired by Microsoft.
Over the next few years the efficiency of these environments improved dramatically as more and more resources were poured into development. There was also a notable project at Cambridge University called Xen, which became the open-source software movement’s virtualisation engine of choice. There were others, but these three – VMware, Connectix (Microsoft) and Xen – became the mainstay of virtualisation in the IT industry.
In the noughties, people discovered that virtualisation was a good way of consuming the “headroom” that most physical servers possessed when running in data-centres. Back then, it wasn’t uncommon for a physical server to sit at less than 30% utilisation most of the time, which meant 70% of the resource in a data-centre went largely unused. By putting two or three virtual servers onto one physical server, all that spare capacity could be consumed.
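As a crude back-of-the-envelope sum (the numbers are hypothetical, and real capacity planning also has to consider RAM, I/O and peak loads):

# Crude server-consolidation arithmetic with hypothetical numbers.
avg_utilisation_pct = 30      # each physical server is ~30% busy
target_utilisation_pct = 90   # pack hosts to ~90%, leaving headroom

workloads_per_host = target_utilisation_pct // avg_utilisation_pct  # 3
servers_today = 100
hosts_needed = -(-servers_today // workloads_per_host)  # ceiling division

print(workloads_per_host)   # 3 workloads per host
print(hosts_needed)         # 34 hosts instead of 100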
In the early and mid noughties, virtualisation took a firm hold in many large data-centres, and the arms race between Xen, VMware and Microsoft was on.

An Online Retailer gets a piece of the action

Back in the mid-noughties, Amazon, famed for its success as an online retailer, was having a few problems maintaining, scaling, changing and operating the most successful online retail infrastructure in the world. It decided it needed an entirely different approach. Werner Vogels, now the CTO, joined and drove the company down a shared-services platform route. As all this was going on it became obvious that these services could also be sold to customers, and in 2006 Amazon launched Amazon Web Services (AWS).
This used Xen virtualisation to run the virtual machines, but Amazon had already started to solve problems like scalable storage, so although AWS became the umbrella term for the offering, it was in fact divided into many services, such as EC2, the Elastic Compute Cloud (the compute virtualisation infrastructure), and S3, the Simple Storage Service.
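To give a flavour of what consuming EC2 means in practice, here’s a rough sketch along the lines of boto, a popular Python library for AWS (the credentials and the AMI identifier below are placeholders, not real values):

# Rough sketch of launching a VM on EC2 with the boto Python library.
# Credentials and the AMI (machine image) ID are placeholders.
import boto

conn = boto.connect_ec2(aws_access_key_id='YOUR-ACCESS-KEY',
                        aws_secret_access_key='YOUR-SECRET-KEY')

# Ask EC2 for one small virtual machine built from a machine image.
reservation = conn.run_instances('ami-00000000',
                                 instance_type='m1.small')
instance = reservation.instances[0]
print(instance.id, instance.state)

# When you no longer need it, terminate it and stop paying for it.
conn.terminate_instances(instance_ids=[instance.id])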

The Cloud is born

Suddenly every man and his dog has a cloud offering. Amazon’s choice of name – the “Elastic Compute Cloud” – described a virtualisation environment that you could consume on a per-hour basis. You could quite happily fire up 20 VMs for a couple of hours, then deprovision them again. This billing mechanism became one of the key advantages of the cloud over simple hosting. With hosting you pay a monthly fee for the server (whether virtual or physical) that you use in a hoster’s data-centre. You can’t really say when the bill comes in, “actually, we weren’t using the server for 10 days of the month, can we have a discount?”. But with cloud service operators you pay by the hour for compute, by the GB for data storage, and per GB in and per GB out for data moving into and out of the data-centre.
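A toy example of how such a bill adds up (the rates below are invented for illustration; they are not any provider’s actual prices):

# Toy pay-as-you-go bill; all rates are invented for the example.
vm_hour_rate = 0.10     # $ per VM per hour (hypothetical)
storage_rate = 0.15     # $ per GB stored per month (hypothetical)
transfer_rate = 0.12    # $ per GB moved out of the data-centre (hypothetical)

vms, hours = 20, 2      # fire up 20 VMs for a couple of hours
stored_gb = 50          # data kept in storage this month
egress_gb = 10          # data transferred out this month

bill = (vms * hours * vm_hour_rate
        + stored_gb * storage_rate
        + egress_gb * transfer_rate)
print("This month's bill: $%.2f" % bill)   # $12.70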
Google also got into the cloud game with Google App Engine, or GAE, in 2008. The idea was “you can run your web apps on Google’s massively scalable infrastructure”. GAE is based around a Java/Python development environment: Google runs and serves your code for you, and you don’t get access to the VMs sitting underneath the applications. This makes it very different from Amazon’s EC2, where you get full admin access to the VMs. Some people argue that handling and maintaining infrastructure is a burden; others feel the sense of control it gives is a good thing.
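To give a feel for the difference, a complete App Engine application (in Python, using the webapp framework of the time) is little more than a request handler – something along the lines of the canonical hello-world from Google’s docs:

# A minimal Google App Engine application (Python, circa-2011 webapp
# framework): you supply the request handler, Google runs and scales it.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.out.write('Hello from the cloud!')

application = webapp.WSGIApplication([('/', MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()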
At the time all this was going on, Microsoft had its own set of problems with services like MSN, Hotmail, Messenger, Xbox Live and so on. These were massive infrastructures that had to respond with agility to customer needs and growth. So a team was asked to look into the problem, and in doing so they came up with the idea that there should be a clean division between the underlying infrastructure and the application. Many systems in the past have been architected in such a way that they are deeply embedded into the OS, the virtualisation layer, the network infrastructure and so on. Microsoft released a beta cloud service codenamed “Red Dog”; when it was commercially released, it became known as Windows Azure.
VMware has a range of technologies it sells directly to its customers that allow them to build their own cloud data-centres (discussed below). It has also put into beta a public cloud service called Cloud Foundry. Don’t get this confused with the original Cloud Foundry offering, now called “Classic Cloud Foundry”: with the classic version you submitted your Java Spring application and it ran on Amazon AWS’s infrastructure, whereas the new version runs entirely on VMware’s infrastructure.
So that gets the three big cloud providers out of the way. These are called cloud services. They are built in such a way that anybody with a credit card and an Internet connection can consume them (they all have free offers for developers who want to try the services out without any commitment). Because they are so openly available, they are known as “the public cloud”. Any operator that lets you send over your application and data and run them on its infrastructure in this highly transactional way – you can sign up in a couple of minutes, pay as you go, pay only for what you use, and bail out at a moment’s notice – could be called a public cloud operator.

Security

Hmmm – but of course if you have data that is sensitive, private, confidential, restricted or subject to legislation, and therefore can’t live in another organisation’s infrastructure – well, that’s a challenge. If the law says you can’t do it, you can’t do it; easy answer. After that it comes down to what other rules you need to abide by and your own assessment of risk. For example, your country may have a code of conduct from some kind of financial authority that oversees banking and financial information; even though you may be abiding by the law, it may say you can’t store your data at another company. Again, an easy answer. When you get to your own assessment of risk, well, that’s a long conversation. But you can see how annoying it must be to firms who want the economic and business-agility benefits the public cloud offers but can’t store their data – or at least some of their data – there.
That’s where the private cloud comes in. In the same way that Microsoft, Amazon and Google can offer services cheaply because of the sheer scale they operate at ($0.5bn or more is not an uncommon price for a data-centre, and they all have several), a centralised function within an organisation – say, the IT department – can do the same. Instead of offering one service to the finance department, a different one to the marketing department and yet another to the sales department, usually each with its own physical corner of the data-centre shielded off and ring-fenced from the others, IT could offer a shared infrastructure on which every department’s applications run. Remember what we said about server utilisation in the virtualisation section above – a shared infrastructure can make much more efficient use of resources.

These private cloud data-centres could be owned and operated by the organisations that need them, but they could also be shared across a number of organisations with some affinity. Think of a health federation: many health organisations all sharing a private cloud data-centre. Each has its own private applications and data, but the service is built and operated with their specific needs in mind. This shared service could be run by all the health organisations together, or another organisation could run it and charge them for the service. It’s not like the public cloud, where anybody can come along and take part – only health organisations (or whatever the membership criterion is) can use the service. Clouds like this are also called “community clouds”.

Technology

And then there are cloud platform providers. Take Amazon, for example: it has based its cloud infrastructure on the Xen virtualisation technology. Microsoft has based its infrastructure on Hyper-V. With AWS or Windows Azure you’ll never sit at a Xen or Hyper-V management console directly managing the virtualisation the way you would in your own virtualised private data-centre, but the technologies are nevertheless there underneath it all.
VMware has seized on the idea of the private cloud and extended its virtualisation offerings with specific products aimed at running one. You could buy and operate these technologies in your own massively scalable data-centre and sell capacity to the public as a public cloud operator. Microsoft has done the same with Hyper-V and the System Center range of products, and many private clouds are built on them. But there is a difference between buying the software that runs a cloud data-centre and actually owning and operating your own cloud data-centre, whether public or private. For example, Microsoft runs hundreds of thousands of physical servers, and many times that number of virtual servers, in its six data-centres around the world: North Europe, West Europe, North Central US, South Central US, South East Asia and East Asia.
Microsoft Data Center
I hope that’s cleared up any confusion over what a cloud operator is, what a cloud platform is, what the difference is between a public and a private cloud, and where technologies like Hyper-V, VMware and Xen fit into this new vocabulary we’re all going to have to get used to in the future…

Thanks & regards,

"Remember Me When You Raise Your Hand For Dua"
Raheel Ahmed Khan
System Engineer
send2raheel@engineer.com
sirraheel@gmail.com

http://raheel-mydreamz.blogspot.com/
http://raheeldreamz.wordpress.com/
http://www.facebook.com/pages/My-Dreamz-Rebiuld-our-nation/176215539101271

 
