By John | July 1, 2008
It is my belief that what we today call the “cloud” will evolve into the complex IT infrastructure of the future and, in the end, will simply be referred to as infrastructure. There is no doubt that the traditional IT landscape of the last 20 years is going through a transformation on the same scale as the mid-1980s shift from mainframes to distributed computing and client-server architectures. This new infrastructure will link services from a myriad of interconnected, interoperable applications spanning internal legacy applications, internal and external virtual resources, private clouds, and public clouds. For example, I can envision a scenario where a business service runs parts of an application on internal, behind-the-firewall VMware instances and interoperates with resources on Amazon’s EC2, Flexiscale, Google’s App Engine, or a player to be named later. The same business service might also use resources from private internal clouds running 3Tera’s AppLogic, IBM’s Blue Cloud, or Cassatt’s Active Power Management. Like it or not, Microsoft will have resources involved in this new IT management infrastructure as well, so any interoperability discussion will need to include them.

There are also numerous variations of cloud “types” available as services. Traditional storage pools now run as standalone clouds, alongside new storage types like SimpleDB and BigTable. Storage resources like Hadoop and CouchDB will also be key components of the IT infrastructure of the future. Not every cloud-type solution will be bound to a public infrastructure. For example, an enterprise might create its own private internal HDFS (Hadoop Distributed File System) cluster for reasons of security, compliance, or confidentiality. Applications will therefore need to talk not only between different cloud vendors but also between different cloud types.
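To make the idea concrete, here is a minimal sketch of the kind of abstraction layer such a hybrid infrastructure would need. None of these classes map to real vendor APIs; the names (`CloudBackend`, `InternalVMware`, `PublicEC2`) are purely illustrative assumptions:

```python
# Hypothetical sketch: one business service drawing on several cloud "types"
# behind a single interface. Class and method names are invented for
# illustration and do not correspond to any real vendor API.

from abc import ABC, abstractmethod


class CloudBackend(ABC):
    """Common interface over internal VMs, public clouds, etc."""

    @abstractmethod
    def start_instance(self, image: str) -> str:
        ...


class InternalVMware(CloudBackend):
    def start_instance(self, image: str) -> str:
        return f"vmware-{image}-1"      # behind-the-firewall resource


class PublicEC2(CloudBackend):
    def start_instance(self, image: str) -> str:
        return f"ec2-{image}-1"         # public cloud resource


def provision(backends: list[CloudBackend], image: str) -> list[str]:
    # The same business service spans every available backend.
    return [b.start_instance(image) for b in backends]


ids = provision([InternalVMware(), PublicEC2()], "web-tier")
```

The point is not the toy classes themselves but that interoperability between cloud vendors and cloud types implies some shared interface like this, which is exactly where a standards discussion would bite.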
Platform as a Service (PaaS) solutions that today run as cloud services, like Ruby on Rails, Python, and Java based cloud sub-infrastructures, will also need to be provisioned and configured dynamically in the IT infrastructure of the future. LAMP-stack applications seem to be the meat and potatoes of today’s clouds; however, they will also have to interoperate with new and traditional IT infrastructure services. Last but not least, SaaS pure plays like a CRM system will need to be integrated into these new IT infrastructures as well. The new IT infrastructure of the future will enable services to spin IT resources up and down in sub-minute time, possibly even in seconds. The on-demand nature of this new infrastructure will most likely be pay-as-you-go, and services will be chosen based on geography, market demands, and possibly even geopolitical criteria. A business service based out of NY might get a burst of activity in Sydney that requires an automated selection of the resources that best fit performance and cost for the clients in that area. There will also be a mesh of nation-to-nation geopolitical sensitivities between web services that will have to be navigated.

In Coté’s recent blog he points to Dan Farber’s “Cloud computing on the horizon” post, which quotes Sun Microsystems CTO Greg Papadopoulos predicting, in Nick Carr-like fashion, that there will be only six large cloud vendors in the future. My question is, if that is true, how many of them will be in Russia, China, and India as opposed to the US? I think it is silly to suggest that in a global economy there will be only a small number of huge infrastructures. How many utility companies are there in the world today? I have never agreed with this Nick Carr type of assumption in the first place.
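The Sydney example above is really a best-fit selection problem. Here is a hedged sketch of what that automated selection might look like; the region names, latencies, and prices are made up, and the latency/cost weighting is an arbitrary illustrative choice:

```python
# Hypothetical "best-fit" resource selection: a burst of Sydney traffic
# should pick the region that best balances client latency against cost.
# All numbers below are invented for illustration.

regions = [
    {"name": "us-east",   "latency_ms": 250, "cost_per_hr": 0.10},
    {"name": "ap-sydney", "latency_ms": 30,  "cost_per_hr": 0.14},
    {"name": "eu-west",   "latency_ms": 320, "cost_per_hr": 0.11},
]


def best_fit(regions, latency_weight=0.7):
    """Return the region with the lowest blended latency/cost score."""
    max_lat = max(r["latency_ms"] for r in regions)
    max_cost = max(r["cost_per_hr"] for r in regions)

    def score(r):
        # Normalize both dimensions so they are comparable, then blend.
        return (latency_weight * r["latency_ms"] / max_lat
                + (1 - latency_weight) * r["cost_per_hr"] / max_cost)

    return min(regions, key=score)


chosen = best_fit(regions)  # picks ap-sydney for this sample data
```

A real selector would also have to encode the geopolitical constraints mentioned above (for example, a hard filter on which jurisdictions a workload may run in), not just a cost/latency score.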
In my opinion there will be thousands, perhaps hundreds of thousands, of huge infrastructures needing interconnected, interoperable services around the globe.
The key to connecting all of these new infrastructures together will hinge on traditional IT management disciplines. Applications such as monitoring, automation, provisioning, and configuration will have to start melding together to make this new IT infrastructure of the future possible. The good news for IT management vendors is that with the advent of these emerging infrastructures, terms like autonomics will really mean something, and the ROI of well-defined IT management APIs will be invaluable to enterprises. Monitoring and automation will be the key to the new infrastructure, playing the role of its autonomic nerve center, while provisioning and configuration will be its arms and legs. Monitors will predict possible resource shortages, and automation, provisioning, and configuration will allocate and make new resources available. Proprietary solutions from core vendors like IBM, Sun, HP, Microsoft, BMC, and Oracle will definitely be part of the final solution, as will core open source plays like Hyperic, Zenoss, Puppet, and ControlTier. Monitoring and automation will act as the autonomics that determine when to speed up or slow down service requests and manage queues and other resources. Provisioning will be used to allocate resources dynamically, and cloud-based or virtualized infrastructures will allow this to occur with sub-minute execution time. Configuration tools will provide the last mile, ensuring uniformity, compliance, and proper execution of the provisioned services.

I spoke to the guys at ControlTier last week, and they said they had talked with Puppet and the guys at HJK Solutions at O’Reilly’s Velocity conference about how to create some interesting cloud prototypes. An interoperable prototype with Puppet, iClassify, and ControlTier might go a lot farther in engaging the cloud standards discussion than any premature meta-language standard.
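The nerve-center/arms-and-legs division of labor described above is essentially a control loop. A minimal sketch, with invented thresholds and function names standing in for real monitoring, provisioning, and configuration tools:

```python
# Minimal sketch of the autonomic loop described above: a monitor predicts
# a shortage, automation decides, provisioning adds capacity, and
# configuration brings the new node into compliance. The 80% threshold and
# per-node capacity of 10 are illustrative assumptions, not real defaults.


def monitor(load: float, capacity: float) -> bool:
    """Predict a shortage when utilization crosses 80% of capacity."""
    return load > 0.8 * capacity


def provision(pool: list[str]) -> str:
    """Allocate one more node (sub-minute in a cloud, in theory)."""
    node = f"node-{len(pool) + 1}"
    pool.append(node)
    return node


def configure(node: str) -> str:
    """The 'last mile': enforce a uniform, compliant configuration."""
    return f"{node}: baseline config applied"


def autonomic_step(load: float, pool: list[str]) -> list[str]:
    # Each node is assumed to handle 10 units of load.
    if monitor(load, capacity=len(pool) * 10):
        configure(provision(pool))
    return pool


pool = ["node-1", "node-2"]
pool = autonomic_step(load=18.0, pool=pool)  # 18 > 16, so a node is added
```

Real autonomics would add hysteresis, scale-down, and failure handling, but even this toy loop shows why monitoring, automation, provisioning, and configuration have to meld together: each step is useless without the others.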
It is my belief that the core of any good cloud standards discussion will most likely lead you directly back to the old IT management standards discussion, for which I will leave you in the capable hands of Mr. Vambenepe, who is indeed an expert on the subject.
A few of the leading cloud vendors I have talked to imply it might be a little too early to start a “cloud” standards definition. While most agree that standards will be necessary at some point, almost all agree there is a lot of work to be done before we can get there. A few cloud vendors are lining up their meta definitions as best practices and letting lessons learned guide them as they go. 3Tera is probably in the best position to start this discussion from a cloud interoperability perspective, since they are one of today’s leaders in running cloud-to-cloud infrastructures. 3Tera has recently introduced Cloudware as their stake in the ground. Despite some misrepresentations by Forbes (not 3Tera) in a recent article, I think they are on the correct path (listen to me get all googley-eyed in a recent Cloud Cafe podcast). RightScale probably has the most experience in providing public cloud services on Amazon’s EC2 and S3, and based on that experience they will also have a significant say in how the IT infrastructure of the future interoperates. Elastra is another vendor I have been following; they have introduced robust meta-definitions for cloud operations called ECML/EDML (here is the Elastra Cloud Cafe podcast). It is also my belief that emerging open source cloud meta definers, like Eucalyptus, Scalr, and HP’s SmartFrog (I just learned about SmartFrog today from William V.), will play a huge part in the final IT management infrastructure. However, unless these open source meta proponents include the participation of other open source monitoring and automation solutions like Hyperic and Zenoss, and provisioning and configuration solutions like Puppet and ControlTier, they might get lost in all the noise.
Cloud technology has clearly raised the bar on IT infrastructure. The traditional mantras of ESM autonomics are truly a reality with the new cloud and virtualized infrastructures. However, the technology is moving so fast that it seems premature to try to pin down any cloud standards at this point. What we are calling the cloud has only begun to take shape; it is 1988 and we have just installed our first Unix box. I agree with most of the cloud vendors that a standards discussion might be a little early. There is also still a lot of work to be done sorting out the basic IT management standards discussion, and I am not sure we have nailed that one yet. I do think it is important for cloud vendors to put their cloud meta solutions out there as starting points for cloud standards discussions. However, I think the real discussion starts when we sort out the old IT management standards and figure out how these new technologies fit in. Since in my opinion this is all just one big old fat infrastructure problem, I believe the basic “how do I manage IT” discussion needs to be addressed first before we can zero in on what the cloud brings to the table.
Topics: 3tera, SaaS, amazon, app engine, aws, bigtable, cloud computing, cloudt10, controltier, ec2, elastra, eucalyptus, flexiscale, hadoop, hyperic, ibm, ibm blue cloud, open source, other, puppet, ruby on rails, zenoss