IT Management and Cloud Blog


The Mighty Two in the Cloud

By John | November 12, 2008

Sometimes my Irish temper flares up when I see the cloud-o-sphere serving “stupid” for breakfast. In the past I would typically go off, open my big yap, and regret it after the fact. Case in point: yesterday eWeek was slopping out stupid with an article describing how Hyperic and RightScale are competitors fighting for the same space. This time I decided to take a more Coté-esque approach and not go off on a rant (not counting my comment on the eWeek post where I suggested the author should look into selling shoes for a living).

The good news is that I didn’t put a link to it and I only mentioned it in one of my CloudDroplets podcasts. However, this did not get past the keen eye of Javier-on-the-spot Soltero. Anticipating a pending cloud burst from me, he graciously sent me an email and offered to take some time out of his busy day to brief me on Hyperic HQ. He quickly clarified that most of the journalistic comparisons between RightScale and Hyperic were due to a misunderstanding by the journalists and not the Hyperic marketing team. Furthermore, he pointed out that both RightScale and Hyperic are funded by Benchmark, and it wouldn’t make sense to go down that road (good point!). We spent about an hour on the phone going over the salient points of HQ 4.0 and what it means in the cloud. In the end, it is now my opinion that the journalists were actually correct to identify RightScale and Hyperic as leaders in the “IT management” cloud space; however, they did so for the wrong reasons. Here’s why…

IT management, or what I have classically called ESM, can be divided into two primary silos (at least the two I care about).

  1. Provisioning and configuration management
  2. Availability and monitoring

To that extent Hyperic and RightScale do play in the same space (ESM for the clouds). However, RightScale is the undisputed leader in the provisioning and configuration management half. Even before I spoke to Javier today, I was already leaning towards Hyperic for the top spot in the latter. Today, Javier gave me a really good overview of Hyperic HQ 4.0 and what 4.0 really means in the cloud space. Now I am convinced they own the other half of the “Mighty Two in the cloud”.

Here are some important highlights of what Hyperic HQ is doing with the clouds:

  1. HQ for AWS is packaged as an Amazon Machine Image (AMI) that leverages Elastic Block Storage, so it can be deployed as easily as any other EC2 AMI.
  2. A new uni-directional, cloud-friendly agent lets you manage cloud-based virtual machines securely and reliably from inside the cloud or from an HQ 4.0 installation in your own datacenter.
  3. The agent also exposes a local JMX MBeanServer for browsing.

There are a lot of other features in Hyperic’s new HQ 4.0; however, the aforementioned are the ones I see as the most important for the enterprise cloud discussion. Those of you who follow my blog know that I have been critical of Hyperic in the past (can you say CloudStatus). However, on this round they have grabbed a leadership role in moving ESM into the cloud discussion. The ironic part is that, typically, the scope of competitors in the “Little/Mighty” jargon has only included open source vendors (OpenNMS, Zenoss, and GroundWork (ugh)). In fact, on this round they have blown both Tivoli and HP out of the water as well.

Javier, feel free to send me one of those nice Hyperic jackets, except there is one caveat: I cannot replace my beloved old trusty Zenoss gear in the now infamous “No Country” running joke.

Topics: hyperic

9 Responses to “The Mighty Two in the Cloud”

  1. CloudDroplets #6 - What they don’t teach you at Crotonville | IT Management and Cloud Blog Says:
    November 13th, 2008 at 9:31 am

    [...] Hyperic HQ 4.0 [...]

  2. William Louth Says:
    November 13th, 2008 at 11:16 am

    John, I think you had better go back to being Irish (which I am), as obviously your judgement is completely clouded here.

    Hyperic is not the only management solution on the market with “uni-directional” agent support. There are products that do not need a single (point of failure) central management server with a single database backend – a show stopper, if you ask me, for anyone offering cloud computing management. These products have a distributed (partitioned, disconnected) database residing in part within each agent, mergeable on demand and in part from any workstation point, allowing agents to even run on temporarily disconnected handheld devices. Because the database typically resides within the managed process’s memory or local storage, it can outperform a database-backed solution, especially a MySQL one, by a huge margin.
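
    To illustrate the idea only (hypothetical names, no particular product): each agent keeps its own local store, and a console merges the partitioned stores on demand instead of funnelling everything through one central database.

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Illustrative sketch only: agent-local metric stores merged on demand.
        public class AgentStoreSketch {

            // An agent-local store: metric name -> latest value, kept in the agent's memory.
            static class AgentStore {
                final String agentId;
                final Map<String, Double> metrics = new LinkedHashMap<String, Double>();

                AgentStore(String agentId) { this.agentId = agentId; }

                void record(String metric, double value) { metrics.put(metric, value); }
            }

            // Merge several agent-local stores into one view, only when asked.
            static Map<String, Double> merge(AgentStore... stores) {
                Map<String, Double> merged = new LinkedHashMap<String, Double>();
                for (AgentStore store : stores) {
                    for (Map.Entry<String, Double> e : store.metrics.entrySet()) {
                        merged.put(store.agentId + "/" + e.getKey(), e.getValue());
                    }
                }
                return merged;
            }

            public static void main(String[] args) {
                AgentStore a = new AgentStore("agent-1");
                a.record("cpu.load", 0.42);
                AgentStore b = new AgentStore("agent-2");
                b.record("cpu.load", 0.17);

                // Prints {agent-1/cpu.load=0.42, agent-2/cpu.load=0.17}
                System.out.println(merge(a, b));
            }
        }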

    I fail to see how having an AMI image of some piece of software matters in terms of cloud computing management.

    One of the most exciting features in Hyperic 4.0 is the ability to browse a local agent’s JMX MBeanServer. Yikes!!! That is what the Hyperic PR states. This was achieved in the commercial world years ago, and in a much better and more scalable manner. What has it got to do with cloud computing? Nothing.
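
    (For context: “browsing” a local MBeanServer is just a few lines of standard JMX – an illustrative sketch, nothing vendor-specific.)

        import java.lang.management.ManagementFactory;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;

        // Illustration only: list every MBean registered in the local JVM.
        public class BrowseMBeans {
            public static void main(String[] args) {
                MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                for (ObjectName name : server.queryNames(null, null)) {
                    System.out.println(name);
                }
            }
        }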

    Maybe it is a good and cheap (free) application availability solution for the enterprise, but positioning it as a cloud computing management platform when it does not address any issues within the cloud itself is, in such a context, “putting lipstick on a pig”.

    William

  3. John Says:
    November 13th, 2008 at 11:36 am

    William,

    Thank you for commenting on this blog :)

    One rule I made for myself is that I would never do something foolish like get into a heated argument with someone who has taken the time not only to read my blog but to comment on it (as I have been known to do on other forums). There are caveats (IBM and people who are from outer space). That is clearly not the case with you.

    In the tone of a friendly debate, here goes:

    I wasn’t saying that Hyperic is the only one that has unidirectional agents. What I was saying is that I have not seen many in the ESM availability/monitoring space that do this. For example, IBM/Tivoli does a bad job at this. Also, compared to their perceived superset (OpenNMS, Zenoss, and Groundwork), they are clearly leading on this item.

    Overall, I was also trying to make the argument that, again in this niche ESM space, they seem to be leading the pack on moving in/on the cloud. I don’t know if you saw how I took them to task on CloudStatus a while back. I was ruthless. This time round I was impressed (for what that’s worth).

    Agreed, creating an AMI is not the be-all and end-all, but a lot of people applauded when GigaSpaces did the same thing (myself included). I think there is a tone they are setting, and from my perspective it looks promising.

    On the JMX front… I am not qualified to comment on how strong they are vs. others, and that is why I didn’t bring it up in my post. However, now that it has been mentioned… I have been trying to set up a lab with IBM “Tivoli” (the billion-dollar solution) just to get Tivoli to find a “Hello world” string in a very simple MBean. I will admit I am an idiot when it comes to MBeans; however, Hyperic found it in nanoseconds, and I have spent about .01 percent as much time with Hyperic as my 15 years working with Tivoli.
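
    By “very simple MBean” I mean something on the order of this sketch (hypothetical names; just a plain standard MBean exposing one string attribute):

        import java.lang.management.ManagementFactory;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;

        // Hypothetical example: a trivial standard MBean exposing one string attribute.
        // The JMX convention is that class Hello implements interface HelloMBean.
        public class Hello implements HelloMBean {

            public String getMessage() {
                return "Hello world";
            }

            public static void main(String[] args) throws Exception {
                MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                // Register under a made-up object name so an agent or console can browse it.
                server.registerMBean(new Hello(), new ObjectName("example:type=Hello"));
                System.out.println("Hello MBean registered; point a JMX-aware agent at this JVM.");
                Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so the MBean stays visible
            }
        }

        // Standard MBean naming convention: the management interface is ClassNameMBean.
        interface HelloMBean {
            String getMessage();
        }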

    Dude, don’t get me wrong. I love the fact that you used this forum to voice your concerns, and please feel free to keep ’em coming, even if Javier never sends me that cool jacket.

    Thanks
    John

  4. Announcing HQ 4.0 | Blogging Hyperic Says:
    November 13th, 2008 at 12:08 pm

    [...] 4.0 also represents Hyperic’s continued commitment to innovation in the management space. 4.0 introduces the worlds first web application management solution designed and packaged natively for the Amazon AWS cloud. HQ for AWS is packaged as an Amazon Machine Image (AMI) which leverages technology like Elastic Block Storage to provide a fully cloud-enabled solution that can be deployed as easily as any other EC2 AMI out there. It also provides the first cloud-friendly management agent which allows users to manage cloud based virtual machines securely and reliably from either inside the cloud, or from HQ 4.0 installations inside your datacenter. Our good friend John Willis wrote up his impressions on the importance of this new development in HQ’s architecture on his blog. [...]

  5. William Louth Says:
    November 13th, 2008 at 1:23 pm

    Hi John,

    “On the JMX front…”

    I can take a memory snapshot of the complete complex object state (not just primitive attributes but whole object field graphs) of all MBeans across one or more servers in seconds, with one click. This state image can be browsed off-line, annotated, compared, merged, and used to define metrics.

    But let’s get back on track. The Hyperic 4.0 release might be a worthwhile upgrade from the perspective of a hosting company, because the management model is devoid of software activity knowledge – it is system based and by and large sees everything as a process on some host and nothing else, other than some associated metrics. But for cloud computing – the “real” cloud computing (when it arrives) – the concept of a host, system, or process is superfluous. The real cloud computing (version 2.0?) will deliver a programming model that is cloud aware, in that applications will not be aware of their location (containment hierarchy, deployment topology). Other than the cloud computing platform vendor, no one will see the point of a process, host, or cluster. For those using such a platform and programming model (along with a logical architecture), the management model will be based entirely on a utility model of activities, resources, unit costs (rate plans) and mapped cost objects (cost centers). It will be about flow without explicit and visible compute (and storage) boundaries.

    William

  6. John Says:
    November 13th, 2008 at 2:11 pm

    I am not sure this analogy is going to work; however, that has never stopped me before. Years ago I met a guy who told me he had perfected a methodology for coding in assembler where his code could never (never, never, never) loop. If you agree with him, we should stop here and agree to disagree.

    If you are still here… I tried to explain to him that this was basically impossible, and he never agreed. My point was that as long as the human “design” element was involved, there was always a scenario in which his code could loop. He then boldly defied me to tell him how… That was real simple after looking at his code (a storage overlay puts a negative value somewhere and bang!).

    I cannot provide a simple answer in your case, and you are indeed correct that what we do today will definitely be different tomorrow (Cloud 2.0 or Infrastructure 9.0, whatever).

    The other night I asked the AWS evangelist why Amazon doesn’t expose more metrics for things like SQS and EC2. He asked me why I needed that stuff. I told him, to monitor my application. He said, you don’t have to worry about that, we will take care of it for you. Then I pointed him to a SmugMug article and said that’s just the tip of the iceberg. Now for something completely crazy… If I were one of those dudes on Discovery One, I would have really liked to have had some instrumentation in HAL.

    Back to the point… No matter what it looks like, we are probably going to have to monitor it somehow, and Hyperic is no further behind than anyone else and, IMHO, further along than most.

    One last point: I, like you, am stoked about the possibilities of this new IT frontier; however, I have been doing this stuff for 30 years and not a lot has changed, even though we have seen many promises.

    Again, I enjoy the debate.

    Thanks
    John

  7. William Louth Says:
    November 13th, 2008 at 3:25 pm

    Hi John,

    We are going to have “all that instrumentation”, but the management perspective and execution abstractions will be completely different.

    It is probably best to think of it in terms of different costing methods, because the end game will be all about costs and cost drivers.

    The traditional accounting approach assigns costs to organizational units (employee, department, unit, …) and is largely unaware of the output products, other than through their association with an organizational unit. There is a lack of traceability from costs to activities, other than by way of what activities an organizational unit will typically undertake. Chargebacks are estimated with a largely coarse-grained model.

    The activity-based costing approach combines a product costing method with a resource consumption model – organizational units are largely secondary, if present at all. The activity/process (in software: workflow stage, event, request, operation, call) is the central focus, and resource usage and costing are tracked in terms of it.

    True cloud computing management solutions need to focus on what is going on within the operating system process (do we even need one?), tracking the major steps involved and recording costs via indirect (cause-and-effect) and direct resource usage. The platform vendors can manage the process and meter its resource usage, but to a customer, having charges aggregated at the operating system process level (even a named or grouped one) is meaningless, at least in terms of operational management and process improvement. Customers will want to have the resource usage itemized just like a phone bill, though that is much harder for the platform vendor, since the activity is not so discernible from their side.
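
    To make that concrete, here is a minimal sketch (all names and rates are hypothetical, not any vendor’s actual API) of an activity-based metering model: resource usage is recorded against an activity, priced with a rate plan, and rolled up to a cost center rather than to a host or operating system process.

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Minimal sketch of activity-based costing for metered software activities.
        // Names and rates are hypothetical; the point is that cost is attributed to
        // activities (requests, operations) and their cost centers, not to hosts or processes.
        public class ActivityCosting {

            public static void main(String[] args) {
                // Rate plan: unit cost per unit of each metered resource.
                Map<String, Double> ratePlan = new LinkedHashMap<String, Double>();
                ratePlan.put("cpu.ms", 0.00001);     // per millisecond of CPU
                ratePlan.put("storage.bytes", 1e-9); // per byte stored

                // Resource usage recorded against one activity ("checkout.request"),
                // which belongs to the "web-store" cost center.
                Map<String, Double> usage = new LinkedHashMap<String, Double>();
                usage.put("cpu.ms", 42.0);
                usage.put("storage.bytes", 128000.0);

                // cost(activity) = sum over resources of (units consumed * unit rate)
                double cost = 0.0;
                for (Map.Entry<String, Double> e : usage.entrySet()) {
                    cost += e.getValue() * ratePlan.get(e.getKey());
                }

                System.out.printf("activity=checkout.request costCenter=web-store cost=%.6f%n", cost);
            }
        }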

    William

  8. William Louth Says:
    November 13th, 2008 at 3:34 pm

    “IMHO further along than most”

    I disagree (you did see that coming?).

    Judging by their recent visit rates to our website and viewing of our “metering the cloud” article, as well as our recent ABC release, they think the same.

    The future will be all about metering in the context of activity, and the cost and performance management models will merge.

    William

  9. John Says:
    November 13th, 2008 at 3:52 pm

    I’m thinking we should be talking podcast cafe style. I guess I should have taken a closer look at your products before I opened my big yap. Let me know if you are interested.

    Thanks
    John
