As you may have heard, Oracle has found a little bit of “cloud religion” of late, making especially loud noises this very special week of the Oracle “Open” World (and unofficial Java neutering) event. They have announced the Oracle Exalogic Elastic Cloud Server, which is well regarded as being one of the industry’s foremost Oracle servers having the word “cloud” in its title. I’m sure it’s very nice and all, but in his rush to pump up his company’s newfound cloud love, Larry Ellison didn’t really need to knock down an old friend:
Also, Ellison took shots at Salesforce on the cloud. Announcing Exalogic Elastic Cloud, Ellison said there are two definitions of the cloud: a platform for building and deploying applications - the Amazon EC2 model - and delivery of one or two applications on the internet - Salesforce.com.
He paired Oracle with the Amazon model and attacked Salesforce - whose CEO Marc Benioff is due to deliver a keynote at OpenWorld on Wednesday and whose company Ellison has invested in.
- Channel Register http://www.channelregister.co.uk/2010/09/20/hp_oracle_play_nice_avoid_hurd/
At a very basic level, I disagree with the above definition of the “cloud.” That said, I was extremely curious about what the various players in this drama might say when asked in a public forum. So today I attended a JavaOne panel session entitled “Taking Java to the Sky: Cloud Computing 2010 Expert Panel.” The panel members were all extremely knowledgeable and gave wonderful product demos, and all was going very smoothly. Too smoothly. I mean, it’s as if Larry had said nothing, and there sat the Salesforce.com and Amazon guys, right next to each other, smiling politely, while the Oracle guy shot in the occasional crazy comment about the “private cloud” like it was a real thing, and the Microsoft guy tried to suppress laughter while swimming in this pond of absurdity. I would’ve expected someone to pull out a shiv. But no.
Even more aggravating, no one posing questions would get anywhere near the subject. There they were, lined up like polite school kids at a spelling bee. They would each step up in turn, ask something like “How much will I need to change my architecture to get to the cloud?” like it’s the beach or Disneyland or something, and step away, smiling at the nice experts regardless of the answer.
So yes, since you asked, I DID take the responsibility for asking the ONLY really pertinent question, and it went a little something like this:
Yesterday, Larry Ellison said that Amazon follows a cloud model, while Salesforce.com, not so much. Would you all please 1) interpret his statement, and 2) react to it?
There was much nervous laughter from the crowd and the panel; I knew I had touched a nerve, and got ready to reap the informational benefits.
The Salesforce guy jumped, nay cartwheeled, at the chance. His comments were brief but thorough, and they amounted to saying that, well, yes, obviously companies like Oracle are threatened by the fact that they might not be able to sell gobs of hardware and software in support of big, ugly enterprise applications. It’s only natural.
The next comment, which I couldn’t pin on any particular panel member, but which definitely came from one of them, was:
“I didn’t bring my ten-foot pole with me.”
Thereafter, I was essentially brushed aside for the next questioner, and so ended my first-ever journalistic search for the Truth. Although I didn’t get all of the answers I was looking for, sometimes the unanswered questions speak loudest.
I have to admit I was a little surprised to see the following email subject this morning. But there it was, undeniable, right in my face:
Subject: Mule 3 is here!
Let’s get the full disclosure stuff out of the way. I am, in fact, a HUGE Mule fanboy, and have been for years. I have open-source Mule instances running in production, and I have long espoused the virtues of this particular ESB technology over other approaches. So when I say that I’ve been waiting with bated breath for this release, understand that I am absolutely not exaggerating. It has left me feeling somewhat tingly in my major extremities and, I’m told, blueish around the lips.
The wait has been a prolonged one, so I’ve muddled by with my 2.2 instances, spreading the good word about what would be coming in the not-too-distant future. But no more — it’s here!
So… what’s here?
- Hot Deployment. I can’t say enough about how important this is. This was the most difficult issue for me to overcome with my production deployments; I work at a truly cloud-based company, and we place a high premium on in-place upgrades supporting our zero-downtime requirements. The addition of run-time deployable packages and service activation in Mule 3 will provide far greater deployment flexibility for those of us who really need it.
- Native REST Support. Having made use of Mule 2’s existing “REST” facilities, I can attest to the fact that any improvement here is a really good thing. Combine this with the new data binding capabilities, and we’ve finally got a respectable toolkit for dealing with the most common integration style in all of Internet-dom.
- Cloud Connectors. Whenever I see the “C” word in connection with an established product, even a really good one, I get really skeptical; many large software shops are “cloud-washing” their products to generate market hype. While I applaud any effort to lower the developer’s cost of entry for integration with cloud vendors, I have fresh memories of a not-so-useful Mule 2 Force.com connector that prompted me to go off and develop my own. Still, it will be interesting to see just how robust this toolkit has become.
- Dynamic endpoints. On the one hand, I’m like, “Why did we have to wait this long for this feature?” But on the other hand, I’m like, “Awwwwww yeaahhhhh.” One thing is certain: I’m going to go delete some code once I get Mule 3 in place.
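For what it’s worth, here’s roughly the kind of code I expect to delete: hand-rolled routing through the Mule client API, computing endpoint addresses in Java because the configuration couldn’t do it for me. The class and method names below follow the 2.x/3.x client module as I remember them, and the endpoint names are invented, so treat this as a sketch rather than a reference.

```java
import java.util.HashMap;
import java.util.Map;

import org.mule.api.MuleContext;
import org.mule.module.client.MuleClient;

public class ManualOrderRouter {

    private final MuleClient client;

    public ManualOrderRouter(MuleContext muleContext) throws Exception {
        // Programmatic client into a running Mule instance
        this.client = new MuleClient(muleContext);
    }

    public void route(String region, Object payload) throws Exception {
        // Hand-rolled "dynamic endpoint": the address is computed in Java,
        // which is exactly the glue that expression-based endpoints inside
        // a Mule 3 flow should make unnecessary.
        String address = "vm://orders." + region;

        Map<String, Object> props = new HashMap<String, Object>();
        props.put("region", region);

        client.dispatch(address, payload, props);
    }
}
```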
Although I have much exploration yet to do, I’m really glad to see the advances the Mule team and community are making on this important piece of software. Thanks to all of you for making my life easier!
I’ve always enjoyed discovering the use of oxymorons in everyday language; a peculiar turn-of-phrase like “successful failure,” “simple grandeur,” or “comforting chaos” can raise the eyebrow and prompt a smile, all while delivering a potent dose of meaning. These simple demonstrations of irony are quite often delightful, and occasionally brilliant.
Unfortunately, oxymorons are not always intended, and as such, must be held up for further scrutiny. To wit, I give you: the “Private Cloud.”
On the surface, the notion seems innocuous enough. I hear this cloud thing is great, but it’s out there in Internet-land, and that’s scary. So why can’t I get some cloud-like stuff inside of my enterprise and have all the benefits with none of the risks?
The real problem with allowing this linguistic genie to escape its bottle is that the notion of a private cloud, while eye-catching as a marketing term, completely misses the point. Say it with me: there is no private cloud. Let me explain.
When a SaaS (software-as-a-service) company makes the claim that they run software “in the cloud,” they are not just saying that you can connect to their application over the Internet (in fact, some SaaS applications are better served over private network connections like MPLS — more on that some other time). Serving software from the cloud also implies:
- Economies of scale via shared infrastructure. No given physical host is dedicated to a particular customer; instead, all resources are shared across tenants. This way, no capacity goes to waste, requiring far less capital investment.
- Load balancing and disaster recovery via massive distribution and redundancy. All server roles are replicated across multiple servers in multiple data centers; there is no single point of failure.
- Minimum downtime via coordinated software updates. Software upgrades are performed once for all clients, and are performed in a rolling manner, preventing service interruptions.
- Large reduction in ongoing maintenance costs via shared operations staff. Cloud vendors become experts in efficient care and feeding of their purpose-built infrastructures, and can thus pass the savings on to customers.
So what could a “private cloud” possibly be? The meaning of “private” is easy enough to decipher — a desired application would need to run exclusively in a given private data center, and would necessarily be dedicated to the given enterprise’s local base of users. This is not new; many large companies run application server farms for internal use, and to some limited extent they can run multiple applications across these servers, reducing overall costs.
But what about “cloud,” and the benefits it implies? The economies of scale go only so far for a limited application audience, and a single company can only justify so much budget on redundant systems. Further, anyone who has spent significant time dealing with enterprise software upgrades understands the pain these can introduce when managed by local IT staff, even in those exceptional cases wherein nothing goes wrong. Finally, the incredible cost burden of managing an ever-increasing server farm has been enough of a concern to end many large, distributed software projects in the budget office.
There are times when it is perfectly appropriate to invest in a locally-hosted software product; however, be wary of software vendors targeting the “private cloud” with their traditional offerings. Chances are good that you won’t get what a true cloud-based solution can deliver.
In 2008, I was invited to present at Robodev on the subject of sensor fusion applications, with a heavy emphasis on — you guessed it — robotics (you can download the presentation from SlideShare: http://www.slideshare.net/guestc4ce526/roomba-sensor-fusion). Although the presentation was in a small room and was sparsely attended, there were enough people there to make for a reasonable back-and-forth on the subject, and I managed to pique the interest of some of the attendees in FOSS, in particular the Esper Project.
As I mentioned in my previous post, Esper is an open source complex event processing (CEP) engine with a surprising array of capabilities. In summary, Esper supports:
- multiple sources of real-time events, and the addition of custom event sources
- multiple publishing sinks, and customization here as well
- real-time event queries using the EQL query language
- access to static data sources for event queries in a larger context
- time windows, counts, filters, etc…
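To put a little flesh on that list, here is a minimal sketch of the Esper setup I used: a POJO event type, a time-windowed query, and a listener that fires as results change. The class and method names follow the com.espertech.esper client API as I remember it (createEPL and friends); details have shifted a bit between releases, so consider this illustrative.

```java
import com.espertech.esper.client.Configuration;
import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPServiceProviderManager;
import com.espertech.esper.client.EPStatement;
import com.espertech.esper.client.EventBean;
import com.espertech.esper.client.UpdateListener;

public class SpeedMonitor {

    // Simple POJO event; Esper reads the "speed" property via its getter
    public static class SpeedReading {
        private final double speed;
        public SpeedReading(double speed) { this.speed = speed; }
        public double getSpeed() { return speed; }
    }

    public static void main(String[] args) {
        // Register the event type with the engine
        Configuration config = new Configuration();
        config.addEventType("SpeedReading", SpeedReading.class);
        EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider(config);

        // A rolling five-second average over the real-time stream
        EPStatement stmt = engine.getEPAdministrator().createEPL(
            "select avg(speed) as avgSpeed from SpeedReading.win:time(5 sec)");

        // The listener fires each time the window updates
        stmt.addListener(new UpdateListener() {
            public void update(EventBean[] newEvents, EventBean[] oldEvents) {
                System.out.println("average speed: " + newEvents[0].get("avgSpeed"));
            }
        });

        // Readings would normally arrive from a sensor feed; simulate a couple here
        engine.getEPRuntime().sendEvent(new SpeedReading(0.30));
        engine.getEPRuntime().sendEvent(new SpeedReading(0.34));
    }
}
```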
For an open source project, this is an insanely capable tool. So what led me to believe it could be useful for sensor fusion projects? Here is a picture of a general sensor fusion model:
Note the importance of transformation and matching in this model, and the application of a feedback loop for model maintenance and prediction over time. Now, my “EventBot” implementation model:
I built the Roomba Connector from scratch to interact with a Roo232 physical link, the Event Transformer as the feed into Esper, and the command pattern code to manipulate the robot’s servos (all Java modules). Once this was in place, however, I was able to create the world model and develop situation detecting queries exclusively in Esper’s EQL language.
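The Event Transformer itself is little more than a bridge: it takes each raw status packet coming off the Roomba Connector, maps it onto a plain event object, and pushes it into the engine, where the EQL statements take over. A rough sketch follows; the class and property names are stand-ins for my actual code, and the event type is assumed to be registered with the engine as in the earlier snippet.

```java
import com.espertech.esper.client.EPServiceProvider;

public class EventTransformer {

    // Minimal POJO event; the real project carried more of the Roomba's state
    public static class SensorEvent {
        private final double distanceDelta; // mm traveled since last poll
        private final double angleDelta;    // degrees turned since last poll
        public SensorEvent(double distanceDelta, double angleDelta) {
            this.distanceDelta = distanceDelta;
            this.angleDelta = angleDelta;
        }
        public double getDistanceDelta() { return distanceDelta; }
        public double getAngleDelta() { return angleDelta; }
    }

    private final EPServiceProvider engine;

    public EventTransformer(EPServiceProvider engine) {
        this.engine = engine;
    }

    // Called by the Roomba Connector for each status packet read off the Roo232 link
    public void onReading(double distanceDeltaMm, double angleDeltaDeg) {
        // Normalize the raw packet into an event and hand it to Esper
        engine.getEPRuntime().sendEvent(new SensorEvent(distanceDeltaMm, angleDeltaDeg));
    }
}
```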
Here, by the way, is the final product:
I didn’t demonstrate complex behaviors during the presentation; the live behavior of the robot was to govern its own speed to a high degree of accuracy while driving in a circle. It did help me to make the argument, however, that open source projects from the enterprise world might have a place in robotics applications and other real-world domains.
If you’re interested in the code or other aspects of the project, please post a comment.
Of the many points made by Cliff Stoll in his 1990 computer espionage account “The Cuckoo’s Egg,” one of the most insightful was also somewhat of an IT paradox. Near the end of the book, Stoll described an infamous Internet worm attack, and suggested that one way to avoid extensive damage from attacks of this nature was to encourage diversity of networked systems, thus increasing incompatibility.
This flies in the face of conventional data center wisdom, which would tend to push us towards homogeneity for its inherent efficiencies. Nearly 20 years later, IT has overachieved in the realm of incompatibility (with the eager help of developers). This hasn’t prevented the emergence of viruses and worms, but has probably minimized their scope of impact.
This diversity, however, presents IT with the integration problems with which we have all become very, very familiar, and which allow people like me to earn a living. Data for an executive summary report lives in an old mainframe AND in a new MySQL database, and must be merged. Three corporate HR systems exist due to mergers, and now must be rationalized. Call center agents enter contact information using the new web-based CRM system, but register sales opportunities in SAP R/3. And integration developers go ca-ching, ca-ching, ca-ching. Thanks for that, by the way.
But this condition becomes much more serious when lives are at stake. Take the recent airline bombing attempt as an example. As has been disclosed by the White House, actionable intelligence existed that could have prevented the alleged attacker from boarding the plane in the first place. Unfortunately, neither the organizational processes nor the integration technologies existed anywhere in our government to support the detection of this situation. While both will require improvement, it’s the second of these (naturally) that I want to address here.
We’ve heard that the data to stop this guy existed — so where was it? We could possibly have known the following things:
- 6/16/2008: Abdulmutallab granted a US Visa, valid until 6/12/2010
- 8/1/2008 to 8/17/2008: He visited Houston
- 5/2009: The UK refused his visa application
- 11/19/2009: His father informed our embassy in Nigeria that he might be a threat; “Visas Viper” notification sent to the State Department on 11/20/2009; Abdulmutallab’s name entered into the National Counterterrorism Center’s Terrorist Identities Datamart Environment (TIDE) database
- 12/16/2009: He buys a round-trip ticket to Detroit (in cash) by way of Amsterdam
- 12/24/2009: He boards the connecting flight to Detroit in Amsterdam, passing through security with his valid passport and visa
(Source: the Associated Press by way of mlive.com)
Taken one at a time, none of these, short of the warning from Abdulmutallab’s father, would call for any reaction at all, and even that might not be recognized as a threat under some circumstances. However, taken together, there is enough information here to a) determine that a credible threat exists, and b) do something about it. Unfortunately, as we have heard so very many times in the last two weeks, the holders of these important bits of information failed to “connect the dots.”
Human communication is a chaotic business, even under the influence of a well-defined process. For an alternative to reliance on the chance recognition of a pattern by an individual, or on the random phone call from the State Department to someone at the CIA, one need look no further than Complex Event Processing (CEP) technologies, which are designed to listen on rapidly flowing streams of seemingly unrelated events, all the while searching for patterns in the data, or in the sequence thereof. For an example of a very rich, open source CEP engine, check out Esper:
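To make that concrete, here is a hypothetical Esper pattern that watches for a cash ticket purchase by someone who has already appeared in a watchlist feed. The WatchlistEntry and TicketPurchase event types and their properties are entirely my invention for illustration; the point is only that a correlation like this is a few lines of a query rather than a lucky phone call.

```java
import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPStatement;
import com.espertech.esper.client.EventBean;
import com.espertech.esper.client.UpdateListener;

public class WatchlistCorrelator {

    // Hypothetical event types, registered with the engine and fed by the
    // integration layer discussed below
    private static final String QUERY =
        "select watch.passportNumber as passport, purchase.flightNumber as flight " +
        "from pattern [every watch=WatchlistEntry " +
        "              -> purchase=TicketPurchase(passportNumber = watch.passportNumber, paidCash = true) " +
        "              where timer:within(60 days)]";

    public static void register(EPServiceProvider engine) {
        EPStatement stmt = engine.getEPAdministrator().createEPL(QUERY);
        stmt.addListener(new UpdateListener() {
            public void update(EventBean[] newEvents, EventBean[] oldEvents) {
                // In real life this would open a case for a human analyst,
                // not print to a console
                System.out.println("Flag for secondary screening: passport "
                        + newEvents[0].get("passport")
                        + ", flight " + newEvents[0].get("flight"));
            }
        });
    }
}
```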
But what of the events themselves? A “Visas Viper” missive is indubitably much different in form and communication method from an international airline reservation notification, and yet it is just these disparate information sources that must be combined to connect those all-important dots we keep hearing about. To acquire, mold, and feed these notifications into a CEP engine, we need an Enterprise Service Bus (ESB). Again reaching into the open source grab bag, we find Mule, an amazingly rich tool, the Swiss Army Knife of the open source world:
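As a sketch of what the ESB side might contribute, here is a Mule transformer that normalizes a raw, pipe-delimited “Visas Viper”-style cable into the hypothetical WatchlistEntry event used above. The cable format and field layout are pure invention on my part; AbstractTransformer is Mule’s standard extension point for this kind of payload massaging, though the exact signatures vary a bit between Mule 2 and 3.

```java
import org.mule.api.transformer.TransformerException;
import org.mule.transformer.AbstractTransformer;

/**
 * Turns a raw cable payload (a made-up, pipe-delimited format) into the
 * canonical WatchlistEntry event the CEP engine understands. In a Mule
 * flow, this transformer would sit between the inbound endpoint that
 * receives the cable and whatever component hands the event to Esper.
 */
public class ViperCableToWatchlistEntry extends AbstractTransformer {

    // Canonical event shared with the CEP layer (hypothetical)
    public static class WatchlistEntry {
        private final String passportNumber;
        private final String fullName;
        public WatchlistEntry(String passportNumber, String fullName) {
            this.passportNumber = passportNumber;
            this.fullName = fullName;
        }
        public String getPassportNumber() { return passportNumber; }
        public String getFullName() { return fullName; }
    }

    @Override
    protected Object doTransform(Object src, String encoding) throws TransformerException {
        // e.g. "VIPER|ABUJA|2009-11-19|A1234567|SOME NAME"
        String[] fields = src.toString().split("\\|");
        return new WatchlistEntry(fields[3], fields[4]);
    }
}
```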
Of course, no integration project is free, even one involving heavy use of open source software. But the cost of not knowing about a potential attack is potentially much greater, measured in lives, dollars, or threats to personal freedom. I expect that someone in the federal government’s IT department is already working towards a CEP architecture of some sort. If not, here’s hoping they stumble across this post and consider it.
I’ve finally done it. I’ve finally decided to put some effort into sharing my opinions about enterprise software integration, computer-telephony integration, peanut butter-chocolate integration, basically all things related to the i-word. Because if there’s one thing I know, it’s definitely integration. And there really is just that one thing.
Throughout my career, my work has tended less towards building shiny, pristine software modules where core business logic lives and more towards that sticky, nasty area between systems where no solution is simple or pretty, nothing fits with anything else, everything can and will go wrong at any time, and no one believes any of it. Projects of mine have included:
- an old-school Visual Basic VBX that allowed Windows to talk to X10 wall socket network adapters
- a first-party telephony library for business desktop telephones
- a serial/modem/TCP connector for industrial flow meters
- a Windows 3.1 program that used COM to glue a bunch of unruly Windows processes together and provide a single, cohesive-looking reporting application
- a standalone Java-based service that bridged the Asterisk FastAGI protocol to the Oracle BPEL workflow engine for communication-enabled business apps
- a document import connector for a workflow engine
- Salesforce and Twitter integrations for call centers
…and on and on. No, it’s not the glorious path — nothing I write tends to get its name in the headlines. But the problems are interesting and hard, and the solutions, if thought through properly, can be elegant and more broadly applicable than might immediately be understood.
You’ll hear no complaints from me; I like dropping into the middle of a mess and finding some reasonable way to clean it up. Even more, I really enjoy the opportunity to create some new binding or bit of integration infrastructure, or find a way to apply open source ESB or CEP technology in a way that solves immediate problems and may also address future needs.
I suppose I feel ready to blog about this stuff because, at this point, I think I’ve got a fairly broad view of the integration landscape, from patterns and problems to the dizzying array of applicable tools. I also find that teaching is one of the best ways to learn, so even if no one ever reads my posts, I’m getting something out of the effort.
Regardless of the public results, I look forward to sharing my views, and I hope that someone, somewhere, on some deep, dank night in the integration dungeon, finds them useful.