  • www.jaxenter.com

    #25

    Issue February 2013 | presented by

    Socket to them! – Pushing data using WebSockets in Glassfish

    Open source politics – Vert.x dispute ends with foundation move

    Total Eclipse – In conversation with Mike Milinkovich

    FuseFabric – Managing multiple ServiceMix clusters

    This issue proudly supported by

    ©iStockphoto.com/traffic_analyzer

  • Editorial


    The politics of open source

    I know what you’re thinking: this isn’t the friendly face you expect to see on page two of JAX Magazine. But don’t worry, Chris hasn’t been ousted in a bloody office uprising – I’m merely joining him as co-editor of JAX Magazine. We already run JAX Magazine’s sister website JAXenter.com together, so it seemed to make sense to share this publication, too.

    Speaking of sharing: the big story this month has been the dispute between VMware, Red Hat and Tim Fox over the ownership of Vert.x’s IP. If you haven’t been following along, there’s a blow-by-blow report of the drama later in the mag. Luckily, at the time of writing it looks like a happy ending, with Vert.x likely to go to the Eclipse Foundation.

    Some drew comparisons with the situation the Hudson community found itself in back in 2010, and for a while it seemed a textbook example of large corporations misunderstanding open source software.

    Yet these corporate-backed projects aren’t the biggest issue facing the open source movement today: instead, it’s open source projects’ increasingly relaxed attitude to licensing. Of the five currently trending repos on GitHub at the time of writing, one has no license whatsoever beyond an “All rights reserved” statement.

    It’s a prevalent problem: one report claims that 50% of GitHub projects have no copyright information whatsoever. Technically, all published code remains under the ownership of the author without an explicit license – potentially opening up casual forkers to a world of legal trouble. Besides, just because an MIT license has been slapped on a repo, it doesn’t mean the code hasn’t been ripped from somewhere else.

    In this context, OSS foundations such as Eclipse and ASF are as relevant as ever: necessary to assure enterprise users that the code is vetted, properly licensed and isn’t controlled by the competition. GitHub is great, too: but let’s not mistake it for the definitive open source library.

    Elliot Bentley, Co-Editor

    Index

    Open Source Governance – The Vert.x fallout shows the intricacies of managing an open source project (Chris Mayer)

    Talking everything Eclipse with Mike Milinkovich – We sat down with the Executive Director of the Eclipse Foundation to discuss 2012 and beyond

    Smart Search with FIQL and Apache CXF – Sergey Beryozkin explores the use of FIQL within the popular Java framework

    Fixing Java Production Problems with APM – Dan Delany hunts in the haystack with New Relic

    Pushing browser updates using WebSockets in GlassFish – Steve Millidge gives us a WebSocket taster using a combination of languages and a popular Java server

    Managing ServiceMix clusters with Fuse Fabric – Torsten Mielke introduces some powerful OSGi and ESB concepts through the FuseSource project

  • Hot or Not


    Firefox OS – Among the humongous TVs and USB forks of CES, Firefox OS was quietly making waves with a near-complete build of a 1.0 release running on a real smartphone. The first commercial devices are likely to be on sale later this year, initially aimed at the low-end South American market (although Smart, Sprint, Telecom Italia, Telenor, Telefónica and Deutsche Telekom have all pledged support too). If you’re interested in developing for Firefox OS, Mozilla are putting on free “Firefox OS App Days” hacking events around the world at the end of January.

    ScriptCraft – Our favourite open source project of the month involves an unlikely combination of JavaScript, Minecraft and education. Walter Higgins’ ScriptCraft mod uses the JVM’s inbuilt Rhino engine to parse JavaScript commands and send them to Minecraft’s (Java-based) API. More than a mere technical exercise, ScriptCraft is designed to lower the barrier of entry to children taking their first steps into coding, marrying a popular game with an accessible language. You can find out more at github.com/walterhiggins/ScriptCraft.

    Vert.x – This very nearly ended up in the “not” category, but at the time of writing it looks as though the Vert.x drama is ending happily. Long story short, the intellectual property rights to open source asynchronous framework Vert.x were contested by VMware – according to creator Tim Fox, anyway. After a piling-on by the community and weeks of passionate mailing-list debates, Vert.x now looks likely to move under the Eclipse Foundation. Chris Mayer explores the full saga later in this issue of JAX Magazine.

    Java security woes – This particular topic is getting rather tiresome, but when the Department of Homeland Security issues a warning urging users to disable Java, it’s probably worth sitting up and taking notice. January’s zero-day exploits, discovered by the developers of “Blackhole” malware software, may only affect browser applets but each headline causes damage to Java’s overall reputation. Sort it out, Oracle!

    Qualcomm – Filling the space of Microsoft’s traditional opening keynote this year at CES was Qualcomm, who seem to be attempting to become a desirable consumer brand. Unfortunately they may have a way to go, judging by this embarrassing marketing exercise. Opening with an excruciating segment about how today’s generation are “born mobile”, the keynote went from bad to worse with appearances from Guillermo del Toro, a Star Trek actress, Steve Ballmer and Sesame Street’s Big Bird. It’s a shame, because it detracted from interesting news about a Java-based M2M system, the “Internet of Everything”, which was barely mentioned onstage.

  • Vert.x is Eclipse-bound


    by Chris Mayer

    Asynchronous event-driven framework Vert.x was only known to a select group of people at the beginning of this year. Inspired by Node.js, the polyglot project became one to watch for developers looking to create modernised applications, utilising a palette of JVM languages to build separate components.

    With only a few months of development under its belt, the fledgling project achieved modest attention with its first GA release in May 2012. Many developers were keen to play with the project, but only a handful of enterprises placed bets early, which was to be expected.

    Those unaware of the Apache 2.0-licensed project before might do now for entirely different reasons. January’s very delicate public tussle between two tech giants left the community and early adopters anxious over Vert.x’s future.

    What could have been a very messy fallout and a bad PR move for the two companies appears to have ended amicably, with the voices of the open source community crucial in salvaging the project.

    Background

    The spark was the decision of Vert.x’s creator, Tim Fox, to move from VMware, where Vert.x was originally sponsored, to Red Hat. As a result, VMware demanded (delivering legal letters in person, no less) that Fox give up all administrative rights to the project, including its domain and GitHub repo.

    Fox’s tale of their heavy-handed approach didn’t go down well with Vert.x community members or prominent open source figures. Furthermore, VMware rejected Fox’s perhaps naïve proposal for him to use the trademark elsewhere, leaving the project in a precarious position.

    After VMware and Red Hat squabbled over the future of the JVM’s answer to Node.js, the community behind Vert.x comes out on top.

    The Vert.x fallout shows the intricacies of managing an open source project

    Open Source Governance

    Image licensed by Ingram Image


    Mark Little, VP at Red Hat/JBoss, allayed fears of a stalemate between the two companies in a joint statement with VMware, adding that both were very “much in active discussion” over how to move forward and that they would like to hear the views of the Vert.x community.

    This left the project with two viable options on the table: house the project at a neutral open source foundation, such as Eclipse or Apache, or fork the project and make a clean break, free of the Vert.x trademark.

    Talk of forking drew comparisons to the Hudson/Jenkins situation from some observers, when the Hudson CI server split into two entities following Oracle’s acquisition of the project in the Sun Microsystems takeover in 2009. There are of course nuances between the two situations. Vert.x is still in its infancy, whereas Hudson had already generated huge developer interest by the time of its fork, winning a Duke’s Choice Award in 2008. Vert.x is by no means a one-man band, but much of the responsibility surrounding the project falls on Fox.

    The discussion

    It soon became apparent that the community were swaying towards moving Vert.x to an open source foundation in its current state, as VMware and Red Hat both were keen to go down this road. Many recognisable open source figures waded in throughout the discussion to offer their opinions on the situation. Amongst those giving advice were Eclipse director Mike Milinkovich and Apache co-founder Jim Jagielski – both of whom could be considered to have ulterior motives, with Vert.x’s potential valued so highly in the industry. Yet each played the role of open source diplomat, explaining how their foundation is structured, outlining the practices they preach and how each project is managed (by a committee or project lead). It was refreshing yet unsurprising to see such openness about which foundation would work for Vert.x and which would not. The thread could quite easily have been titled “Open Source Governance 101” – it was that good at outlining the most intricate differences between the two foundations that you wouldn’t think of considering from the outside.

    Kohsuke Kawaguchi, the creator of Hudson and now leading the Jenkins project, also offered sage words, having experienced similar problems a few years earlier. Encouragingly, it wasn’t the big two who dominated, with other smaller FOSS foundations like SPI, Outercurve and Conservancy making their mark in a very active debate.

    Moving to a neutral foundation isn’t easy at the best of times. With Vert.x moving in such difficult circumstances,

    the amount of support Fox was given (especially towards the bureaucratic side of FOSS) really showed that open source foundations are driven to see the best outcome for any given project.

    To Eclipse!

    After “much personal thought”, Fox made his own personal recommendation, weighing up whether Vert.x needed what he called “the full service” or a foundation that provides sufficient IP management, leaving Apache and Eclipse standing.

    What eventually swung it in Eclipse’s favour for Vert.x’s creator was its “business friendly attitude”, which he saw as crucial in getting “a foothold in large enterprises” in the future. He also disclosed that he was not a big fan of Apache’s voting process, which would see his role as project lead diminish. Jagielski even conceded that Fox’s reasoning for not picking Apache was correct before adding that the ASF “isn’t for everyone nor for every project (nor do we claim to be)”.

    Red Hat and VMware gave their blessing and the community appeared to generally approve of the decision. Within hours, a new topic led by Mike Milinkovich had appeared, outlining the move to Eclipse and what was needed. Every Eclipse project needs a draft proposal to begin with, whilst the issue of a dual-license was discussed. Vert.x already has an Apache 2.0 license but most Eclipse projects prefer to have an Eclipse Public License (EPL), to make it easier to move code across from one project to another. At the time of writing, the process is still ongoing but steady progress is being made.

    It’s a relief to see this situation between VMware, Red Hat and the Vert.x community end well. It has shown us how easy it can be for an open source project to be held up by a corporate entity, and also the important role that open source governance plays when hosting a project in avoiding issues such as these. Hopefully, Vert.x can now blossom under fresh guidance at Eclipse.

    “The thread could quite easily have been titled “Open Source Governance 101” – it was that good.”

    “Vert.x is by no means a one-man band, but much of the responsibility surrounding the project falls on Fox.”


  • Eclipse in 2012


    JAX Magazine: So as Executive Director of the Eclipse Foundation, what have been your personal highlights for yourself and Eclipse throughout 2012?

    Mike Milinkovich: For me, the two most important milestones in 2012 were shipping Eclipse 4.2 as the platform for the Juno release, and shipping Orion 1.0. Both of these events are about the future of the Eclipse community. Eclipse 4.2 is a significant re-factoring and redesign of the Eclipse platform which has been more-or-less stable since 2004. Orion is a completely new tooling platform for the web, which offers the ability to work on your code from a browser. Both of these represent the future of the Eclipse community and ecosystem.

    A third important milestone was SAP shipping its NetWeaver Cloud offering based on the Eclipse Virgo project. Having a major vendor basing such a significant product on Eclipse runtime technologies is a great endorsement of the work that the Eclipse RT community has been doing for several years.

    JAXmag: How proud were you to see the Association for Computing Machinery (ACM) recognise Eclipse with the prestigious Software System Award?

    Milinkovich: The ACM Software Systems Award was an amazing and well-deserved recognition of the original Eclipse team. It is hard to overstate the impact that Eclipse has had on the industry since it was introduced in 2001. It has completely changed the software development landscape by providing an extensible and open source tooling platform. To win this award it’s not enough to simply dream big. You need

    to build an industrial-quality implementation and see worldwide adoption of your technology. John Wiegand, Dave Thomson, Greg Adams, Philippe Mulet, Julian Jones, John Duimovich, Kevin Haaland, Stephen Northover, and Erich Gamma were the leaders that made that happen.

    On a personal note, it was very gratifying to see many of my former OTI colleagues and fellow Carleton University alumni winning such a prestigious award.

    JAXmag: How did EclipseCon/EclipseCon Europe go this year?

    Milinkovich: Incredibly well. EclipseCon Europe in particular was the biggest and best ECE yet. The feedback that we got from the attendees was that EclipseCon Europe 2012 was the best EclipseCon ever. I am already looking forward to EclipseCon in Boston in March, and EclipseCon Europe in Ludwigsburg in October.

    In 2013 we’re adding a third EclipseCon. EclipseCon France will be held in Toulouse in June. We are expecting another great Eclipse community event.

    JAXmag: Eclipse Juno was the biggest release train yet, with 72 projects. Just how big a challenge was this logistically for the Eclipse Foundation?

    Milinkovich: The release train process itself runs extremely well under the leadership of David Williams. The fact that we run

    Portrait

    Mike Milinkovich is the current Executive Director of the Eclipse Foundation. He also serves on the board of the Open Source Initiative. Outside of work, Mike's passions are his family, the family cottage and hockey (as a coach, player and fan) in pretty much that order. When he's not working, or traveling for work, you will probably find him involved in one of those three things.

    We sat down with the Executive Director of the Eclipse Foundation to discuss 2012 and beyond

    Talking everything Eclipse with Mike Milinkovich

    Juno, Kepler and Orion

    “All of us involved in Eclipse are committed to excellence, and our community keeps us honest.”


    so much of this as a distributed process, where each project is responsible for its own work, is what makes the simultaneous release even possible. Some refinements that we’ve put in place, like the final quiet period, have made the logistics of getting the mirrors ready and the downloads set up much more manageable than they were say five or six years ago. Probably the biggest challenges are around the IP review process managed by Janet Campbell and the release review process managed by Wayne Beaton. Those two and their staff definitely worked hard to make Juno possible.

    JAXmag: It’s fair to say performance issues with the 4.2 platform have dogged this release. What have you learnt from the situation?

    Milinkovich: That our community is and will always be demanding. Which is a good thing. All of us involved in Eclipse are committed to excellence, and our community keeps us honest. But I am definitely happy – even in retrospect – with the decision to release Juno based on the new Eclipse 4.2 platform. After two years of betas, we were not getting the level of detailed feedback that we needed to continue to improve the Eclipse 4 platform. Yes, we ended up with some controversy. But in the end we are getting a faster and better Eclipse with a more extensible, simple and modern architecture.

    JAXmag: How are the issues being addressed in the next release Kepler?

    Milinkovich: We’re not waiting for Kepler. These issues are being addressed now. In fact, the team took the unprecedented step of releasing an interim update on December 13th which addresses a great deal of the performance issues. For those who are experiencing performance issues, I highly recommend reading wiki.eclipse.org/Platform_UI/Juno_Performance_Investigation and downloading the Eclipse UI Juno SR1 Optimizations referenced there. Eclipse users can expect to have all of those performance improvements included in the Juno SR2 release which will ship in February.

    JAXmag: Orion recently went 1.0 – can you explain what Orion is, the thinking behind launching it and whether this represents a completely new frontier for Eclipse?

    Milinkovich: Orion is definitely a completely new frontier for the Eclipse community. And I chose those words carefully, because the Orion project actually has very little to do with

    what people know today as Eclipse technology. To a certain degree, Orion is also intended for a different audience, as we expect many of Orion’s users to be web developers who have little or no experience with the Eclipse IDE.

    Orion is a new codebase that provides an open source web tooling platform. As an editor it competes handily with tools like Cloud9 IDE and CodeMirror. But it is significantly more ambitious than that, in that its real goal is to provide an extensible tool integration platform which works with all of the major browser platforms. Orion provides a simple URI-based approach to integrate web tools into a workflow.

    It is difficult to explain Orion to an Eclipse audience, because without a demo it is hard to get past the preconceived notion that it must be just like Eclipse in a browser. It’s not. Orion uses the normal idioms of web navigation, and the browser as its platform, to deliver a very different set of navigation and development workflow patterns. I highly encourage people to try it out at OrionHub [1].

    JAXmag: Eclipse Kepler – how is that shaping up for June 2013?

    Milinkovich: As I said earlier, the simultaneous release process works extremely well. So far the process is operating smoothly. One major new project coming in Kepler which deserves mention is Stardust, which is a toolset and runtime for business process management [2].

    JAXmag: We’ve seen plenty of news regarding M2M at Eclipse over the past 12 months – how pleasing is it to see these projects blossom, and what plans are afoot for 2013? Will M2M play a big part?

    Milinkovich: Watching new communities and projects come to Eclipse and become successful is without a doubt my favourite part of the job.

    Machine-to-machine, or the Internet of Things as you also sometimes hear it referred to, is a major new technology trend which over time is going to impact all of us. Web-enabled devices are going to be a very large part of our existence in the near future. I personally believe that it is critical that the technology which drives M2M be completely open. This is both for philosophical and ethical reasons as well as business reasons. Ethically, if we as humans are going to be observed and measured by these communicating devices we need to be able to know the code that’s in them, and where the data is going. From a business perspective, the Internet itself is an example of how a radically free and open architecture has created massive opportunities. The

    “Orion is definitely a completely new frontier for the Eclipse community”

    “In the end, we are getting a faster and better Eclipse with a more extensible, simple and modern architecture.”



    future Internet of Things needs to be at least as open as the Internet we know today.

    I am expecting big things from the M2M community at Eclipse. We already have an interesting collection of tools, frameworks and protocols, and I believe there is much more to come.

    JAXmag: Looking forward to this year, what are some of the key goals for the Eclipse Foundation?

    Milinkovich: 2013 is going to be a busy year! On the plate for the Eclipse Foundation are CBI, LTS, and continuing work to grow our working groups.

    The Common Build Infrastructure (CBI) is a new service that we are offering Eclipse projects. Since the beginning, Eclipse projects have been responsible for creating and managing their own builds. This has meant that we have a wide variety of build technologies and solutions within the Eclipse community. The CBI will provide every Eclipse project an opportunity to have their builds managed as a service by the Eclipse Foundation. We’ve invested a lot in getting the Eclipse platform project moved to CBI, and in 2013 we hope that the majority of Eclipse projects are utilizing this service.

    The Long Term Support (LTS) program will offer Member companies the ability to leverage a single, shared infrastructure for maintaining Eclipse project releases. Currently Eclipse projects only do three service releases: SR0, SR1 and SR2. This basically translates to nine months of maintenance for Eclipse releases. Given that many enterprise software companies offer years of support for their Eclipse-based products, plus the increasing use of Eclipse runtime technologies, this gap has been a significant issue for the Eclipse ecosystem. By building an LTS forge the Eclipse Foundation will be providing an important service to its community and commercial ecosystem.

    Eclipse working groups such as PolarSys, M2M, Automotive and LocationTech have made a lot of progress in 2012. Next year we expect to see more companies and projects participating in these IWGs.

    So as you can see, 2013 will be enormously busy for the Eclipse Foundation, and for our community. I’m looking forward to the challenges that next year will bring.

    References

    [1] https://orionhub.org/

    [2] http://eclipse.org/stardust/


  • Smooth operators


    by Sergey Beryozkin

    Feed Item Query Language (FIQL) [1] was originally specified by Mark Nottingham as a language for querying Atom [2] feeds. The combination of FIQL and Atom does not appear to have become widely referenced in the community. However, the simplicity of FIQL and its capability to express complex queries in a compact and HTTP URI-friendly way makes it a good candidate for becoming a generic query language for searching REST endpoints.

    FIQL Overview

    FIQL introduces simple and composite operators which can be used to build basic and complex queries. Table 1 lists the basic operators. These six operators can be used to do all sorts of simple queries, for example:

    • “name==Barry”: find all people whose name is Barry
    • “street!=central”: find all people who do not live at Central
    • “age=gt=10”: find all people older than 10 (exclusive)
    • “age=ge=10”: find all people older than 10 (inclusive)
    • “children=lt=3”: find all people who have fewer than 3 children
    • “children=le=3”: find all people who have 3 or fewer children

    Table 2 lists two joining operators. These two operators can be used to join the simple queries and build more involved queries which can be as complex as required. Here are some examples:

    • “age=gt=10;age=lt=20”: find all people older than 10 and younger than 20
    • “age=lt=5,age=gt=30”: find all people younger than 5 or older than 30
    • “age=gt=10;age=lt=20;(str=park,str=central)”: find all people older than 10 and younger than 20 and living either at Park or Central Street

    Note that while the complexity of the queries can grow, the complete expression still remains in a form which is easy to

    understand and quite compact. The latter property becomes very useful when considering how to embed FIQL queries into HTTP URIs.

    FIQL draft [1] also introduces Atom extensions for describing query interfaces and which items are available for queries. The draft is specific to working with Atom, but it is a good idea in general and can be implemented by non-Atom endpoints too, as it can help the consumers with typing the correct queries.

    FIQL in HTTP URI

    FIQL draft [1] implies that FIQL expressions can be used in different URI parts and it is up to developers to decide what makes most sense with respect to making it easier to document and for users to type such queries (for example, from consoles) when needed. For example:

    • “/search?s=age=lt=5;age=gt=30”
    • “/search?age=lt=5;age=gt=30”
    • “/search/age=lt=5;age=gt=30”

    The first two HTTP queries have FIQL expressions embedded within the URI Query component, with the second query even omitting the actual query parameter name such as “s” used in the first one. The last HTTP query actually embeds the expression within the last URI Path segment – some care is needed in the last case when processing it to make sure the FIQL ‘AND’ operator (“;”) is not treated as an HTTP matrix parameter.

    The flexibility of HTTP URI and the simplicity and compactness of FIQL make it a very interesting combination.
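
    As a small illustration of how such a query might be sent from Java code (this sketch is not part of the original article), the snippet below uses CXF's JAX-RS WebClient; the base address and the "s" parameter name (borrowed from the first URI style above) are assumptions.

    import org.apache.cxf.jaxrs.client.WebClient;

    public class FiqlClientSketch {
        public static void main(String[] args) {
            // Hypothetical search endpoint; adjust host, port and context to your deployment
            WebClient client = WebClient.create("http://localhost:8080/myapp/search");
            // The FIQL expression travels as an ordinary query parameter named "s"
            String json = client.query("s", "age=gt=10;age=lt=20")
                                .accept("application/json")
                                .get(String.class);
            System.out.println(json);
        }
    }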

    FIQL support in Apache CXF

    Apache CXF [3] is a well-known Java-based framework for developing production-quality WS (SOAP) and HTTP (REST) applications. The development of REST applications is supported by a JAX-RS 1.1 implementation front-end, with JAX-RS 2.0 currently being implemented [4].

    Search support is a fundamental feature of most web applications and thus Apache CXF chose to support FIQL [5] to make it easier for developers to have their applications offer more advanced search capabilities, in addition to those which

    Sergey Beryozkin explores the use of FIQL within the popular Java framework.

    Smart Search with FIQL and Apache CXF

  • Smooth operators

    www.JAXenter.com | February 2013 10

    can be supported by simple HTTP queries, with the minimum additional complexity overhead.

    Processing of FIQL expressions is supported as follows. Initially, the expression is captured into a “search condition” which can then be applied to the data already available in memory or converted with the help of custom visitors to another typed or untyped expression which can be used to query the actual data store supporting a given REST endpoint.

    Listing 1 shows example code demonstrating how a query can be applied to the in-memory data.

    The original search expression is converted into a SearchCondition typed by the Book class (the latter is a typical bean with properties like 'title', 'author' etc). This offers a way to validate the actual properties used in the query by having the Book implementation itself validate the properties or by using the bean validation framework. SearchCondition can represent a primitive query such as author==nhornby or a much more involved query involving composite 'AND' or 'OR' operators.

    Finally, this condition is used to filter the in-memory data and return a list of matching Book instances.

    Very often the actual HTTP queries are converted to a query language such as SQL to get the query run against the data store. This approach is supported with the help of custom FIQL converters implementing the well-known Visitor pattern. Apache CXF ships the utility converters for SQL and LDAP, JPA2 TypedQuery and Lucene Query and documents how other custom converters can be easily implemented [5].

    Listing 2 shows an example of using the SQL converter.

    Please check a demo [6] shipped with the Talend ESB distribution for a more complex example (remove the parent pom section from the demo pom to get the build done fast). This demo shows how FIQL queries can be transparently converted to JPA2 TypedQuery or CriteriaQuery typed queries easily.

    It is also worth looking at the Apache CXF wiki page [5] for more information, such as how to map search and bean properties, how to decouple the properties used in the search interface from the actual bean properties, and for details on CXF-specific extensions and other recommendations.

    When to use FIQL?

    Most of the time using traditional HTTP query name and value pairs is very effective. However this approach has its limitations: query parameters have to be 'invented' in order

    Listing 1

    package my.company.search;

    import java.util.List;

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.Context;

    import org.apache.cxf.jaxrs.ext.search.SearchCondition;
    import org.apache.cxf.jaxrs.ext.search.SearchContext;

    @Path("search")
    public class SearchResource {

        // list of books; the way this list is populated is not shown for brevity
        private List<Book> theBooks;

        @Path("book")
        @GET
        public List<Book> findBooks(@Context SearchContext searchContext) {
            SearchCondition<Book> condition = searchContext.getCondition(Book.class);
            return condition.findAll(theBooks);
        }
    }

    Basic Operator Description

    == Equal To

    != Not Equal To

    =gt= Greater Than

    =ge= Greater Or Equal To

    =lt= Less Than

    =le= Less Or Equal To

    Table 1: FIQL operators

    Composite Operator Description

    ; AND

    , OR

    Table 2: Joining operators

    Listing 2

    package my.company.search;

    import java.util.List;

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.Context;

    import org.apache.cxf.jaxrs.ext.search.SearchCondition;
    import org.apache.cxf.jaxrs.ext.search.SearchContext;

    @Path("search")
    public class SearchResource {

        // list of books; the way this list is populated is not shown for brevity
        private List<Book> theBooks;

        @Path("book")
        @GET
        public List<Book> findBooks(@Context SearchContext searchContext) {
            SearchCondition<Book> condition = searchContext.getCondition(Book.class);
            SearchConditionVisitor<Book> visitor = new SQLPrinterVisitor<Book>();
            visitor.visit(condition);
            String sqlQuery = visitor.getQuery();
            // use sqlQuery to query the data store and return the list of books
        }
    }


    to represent comparison operators other than “equals”, and it can be tedious to get similar query processing code ported across many similar data applications.

    In some cases, Google-style search offers a viable solution. However, it is not necessarily optimal for simple to medium complexity Web applications, with the number and properties of data items known and where the search experience can be made much more user-centric with the options provided to do a fine-grained search around the data items.

    Apache CXF with its FIQL support offers one way to generalize the search processing code and unify the search experience with the front-end FIQL queries being transparently converted internally to other query languages.

    Alternatives

    As already mentioned, using plain HTTP name and value queries is one approach which works well for simple applications. OData Protocol [7] introduces a powerful OData query language with OData4J project [8] implementing it in Java. Apache SOLR [9] provides a search engine on top of Apache Lucene.

    Conclusion

    This article has introduced FIQL, a URI-friendly, compact and flexible query language, which was originally specified for Atom feeds but can also be used with non-Atom Web applications.

    Using FIQL can help with generalizing and simplifying the search processing for developers and offer a unified search experience to the users. Apache CXF offers a search extension which supports FIQL.

    Sergey Beryozkin is a Software Architect working for Talend's Application Integration Division. Sergey is an Apache CXF committer and JAX-RS implementation project lead.

    References

    [1] http://tools.ietf.org/html/draft-nottingham-atompub-fiql-00

    [2] http://tools.ietf.org/html/rfc4287

    [3] http://cxf.apache.org/

    [4] http://jax-rs-spec.java.net/

    [5] http://cxf.apache.org/docs/jax-rs-search.html

    [6] https://github.com/Talend/tesb-rt-se/tree/master/examples/cxf/jaxrs-advanced

    [7] http://www.odata.org/

    [8] http://code.google.com/p/odata4j/

    [9] http://lucene.apache.org/solr/


  • Sponsored


    by Dan Delany

    Finding and fixing issues in a production system can be really difficult. Usually by the time the problem is visible, users are already complaining. Fixing these problems under the eye of management is no fun for anybody, especially when you don't know where the problems may be.

    You may or may not have access to the servers in question, and you may have to diagnose an issue involving multiple servers. And sometimes there’s a third party involved, such as a database administrator (DBA) or hosting company, for whom your problem is not a priority. Depending on how detailed your log files are, you might be able to search through them and find some hints. It may also be that your code is using third party jars, and they may not log the level of detail you need.

    How APM can help

    It’s often possible to derive useful information from log files, network monitoring, database server monitoring, and the like. The problem there is that you're trying to infer things about your code’s behavior from the information that you’ve already decided to log. If you change your logging to add more information, it's too late. The error has already happened.

    Application Performance Management (APM) systems allow you to remotely instrument your code and log data to an external system continuously. This is advantageous for several reasons. Since this data collection and logging is happening in the background, you don’t need to think about logging metrics during software development. When you need information about the performance of your software in production, the information has already been gathered for you

    Dan Delany hunts in the haystack with New Relic

    Fixing Java Production Problems with APM

    Tracking down issues in a production system can be a nightmare, but application performance management systems such as New Relic – which combine isolated log files and network and database monitoring – can help.

    ©iStockphoto.com/bdibdus


    during the normal operation of the system. It has been gathered under real system load on the actual production environment, as opposed to data from a test system under simulated load. It also means that when an error occurs in production, such as a performance problem or a threading problem, data about it has already been gathered and is already available.

    In addition to providing help diagnosing problems, an APM system can provide more visibility into your code’s performance and usage patterns by providing metrics about which pages are accessed the most often and how much time the server is taking to generate those pages. Once a page has

    been identified as needing improvement, an APM can help you drill in and see where the server is spending the most time. This lets you prioritize your fixes.

    For example, this page shows statistics about our office’s site, which displays people’s contact information. It’s a small site, but it gives a feel for what APM can tell you. We see usage spikes, and can see how much time is being spent in application code versus database code. And it’s identified in Figure 1 that the PeopleController#phonenumbers page is the slowest on the site.

    In this article, I’ll demonstrate using New Relic’s APM system to help identify production performance issues. I created

    Figure 3: Transaction Trace page showing where time was spent in a specific servlet call

    Figure 2: Web Transactions page showing four very slow servlet calls

    Figure 1: Summary dashboard: Shows general statistics about an app in New Relic

    Figure 4: SQL Detail Tab on the Transaction Trace showing the SQL as captured by New Relic


    Figure 5: DBA tool showing that the table being queried isn’t indexed for our query

    Figure 6: Transaction Trace showing the same servlet call after the table indexes were added

    a demo app with a single servlet that takes in a first name and last name and searches for entries with that name in a database using Hibernate. Adding APM to a system is fairly simple: to get started, I only had to set up an additional directory containing code and configuration, which contains the contents of a zip file downloaded from New Relic.

    6:/opt/local/apache-tomcat-7.0.34/newrelic% ls
    CHANGELOG            newrelic-extension-example.xml
    LICENSE              newrelic-extension.xsd
    README.text          newrelic.jar
    logs                 newrelic.yml
    newrelic-api.jar
    7:/opt/local/apache-tomcat-7.0.34/newrelic%

    After the directory is created, you can activate New Relic with a simple change to the launch script. In this case, the change is in Tomcat’s catalina.sh script.

    # ---- New Relic switch automatically added to start command on 2013 Jan 08, 11:43:26
    NR_JAR=/opt/local/apache-tomcat-7.0.34/newrelic/newrelic.jar; export NR_JAR
    JAVA_OPTS="$JAVA_OPTS -javaagent:$NR_JAR"; export JAVA_OPTS

    Once your server has been launched with this new flag (see Figure 2), it will report data to New Relic. The data can then be mined to help you monitor your code as it runs.

    In this case, the performance problem seen in Figure 3 is easy to spot. My single servlet is taking between 8000 and 9000 milliseconds every time it runs.

    The dashboard shows us that the issue lies with the QueryServlet that’s taking a long time to run. It’s revealed to be a database query that is taking all but 6ms of the slow request. Since I used Hibernate in my persistence layer, it’s generating SQL for me. Tweaking the SQL code may not be so simple a task (Figure 4).

    Drilling a little deeper shows us exactly which query was slow:

    select person0_.id as id0_, person0_.fname as fname0_, person0_.lname as lname0_, person0_.middlename as middlename0_ from person person0_ where fname=? and lname=?

    Now I can send this query to my DBA and ask what can be done to make that query run faster (Figure 5).

    It turns out to be a simple fix. The query is only against a single table which has over 21 million rows, and none of the columns in the 'where' clause of the query have indexes.

    The DBA has added some indexes to the table. Now I can run the app again and see the results of the change in Figure 6.

    Conclusion

    We improved the system response time from 8470ms to 20ms, a huge improvement in a simple case. But most importantly, I was able to get all the information I needed in an organized fashion in the browser. I didn’t waste any time logging into servers, viewing log files or anything like that. I also didn’t need to change anything in my source code to enable

    Dan has been writing Java code since 1996, and is currently a senior software engineer at New Relic in Portland. When he is not at work, he enjoys playing with trains with his son and writing model train related software for his iPhone.

    this data collection. I added the New Relic jar to the server launch scripts, and after that, my server logged information to New Relic in the background. From the New Relic website, I was able to track down my performance problem. I drilled through to the slow web transaction, looked at different parts of the transaction to see what was the slowest, and acted on those results.

    This was a simple demonstration where the fix was obvious once the slow query was identified, but it illustrates the value of app performance management. Not only can it be used to find performance problems, it can also be used to measure your app in your production environment so you can know where to spend your time and money to make your system better.


  • WebSockets in GlassFish


    by Steve Millidge

    WebSockets are new in HTML5 and provide the capability to establish a full duplex connection between the web server and the web browser. This means for the first time we can write applications to push updates to the browser directly from the server without having to use complex hacks like long polling, Comet or third party plugins like Flash.

    In this tutorial I’ll demonstrate pushing stock “updates” to a browser over WebSockets to asynchronously update a stock price graph purely using the push capabilities inherent in WebSockets. I’ll also use GlassFish in this tutorial as this has out of the box WebSockets support in the latest production GlassFish 3.1.2.2 and can therefore be built and deployed now. However WebSocket support is not enabled out of the box. WebSocket support can be enabled via the administration console, but the simplest way is to use an asadmin command:

    asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.websockets-support-enabled=true

    To keep things short, in this tutorial, our application will just spawn a thread to create random updates to the Stock price.

    However in a real application it would be simple to hook our application to a data feed via JMS or some other mechanism.
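
    The article keeps to a thread generating random data, but to give a flavour of that JMS option (this wiring is not shown in the original), a hypothetical javax.jms.MessageListener could push real prices through the same sendUpdate method introduced below; the comma-separated message format and the class names are assumptions.

    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Hypothetical feed listener: each text message carries "name,description,price"
    public class StockFeedListener implements MessageListener {

        private final StockSocket socket; // the WebSocket class introduced below

        public StockFeedListener(StockSocket socket) {
            this.socket = socket;
        }

        @Override
        public void onMessage(Message message) {
            try {
                String[] fields = ((TextMessage) message).getText().split(",");
                Stock stock = new Stock(fields[0], fields[1], Double.parseDouble(fields[2]));
                socket.sendUpdate(stock); // push the update to the connected browser
            } catch (Exception ex) {
                // ignore malformed messages in this sketch
            }
        }
    }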

    Sidebar

    The Java API for WebSockets is being standardised in the JCP under JSR 356 [1]. Currently application servers use a proprietary API to unlock WebSockets functionality, and the GlassFish API used here is specific to GlassFish. Tomcat and other servers have a different API. If you are interested in the proposed Java EE 7 WebSockets API, head over to the JCP page to take a look.

    WebSocket is supported in GlassFish thanks to the Grizzly library. The key classes in the Grizzly WebSocket API we need are shown in Figure 1.

    StockSocket class

    Working from the bottom up, first we must create a derived class of the Grizzly WebSocket class. This class will implement the protocol between the browser and GlassFish. As we’ll see later, one instance of this class is created for each client browser. In our class, we will implement Runnable, spawn a Thread and send updates to the browser. In this code (shown in Listing 1), we will take advantage of the Grizzly-provided DefaultWebSocket class. This implements all the

    Socket to them!

    Pushing browser updates using WebSockets in GlassFish Steve Millidge gives us a WebSocket taster using a combination of languages and a popular Java server.

    ©iStockphoto.com/mikdam

  • WebSockets in GlassFish

    www.JAXenter.com | February 2013 17

    methods of the WebSocket interface with no-ops, so we can just override the methods we are interested in.

    In the onConnect method (which is called when a client browser connects to the server) we need to create a new Thread and pass the WebSocket instance as the Runnable, as shown in Listing 2.

    In the run method (Listing 3), we will periodically call our custom sendUpdate method using a random value for a Stock.

    Our Stock class is a simple Serializable POJO DTO with three attributes, name, description and price.
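
    The Stock class itself is not listed in the article; a minimal sketch matching that description (and the three-argument constructor used in Listing 3) might look like this – the exact field types are an assumption.

    import java.io.Serializable;

    // Sketch of the Stock DTO: a Serializable POJO with name, description and price
    public class Stock implements Serializable {

        private String name;
        private String description;
        private double price;

        public Stock() {
        }

        public Stock(String name, String description, double price) {
            this.name = name;
            this.description = description;
            this.price = price;
        }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public String getDescription() { return description; }
        public void setDescription(String description) { this.description = description; }

        public double getPrice() { return price; }
        public void setPrice(double price) { this.price = price; }
    }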

    In the sendUpdate method (Listing 4), we serialize the Stock object using the Jackson library into a JSON string. We then send this to the browser over WebSockets by calling the Grizzly base class’ send method which writes the JSON string down to the browser.

    Finally in our onClose method (Listing 5) we will notify the thread to stop by setting connected to false.

    StockApplication class

    To hook our derived StockSocket class into the GlassFish server, we need to create a derived WebSocketApplication class, shown below:

    public class StockApplication extends WebSocketApplication {

    In this class there are two methods we need to override. The first is isApplicationRequest, which is called by GlassFish when a client browser connects to GlassFish over the WebSocket protocol (Listing 6). Our application needs to check the request and decide whether it wants to accept the connection. In this case, we will check whether the context path of the WebSocket request contains the string "/stocks". If so we need to tell GlassFish that the request is for us by returning true.

    The second method is createWebSocket (Listing 7). Here, we need to create and return an instance of our StockSocket class described above. The createWebSocket method is called by GlassFish when a client browser connects to our application and we have accepted the request.

    Listing 1

    public class StockSocket extends DefaultWebSocket implements Runnable {

        private Thread myThread;
        private boolean connected = false;

        public StockSocket(ProtocolHandler protocolHandler, WebSocketListener... listeners) {
            super(protocolHandler, listeners);
        }

    Listing 2: onConnect method

    @Override
    public void onConnect() {
        myThread = new Thread(this);
        connected = true;
        myThread.start();
        super.onConnect();
    }

    Figure 1: Key classes in the Grizzly WebSocket API


    StockServlet class

    The final class we need to write is a simple servlet. This servlet only exists to ensure we register our derived StockApplication class with Grizzly’s WebSocketEngine.

    @WebServlet(name = "StockServlet", urlPatterns = { "/stocks" }, loadOnStartup = 1)
    public class StockServlet extends HttpServlet {

        private StockApplication pushApp;

    We do this in the init method of our servlet and to ensure our servlet is created on deployment, we must specify that an instance should be created on startup, in the annotations as shown above.

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        pushApp = new StockApplication();
        WebSocketEngine.getEngine().register(pushApp);
    }

    We also need to override the destroy method to ensure our StockApplication is unregistered from the WebSocketEngine on undeployment.

    @Override
    public void destroy() {
        super.destroy();
        WebSocketEngine.getEngine().unregister(pushApp);
    }
    } // closes the StockServlet class

    HTML5 & JavaScript

    Once we have written our Java servlet and the classes to use Grizzly’s WebSocket API, we need to turn our mind to the HTML and JavaScript code. For our demonstration we are going to use a JavaScript library called Highcharts which is free for non-commercial use [2]. This JavaScript library can render very sexy charts purely by using HTML5.

    For our browser we will create a simple JSP page which uses the WebSocket JavaScript API to connect to GlassFish

    Listing 3: Run method

    public void run() {
        while (connected) {
            Stock stock = new Stock("C2B2", "C2B2", Math.random() * 100.0);
            int sleepTime = (int) (500 * Math.random() + 500);
            try {
                Thread.sleep(sleepTime);
            } catch (InterruptedException ex) {
                break; // stop sending updates if the thread is interrupted
            }
            sendUpdate(stock);
        }
    }

    Listing 4: sendUpdate

    public void sendUpdate(Stock stock) {
        // CONVERT to JSON
        ObjectMapper mapper = new ObjectMapper();
        StringWriter writer = new StringWriter();
        try {
            mapper.writeValue(writer, stock);
        } catch (IOException ex) {
        }
        String jsonStr = writer.toString();
        // SEND down the WebSocket
        send(jsonStr);
    }

    Listing 5: onClose

    @Override
    public void onClose(DataFrame frame) {
        connected = false;
        super.onClose(frame);
    }
    } // closes the StockSocket class

    Listing 6: isApplicationRequest

    @Override
    public boolean isApplicationRequest(Request request) {
        if (request.requestURI().toString().endsWith("/stocks")) {
            return true;
        } else {
            return false;
        }
    }

    Listing 7: createWebSocket

    @Override
    public WebSocket createWebSocket(ProtocolHandler protocolHandler, WebSocketListener... listeners) {
        return new StockSocket(protocolHandler, listeners);
    }


    over the WebSocket protocol and then receives our JSON stock updates which it feeds to HighCharts to graph.

    The first thing you need to do in the WebSockets JavaScript API is to connect to the GlassFish server, using the WebSocket protocol. To do this, we need to create a URL of the form ws://<host>:<port>/<context-path> and pass this to the constructor of the WebSocket class.

    var wsUri = "ws://" + location.host + "${pageContext.request.contextPath}/stocks";
    websocket = new WebSocket(wsUri);

    Once we have our WebSocket object, we then must set up the callback functions. These JavaScript functions are called by the browser when WebSocket events occur, for example, when the socket is opened (onopen), closed (onclose) or there is an error (onerror). For simplicity, we will set these as empty functions.

    websocket.onopen = function(event) { };
    websocket.onclose = function(event) { };
    websocket.onerror = function(event) { };

    The most important callback is onmessage. This is triggered when the browser receives data from the server over the WebSocket, and in our case will be called when we receive the JSON string representing the stock object. So we will parse the JSON string and create a new datapoint in HighCharts for this Stock price update.

    websocket.onmessage = function(event) {
        var object = JSON.parse(event.data);
        var x = (new Date()).getTime();
        var y = object.price;
        document.chart.series[0].addPoint([x, y], true, true, false);
    }

    The initialisation of the HighCharts chart is done in the head of the document, a snippet of which is shown in Listing 8.

    The JSP page should be packaged up into a war file, with the servlet and Java Grizzly code shown above and deployed to your GlassFish server in the usual way.

    Final View

    Once the code is deployed successfully, you can navigate to it using your usual browser and you should see an updating chart (Figure 2).

    Building push applications using the standard WebSocket JavaScript API and modern application servers like GlassFish is very easy to do. Hopefully this tutorial has whetted your appetite and inspired you to explore WebSockets in your applications.

    Steve is the director and founder of C2B2 Consulting Limited and organiser of the London JBoss User Group. C2B2 is a specialist JBoss consultancy focusing exclusively on achieving nonfunctional requirements, thereby ensuring JBoss-based solutions go live: Fast, Reliable, Manageable and Secure. Steve has used Java extensively since pre-1.0 and has been a field-based professional service consultant for over 10 years. Prior to founding C2B2, Steve was a Principal Consultant in Solution Architecture at Oracle UK where he was an architect of Ordnance Survey’s Master Map project to deliver digital mapping data over the web and also worked on a large integration application for the Foreign Office. Steve has many years’ experience of building large scale web applications and was an architect for the Tour de France’s web infrastructure.

    Figure 2: Updating chart

    References

    [1] http://jcp.org/en/jsr/detail?id=356

    [2] http://www.highcharts.com/

    Listing 8: HighCharts

    $(document).ready(function() {
        Highcharts.setOptions({
            global: { useUTC: false }
        });
        var chart;
        document.chart = new Highcharts.Chart({ … });


  • Lighting the fuse


    by Torsten Mielke

    Apache ServiceMix is quite a popular open source ESB [1] that is best suited for Integration and SOA projects. It offers all the functionality one would expect from a commercial ESB – but in contrast to most commercial counterparts, at its core it is truly based on open standards and specifications.

    ServiceMix leverages a number of very popular open source projects. Its excellent message routing capabilities are based on the Apache Camel framework [2]. Apache Camel is a lightweight integration framework that uses standard Enterprise Integration Patterns (EIP) for defining integration routes using a variety of domain specific languages (DSL).
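
    As a rough illustration of what such a route looks like (this example is not taken from the article), here is a minimal Camel route in the Java DSL that picks up files and forwards them to a JMS queue; the endpoint URIs are placeholders.

    import org.apache.camel.builder.RouteBuilder;

    // Minimal sketch of a Camel route: poll a directory and forward each file to a JMS queue.
    public class OrderRoute extends RouteBuilder {

        @Override
        public void configure() throws Exception {
            from("file:orders/in")                     // placeholder input endpoint
                .log("Received order ${file:name}")    // log each message as it passes through
                .to("activemq:queue:orders");          // placeholder queue on the embedded broker
        }
    }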

    The majority of integration projects require a reliable messaging infrastructure. ServiceMix supports reliable messaging by embedding an Apache ActiveMQ message broker [3], which is one of the most popular, fully JMS 1.1 compliant open source message brokers. It offers a long list of messaging features, can be scaled to thousands of clients and supports many Clustering and High Availability broker topologies.

    Support for Web Services and RESTful services is achieved by integrating Apache CXF. CXF is perhaps the most well known open source Web Services framework [4] and has been fully integrated into ServiceMix. CXF supports both the JAX-WS and JAX-RS standard and all major WS-* specifications.

    At the heart of ServiceMix is an OSGi container runtime [5]. The OSGi Framework is responsible for loading and running truly dynamic software modules – so-called OSGi bundles. An OSGi bundle is a plain Java jar file that contains additional OSGi-specific metadata about the classes and resources contained inside the jar.
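
    To give a concrete feel for that metadata (this example is not from the article), a bundle can name an activator class via the Bundle-Activator manifest header; the OSGi framework then calls its lifecycle hooks when the bundle is started or stopped. A minimal sketch, with an illustrative class name:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Minimal OSGi bundle activator; the class name is illustrative only.
    public class ExampleActivator implements BundleActivator {

        @Override
        public void start(BundleContext context) throws Exception {
            System.out.println("Bundle started: " + context.getBundle().getSymbolicName());
        }

        @Override
        public void stop(BundleContext context) throws Exception {
            System.out.println("Bundle stopped: " + context.getBundle().getSymbolicName());
        }
    }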

    The OSGi runtime used in ServiceMix is Apache Karaf [6], which offers many interesting features like hot deployment, dynamic configuration of OSGi bundles at runtime, a centralized logging system, remote management via JMX and an extensible shell console that can be used to manage all aspects of an OSGi runtime. Using Karaf one can manage all life cycle aspects of the deployed application modules individually. Karaf not only supports deploying OSGi bundles, but also plain Java jar files, Blueprint XML, Spring XML and war files. The flexible deployment options ease the migration of existing Java applications to OSGi.

ServiceMix deploys these open source projects out of the box on top of the Karaf OSGi runtime. ActiveMQ and Camel register additional shell commands into the Karaf shell that can manage the embedded JMS broker and Camel environment at runtime. It's also possible to only deploy those ESB functions that are needed for a particular project. If support for a certain element, for example Web Services, is not needed, the CXF related OSGi bundles can all be uninstalled. This further reduces the already small runtime memory footprint of ServiceMix. Figure 1 summarizes the technologies and standards that Apache ServiceMix is built on.

ServiceMix thus leverages a number of very successful open source projects. Each of these projects is based on open standards and industry specifications and designed to provide a maximum level of interoperability. All of these aspects make ServiceMix a very popular ESB that is deployed in thousands of customer sites today and in many mission-critical applications. There is also professional, enterprise-level support available from companies like Red Hat (who acquired FuseSource in 2012) and Talend.



Introduction to Fuse Fabric

It's no surprise that some companies have dozens and even hundreds of ServiceMix instances deployed in their IT infrastructure. Larger projects may spawn multiple ServiceMix containers as one single JVM instance would not fit the entire application. In addition, the same application may be deployed to multiple ServiceMix containers for load balancing reasons. Each ServiceMix instance is an independent OSGi container with its own OSGi runtime and an individual set of deployed applications. However, managing a larger number of ServiceMix instances with dozens of applications deployed becomes a non-trivial task as ServiceMix itself does not provide any tools to manage multiple ESB instances centrally.

Installing updates of an application deployed to multiple independent OSGi containers becomes a tedious and error-prone task. It is necessary to manually log into each OSGi container (e.g. using an ssh client session), stop the existing application, install and perhaps reconfigure the new version of the application and finally start the new application. These steps then need to be repeated on all the remaining ESB instances that run the same application. If anything goes wrong during such an upgrade, changes need to be reverted manually. This manual approach is cumbersome and chances are high that mistakes are made along the way.

Here, we can use Fuse Fabric, an open source integration platform under the Apache license [7] which began as a project within FuseSource. With Fuse Fabric you can group all ServiceMix container instances into one or several clusters, so-called Fabrics. All instances of such a cluster can then be managed from a central location, which potentially may be any ServiceMix instance within the Fabric. This includes both the configuration of all ESB instances in a cluster as well as the deployment of applications to each ServiceMix container.

Fabric extends the Karaf shell with additional commands for managing the cluster of OSGi containers, so users don't need to use another tool for managing the Fabric. It also supports deploying applications to both private and public clouds. Using the jclouds library [8] all major cloud providers are supported. Applications may be deployed to the cloud with a single Karaf shell command and even the virtual machine in the cloud can be started by Fabric.

Fabric can also create ESB containers on demand. Not only can it create new ESB containers locally (sharing the existing installation of ServiceMix) but it can also start new ESB containers on remote machines that do not even have ServiceMix pre-installed. Using ssh, Fabric is capable of streaming a full ServiceMix installation to a remote machine, unpacking and starting that ServiceMix installation and provisioning it with pre-configured applications.

To better understand these features, let's have a look at the mechanisms and concepts used by Fabric.

Fabric concepts

Fabric defines a couple of components that work together to offer a centralized integration platform. Each Fabric contains one or more Fabric Registries. A Fabric Registry is an Apache Zookeeper-based [9], distributed and highly available configuration service which stores the complete configuration and deployment information of all ESB containers making up the cluster in a configuration registry.

The data is stored in a hierarchical tree-like structure inside Zookeeper. ESB containers get provisioned by Fabric based on the information stored in the configuration registry. There is also a runtime registry that stores details of the physical ESB instances making up the Fabric cluster, their physical locations and the services they are running. The runtime registry is used by clients to discover available services dynamically at runtime. The Fabric Registry can be made highly available by running replica instances. The example cluster in Figure 2 consists of three ESB instances that each run a registry replica.

Fabric Registries store all configuration and deployment information of all ESB instances. This information is described in Fabric Profiles: users fully describe their applications and the necessary configuration in these profiles. Profiles therefore become high-level deployment units in Fabric and specify which OSGi bundles, plain Java jar or war files, what configuration and which Bundle Repositories a particular application or application module requires.

A Profile can be deployed to many ESB containers and each ESB container may deploy multiple profiles. Profiles are versioned, support inheritance relationships, and are managed using a set of Karaf shell commands. It is possible to describe common configuration or deployment information in a base profile that other more specific profiles inherit from.

Figure 3 shows some example profiles that are provided out of the box. There is a common base profile called default that all other profiles inherit from. The example also lists profiles named camel, mq or cxf. These profiles define the OSGi bundles and configuration for various ESB functions like message routing (based on Camel), reliable messaging (based on ActiveMQ) and Web Services support (based on CXF). Users are encouraged to create their own profiles that inherit from these standard profiles.

Profiles can be easily deployed to one or more ESB containers. Deploying a profile to a particular container is the task of the Fabric Agent. There is an agent running on each ESB container in the Fabric cluster. It connects to the Fabric Registry and evaluates the set of profiles it needs to deploy to its container. The agent further listens for changes to profile definitions and provisions the changes immediately to its container.

Figure 1: ESB-enabling technologies in ServiceMix


Finally, Fabric defines the component of a Fabric Server or Fabric Container: every ESB container that is managed by Fabric is a Fabric Server, and each Fabric Server has a Fabric Agent running.

For true location transparency Fabric also defines a number of Fabric Extensions [10]. Each CXF-based Web Service, each Camel consumer endpoint (the start endpoint of a Camel integration route) and each ActiveMQ message broker instance can register its endpoint address in the Fabric runtime registry at start-up. Clients can query the registry for these addresses at runtime rather than having the addresses hard-coded. This allows you to move endpoints to different physical machines at runtime, run replicas of endpoints for load balancing reasons, or even create master/slave topologies where a slave endpoint (e.g. a slave message broker) waits on standby for the master endpoint to become unavailable. Fabric Extensions are outside the scope of this article but [10] explains them in full detail.

Fabric defines some really powerful concepts. All provisioning information is stored in a highly available Fabric Registry in the form of Fabric profiles. These profiles can then be deployed quickly to any number of ESB instances inside the cluster thanks to the Fabric Agents. Also, Fabric is capable of creating new local and remote ESB instances on demand. Together with the Fabric Extensions this allows for very flexible deployments. If the load of a particular ESB container increases it is possible to start up another ESB container instance (perhaps in the cloud) that deploys the same set of applications and then load balance the overall work across all instances. Furthermore, ESB instances can be moved to different physical servers if there is a need to run on faster hardware, while clients automatically get rebalanced. With Fuse Fabric it is possible to quickly and easily adapt to any changes to your runtime requirements and have a fully flexible integration platform.

A quick walkthrough

Having introduced the concepts of Fabric, this last section aims to provide a quick introduction to using Fuse Fabric for deploying an integration project. Although one could download and run Fuse Fabric from its project web site, this part uses Fuse ESB Enterprise 7.1 as released by Red Hat [11]. Fuse ESB Enterprise is based on Apache ServiceMix and already includes Fabric out of the box. It is fully documented at [12]. The default workflow when working with Fabric is as follows:

    1. Create a new Fabric. This starts the Fabric Registry and imports the default profiles.

2. Create the Integration or SOA application using the technologies offered by ServiceMix.

3. Define the deployment of the application plus its configuration in one or more Fabric Profiles.

4. Create the required number of ESB containers and configure these containers for one or many profiles.

    5. Test or run the deployed application.

    Let’s go through these steps one by one.

Create a new Fabric

After installing Fuse ESB Enterprise 7.1, it can be started using the script bin/fuseesb. A few seconds later the welcome screen of the shell console is displayed (Listing 1).

    Tip: All Karaf shell commands take the --help argument which displays a quick man page of the command.

On its first start up this ESB container does not have a Fabric pre-configured. It's only a standalone ServiceMix installation with a number of OSGi bundles deployed. It is necessary to create a Fabric first using the Karaf shell command fabric:create. This reconfigures the current ESB container, deploys and starts the Fabric registry and imports the default profiles into the registry. Alternatively a container can join an existing Fabric cluster using the command fabric:join, providing the URL of the already running Fabric registry. This Fabric-enabled ESB container does not deploy any ESB functionality by default (use the command osgi:list to verify).

Figure 2: A Fabric cluster consisting of three ESB instances, all running a Fabric Registry
Figure 3: Sample profiles


ESB functions get enabled by deploying the relevant profiles.

Create the Integration or SOA Application

Fuse ESB Enterprise 7.1 also comes with a couple of demos, from which this article picks the examples/jms demo. It demonstrates how to connect to an ActiveMQ broker and use JMS messaging between two Camel-based integration routes (Listing 2). The demo works in a plain ServiceMix environment but in this part it will be deployed to a Fabric-enabled ESB container. This demo has only one interesting file, which is the Camel route definition located in examples/jms/src/main/resources/OSGI-INF/blueprint/camel-context.xml.

This Camel context defines two Camel routes. The first route with the id file-to-jms-route consumes a message from a file location on the local file system (directory work/jms/input). It then logs the file name and sends the content of the file to the incoming Orders queue on an external ActiveMQ broker. The second Camel route with the id jms-cbr-route consumes messages from the incoming Orders JMS queue and performs content-based routing: depending on the XML payload of the message, it gets routed to different target directories on the local file system. This is a simple yet fairly common integration use case. Some small additional configuration is needed to tell Camel how to connect to the external ActiveMQ broker (Listing 3).

Notice the broker URL property. Rather than using a hard-coded URL like tcp://localhost:61616, the real broker address is queried from the Fabric registry at runtime using the Fabric MQ Extension. That way the broker can be moved to a different physical machine and clients automatically reconnect to the new broker address. The demo can be built by running mvn install. This will install the generated OSGi bundle to the local Maven repository.

Define deployment

Now it's time to create the Fabric profiles that will deploy this integration project. Let's assume there is a requirement to run the ActiveMQ broker externally in its own ESB container, which can be useful for various reasons like providing a common messaging infrastructure to a number of deployed applications. Therefore two ESB containers are required: one running the ActiveMQ broker, the other running the Camel integration route.

For running an ActiveMQ broker there is already a profile with the name mq provided out of the box. That ActiveMQ broker has a default configuration, which is sufficient for running this demo. The mq profile can simply be reused, so there is no need to create a new profile. The command fabric:profile-list lists all available profiles; fabric:profile-display profilename shows the content of a profile.

For running the Camel integration demo, the Camel runtime needs to be deployed to the ESB container. Furthermore, both Camel routes connect to the external ActiveMQ broker, so it's also necessary to deploy the ActiveMQ client libraries to this ESB container. fabric:profile-list lists the following three profiles among others (Listing 4).

The profile activemq-client deploys the ActiveMQ client libraries needed for connecting to an ActiveMQ broker. The profile camel deploys the core Camel runtime (but not the many Camel components). Finally, the profile camel-jms has the two profiles camel and activemq-client as parents, so it deploys the ActiveMQ client libraries, the Camel core runtime and the camel-jms component.

    Listing 1: Fuse ESB 7.1 welcome screen

[ASCII-art banner spelling 'Fuse ESB']

    Fuse ESB (7.1.0.fuse-047) http://fusesource.com/products/fuse-esb-enterprise/

Hit '<tab>' for a list of available commands and '[cmd] --help' for help on a specific command. Hit '<ctrl-d>' or 'osgi:shutdown' to shutdown Fuse ESB.

    FuseESB:karaf@root>

    Listing 2: camel-context.xml describes the Camel integration route

/order:order/order:customer/order:country = 'UK'
/order:order/order:customer/order:country = 'US'
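Only the two XPath expressions of the content-based router survive in the listing above. Based on the route description in the text, a reconstruction of the blueprint might look roughly like the following; the order namespace URI, the queue name, the log message and the exact output directory layout are assumptions and may differ from the original demo file:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <camelContext xmlns="http://camel.apache.org/schema/blueprint"
                xmlns:order="http://fusesource.com/examples/order">

    <route id="file-to-jms-route">
      <from uri="file:work/jms/input"/>
      <log message="Receiving order ${file:name}"/>
      <!-- queue name assumed -->
      <to uri="activemq:queue:incomingOrders"/>
    </route>

    <route id="jms-cbr-route">
      <from uri="activemq:queue:incomingOrders"/>
      <choice>
        <when>
          <xpath>/order:order/order:customer/order:country = 'UK'</xpath>
          <to uri="file:work/jms/output/uk"/>
        </when>
        <when>
          <xpath>/order:order/order:customer/order:country = 'US'</xpath>
          <to uri="file:work/jms/output/us"/>
        </when>
        <otherwise>
          <to uri="file:work/jms/output/others"/>
        </otherwise>
      </choice>
    </route>

  </camelContext>

</blueprint>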


When using the profile camel-jms as a parent, it will automatically deploy the Camel runtime and the ActiveMQ client runtime:

    fabric:profile-create --parents camel-jms camel-jms-demo

This command creates a new profile called camel-jms-demo that uses the profile camel-jms as its parent. The new profile also needs to deploy the OSGi bundle of the ServiceMix demo. This can be added using the demo's Maven coordinates (the demo was previously built and installed to the local Maven repository) by invoking:

    fabric:profile-edit --bundles mvn:org.fusesource.examples/jms/7.1.0.fuse-047 camel-jms-demo

It modifies the camel-jms-demo profile and adds the demo's OSGi bundle, which is identified by its Maven coordinates org.fusesource.examples/jms/7.1.0.fuse-047. That's all! Thanks to the out-of-the-box profiles it took only two Fabric shell commands to create a profile that fully deploys the Camel integration demo.

Create ESB containers

The last step is to create the two ESB containers that run the ActiveMQ broker and the Camel demo. For running the ActiveMQ broker in its own ESB container this command is all that is needed:

    fabric:container-create-child --profile mq root activemq-broker

It creates a new local ESB container called activemq-broker (using the existing installation of Fuse ESB Enterprise) with the parent container being the root container. It also deploys the mq profile, which runs the ActiveMQ broker. The ESB container could also be created on a different machine, using the command fabric:container-create-ssh. Running fabric:container-list verifies that the new ESB container got started. It is possible to connect to that container using fabric:container-connect activemq-broker and check the log file using log:tail. If the ActiveMQ broker got started successfully, the log will contain a line like:

Apache ActiveMQ 5.7.0.fuse-71-047 (activemq-broker, ID:XPS-49463-1357740918210-0:1) started.

    Listing 3: camel-context.xml also configures the ActiveMQ consumer endpoint
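The body of Listing 3 did not survive in this copy, but based on the description above it is essentially the definition of the ActiveMQ component used by the activemq: endpoints in the routes. A sketch of such a bean, placed inside the same blueprint element, could look like this; the component class is the standard Camel ActiveMQ component, while the discovery-style broker URL is an assumption about the Fabric MQ extension's syntax rather than a quote from the demo:

<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
  <!-- instead of a hard-coded tcp://localhost:61616, the broker address is looked up
       in the Fabric runtime registry at runtime; the exact URL syntax is assumed here -->
  <property name="brokerURL" value="discovery:(fabric:default)"/>
</bean>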

Listing 4: Some out-of-the-box profiles

FuseESB:karaf@root> profile-list
[id]              [# containers]  [parents]
activemq-client   0               default
camel             0               karaf
camel-jms         0               camel, activemq-client
...

    Figure 4: Fuse Management Console



With the broker running, it's time to deploy the camel-jms-demo profile to another ESB container. The existing root container only runs the Fabric Registry, so the demo can be deployed to the root container using the command:

fabric:container-add-profile camel-jms-demo root

This reconfigures the root container to also deploy the camel-jms-demo profile (the jms demo).

Test the application

The demo can finally be tested by copying a sample XML message to the work/jms/input folder that the first Camel route listens on. Fortunately some sample messages are provided with the demo. On a plain Unix or Windows shell run:

    cp examples/jms/src/test/data/order2.xml instances/camel-jms-demo/work/jms/input/

Right after copying, the file will be picked up by Camel, routed through the two Camel routes via JMS and finally put into the target directory instances/camel-jms-demo/work/jms/output/uk/order2.xml. This verifies that the demo works correctly.
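For reference, judging from the XPath expressions in Listing 2, a sample order message that ends up in the uk folder has a shape along the following lines; the namespace URI and any element other than order, customer and country are assumptions rather than the actual content of order2.xml:

<order:order xmlns:order="http://fusesource.com/examples/order">
  <order:customer>
    <order:name>Example Customer</order:name>
    <!-- the country element is what the content-based router evaluates -->
    <order:country>UK</order:country>
  </order:customer>
</order:order>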

    For users that aren’t fans of command line tools, it is also possible to manage all aspects of a Fabric using the Fuse Man-agement Console (FMC) [13]. The FMC is a graphical, brows-er based management tool for Fabric and a full alternative to using the Karaf Shell. It can be installed directly to the root ESB container using the command:

fabric:container-add-profile fmc root

Thereafter, it can be accessed from a browser using the URL http://localhost:8181/index.html (Figure 4). Discussing the details of the Fuse Management Console is outside the scope of this article.

Conclusion

Anyone who needs to manage multiple instances of ServiceMix should look into Fuse Fabric. The ability to describe all deployments centrally and roll them out to any number of ESB instances can greatly increase productivity and reduce management complexity.

Torsten Mielke works as a Senior Technical Support Engineer at Red Hat. He is part of the global professional support team at Red Hat and a specialist in open source enterprise integration and messaging systems. Torsten actively works on open source projects such as Apache ActiveMQ, Apache ServiceMix, Apache Camel and Apache CXF, and is a committer on the Apache ActiveMQ project.

    References

    [1] http://servicemix.apache.org

    [2] http://camel.apache.org

    [3] http://activemq.apache.org

    [4] http://cxf.apache.org

    [5] http://www.osgi.org

    [6] http://karaf.apache.org

    [7] http://fuse.fusesource.org/fabric/

    [8] http://www.jclouds.org

    [9] http://zookeeper.apache.org

    [10] http://fuse.fusesource.org/fabric/docs/overview.html#Fabric_Extensions

    [11] https://access.redhat.com/jbossnetwork/restricted/listSoftware.html

    [12] http://fusesource.com/products/fuse-esb-enterprise/#documentation

    [13] http://fusesource.com/documentation/fuse-management-console-documentation/

Imprint

Publisher: Software & Support Media GmbH

Editorial Office Address:
Software & Support Media Limited
24 Southwark Bridge Road
London SE1 9HF
United Kingdom
www.jaxenter.com

Editor in Chief: Sebastian Meyen
Editors: Chris Mayer, Elliot Bentley
Authors: Sergey Beryozkin, Dan Delany, Torsten Mielke, Steve Millidge
Copy Editors: Jennifer Diener, Lisa Pychlau
Creative Director: Jens Mainz
Layout: Flora Feher, Maria Rudi, Petra Rüth, Franziska Sponer

Sales: Ellen May
+44 (0)20 7401
[email protected]

Entire contents copyright © 2013 Software & Support Media GmbH. All rights reserved. No part of this publication may be reproduced, redistributed, posted online, or reused by any means in any form, including print, electronic, photocopy, internal network, Web or any other method, without prior written permission of Software & Support Media GmbH.

The views expressed are solely those of the authors and do not reflect the views or position of their firm, any of their clients, or Publisher. Regarding the information, Publisher disclaims all warranties as to the accuracy, completeness, or adequacy of any information, and is not responsible for any errors, omissions, inadequacies, misuse, or the consequences of using any information provided by Publisher. Rights of disposal of rewarded articles belong to Publisher. All mentioned trademarks and service marks are copyrighted by their respective owners.




