Sunday, August 3, 2014

From zero to LDP: a tale of our 5-day journey/sprint to create an LDP impl from scratch in Node.js


Our highest priority: produce a Node.js®-based W3C Linked Data Platform (LDP) implementation.  I will discuss some of the goals later (and in subsequent posts).

By the end of the week we had a BasicContainer and RDFSource implementation passing all the automated test cases from the test suite.  I'd guess we spent under 40 hours total on the work.  40 sounds like a lot, though we had a bit of a learning curve in some areas.  And it is 'live' at: (assuming our auto-deploys continue to be green).

Some background on those of us who did the development: just Sam Padgett and Steve Speicher.  To be fair, we already know a fair amount about the LDP spec and have done a reference implementation in Java in Eclipse Lyo.  Neither of us had done anything in Node.js beyond a “Hello Node” sample, though Sam is an experienced JavaScript developer and Steve has stumbled his way through a few applications.

Day 1, Monday July 21, we started the effort.  We had sparkle in our eyes and two thermoses full of coffee.  Then we were in meetings for a while and caught up on LDP test suite items.  Still, we did some initial project structure setup to share the code and auto-deploy to Bluemix (more on that later), and read a bit to settle on the general direction we were ready to head.  After day 1, we had a place to store code and a sample app checked in, using express.js and starting to play with the rdfstore.js API.

Day 2 started off with the promise of all-day code slinging.  We only got about half a day in, due to other OSLC and Lyo normal operations (WG meetings, code reviews, …).  We made a good dent in getting a simple GET/PUT working with rdfstore.js.  Though we were struggling to make a few things work with rdfstore.js, and Steve's newbie callback hell was not improving his drinking problem (he says he doesn't have a problem, he has it figured out).

Day 3, again about half a day of hacking…some progress on the BasicContainer features and support for various formats.

Day 4, the realization that we should reconsider using rdfstore.js to store and parse RDF data.  Our needs with LDP are quite simple.  We looked at the MongoDB model and what we were doing, and at a simple JSON format for the data we were already handling with the N3 library.  It was fairly straightforward to do, greatly simplified our dependencies and removed a couple of barriers we were hitting with rdfstore.js.  We ended up taking the graph-centric approach, where each JSON document in Mongo is an RDF graph.  This approach, and its drawbacks, is outlined over in dotNetRDF's blog post.
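To make the graph-centric idea concrete, here is a rough sketch of the document shape (field names are illustrative, not our actual schema): each MongoDB document carries one named graph, i.e. the resource URI plus its triples.

```javascript
// Convert a parsed graph (e.g. output of the N3 parser) into one
// MongoDB document.  Field names here are hypothetical.
function graphToDocument(graphUri, triples) {
  return {
    name: graphUri,  // the graph/resource URI, used as the lookup key
    triples: triples.map((t) => ({
      subject: t.subject,
      predicate: t.predicate,
      object: t.object
    }))
  };
}

const doc = graphToDocument('http://example.org/container/1', [{
  subject: 'http://example.org/container/1',
  predicate: 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type',
  object: 'http://www.w3.org/ns/ldp#BasicContainer'
}]);
// With the mongodb driver: db.collection('graphs').insertOne(doc)
```

GET then becomes a single lookup by name and PUT a single replace, which is about all an LDP RDFSource needs.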

Day 5, completed the transition to MongoDB, handled all of GET/HEAD/OPTIONS/POST/DELETE, got the viz service working, and had all tests green (well, Sam did most of the hard LDP work; Steve was the “idea man” and the “bounce ideas off man”).

Days 6-10, we were able to add JSON-LD support and full support for all variations of ldp:DirectContainer.
What next?  We'd like to solidify the impl more, do a little more testing, and tackle non-RDFSource.  We've talked about building a sample app as well, something like a poor man's bug tracker, photo album or address book.  Oh yes, and making the source code available.  That wasn't an initial priority, simply because we didn't know how much would be worth sharing; we'll be going through the approval process to make it available.
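For a sense of what the DirectContainer support involves: on each successful POST, the container adds a membership triple built from its ldp:membershipResource and ldp:hasMemberRelation values.  A hypothetical sketch (the container object shape is made up for illustration):

```javascript
// Build the membership triple an ldp:DirectContainer adds when a new
// member is created via POST.  Property names follow the LDP vocabulary;
// the JavaScript object shape is hypothetical.
function membershipTriple(container, newMemberUri) {
  return {
    subject: container.membershipResource,   // ldp:membershipResource
    predicate: container.hasMemberRelation,  // ldp:hasMemberRelation
    object: newMemberUri
  };
}

const triple = membershipTriple({
  membershipResource: 'http://example.org/project/1',
  hasMemberRelation: 'http://example.org/vocab#hasBug'
}, 'http://example.org/bugs/42');
```

(The inverse case, ldp:isMemberOfRelation, swaps the subject and object.)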

Be on the lookout for an upcoming blog post about our experiences with DevOps Services and Bluemix.

Friday, July 11, 2014

Trip Report - INCOSE IS 2014 Systems Engineering meets (needs) open integrations + 2025 vision

I was fortunate enough to be able to attend the INCOSE International Symposium (IS) 2014, an event I'd never been to before.  I've been spending more and more time over the years with Systems Engineering, so it was good to learn more, share a bit more of what is going on with OSLC and related topics, catch up with some friends and help some implementors lay out a plan.

One of the things that struck me as interesting is how the speakers and attendees referred to OSLC.  I'm used to seeing so many presentations over the years defining it, spelling out what the acronym means, etc.  At the IS, there was none of that.  It was just referred to by name, as everyone clearly knew what it was.  I didn't hear anyone asking about it or taking a note to look it up later.  OSLC was often referred to as an area showing great promise for SE tool interoperability: a protocol to exchange data, a way to define a minimal data model at web scale and simple ways of doing UI integration.

We had an impromptu OSLC meet-up at lunch; in fact, we had too many people at the table (and yes, I was the only IBMer).  It included people from PTC, Atego, JPL, Deere, Koneksys and the Eclipse Foundation.  Great discussion to share people's interests, share what things are in motion and look for a way to coordinate all the activity going on in all the different places: INCOSE TII, OASIS OSLC, OMG OSLC4MBSE and more.  Looking forward to following up with this group and seeing how it advances.

I was able to give an overview and update on OSLC to an audience that represented many industries: automotive (2), air & space (2) and large machinery.

An interesting piece of work that I received when I registered was INCOSE's Systems Engineering Vision for 2025, specifically these items:

  • Foundations and Standards (p. 20) 
    "This systems engineering body of knowledge today is documented in a broad array of standards, handbooks, academic literature, and web-resources, focusing on a variety of domains. A concerted effort is being made to continually improve, update and further organize this body of knowledge. "
  • Current Systems Engineering Practices and Challenges (p. 20-21): practice areas of "Modeling, Simulation, and Visualization" and "Design Traceability by Model-Based Systems Engineering", which highlight the growing needs around improved tools and tool interoperability.
  • Leveraging Technology for Systems Engineering Tools (p. 30)
    Discusses the need to move towards a set of tools that allow for: "high fidelity simulation, immersive technologies to support data visualization, semantic web technologies to support data integration, search, and reasoning, and communication technologies to support collaboration. Systems engineering tools will benefit from internet-based connectivity and knowledge representation to readily exchange information with related fields."
I'm hoping to make it to Boston around September 10th to run an OSLC workshop for the INCOSE community, stay tuned.

Thursday, June 12, 2014

Rational User Conference (aka IBM Innovate) Take #10

Last week I attended my 10th (yes, I said one-zero, or tenth) Rational User Conference (aka IBM Innovate, aka Rational Software Developer User Conference, aka Rational Software Conference).  It is also the 5th time I have attended while talking about OSLC.  Hard to believe that Mik Kersten and I did the first ever OSLC presentation back in 2009.  It has been interesting to be part of the transition from people hearing "O S L C" and having no idea, to today, where most attendees not only know what it is, they are actively working to build integrations using OSLC, encouraging their other tool suppliers to support it, and taking part in various OSLC activities such as specification working groups or general community promotion.  It has transitioned from an unknown new concept to the way we do integrations.  By "we", I'm not just talking about Rational; I'm talking about attendees who were talking about how they are using OSLC, such as Airbus, NEC, ...

Though still, many people have a hard time saying or spelling OSLC right (it is a tough one)...most commonly it comes out as OSCL.  If only we had pushed to rename it back in 2010 to something like SLIC, as I proposed; that would have been...well, "slick".  I digress.

This year, I arrived a couple of days before the official conference started, as it was a good opportunity for those of us very active in OSLC to get together for some face-to-face discussions on OSLC strategy.  This was spearheaded by the Steering Committee (SC).  Out of these early discussions (a continuation of ongoing thoughts by the community and SC) came the idea of an organizing, higher-level concept of "Integration Patterns".  I threw together a page to articulate the thoughts, propose a way forward and start to gather interest.  This was discussed a couple of other times during the week, such as the OSLC SC discussion at Wednesday's Birds of a Feather session, where it was well received by the attendees.

Sunday afternoon held the Open Technology Summit, where various leaders in open technologies shared how efforts such as OpenStack, OSLC, Cloud Foundry and Apache Cordova have helped drive business efficiencies and improve overall time and quality of delivery.

I led a panel discussion titled "Best practices on implementing integrated tools" with panelists having a wide and vast set of experience (I hope to share the recording once I receive it).

After 5 years, Mik and I were reunited as we talked about "Lifecycle Tool Integration through Open Interfaces" (though Mik and I have been talking and collaborating this whole time, so it wasn't like a band breakup and then a reunion).

There were many other great conversations, learning how customers are looking to build out their own OSLC implementations, either by evolving their in-house tools or by building adapters for 3rd-party tools.  The demand continues to grow, and I look forward to continuing to help them succeed by making their integrations happen.

As with many of these conferences, especially ones you've gone to 10 times, it is great to catch up with the many good friends I've made over the years.  Now on to making sure we continue to deliver value and have some cool things to show and talk about next year (oh, and at next week's EclipseCon France event and the INCOSE conference at the end of June).

Monday, April 21, 2014

Trip Report - OSLC Connect @ ALM Forum in Seattle March 30 - April 4

I recently attended an event in Seattle called ALM Forum, which went by a slightly different name and purpose in past years (ALM Summit).

Quick Summary

Overall I thought it was well worth the time and would recommend going back next year.  My primary purpose for being there was to promote OSLC and get a better understanding of adoption problems, and I think we covered those fairly well.

Event by Event Summary

I had the opportunity to attend many of the sessions and events.  I'll touch on a majority of them; some with less significant information to share I have omitted for brevity.


Spent most of the day meeting up with some customers and OSLC advocates.

OASIS OSLC Booth in Exhibit Hall

Setting up a booth was a first for us.  It was a good opportunity for the OASIS OSLC Member Section to leverage funds from OASIS membership to contribute to sponsorship of the event and have a booth in the exhibit hall.  When we could, Sean Kennedy and I would staff the booth.

OSLC Happy Hour

This was a good social event Monday evening, which about 10% of the conference attendees (not bad) attended.  We collected some information via surveys, met some new people, finally met some face-to-face, and had many good conversations on issues and successes with integrations.


Breakout Session: Better Integrations through Open Interfaces
This was my session on the Integration track, which had good attendance.


ALM for the Internet of Things by Ravit Danino (Director, Applications Product Management HP Software & Solutions)

Ravit gave a good overview of the key challenges and opportunities for ALM and PLM tool integrations, highlighting the need for standards-based integration between the vast set of tools and suppliers that will be used.


PROMCODE: An Open Platform for Large-Scale Contracted Software Delivery in Software Supply Chains - Mikio Aoyama

Professor Aoyama gave an excellent presentation on the challenges of large-scale efforts and how OSLC is being used to combat those challenges.

Lightning Sessions

Challenges and Opportunities in ALM-PLM integration - Michael Azoff

Michael touched on the large opportunity for ALM-PLM integrations, seeing that there is still a large gap between the disciplines.  He also observed that OSLC was where many PLM vendors were turning to solve some of the integration challenges, and spoke of the positive outlook there.

Integration Principles and Reality - Ludmila Ohlsson

Ludmila summarized their work at Ericsson and the vision around open standards-based integrations built on leading standards such as OSLC, and how to broaden adoption to more tools.

Friday, February 28, 2014

Considerations with event driven solutions in a Linked Data / OSLC world

I've often been asked about OSLC's plans to support some technology that allows for an event-driven model.  Often what this request comes down to is that customers would like an open way to subscribe to certain events from a tool and then have that tool (or an intermediary) notify them when their criteria have been met.  Once I drill down into the use case a bit further, it is often the case that they have some clear logic they want to run on the notification-receiver side.

For example, let's look at the scenario between a bug tracking system a development team uses and a ticketing system the operations team uses to track customer-reported problems.  We've already established how we leverage OSLC to easily relate the ticket from ops to dev.  The ops team has a process by which they modify the ticket's status to indicate a fix from development is ready, and a number of tools, scripts and reports that run against the ticketing system for these 'fix ready' tickets.

There are many ways this problem could be solved, let's take a look at some:

Event driven - this would require new software to be written on both the ticketing and bug tracking tool ends (perhaps with some eventing software) to make this work.

Polling or cross-tool query - there really is no need to promote the state of the ticket to 'fix ready'; the ticketing system could just look at (fetch directly) the status of the linked bug.  This would require processes on the ops side to change, so they are driven not by that specific state of the ticket but by the combined view.

    • A variant of this is to just have the ticketing system poll the bug tracking tool (either when an ops person is viewing the ticket, or via an agent) and set the status of the ticket to 'fix ready'.
Manual - dev sends ops an email/IM/txt; ops loads a dashboard which shows bugs from dev that have a fix ready; or, NCIS-like, ops shows up at dev's desk with a giant slushy asking if and when their reported bug will get fixed.
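To illustrate how little the polling/query variant needs, here is a hypothetical sketch of the consumer-side logic: the ticketing system derives the ticket's effective status from the linked bug instead of anyone promoting state (status names and object shapes are made up):

```javascript
// Derive the status shown to the ops team from the ticket plus the
// linked bug fetched from the bug tracker.  Status values are illustrative.
function effectiveTicketStatus(ticket, linkedBug) {
  if (linkedBug && linkedBug.status === 'resolved') {
    return 'fix ready';  // surface the dev-side state without any event
  }
  return ticket.status;
}
```

The fetch of linkedBug is just a GET on the ticket's OSLC link, which is also where response caching can absorb most of the polling load.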

I think we have all seen the manual way of working, and I don't think I need to highlight what is so not fantastic about it.  I touched briefly on the impact of the two other, automated, ways to solve this.  Each requires change -- though in the case of polling, only the consumer needs to change (assuming the linking is already there).  Here's a quick summary of considerations with each approach:
  • Event driven
    • Need to manage subscriptions
    • Need to process notifications (which could include processing subscription rules and handling authentication)
    • May require a 3rd party bus (message broker) tool.  This comes with its own costs to acquire and maintain
    • Administration to handle failed notifications (authentication, firewall, server down, etc)
    • Easy model for consumers to just subscribe
  • Polling / query
    • Responses could be cached; polling requests to the origin could be intercepted by caching servers, taking the load off the backend tool
    • The linked-to tool never needs to know about external apps
Some of these considerations are of course potentially offset if your organization already has an ESB deployed, so that cost may already be sunk and the investment in administration expertise made.  One frequent problem with event-driven solutions is a mismatch of models and conflicts on change.  For example, if the configuration (state model) of one of the tools changes, the ESB may no longer be able to deliver and process the event to the desired endpoint.  Also, if there is a desire to use this approach to synchronize data, whether 1-to-1 or many-to-many, then conflicts will arise and the authority (or master) of the data is perhaps lost.

There are many other factors to consider.  It is often best to sit down and look at the topology of tools today, the expected view into the future, and the scenarios, to see what is right for you.  I'm not saying one approach is better than the other, or that there is a clear answer for every integration question that will be asked.  I wanted to elaborate on some of the considerations when looking at each alternative.  Some day OSLC may define or endorse a RESTful event-driven approach that meets the scenarios provided by the community.  After all, that is what drives the work: when those who do the work in the various groups agree, that is what gets done.

Wednesday, August 21, 2013

Supporting Accept-Post in JAX-RS applications

Recently in the W3C Linked Data Platform working group we did a survey of the various discovery mechanisms (also referred to as affordances) there are for the various methods or actions a client may want to introspect, or learn from error responses.  One specific scenario is discovering which resource formats (content types) are accepted by a server when a client wants to send the representation of a resource using POST, with the intent of giving birth to a new resource.  The current approaches rely on trial and error or some out-of-band knowledge the client application has.  The trial-and-error approach relies on the client sending content of the content type it believes the server accepts; if the server does accept it and successfully processes the request, it will send back a friendly 201 (maybe 202 or other 200-range) status code.  If the server doesn't like the content type the client is giving it, it can kindly reply with a 415 (Unsupported Media Type).  Now the client knows what doesn't work, but still has to guess what might.  Let me introduce you to Accept-Post, which is being proposed as a way for a server to tell a client what content types it prefers.  Accept-Post is somewhat like the Accept header, but more closely matches the newer (and less supported) Accept-Patch header.
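On the client side, honoring the header takes very little logic.  A hypothetical sketch (in JavaScript, just for illustration): given the Accept-Post header value from a response, check whether a content type is advertised before retrying the POST:

```javascript
// Parse an Accept-Post header value (a comma-separated media-type list)
// and test whether a given content type is advertised.
function acceptsContentType(acceptPostValue, contentType) {
  return acceptPostValue
    .split(',')
    .map((v) => v.trim().split(';')[0].toLowerCase())  // drop parameters
    .some((v) => v === contentType.toLowerCase() || v === '*/*');
}
```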

Ok, that is enough about the motivation and usage.  I thought I'd share the few lines of Java code needed to support this in JAX-RS 2.0-based implementations.  Since I want the Accept-Post HTTP response header returned in a variety of scenarios, such as when JAX-RS returns a 415, on OPTIONS and HEAD requests, and so on, I decided to always return the header.  To do this, I implemented ContainerResponseFilter with a simple class and filter() method:


import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;

public class AcceptPostResponseFilter 
       implements ContainerResponseFilter {
   @Override
   public void filter(ContainerRequestContext requestContext,
                      ContainerResponseContext responseContext) 
                      throws IOException {
      // Always advertise the content types this server accepts on POST.
      responseContext.getHeaders().addAll("Accept-Post",
         "text/turtle", "application/ld+json", "image/png");
   }
}

That is about it, except of course you need to register this filter with your JAX-RS Application, such as:

import java.util.HashSet;
import java.util.Set;
import javax.ws.rs.core.Application;

public class MyApplication extends Application {
   @Override
   public Set<Class<?>> getClasses() {
      Set<Class<?>> classes = new HashSet<Class<?>>();
      classes.add(AcceptPostResponseFilter.class);
      return classes;
   }
}

I've made this change for the in-progress LDP reference implementation occurring at Eclipse Lyo.

Similar approaches to other web server implementations or configurations make implementing Accept-Post quite trivial as well. Feel free to provide feedback on the IETF working draft for Accept-Post as well.

Tuesday, August 13, 2013

OSLC Resource Models - pictures are worth a thousand words

Pardon the metaphor, but it seems quite accurate here: in order to scale, OSLC working groups (WGs) operate as near-independent groups.  These WGs each produced their piece of the overall picture of resources, their properties and relationships to other types of resources.  The resource models (sometimes referred to as data models) were driven by a key set of integration scenarios and defined following guidance produced by the Core WG.  The Core WG itself even has various models to support a number of cases, such as describing service providers, resource shapes, discussions and other commonly used terms.  With all these pieces often lying around in separate specifications (wiki pages, vocabulary documents, etc.) it can be quite helpful to pull them all together...especially using visualization of the resource models.
This first figure is an example of a diagram from the perspective of an OSLC Change Management Change Request definition.

I'll go into a bit more detail about this model a bit later.

In an attempt to simplify viewing these resource models, I started with some work that Scott Rich had done, along with some approaches I had experimented with using Rational Software Architect (RSA).

To keep things very simple, I'll highlight some guidelines on how to develop this model in a UML modeling tool:

  • This is just a picture (for now), semantics are not clearly defined and they are not those of OO design.
  • All properties are modeled as an 'Attribute'; they are just visualized in the diagram as an association (since property values/objects in RDF are nothing special).
  • Each domain, which has its own vocabulary document, is a separate package.  Also, give each domain/package its own color.
  • No special profile is used (I attempted to use OWL profile from OMG).
  • Even though there isn't an explicit restriction on the resource types (range) of properties, an explicit expected class is still set.  A diagram with everything pointing to rdf:Resource wouldn't be too interesting.  Note to self: create a stereotype/visualization to make this obvious.
Ideally (and I know my modeling geek friends are going to like this) we can transform to/from some other descriptive form (OSLC Resource Shapes + RDFSchema).

The current model has been shared in the Eclipse Lyo project, and additional examples are highlighted over on the OSLC Core wiki page.  I tucked the RSA model file into an Eclipse project titled org.eclipse.lyo.model, which you can find in the expected git location.  For those that use some tool other than RSA, I have also provided the .uml file.  I'd be interested to hear if anyone has another tool (and/or approach) for modeling these.  I'll try to advance it in my spare time, including improving the diagrams.