Wednesday, August 21, 2013

Supporting Accept-Post in JAX-RS applications

Recently in the W3C Linked Data Platform working group we surveyed the discovery mechanisms (also referred to as affordances) available for the various methods or actions a client may want to introspect, or learn from error responses.  One specific scenario is learning which resource formats (content-types) a server accepts when a client wants to POST the representation of a resource with the intent of giving birth to a new resource.  The current approaches rely on trial-and-error or on some out-of-band knowledge the client application has.  With trial-and-error, the client sends content of the content-type it believes the server accepts; if the server does accept it and successfully processes the request, it sends back a friendly 201 (maybe 202 or other 200-range) status code.  If the server doesn't like the content-type the client is giving it, it can kindly reply with a 415 (Unsupported Media Type).  So the client knows what doesn't work but has to guess what might.  Let me introduce you to Accept-Post, which is being proposed as a way for a server to tell a client what content-types it prefers.  Accept-Post is somewhat like the Accept header but more closely matches the newer (and less supported) Accept-Patch header.

Ok, that is enough about the motivation and usage.  I thought I'd share the few lines of Java code needed to support this in JAX-RS 2.0 based implementations.  Since I want the HTTP response header Accept-Post returned in a variety of scenarios, such as when JAX-RS returns a 415 and on OPTIONS and HEAD requests, I decided to always return the header.  To do this, I implemented a ContainerResponseFilter with a simple class and filter() method:

import java.io.IOException;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.ext.Provider;

@Provider
public class AcceptPostResponseFilter 
       implements ContainerResponseFilter {
   @Override
   public void filter(ContainerRequestContext requestContext,
                      ContainerResponseContext responseContext) 
                      throws IOException {
      // putSingle() takes one value, so the accepted content-types
      // are listed in a single comma-separated header value.
      responseContext.getHeaders().putSingle(
         "Accept-Post", 
         "text/turtle, application/ld+json, image/png");
   }
}

That is about it, except of course you need to register this filter with your JAX-RS Application, such as:
import java.util.HashSet;
import java.util.Set;
import javax.ws.rs.core.Application;

public class MyApplication extends Application {
   @Override
   public Set<Class<?>> getClasses() {
      Set<Class<?>> classes = new HashSet<Class<?>>();
      classes.add(AcceptPostResponseFilter.class);
      return classes;
   }
}
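
To see the header in action, a client can simply probe the server.  Here is a minimal sketch using the JAX-RS 2.0 client API; the target URL is just a placeholder for wherever your application happens to be deployed:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

public class AcceptPostProbe {
   public static void main(String[] args) {
      Client client = ClientBuilder.newClient();
      // Any request will carry the header since the filter always adds it;
      // OPTIONS avoids fetching or modifying the resource itself.
      Response response = client.target("http://localhost:8080/bugs")
                                .request()
                                .options();
      // Prints e.g. "text/turtle, application/ld+json, image/png"
      System.out.println(response.getHeaderString("Accept-Post"));
      client.close();
   }
}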



I've made this change in the in-progress LDP reference implementation under way at Eclipse Lyo.

Applying the same approach in other web server implementations or configurations makes implementing Accept-Post quite trivial as well. Feel free to provide feedback on the IETF working draft for Accept-Post.
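
For instance, outside of JAX-RS the same always-on behavior can be had with a plain servlet filter.  A minimal sketch, assuming the standard servlet API and the same content-types as above (the class name is made up):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class AcceptPostServletFilter implements Filter {
   public void init(FilterConfig config) throws ServletException {}
   public void destroy() {}

   public void doFilter(ServletRequest request, ServletResponse response,
                        FilterChain chain) throws IOException, ServletException {
      // Set the header before the rest of the chain runs so it is
      // present even on error responses such as 415.
      ((HttpServletResponse) response).setHeader(
            "Accept-Post", "text/turtle, application/ld+json, image/png");
      chain.doFilter(request, response);
   }
}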

Tuesday, August 13, 2013

OSLC Resource Models - pictures are worth a thousand words

Pardon the metaphor, but it seems quite accurate here.  In order to scale, OSLC working groups (WGs) operate as near-independent groups.  These WGs each produce their piece of the overall picture of resources, their properties and their relationships to other types of resources.  The resource models (sometimes referred to as data models) were driven by a key set of integration scenarios and defined following guidance produced by the Core WG.  The Core WG itself has various models to support a number of cases, such as describing service providers, resource shapes, discussions, and other commonly used terms.  With all these pieces often lying around in separate specifications (Wiki pages, vocabulary documents, etc.) it can be quite helpful to pull them all together...especially using a visualization of the resource models.
This first figure is an example of a diagram from the perspective of an OSLC Change Management Change Request definition.


I'll go into more detail about this model a bit later.

In an attempt to view these resource models simply, I started with some work that Scott Rich had done, along with some approaches I had experimented with using Rational Software Architect (RSA).

To keep things very simple, I'll highlight some guidelines on how to develop this model in a UML modeling tool:

  • This is just a picture (for now): the semantics are not clearly defined, and they are not those of OO design.
  • All properties are modeled as an 'Attribute'; they are just visualized in the diagram as an association (since property values/objects in RDF are nothing special).
  • Each domain, which has its own vocabulary document, is a separate package.  Also give each domain/package its own color.
  • No special profile is used (I attempted to use the OWL profile from OMG).
  • Even though there isn't an explicit restriction on the resource types (range) of properties, an expected Class is still set.  A diagram with everything pointing to rdf:Resource wouldn't be too interesting.  Note to self: create a stereotype/visualization to make this obvious.
Ideally (and I know my modeling geek friends are going to like this) we can transform to/from some other descriptive form (OSLC Resource Shapes + RDF Schema).

The current model has been shared in the Eclipse Lyo project, and additional examples are highlighted over on the OSLC Core wiki page.  I tucked the RSA model file into an Eclipse project titled org.eclipse.lyo.model, which you can find in the expected Git location.  For those that use some tool other than RSA, I have also provided the .uml file.  I'd be interested to hear if anyone has another tool (and/or approach) for modeling these.  I'll try to advance it in my spare time, including improving the diagrams.


Friday, July 19, 2013

OSLC's motivation on the design of the LDP Container

Once concepts exist in specifications, it is often hard to tie together the learning from past experiences and what had been considered prior.  Preserving this background is sometimes too much filler for already long documents, so I will use this opportunity to pull together a series of activities.  Some of these activities can be found in OSLC 1.0 specs and elsewhere.

All these approaches were motivated by a set of quite similar requirements:

  • Common patterns for dealing with multi-valued properties
  • These multi-valued properties often become large
  • Easy way to get all the properties
  • Easy way to add new property values
Below is a sample of the approaches.

OSLC CM 1.0 - @collref

In CM 1.0, when these patterns first evolved and we were working on applying REST, we came up with the concept of @collref: an XML attribute that held the URL of the collection, where you could POST to add new members and GET to fetch a list of all members.  This approach was developed based on some legacy XML and a pragmatic need to solve a narrow problem.  It worked for a good number of cases but was a bit special and non-standard.

OSLC CM 2.0 - multi-value props

In CM 2.0 and Core 2.0, we took a more RDF view of REST and simply treated these as multi-valued properties, or in RDF terminology, same-subject and same-predicate triples.  They were just like any other property on an HTTP (OSLC) resource: GET fetches the representation of the state of the resource, POST creates it and PUT updates it.  This approach worked well in that it didn't require any new invention, but it was limited by what existed; it would be good if we could layer on semantics without losing the simple usage of RDF to define the resources.
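
To make the same-subject, same-predicate idea concrete, here is a minimal sketch using Jena 2.x (the attachment URIs are purely illustrative); a multi-valued property is nothing more than the same predicate repeated on the same subject:

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.Resource;

public class MultiValuedExample {
   public static void main(String[] args) {
      Model model = ModelFactory.createDefaultModel();
      Resource bug = model.createResource("http://example.org/bugs/13");
      Property attachment = model.createProperty(
            "http://open-services.net/ns/cm#", "attachment");
      // Two values, same subject and same predicate: two triples.
      bug.addProperty(attachment,
            model.createResource("http://example.org/bugs/13/attachments/3"));
      bug.addProperty(attachment,
            model.createResource("http://example.org/bugs/13/attachments/14"));
      model.write(System.out, "TURTLE");
   }
}

With plain 2.0 semantics, a client that wants to add a value GETs the representation, adds one more triple, and PUTs the whole thing back.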

OSLC CM 3.0 - multi-value props + LDP Container

In drafts of CM 3.0 and Core 3.0, we look to build on a concept called the Linked Data Platform Container, defined by Linked Data Platform 1.0.  This isn't far from what we've done in 2.0, except it is based on a W3C specification and layers on some additional semantics, namely those of ldp:Container.

Other motivation

Another key scenario motivating similar requirements is dealing with things such as query results, or filtering long lists of resources and knowing where to POST to create new ones.  In Core 1.0 and 2.0, we used concepts such as the query capability and creation factory to meet these needs.  As we see with ldp:Container, it provides that same function, and we can annotate it to cover some of the additional needs: knowing what kinds of resources a server will accept in a container it hosts, and things like Web UI dialogs that can be used to select and create resources.


Thursday, May 23, 2013

Applying LDP concepts to bug tracker

In this post I'll explore how some products that support bug tracking, such as Rational Team Concert, might model their internal "Work Item" resource given W3C Linked Data Platform and OSLC concepts.  This will also highlight how various capabilities are needed to solve some key problems in a simple way.  I will be "paraphrasing" what the actual runtime resource looks like to help with human consumption of the examples (the real-world ones can sometimes have additional concepts and capabilities that don't apply to the simple part of the model we are focused on here).

Let's take one of the simplest examples of what one might find: say, a bug (aka "Defect") that has a few properties but no associated attachments or child bugs.

Representation of bug http://example.org/bugs/13:
@prefix dc:  <http://purl.org/dc/terms/> .
@prefix ocm: <http://open-services.net/ns/cm#> . 

<> a ocm:Defect ;
   dc:title "Bug in rendering large jpgs" ;
   dc:identifier "13" .

Now we'll explore how to add a couple of screenshots to the bug. Using only the information I have from this resource, I'm not sure how to do that (assuming I am a Linked Data client). I could just attempt a PUT to replace the contents, adding an ocm:attachment statement referring to the screenshot. Depending on the server, it may reject that for a couple of reasons: it doesn't know anything about ocm:attachment, it has restrictions on the object (where the attachment is physically stored), or it simply doesn't allow PUT updates for ocm:attachment.
To help with this problem, we can associate an ldp:Container with this bug resource. So we expand our example, without modifying any of the original model, just adding a few more statements.
@prefix dc:  <http://purl.org/dc/terms/> .
@prefix ldp: <http://www.w3.org/ns/ldp#> .
@prefix ocm: <http://open-services.net/ns/cm#> . 

<> a ocm:Defect ;
   dc:title "Bug in rendering large jpgs" ;
   dc:identifier "13" .

<attachments> a ldp:Container ;
   ldp:membershipPredicate ocm:attachment ;
   ldp:membershipSubject <>.

This tells my client that there is now an ldp:Container associated with the bug, since ldp:membershipSubject connects this container to the bug. I can also inspect ldp:membershipPredicate to know which same-subject and same-predicate pairing this container helps me manage and navigate.
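As a sketch of what that client logic might look like with Jena 2.x (using the URIs and draft LDP membership terms from the example above):

import com.hp.hpl.jena.rdf.model.*;

public class ContainerDiscovery {
   static final String LDP = "http://www.w3.org/ns/ldp#";

   public static void main(String[] args) {
      Model model = ModelFactory.createDefaultModel();
      model.read("http://example.org/bugs/13");
      Resource bug = model.getResource("http://example.org/bugs/13");
      Property membershipSubject = model.createProperty(LDP, "membershipSubject");
      Property membershipPredicate = model.createProperty(LDP, "membershipPredicate");
      // Find containers whose membership subject is this bug.
      ResIterator containers = model.listSubjectsWithProperty(membershipSubject, bug);
      while (containers.hasNext()) {
         Resource container = containers.nextResource();
         RDFNode predicate = container.getProperty(membershipPredicate).getObject();
         // POSTing to the container adds a member via this predicate.
         System.out.println(predicate + " -> POST to " + container.getURI());
      }
   }
}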
Now I have a URL, http://example.org/bugs/13/attachments, where I can POST a screenshot to create an attachment and associate it with bug 13. Let's look at what the POST looks like:
Request:
POST http://example.org/bugs/13/attachments
Slug: screenshot1.png
Content-Type: image/png
Content-Length: 18124

[binary content]

Response:
HTTP/1.1 201 CREATED
Location: http://example.org/bugs/13/attachments/3
Content-Length: 0
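In code, that request could look something like the following minimal sketch using the JAX-RS 2.0 client API (the file name and byte content stand in for a real screenshot):

import java.nio.file.Files;
import java.nio.file.Paths;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.Response;

public class AttachScreenshot {
   public static void main(String[] args) throws Exception {
      byte[] png = Files.readAllBytes(Paths.get("screenshot1.png"));
      Client client = ClientBuilder.newClient();
      Response response = client
            .target("http://example.org/bugs/13/attachments")
            .request()
            .header("Slug", "screenshot1.png") // naming hint for the new resource
            .post(Entity.entity(png, "image/png"));
      // Expect 201, with the new attachment's URI in the Location header.
      System.out.println(response.getStatus() + " " + response.getLocation());
      client.close();
   }
}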
Now that the attachment has been created, we can fetch bug 13 again to see what we have.
@prefix dc:  <http://purl.org/dc/terms/> .
@prefix ldp: <http://www.w3.org/ns/ldp#> .
@prefix ocm: <http://open-services.net/ns/cm#> . 

<> a ocm:Defect ;
   dc:title "Bug in rendering large jpgs" ;
   dc:identifier "13" ;
   ocm:attachment <attachments/3> .

<attachments> a ldp:Container ;
   ldp:membershipPredicate ocm:attachment ;
   ldp:membershipSubject <>.
We now see that there is an ocm:attachment statement associated with bug 13. This statement was added by the server when it processed the POST request to create a new resource (attachment) and then added it as a member of the attachments associated with the bug.
We can also see that this list can grow to be quite large. Experience has shown that end users of bug trackers need to attach a number of documents, images, logs, etc. to bugs. The same need comes from a number of other capabilities, such as having nested bugs or tasks. To illustrate, let's assume our bug tracking server has been upgraded and now exposes child resources within bug resources (or has had children added by other means). Let's take a look at bug 13 again:
@prefix dc:  <http://purl.org/dc/terms/> .
@prefix ldp: <http://www.w3.org/ns/ldp#> .
@prefix ocm: <http://open-services.net/ns/cm#> . 

<> a ocm:Defect ;
   dc:title "Bug in rendering large jpgs" ;
   dc:identifier "13" ;
   ocm:attachment <attachments/3>, <attachments/14> ;
   ocm:child <../47>, <../2> .

<attachments> a ldp:Container ;
   dc:title "Attachment container for bug 13" ;
   ldp:membershipPredicate ocm:attachment ;
   ldp:membershipSubject <>.

<children> a ldp:Container ;
   dc:title "Children for bug 13" ;
   ldp:membershipPredicate ocm:child ;
   ldp:membershipSubject <>.

As you can see, the bug model stays very simple, with statements about bug 13 being made directly about it using plain RDF concepts, where the statements are of the form [bug13, predicate, object|literal]. We can repeat this pattern and use it in many other forms, such as a container of all the bugs the server knows about, which I plan to illustrate in other posts.
Disclaimer: some of the predicates used in these examples show they are in the OSLC-CM namespace but have not been approved by the WG. This was done just for simple illustration.

Thursday, May 9, 2013

Making progress towards an OSLC-CM 3.0

It's been a while since I talked about a specific domain working group and what has been going on.  Though the work for a post-2.0 Change Management specification has been evolving for some time, it is coming along nicely.  I've been focusing much of my energy on getting the core-est part of Core (W3C Linked Data Platform and Core 3.0) rolling, while keeping an eye and a hand involved with CM.

To summarize, the OSLC-CM WG has worked through a number of prioritized scenarios, and we currently have draft specification text for most of them.  We are planning to enter convergence soon and start on some implementation to validate the specifications.

Special thanks to Sam Padgett for leading the OSLC-CM efforts: planning meetings, keeping minutes, keeping track of issues and pushing forward with the draft specification.  Looking forward to getting some of these approaches implemented and the specification wrapped up.

Thursday, April 11, 2013

Trip Report from EclipseCon 2013

I had a chance, along with Michael Fiedler, to attend the US EclipseCon 2013 in Boston, MA in late March.  In addition to presenting a BOF and a session on OSLC, Linked Data and Lyo, we were able to attend several sessions, panel discussions and BOFs.  It was also a great chance to meet with members of the OSLC and Lyo communities to discuss the challenges of tool integration.

General Impressions from the Conference

It was cool to see and hear OSLC and Eclipse Lyo in a variety of sessions.  There were good, active discussions with the Eclipse Mylyn team regarding their recent m4 proposal, including how OSLC and Lyo play a key part.  It was good to explore ways we could work together with Mylyn, Orion, Hudson and more.  We also did a lot of education and exploration with a number of attendees who had no or limited knowledge of what was going on within OSLC and Lyo.  Various panels, keynotes and sessions often called out the need for tool vendors to be able to collaborate on open interfaces...we asked them: "have you heard of OSLC?"  In effect, a number of people were already making the case for OSLC-based integrations for us.
Here is a summary of the two sessions we presented at EclipseCon:

1) Lifecycle Tool Integrations: Linked Data, OSLC and Eclipse Lyo
Session type: Birds of a Feather

The participants ranged from experienced OSLC implementers interested in contributing to Eclipse Lyo to those new to OSLC looking for learning resources. 
The discussion covered the re-vamped tutorial on open-services.net, using the Lyo OSLC4J Bugzilla adapter as a learning resource and some general OSLC and linked data integration philosophies.  Topics included:
  • a description of an in-progress OSLC4J implementation which exposes EMF models as OSLC resources - being considered for contribution to Lyo
  • the experiences of a developer who has implemented several OSLC integrations to enable tools to participate in an ALM system
  • the OSLC CM 1.0 integrations developed for the Mantis bug tracker and FusionForge
  • the status of Eclipse Lyo - what is new in 1.1 and what is coming
  • one OSLC implementer's strong recommendation to develop integrations using OSLC 2.0; he also pointed out that authentication is a primary pain point for the work he has done


2) Leveraging W3C Linked Data, OSLC, and Open Source for Loosely Coupled Application Integrations
Session type: ALM Connect track

This session was a 35-minute tour through tool integration, problems with previous approaches and new approaches using linked data and OSLC.
This was followed by an overview of open source projects relevant to this space (Jena, Wink, Clerezza, Lyo, etc.) and a brief demo of some potential integrations between Eclipse Orion and a change management tool (Bugzilla) and an automation tool (the Lyo automation reference implementation).
There was very little time left for questions at the end, but there were some good ones which pushed the session into overtime:
  • Interest in the concept of delegated UIs and the responsibilities of the tool hosting them; we described the OSLC concepts in some detail
  • Interest in the concept of UI previews and compact representations; we described how these resources can be used to link to full representations
  • Some clarifying questions from a few folks about the details of the Bugzilla integration demonstrated; we explained that it is the live Eclipse Bugzilla instance, but that it could have been any OSLC CM provider
Special thanks to Michael Fiedler for authoring some of the content of this blog post, as well as building the demo, presenting and too many other things to list here.

Wednesday, March 20, 2013

W3C Linked Data Platform WG 2nd Face to Face meeting March 13-15 Boston

I just returned from the 2nd face to face meeting of the W3C Linked Data Platform (LDP) working group, held at MIT in Cambridge, MA. There were about 14 attendees in all, a good number but perhaps low for a WG that has 50 registered in total. Being the 2nd face to face meeting of a relatively new working group, this was the first meeting where we could dig in our heels and start tackling issues. We reviewed our deliverables: the must-deliver specification and a number of supporting documents: use cases and requirements, access control guidance, a possible primer, a deployment guide (we need a better name) and a test suite.

We prioritized the issues we wanted to discuss early on the first day to make sure we got all the "big hitter" issues out on the table. I added to this list some of the key OSLC items, such as binary resources and metadata, PATCH and more. One thing we learned as we got into the next day: there was still some confusion about the model. I surfaced ISSUE-59 to hopefully help simplify it.

Here are the detailed meeting minutes from: day 1, day 2 and day 3.

Some of the key issues discussed were:

ISSUE-15 Sharing binary resources and metadata
The WG came to the following resolution after much discussion about concerns over the different behavior of POSTing RDF formats (which create LDP-Rs) and which response Location headers should be used.

Resolution: Assuming the existing qualifications that POST is optional, and supporting "binary" media types is optional: the expected response to a POST for create that creates a non-LDPR is a 201 with a Location header whose URI I identifies the resource whose representation matches the POST input, and the response SHOULD include a Link header with rel="meta" and an href of another URI P identifying the resource whose state is the server-managed properties. The URIs I and P MAY be distinct, but this is not required. When the object of a membership triple (I) is DELETEd, the server MUST automatically delete any related resource P that it created previously.

ISSUE-17 Recommended PATCH document (and 27 & 45) 
There were a number of PATCH related issues, like ISSUE-27 on whether POST could be used to tunnel PATCH, but the WG decided to just use PATCH. There is also ISSUE-45, which suggested the need to POST RDF formats to LDP-Rs to append/add to existing resources. The WG decided that this would be fairly different from what is done with POSTing to LDP-Cs, which creates a new resource and adds membership triples; besides, we have a solution for append by using a PATCH insert. For ISSUE-17, the original proposal was to use the Talis changeset format, but for a variety of reasons this was withdrawn (I don't recall the exact problems with the Talis changeset). Instead the WG decided to pursue its own simple PATCH document format that leverages existing quad formats such as TriG. The WG has an action to define the minimal format and to work through issues and proposals to expand on it.

Other issues we discussed are on our list to watch, as of course their resolution could impact us, but they were not identified by OSLC community members as a high priority.  Overall it was very encouraging to see the WG make progress on such key issues.  It is also a little disappointing we didn't make more progress in other areas.  It is most likely that we won't enter "Last Call" until after our 3rd face to face, which is being planned for mid-June.

Sunday, March 3, 2013

OSLC's relationship with Semantic Web technologies

The technical basis for accessing data through OSLC has its roots in Linked Data, which has its origins in Semantic Web technologies.  Some worry about the costs of supporting this for simple integration scenarios, though OSLC only depends on a small amount of it.  Those that bought into the full stack of Semantic Web technologies for various domain solutions realize there is tremendous value in what it can provide, but that it also comes at some cost.  The cost is in aspects of the Semantic Web such as reasoners, inferencing, search engines, and specialized repositories for dealing with RDF.

OSLC takes a minimal, incremental approach: depend on only what is needed to satisfy the integration scenarios.  So far that has led us to leverage a simple standard way of describing resources using RDF.  That is about where the Semantic Web technology dependency ends.  We leverage a few terms out of RDF Schema to help with defining our own vocabulary terms, but do not go beyond that, as it might imply that clients would need to process inference rules against the resource representations they receive to learn more.
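
To illustrate how small that dependency is, here is a sketch of describing a vocabulary term with just a few RDF Schema terms, using Jena 2.x (the term and its descriptions are purely illustrative):

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.vocabulary.RDF;
import com.hp.hpl.jena.vocabulary.RDFS;

public class VocabularySketch {
   public static void main(String[] args) {
      Model model = ModelFactory.createDefaultModel();
      // A term gets a type, a label and a comment; reading or writing
      // this requires no reasoner or inferencing at all.
      Resource defect = model.createResource("http://open-services.net/ns/cm#Defect");
      defect.addProperty(RDF.type, RDFS.Class);
      defect.addProperty(RDFS.label, "Defect");
      defect.addProperty(RDFS.comment, "A report of a problem with some software.");
      model.write(System.out, "TURTLE");
   }
}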

Since a primary goal of OSLC is to not reinvent but to leverage standards-based approaches that meet our requirements, I can see cases where it might be good to adopt some more Semantic Web technologies.  To be clear, though: for tools to get value out of OSLC-based integrations, only some RDF syntax readers and writers are needed.  There is no need for tools to be rehosted or rewritten onto a new technology base; they can simply adapt their solution with a facade or update their existing REST APIs to provide this support.
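
As a rough illustration of how thin that layer can be, this sketch uses Jena 2.x to read a (hypothetical) resource and pull out a single property; an existing REST API only needs to produce or consume this kind of representation:

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.rdf.model.Statement;
import com.hp.hpl.jena.vocabulary.DCTerms;

public class MinimalClient {
   public static void main(String[] args) {
      Model model = ModelFactory.createDefaultModel();
      // Read the RDF representation of a resource; the URL is illustrative.
      model.read("http://example.org/bugs/13", "TURTLE");
      Resource bug = model.getResource("http://example.org/bugs/13");
      // Pull out one property value; no inferencing involved.
      Statement title = bug.getProperty(DCTerms.title);
      if (title != null) {
         System.out.println(title.getString());
      }
   }
}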