Thursday, December 18, 2014

Web API maturity model

Web APIs (REST-based HTTP services with simple payloads, typically JSON) have proven to be a simple and effective way for applications to consume capabilities provided by other apps, or to expose capabilities themselves. As we've experienced, these APIs are defined in various ways, from satisfying a large audience from a single service (Twitter) to something more general that handles various kinds of resources and operations.

The reality for many of these Web APIs is that their purpose and needs have evolved since their early inception: they grow into their latest 3.1 version with additional bells and whistles, deprecating some baggage along the way.

So the title of this post is "Web API maturity model" -- what is this and why is it needed?  Perhaps "maturity model" isn't quite the right name for it; let me describe its aspects and give some examples.  This looks a bit beyond Martin Fowler's article on the maturity model, going beyond his levels.

Basically, it is a way to talk about how Web APIs are developed, and how the APIs evolve to meet customer needs as those needs themselves evolve.

Maturity levels (loosely sketched out as):

  1. REST+JSON (barely docs) - this is barebones; your app defines a simple Web API.  The goals at this level are not to make it rock solid and cover every edge case, but instead to focus on supporting an important scenario quickly, getting feedback on usage and evolving from there (if needed).  You rely on a sample, an incomplete wiki page or reading the source to figure out how best to use it.
  2. REST+JSON (better docs, source) - a better web page dedicated to describing the API, samples and some source.  At this point you are promoting it a bit more as "the API" for your service.
  3. REST+JSON (LD, docs, source) - needing a way to describe the payloads more consistently between various endpoints; existing vocabularies could be reused.  App developers may even want to share some of their vocabularies or capabilities
  4. LDP (REST+JSON-LD&Turtle+containers) - needing a way to have a consistent pattern for interacting with "buckets" of like resources
  5. Domain extensions (OSLC, etc) - needs around app development, systems engineering, ... which could create new vocabularies or capabilities themselves
In fact, these levels of maturity can be layered cleanly on one another without disrupting previous apps that consume the API.
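To make the jump from level 1 to level 3 a bit more concrete, here is a minimal sketch. The payload shape is invented for illustration, and the `value` mapping points at a placeholder example.org vocabulary; only the dcterms URL is a real, widely shared vocabulary.

```javascript
// Level 1: a plain JSON payload; the keys mean whatever this one API says they mean.
const level1 = {
  title: "Home loan",
  value: 200000.0
};

// Level 3: the same payload as JSON-LD. The @context maps each key onto a term
// in a shared vocabulary, so other endpoints (and generic clients) can
// interpret it consistently. The "value" vocabulary URL is a placeholder.
const level3 = {
  "@context": {
    "title": "http://purl.org/dc/terms/title",
    "value": "http://example.org/ontology#value"
  },
  "title": "Home loan",
  "value": 200000.0
};

// Existing level-1 consumers can keep reading the same keys unchanged:
console.log(level3.title === level1.title); // true
```

That last line is the point of the layering: adding `@context` enriches the payload for linked-data-aware clients without breaking clients that treat it as plain JSON.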

Some may say "ya, dumbass, this is just how we work", and they are right (well, except the dumbass part).  We often start small, find what works and evolve from there.  Still, it is good sometimes to be clear and intentional that this is how you plan to operate, since some approach building integration APIs by starting in the middle or at the end instead of at level 1.  And learning about the techniques and technologies at the other levels helps developers upgrade their mental toolbox of skills.

The levels I highlighted are just examples, incomplete, and one could even see different axes emerging.  Related to that is understanding in what scenarios stepping up a level, or over to another axis, may make sense.  I plan to explore more of these ideas, including how best to define, present, collaborate on, evolve and enable all of this.

As always, I'm interested to hear feedback (I prefer agreement but accept other forms as well).

Friday, November 7, 2014

Considering the right solution to your integration problem

So often we get tied up in a specific subject or technology area that it feels like it is the only game in town.  It reminds me of the Jim Carrey film "Yes Man" (by the way, I relate all my work to Jim Carrey films -- please, no "Dumb and Dumber" references ;-) where he takes part in a program to change his life by answering "Yes" to any question or opportunity.  He learns it doesn't apply to all situations; the program is just trying to force people to see a different perspective, to help them think differently by opening up to a new way of approaching situations.

I relate to my OSLC and Linked Data work in a similar way.  It is not the only solution, but thinking this way, apart from some traditional ways of thinking about integrations, helps to find alternative solutions that could end up being more loosely coupled, scaling better and being more resilient to upgrades.  Another benefit is that it allows the data to stay where it can be properly protected via access control and updated as needed.

Often people and companies are so passionate about a technology or solution that they answer the customer's question before they ever fully hear what the problem is.  There are varying degrees of this; of course, if your job is to sell a product, it is hard to imagine that you'd recommend an alternative solution.  Though I think if you worked with the customer to determine that it isn't the right fit, or to understand the trade-offs, they would be more willing to continue to do business with you.

There are so many factors in deciding the right integration solution for your current integration problem.  It would be fantastic if I could define a concise decision tree that fit on a cocktail napkin and handled 90% of the cases...unfortunately, it is not that easy (and a group of integration consultants might hunt me down).  I've worked with a number of customers to identify possible solutions, ranging from simple 1-to-1 integration problems to defining a corporate integration architecture/approach and plan.

Here's some factors that I typically consider to drive a recommendation:

  • problem statement & user stories, including any other constraints
  • # of tools
  • anticipated growth or consolidation
  • integration technology already available in tools landscape
  • ability to develop integrations
  • timeframe and ownership
As I come up with more, I will add to the list, but I'd be interested to hear what other considerations people have when tackling integration problems.  I'd also like to elaborate on each of these points and weigh them against each other.

Creating Containers in LDP, it is just this easy

I've heard a number of times that it isn't very clear how one creates a Linked Data Platform (LDP) Container.

Let's start with an example of how to create a simple resource (say, an LDP RDF Source).  You need to know a URL that accepts POST of the media type you want.
Here's the POST request:

1:  POST /netWorth/nw1/liabilities/ HTTP/1.1  
2:  Host: example.org  
3:  Accept: text/turtle  
4:  Content-Type: text/turtle  
5:  Content-Length: 63  
6:  
7:  @prefix dcterms: <http://purl.org/dc/terms/>.
8:  @prefix o: <http://example.org/ontology#>.
9:  
10:  <> a <http://example.org/ontology#Liability>;
11:       dcterms:title "Home loan";
12:       o:value 200000.00 .
13:    # plus any other properties that the domain says liabilities have  

(Example "borrowed" from LDP spec example)

Very simple: just POST some Turtle to a URL.
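If you'd rather see that request built in code, here is a hedged sketch in Node.js. `buildCreateRequest` is a hypothetical helper, not part of any LDP library, and I stop short of actually sending the request.

```javascript
// Build (but don't send) the pieces of an HTTP request that creates
// an LDP RDF Source, mirroring the raw request above.
function buildCreateRequest(path, turtleBody) {
  return {
    method: "POST",
    path: path,
    headers: {
      "Accept": "text/turtle",
      "Content-Type": "text/turtle",
      "Content-Length": Buffer.byteLength(turtleBody)
    },
    body: turtleBody
  };
}

const turtle = [
  '@prefix dcterms: <http://purl.org/dc/terms/>.',
  '<> dcterms:title "Home loan".'
].join('\n');

const req = buildCreateRequest('/netWorth/nw1/liabilities/', turtle);
// Sending it is then one call with the HTTP client of your choice,
// e.g. http.request(...) from Node's standard library.
```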

Now let's look at what it would look like to create a Container:

1:  POST /netWorth/nw1/liabilities/ HTTP/1.1  
2:  Host: example.org  
3:  Accept: text/turtle  
4:  Content-Type: text/turtle  
5:  Content-Length: 72  
6:  Link: <http://www.w3.org/ns/ldp#BasicContainer>; rel="type"  
7:  
8:  @prefix dcterms: <http://purl.org/dc/terms/>.
9:  @prefix o: <http://example.org/ontology#>.  
10:  
11:  <> a <http://www.w3.org/ns/ldp#BasicContainer> ;
12:       dcterms:title "Home loans" ;
13:       o:limit 500000.00 .   
14:    # plus any other properties  

That's it: just POSTing some content, just like before. I added a Link header (line #6) on the POST request to be explicitly clear that I want this newly created resource to behave as an ldp:BasicContainer. Note that I wanted to be expressive, so I also included the type triple on line 11, though the spec doesn't require it, since the Link header is the spec-defined way to clearly assign the desired behavior.

There are various other ways that ldp:Containers can come into existence, most of which depend on the server's application-specific rules.  For example, an application might allow the creation of a resource of, say, type "Customer"; the application logic may then create a number of supporting containers for the new Customer resource, such as containers for the assets and liabilities they have.
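As a sketch of that kind of application-specific rule (the resource paths and the in-memory Map standing in for the server's store are all invented for illustration):

```javascript
// In-memory stand-in for the server's resource store.
const resources = new Map();

// Application rule: creating a Customer also creates supporting
// containers for that customer's assets and liabilities.
function createCustomer(slug) {
  const customerUri = `/customers/${slug}`;
  resources.set(customerUri, { type: "Customer" });
  for (const kind of ["assets", "liabilities"]) {
    // Each supporting container behaves as an ldp:BasicContainer.
    resources.set(`${customerUri}/${kind}/`, {
      type: "http://www.w3.org/ns/ldp#BasicContainer",
      contains: []
    });
  }
  return customerUri;
}

const uri = createCustomer("acme");
console.log(resources.has(uri + "/assets/")); // true
```

No client ever POSTed to create those containers; they exist purely because the server's domain logic decided a Customer should have them.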

Tuesday, October 7, 2014

OSLC Specification Update (OASIS TCs, WGs), 3rd Quarter 2014 report

I regularly give an overall specification, Technical Committee and Working Group update to the OSLC Steering Committee.  I thought it would be useful to share it more broadly as well.  It is intended to be a brief high-level update, perhaps quarterly.  (If I missed anything or misspoke, let me know and I'll get it fixed up.)


OASIS OSLC Change and Configuration Management TC

  • Participation from IBM, PTC, Mentor Graphics, Boeing
  • Various updates to Configuration specification based on review feedback
  • Splitting out Change Management 3.0 spec into separate capabilities: States, Severity/Priority, Resource types, Attachments (to core)

OASIS OSLC Automation TC

  • Participation from IBM, Mentor Graphics
  • Contribution and review of scenarios for such things as: Automation in Systems Engineering and Model transformation


OASIS OSLC PROMCODE TC

  • Participation from Fujitsu, NEC, IBM, Nanzan University
  • Refinement of overall model.  Working to define how best to leverage Estimation and Measurement work, specifically around usage in ScopeItem and measurement units (see minutes).
  • An initial draft of the vocabulary and shapes has been created.


OSLC Core WG

  • Updates to Tracked Resource Set 2.0 and Indexable Linked Data Provider guidance to align with changes made in LDP, LDP Paging and LD Patch

OSLC Automation WG

  • Closing down work on Automation 2.1 and Actions, incorporating review feedback and preparing to transfer to OASIS TC

Sunday, August 3, 2014

From zero to LDP: a tale of our 5 day journey/sprint to create an LDP impl from scratch in Node.js


Highest priority: produce a Node.js® based W3C Linked Data Platform (LDP) implementation.  I will discuss some of the goals later (and in subsequent posts).

We had a BasicContainer and RDFSource implementation at the end of the week, passing all the automated test cases from the test suite.  Guessing we spent < 40 hours total on the work.  40 sounds like a lot, though we had a bit of a learning curve in some areas.  And it is 'live' at: (assuming our autodeploys continue to be green).

Some background on those of us who did the development: well, just Sam Padgett and Steve Speicher.  To be fair, we already know a fair amount about the LDP spec and have done a reference implementation in Java in Eclipse Lyo.  Neither of us had done anything in Node.js beyond a “Hello Node” sample, though Sam is an experienced JavaScript developer and Steve has stumbled his way through a few applications.

Day 1: we started the effort on Monday, July 21.  We had sparkle in our eyes and two thermoses full of coffee.  Then we were in meetings for a while and caught up on LDP test suite items.  Well, we did do some initial project structure setup to share the code and auto-deploy to Bluemix (more on that later), and read a little to determine the general direction we were ready to head.  After day 1, we had a place to store code and a sample app checked in, using express.js, and were starting to play with the rdfstore.js API.

Day 2 started off with the promise of all-day code slinging.  We only got about half a day in, due to other normal OSLC and Lyo operations (WG meetings, code reviews, …).  We made a good dent in getting a simple GET/PUT working with rdfstore.js.  Though we were struggling to get a few things working with rdfstore.js, and Steve’s newbie callback hell was not improving his drinking problem (he says he doesn’t have a problem, he has it figured out).

Day 3: again, about half a day of hacking…some progress on the BasicContainer features and support for various formats.

Day 4: realization that we should reconsider using rdfstore.js to store and parse RDF data.  Our needs with LDP are quite simple.  We looked at the MongoDB model and what we were doing, and at a simple JSON format we could handle using the N3 library.  It was fairly straightforward to do, greatly simplified our dependencies and removed a couple of barriers we were hitting with rdfstore.js.  We ended up taking the graph-centric approach, where each JSON document in MongoDB is an RDF graph.  This approach, and its drawbacks, is outlined in dotNetRDF's blog post.
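The graph-centric model can be sketched roughly like this. A Map stands in for the MongoDB collection, and the triple layout is simplified compared to what we actually stored; the URIs and helper names are illustrative only:

```javascript
// One JSON document per RDF graph; the graph's URI is the document key.
const collection = new Map(); // stands in for a MongoDB collection

function putGraph(uri, triples) {
  // triples: array of { s, p, o } objects, e.g. as parsed from Turtle via N3
  collection.set(uri, { uri: uri, triples: triples });
}

function getGraph(uri) {
  const doc = collection.get(uri);
  return doc ? doc.triples : null;
}

putGraph("http://example.org/nw1/liabilities/l1", [
  { s: "", p: "http://purl.org/dc/terms/title", o: "Home loan" }
]);

// A GET on an LDP resource is then a single document lookup:
console.log(getGraph("http://example.org/nw1/liabilities/l1").length); // 1
```

The appeal for LDP is that a GET or PUT of one resource maps to exactly one document read or write; the drawback (as the dotNetRDF post discusses) is that cross-graph queries are no longer a single index lookup.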

Day 5: completed the transition to MongoDB, handled all of GET/HEAD/OPTIONS/POST/DELETE, got the viz service working, all tests green (well, Sam did all/most of the hard LDP work; Steve was the “idea man” and the “bounce ideas off man”).
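A much-simplified sketch of that day's request handling: a plain function stands in for the express.js routing we actually used, and the store shape and child-URI minting are invented for illustration.

```javascript
// Simplified LDP resource handler: dispatch on HTTP method.
// 'store' maps resource URIs to their Turtle representations.
function handle(store, method, uri, body) {
  switch (method) {
    case "GET":
      return store.has(uri)
        ? { status: 200, body: store.get(uri) }
        : { status: 404 };
    case "HEAD":
      return { status: store.has(uri) ? 200 : 404 };
    case "OPTIONS":
      return { status: 200, allow: "GET,HEAD,OPTIONS,POST,DELETE" };
    case "POST": {
      // Create a child resource; a real server would honor the Slug header.
      const child = uri + "r" + (store.size + 1);
      store.set(child, body);
      return { status: 201, location: child };
    }
    case "DELETE":
      store.delete(uri);
      return { status: 204 };
    default:
      return { status: 405 };
  }
}

const store = new Map();
const created = handle(store, "POST", "/liabilities/", '<> a <#Liability> .');
console.log(created.status); // 201
```

The real implementation layers the LDP details (containment triples, Link headers, content negotiation) on top of this basic method dispatch.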

Days 6-10: we were able to add JSON-LD support and full support for all variations of ldp:DirectContainer.
What next?  We’d like to solidify the impl more, do a little more testing, and handle non-RDFSource resources.  We’ve talked about building a sample app as well, something like a poor man's bug tracker, photo album or address book.  Oh yes, and making the source code available.  That wasn't an initial priority just because we didn't know how much would be worth sharing; we'll be going through the approval process to make it available.

Be on the lookout for an upcoming blog post about our experiences with DevOps Services and Bluemix.

Friday, July 11, 2014

Trip Report - INCOSE IS 2014 Systems Engineering meets (needs) open integrations + 2025 vision

I was fortunate enough to be able to attend the INCOSE International Symposium (IS) 2014, an event I'd never been to before.  I've been spending more and more time over the years with Systems Engineering, so it was good to learn more, share a bit about what is going on with OSLC and related topics, catch up with some friends and help some implementers lay out a plan.

One of the things that struck me as interesting is how the speakers and attendees referred to OSLC.  I'm used to seeing so many presentations over the years defining it, spelling out what the acronym means, etc.  At the IS, there was none of that.  It was just referred to by name, as if everyone clearly knew what it is.  I didn't hear anyone asking or taking a note to look it up later.  OSLC was often referred to as an area that showed great promise for SE tool interoperability: as a protocol to exchange data, a way to define a minimal data model at web scale, and simple ways of doing UI integration.

We had an impromptu OSLC meet-up at lunch; in fact, we had too many people for the table (and yes, I was the only IBMer).  It included people from PTC, Atego, JPL, Deere, Koneksys and the Eclipse Foundation.  Great discussion to share people's interests, share what is in motion and look for a way to coordinate all the activity going on in the different places: INCOSE TII, OASIS OSLC, OMG OSLC4MBSE and more.  Looking forward to following up with this group and seeing how it advances.

I was able to give an overview and update on OSLC to an audience that represented many industries: automotive (2), air & space (2) and large machinery.

An interesting piece of work that I received when I registered was INCOSE's Systems Engineering Vision for 2025, specifically these items:

  • Foundations and Standards (p. 20) 
    "This systems engineering body of knowledge today is documented in a broad array of standards, handbooks, academic literature, and web-resources, focusing on a variety of domains. A concerted effort is being made to continually improve, update and further organize this body of knowledge. "
  • Current Systems Engineering Practices and Challenges (p. 20-21) practice areas of "Modeling, Simulation, and Visualization", "Design Traceability by Model-Based Systems Engineering" which highlight the growing needs around improved tools and tool interoperability.
  • Leveraging Technology for Systems Engineering Tools (p. 30)
    Discusses the need to move towards a set of tools that allow for: "high fidelity simulation, immersive technologies to support data visualization, semantic web technologies to support data integration, search, and reasoning, and communication technologies to support collaboration. Systems engineering tools will benefit from internet-based connectivity and knowledge representation to readily exchange information with related fields."
I'm hoping to make it to Boston around September 10th to run an OSLC workshop for the INCOSE community, stay tuned.

Thursday, June 12, 2014

Rational User Conference (aka IBM Innovate) Take #10

Last week I attended my 10th (yes, I said one-zero, tenth) Rational User Conference (aka IBM Innovate, aka Rational Software Developer User Conference, aka Rational Software Conference).  It is also the 5th time I have attended while talking about OSLC.  Hard to believe that Mik Kersten and I did the first ever OSLC presentation back in 2009.  It has been interesting to be part of the transition from people hearing "O S L C" and having no idea what it was, to today, where most attendees not only know what it is, they are actively working to build integrations using OSLC, encouraging their other tool suppliers to support it, and participating in various OSLC activities such as specification working groups or general community promotion.  It has transitioned from an unknown new concept to the way we do integrations.  By "we", I'm not just talking about Rational; I'm talking about attendees who were describing how they are using OSLC, such as Airbus, NEC, ...

Though still, many people have a hard time saying or spelling OSLC right (it is a tough one)...most commonly it becomes OSCL.  If only we had pushed to rename it back in 2010 to something like I proposed, SLIC, that would have been...well, "slick".  I digress.

This year, I arrived a couple of days before the official conference started, as it was a good opportunity for those of us very active in OSLC to get together for some face-to-face discussions on OSLC strategy.  This was spearheaded by the Steering Committee (SC).  Out of these early discussions (which were a continuation of ongoing thinking by the community and SC) came the idea of an organizing, higher-level concept of "Integration Patterns".  I threw together a page to articulate the thoughts, propose a way forward and start to gather interest.  This was discussed a couple more times during the week, such as during the OSLC SC discussion at Wednesday's Birds of a Feather session, where it was well received by the attendees.

Sunday afternoon held the Open Technology Summit, where various leaders in open technologies shared how these efforts have helped drive business efficiencies and improve overall delivery time and quality around such things as OpenStack, OSLC, Cloud Foundry, Apache Cordova, ...

I led a panel discussion titled "Best practices on implementing integrated tools" with panelists who have a wide and varied set of experience (I hope to share the recording once I receive it).

After 5 years, Mik and I were reunited to talk about "Lifecycle Tool Integration through Open Interfaces" (though Mik and I have been talking and collaborating this whole time; it wasn't like a band breakup and reunion).

There were many other great conversations, learning how customers are looking to build out their own OSLC implementations by either evolving their in-house tools or building adapters for 3rd-party tools.  The demand continues to grow, and I look forward to continuing to help them succeed by making their integrations happen.

As with many of these conferences, especially ones you've gone to 10 times, it was great to catch up with the many good friends I've made over the years.  Now on to making sure we continue to deliver value and have some cool things to show and talk about next year (oh, and at next week's EclipseCon France event and the INCOSE conference at the end of June).