Tuesday, November 23, 2010

OSLC Change Management 2.0 Specification is now Final!

Looking back, I see that I blogged on June 19, 2009 that the CM 1.0 Specification had reached finalization.  A lot of good hard work has occurred between then and now.  There has been a strong focus on alignment across the various domains, and on applying what was done and learned in the 1.0 specification to define what is now known as the OSLC Change Management 2.0 Specification.

First I want to say thanks to the many contributors to the CM 2.0 specification; obviously, without their dedication and hard work I would not be able to announce it today.  Contributions come in many forms: scenario development, feedback, specification writing, contributions to specification text, implementation feedback, spec issue tracking and on and on.


So, what's new about CM 2.0?  I will only summarize some of the key items and will provide a more detailed writeup later.

  • Alignment - now all domain specifications are based on the same OSLC Core specification.  Most of these areas have enhancements over CM 1.0.
    This covers areas such as:
    • Service discovery
    • RESTful resource interactions
    • Simple query syntax
    • UI Delegation
    • Resource formats
  • UI Preview - the ability to get a minimal rendering of a resource that can be displayed in a tooltip or on hover (a rough sketch follows this list).
  • Expanded ChangeRequest resource definition - many new properties defined, supporting new scenarios as well as properties commonly used across most CM providers
  • Resource Shapes - for creation, for query and for update.
  • Deprecation of resource-specific content types - a greater focus on leveraging standard content types
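
To give a feel for UI Preview, here is a rough sketch of what a compact rendering request and response might look like under OSLC Core 2.0; the URIs, sizes and property values below are illustrative only.

GET http://example.com/bugs/2314
Accept: application/x-oslc-compact+xml

<oslc:Compact
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:dcterms="http://purl.org/dc/terms/"
  xmlns:oslc="http://open-services.net/ns/core#"
  rdf:about="http://example.com/bugs/2314">
  <dcterms:title>Bug 2314: Provide import</dcterms:title>
  <oslc:shortTitle>2314</oslc:shortTitle>
  <oslc:icon rdf:resource="http://example.com/images/defect.png" />
  <oslc:smallPreview>
    <oslc:Preview>
      <oslc:document rdf:resource="http://example.com/bugs/2314/preview" />
      <oslc:hintWidth>400px</oslc:hintWidth>
      <oslc:hintHeight>200px</oslc:hintHeight>
    </oslc:Preview>
  </oslc:smallPreview>
</oslc:Compact>

A consumer can then embed the document referenced by oslc:smallPreview in a tooltip or hover without having to understand the full change request resource.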

I look forward to more CM 2.0 implementation reports and to what lies ahead for the CM domain.

Wednesday, November 17, 2010

OSLC Open Source Proposal

Some OSLC community members are drafting a proposal for a companion open source project in support of the OSLC specification efforts occurring at http://open-services.net

The OSLC Open Source Project is planned to be hosted at SourceForge.net and possibly include:
  • test suites for testing OSLC service provider implementations  *** initial contribution planned by IBM
  • reference implementations of OSLC core and domain services for use in testing OSLC clients *** contribution planned by IBM
  • sample code and applications *** contribution planned by IBM
  • tools, models, pictures, etc. used in the specification process
  • specification artifacts that need to be under version control (e.g. namespace documents)
The proposal and project are going to be defined and maintained by a core set of committers, as described in the proposal.

This project will look to align with other appropriate open source projects such as Eclipse, Apache, etc., if and when needed.  The focus of this OSLC open source project is narrowly on specification and implementation validation.

Please respond to this email thread or reply to this posting.

Tuesday, October 12, 2010

Getting into Shapes: OSLC 3 step program

Since the inception of OSLC there have been many scenarios hinting at the need to describe resources: resources that do not yet exist, and resources that do exist but come in many similar variations.  So let's take a look at what drove the OSLC specifications to create the concept of Resource Shapes to support these scenarios, and at what these Shapes look like.

  • Creation
    Perhaps the most commonly requested scenario supporting the need for Shapes.  Resource creation can also be accomplished by leveraging delegated Web UIs, which hide the complexity of the rules for a successful submission.  Shapes, in contrast, support programmatic creation of resources driven by automated processes, such as monitoring applications that find problems and automatically log them.
    Shape location: within service provider definition of the creation factory.
  • Query
    It is often the case that we need to find something.  We can always navigate a hierarchy of folders and subfolders, or use tags, though this only gets us so far.  Many of the exposed services have differing data models, and these need to be described to support a meaningful query.  Intelligent query builders can then be written to select the criteria to search on and to define which resources and properties to deliver in the response.
    Shape location: within service provider definition of the query capability.
  • Modify
    For any resource in hand (or a resource URI and a representation of it), we'd like to know the allowed properties and property values.  In open systems, though, the shape associated with a given resource could change over time, depend on the values of other properties, or even vary based on which user is currently accessing the resource.
    Shape location: property oslc:instanceShape on subject resource.
If you look hard in a few other places, you will see Shapes referenced from Shapes, and Shapes can also be useful for other purposes.  For example, the Shape associated with the Modify scenario could be used to build a simple resource viewer based on the value types and number of occurrences of the properties.

The current model for Resource Shapes continues the approach we've been following at OSLC: just enough specification to support our scenarios.  The shapes provide some key capabilities for describing resources: the allowed properties, how many times each may occur, any range restrictions, whether a property is required or read-only, its allowed values and so on.  Implementations supporting these scenarios are starting to surface, and we look forward to getting feedback on this support.
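
As a rough sketch only (the shape URI, the single property and its allowed values below are made up for illustration), a shape describing a defect's severity might look something like this in the OSLC Core 2.0 vocabulary:

<oslc:ResourceShape
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:dcterms="http://purl.org/dc/terms/"
  xmlns:oslc="http://open-services.net/ns/core#"
  rdf:about="http://example.com/shapes/defect">

  <dcterms:title>Defect</dcterms:title>
  <oslc:describes rdf:resource="http://open-services.net/ns/cm#ChangeRequest" />
  <oslc:property>
    <oslc:Property>
      <oslc:name>severity</oslc:name>
      <oslc:propertyDefinition rdf:resource="http://example.com/ns/cm#severity" />
      <oslc:occurs rdf:resource="http://open-services.net/ns/core#Exactly-one" />
      <oslc:valueType rdf:resource="http://www.w3.org/2001/XMLSchema#string" />
      <oslc:readOnly>false</oslc:readOnly>
      <oslc:allowedValue>Blocker</oslc:allowedValue>
      <oslc:allowedValue>Major</oslc:allowedValue>
      <oslc:allowedValue>Minor</oslc:allowedValue>
    </oslc:Property>
  </oslc:property>
</oslc:ResourceShape>

For the Modify scenario, a resource would point at a shape like this via its oslc:instanceShape property; for Creation and Query, the shape is referenced from the creation factory or query capability in the service provider document.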

Thursday, July 22, 2010

Resource creation: keep it simple for integration's sake

Often when working with tool providers on exposing their capabilities via OSLC, the discussion ends up on how to deal with all the complexities of that system.  There are rules for what makes a valid change request submission: a required headline, a required found-in component (which in turn may require additional fields), etc., etc., etc.  My advice for exposing these constraints and rules, especially from an integration interface protocol like OSLC, is to shield consumers from these complexities.  I am not advocating that tool providers relax their submission rules or come up with a new way to create change requests from integration APIs (well, maybe I am in a way).

In OSLC there is a concept of creation factories, which provide consumers with a URI to which they can POST some content to create a new resource.  These factories could be based on a concept such as a creation template or a reference change request.  A creation template is basically a blueprint or sample used to pre-fill the properties of a change request when the POST request is missing some pieces.  A reference change request is an existing change request in the change management tool that is duplicated (copied) into a new change request, with properties from the POST overriding the copied fields.  Most common CM tools have both of these capabilities, but I have not seen many of them leveraged for a more simplified creation factory that makes the job a little easier for integrating clients, which then do not have to learn the CM tool's submission constraints.  I'd be interested to hear if anyone has explored these options and their findings.
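
As a sketch of the idea (the factory URI, properties and content type below are illustrative and depend on the specification version in use), a client could then create a defect with nothing more than a title and description, leaving the factory's template or reference change request to fill in everything else:

POST http://example.com/factory/defects
Content-Type: application/rdf+xml

<rdf:RDF
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:dcterms="http://purl.org/dc/terms/"
  xmlns:oslc_cm="http://open-services.net/ns/cm#">
  <oslc_cm:ChangeRequest>
    <dcterms:title>Nightly import job fails on empty files</dcterms:title>
    <dcterms:description>Logged automatically by the monitoring application.</dcterms:description>
  </oslc_cm:ChangeRequest>
</rdf:RDF>

On success the provider would respond with 201 Created and a Location header identifying the new change request.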

To add to this, when service providers advertise their support for creation factories they can associate additional meaning with them, and there can be multiple creation factories per configuration context (service provider resource).  For example, a service provider could indicate which factory is the default one to use, define the rdf:type of the resources created from the factory, link to an associated shape definition, and include other informative pieces such as the intended usage (defects, comments, etc).
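
Sketching this in the OSLC Core 2.0 vocabulary (again with made-up URIs), such an advertisement within a service provider document might look roughly like:

<oslc:CreationFactory
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:dcterms="http://purl.org/dc/terms/"
  xmlns:oslc="http://open-services.net/ns/core#">
  <dcterms:title>Defect creation factory</dcterms:title>
  <oslc:creation rdf:resource="http://example.com/factory/defects" />
  <oslc:resourceType rdf:resource="http://open-services.net/ns/cm#ChangeRequest" />
  <oslc:resourceShape rdf:resource="http://example.com/shapes/defect-creation" />
  <oslc:usage rdf:resource="http://open-services.net/ns/core#default" />
</oslc:CreationFactory>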

Thursday, June 24, 2010

Think Link, not Sync when doing tool integrations

Often when working with ALM tool development teams, partners and customers, there is a need to go through an evolutionary re-thinking of how tool interoperability should work.  When discussing how OSLC can help, the conversation often starts with how they can throw out their current tens of product-specific connectors in favor of a single OSLC API to synchronize data with their tool.  There are cases where pulling data (rather than bi-directional synchronization) has value, for example for efficient data warehouse access in reporting solutions.  But I challenge these tool integrators and vendors to re-think their integrations to instead provide "just enough" information about the other tool being integrated with.  This "just enough" could be as simple as two things: 1) a link to a resource owned by another tool and 2) knowledge of the semantics of that link (OSLC).
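
To illustrate (the property and URIs below are purely illustrative; the CM 2.0 specification defines a set of such cross-tool link properties), a change request in one tool might carry nothing more than a typed link to a test case owned by a quality management tool:

<oslc_cm:ChangeRequest
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:oslc_cm="http://open-services.net/ns/cm#"
  rdf:about="http://cm.example.com/bugs/2314">
  <oslc_cm:testedByTestCase rdf:resource="http://qm.example.com/testcases/881" />
</oslc_cm:ChangeRequest>

The test case itself stays in the tool that owns it; the link and its meaning are all the integration needs to share.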


There are many advantages to leaving the data within the tool that owns it and providing a link to it:
  • The tool that owns it knows best the rules to govern changes to it, including auditing support
  • The tool that owns it knows best how to control access to the data.  When data is copied, access controls often need to be replicated as well (as best as possible)
  • State-models of resources across tools often don't align
  • There are ways to expose this data in other tools, without replicating it
It is sometimes convenient to have a cloned/cached copy of data in a local tool; OSLC does not prohibit this and can support it.

Update:  Also see the interesting and related IBM DeveloperWorks article "Stop copying, start linking".

Wednesday, June 16, 2010

OSLC reference implementations and test suites

There have been a number of implementation efforts underway for various OSLC specifications.
I've heard many requirements including...

for Service Providers:
  • A hosted reference implementation that can react to client consumer requests, to ensure consistent behavior across implementations
  • Make the source code available for download
  • Allow contributions to the source
  • The language it is written in is less important, though some tend towards a JEE-based approach
  • A client test suite that can report a level of compliance
  • Samples that highlight key integration scenarios
  • A framework in which to quickly enable new implementations
for Consumers:
  • A hosted reference implementation that can react to client consumer requests
  • A reference service provider that can provide feedback on consumer implementations (testsuite)
  • Provide a variety of samples
  • Java client samples and/or SDK
  • Command-line or Perl based samples and/or SDK
  • HTML/Javascript samples and/or SDK

Some current efforts are underway.  Some thoughts on the technology basis for a service provider reference implementation:
  • Apache Wink - REST framework
  • Jena
    - RDF/XML, Turtle parsers and generators
    - Apply custom rules for RDF/XML and Turtle
    - Add JSON support
    - Simple storage
    - Extended to support ResourceShapes
    - Query - mapping of query syntax - oslc.where/oslc.select (see the example after this list)
    - Resource subsets - oslc.properties
  • OAuth - Provider only
    (need good consumer example)
  • Service Discovery
    - Various models and combinations of Catalogs and ServiceProviders
  • Web UI
    - Simple example/demo HTML/JS
    - Prefill
    - Via a draft resource creation
    - Via direct prefill and redirect
    - UI Preview
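
For the query mapping item above, the reference implementation would need to translate requests along the following lines into the underlying store's query (a sketch using the OSLC Core 2.0 query parameters; the prefixes, property names and values are illustrative, and the URL is shown unencoded for readability):

GET http://example.com/bugs?oslc.prefix=cm=<http://example.com/ns/cm#>,dcterms=<http://purl.org/dc/terms/>&oslc.where=cm:severity="High" and dcterms:created>"2010-04-01"&oslc.select=dcterms:title,cm:status
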
Dave Johnson posted some thoughts as well here, which are fairly close to these.

Feedback and additional requirements are welcome.

Wednesday, February 3, 2010

CM 1.0 Simple Query Syntax, how we got here

Often I get asked how it was decided what would be included in the CM 1.0 Simple Query Syntax, so I figured I'd take a couple of minutes to summarize some key points. The goals for 1.0 were quite simple: produce a simple syntax that supports our scenarios and that will work with SQL- and SPARQL-based back-ends. One of the most obvious omissions from the query syntax is the "or" operator (or UNION if you come from SPARQL). This was intentional, since the CM WG realized that all scenarios could be accomplished by supporting the "in" operator. By doing this, we avoided the complexity of having to support grouping of terms with parentheses.

The type of query we saw as key to support in 1.0 was something like: show me all open change requests assigned to a specific user
?oslc_cm.query=owner="bob" and status in ["submitted","working"]

Beyond that, most of the other operators are quite common. Another primary scenario was the ability to retrieve the set of change requests that have been modified since a given date, which is supported by combining the Dublin Core property dc:modified with comparison operators, such as:
?oslc_cm.query=dc:modified>="2008-12-02T18:42:30"

This has proven to be very powerful and easy to map to service provider search capabilities. At some point there may need to be support for a full query syntax service, which could be a unique URL endpoint to which queries are posted. We'll have to wait and see how this plays out over time.

Monday, February 1, 2010

Using Selective Properties (aka Partial Fetch)

A frequent issue when retrieving resources and their properties is getting the desired information efficiently. By definition in the CM 1.0 specification, when requesting a change request resource without any parameters to refine the list of properties, you retrieve ALL the properties that resource has. Take for example this simple request:

GET http://example.com/bugs/2314
Accept: application/x-oslc-cm-change-request+xml

This will result in the change request identified by the URL being retrieved:

<oslc_cm:ChangeRequest
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:dc="http://purl.org/dc/terms/"
  xmlns:oslc_cm="http://open-services.net/xmlns/cm/1.0/"
  xmlns="http://myserver/xmlns"
  rdf:about="http://example.com/bugs/2314">

  <dc:title>Provide import</dc:title>
  <dc:identifier>2314</dc:identifier>
  <dc:type>http://myserver/mycmapp/types/Enhancement</dc:type>
  <dc:description>
    Implement the system's import capabilities.
  </dc:description>
  <dc:subject>import, blocker</dc:subject>
  <dc:creator rdf:resource="http://example.com/users/aadams" />
  <dc:modified>2008-09-16T08:42:11.265Z</dc:modified>
  <owner rdf:resource="http://example.com/users/john" />
  <priority>High</priority>
  <severity>High</severity>
  <status>Working</status>
</oslc_cm:ChangeRequest>

Retrieving everything is useful when a consumer doesn't know which properties to request, but it can deliver more information than is needed (information the provider has to generate and the consumer has to process). To limit the properties returned, the concept of selective properties can be used. If the consumer is only interested in the owner and the status, the request could be formulated as:

GET http://example.com/bugs/2314?oslc_cm.properties=owner,status
Accept: application/x-oslc-cm-change-request+xml

Resulting in:

<oslc_cm:ChangeRequest
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:dc="http://purl.org/dc/terms/"
  xmlns:oslc_cm="http://open-services.net/xmlns/cm/1.0/"
  xmlns="http://myserver/xmlns"
  rdf:about="http://example.com/bugs/2314">

  <owner rdf:resource="http://example.com/users/john" />
  <status>Working</status>
</oslc_cm:ChangeRequest>

Perhaps we are interested in the owner's name and email address; we can expand the owner entry and select only those properties to be returned, such as:

GET http://example.com/bugs/2314?oslc_cm.properties=owner{fullname,email},status
Accept: application/x-oslc-cm-change-request+xml

Resulting in:

<oslc_cm:ChangeRequest
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:dc="http://purl.org/dc/terms/"
  xmlns:oslc_cm="http://open-services.net/xmlns/cm/1.0/"
  xmlns="http://myserver/xmlns"
  rdf:about="http://example.com/bugs/2314">

  <owner>
    <User rdf:about="http://example.com/users/john">
      <fullname>John Doe</fullname>
      <email>jdoe@myco</email>
    </User>
  </owner>
  <status>Working</status>
</oslc_cm:ChangeRequest>

This technique is used in CM 1.0 not only for the selective retrieval of properties on a change request resource; it is also used for partial updates of resources, as well as for controlling the content of query responses.
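
For example, a partial update using the same mechanism might look like the following sketch (assuming the provider supports oslc_cm.properties on PUT; only the status property is replaced, and other properties are left untouched):

PUT http://example.com/bugs/2314?oslc_cm.properties=status
Content-Type: application/x-oslc-cm-change-request+xml

<oslc_cm:ChangeRequest
  xmlns:oslc_cm="http://open-services.net/xmlns/cm/1.0/"
  xmlns="http://myserver/xmlns">
  <status>Verified</status>
</oslc_cm:ChangeRequest>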