Thursday, June 24, 2010

Think Link, not Sync when doing tool integrations

When working with ALM tool development teams, partners and customers, there is often a need to go through an evolutionary re-thinking of how tool interoperability should work.  When discussing how OSLC can help, the conversation usually starts with how they can throw out their current tens of product-specific connectors in favor of a single OSLC API to synchronize data with their tool.  There are cases where pulling data (not bi-directional synchronization) has value, such as efficient data warehouse access for reporting solutions.  Still, I challenge these tool integrators and vendors to re-think their integrations to instead provide "just enough" information about the other tool being integrated with.  This "just enough" could be as simple as two things: 1) a link to a resource owned by another tool and 2) knowing the semantics of that link (OSLC).
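To make "just enough" concrete, here is a minimal sketch (in Python, using the requests library, with a made-up resource URI and relationship label) of what a consuming tool might keep and do: store only the link and its meaning, and dereference the link for an RDF representation when the details are actually needed.

    import requests

    # All the consuming tool stores locally: the link and what it means.
    # The URI and relationship label are hypothetical examples.
    link = {
        "uri": "https://cm.example.com/changerequests/1234",
        "relationship": "implements requirement",
    }

    def fetch_linked_resource(uri):
        # Dereference the link only when details are needed; the owning
        # tool remains the source of truth and enforces its own access
        # control, since the request carries the caller's credentials.
        response = requests.get(
            uri,
            headers={"Accept": "application/rdf+xml"},
            auth=("user", "password"),  # placeholder credentials
            timeout=10,
        )
        response.raise_for_status()
        return response.text  # RDF describing the remote resource

    print(fetch_linked_resource(link["uri"]))

Nothing about the remote resource is copied into the local tool; only the link and its semantics live there.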


There are many advantages to leaving data within the tool that owns it and providing a link to it:
  • The tool that owns the data knows best the rules that govern changes to it, including auditing support
  • The tool that owns the data knows best how to control access to it; when data is copied, the access controls often need to be replicated as well (as best as possible)
  • State models of resources in different tools often don't align
  • There are ways to expose this data in other tools without replicating it
It is sometimes convenient to have a cloned/cached copy of the data in a local tool; OSLC does not prohibit this and can support it.
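For those cases where a cached copy is convenient, one compromise is to cache fetched representations briefly rather than synchronize them. A rough sketch (again Python with the requests library; the time-to-live value is an arbitrary example):

    import time
    import requests

    CACHE_TTL_SECONDS = 300  # arbitrary example value
    _cache = {}  # uri -> (fetched_at, representation)

    def get_resource(uri):
        # Serve a recent copy if we have one; otherwise re-fetch from the
        # owning tool, which remains the single source of truth.
        entry = _cache.get(uri)
        if entry and time.time() - entry[0] < CACHE_TTL_SECONDS:
            return entry[1]
        response = requests.get(
            uri, headers={"Accept": "application/rdf+xml"}, timeout=10
        )
        response.raise_for_status()
        _cache[uri] = (time.time(), response.text)
        return response.text

The difference from synchronization is that the cache is disposable: if it is stale or lost, the link can simply be followed again.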

Update:  Also see the interesting and related IBM DeveloperWorks article "Stop copying, start linking"

2 comments:

  1. These are all good arguments for the OSLC integration approach.
    We must keep in mind that we will get a new kind of development resource: "the link". It was not a "first class citizen" in previous approaches.

    I see the following kind of discussion emerging: which tool governs *the links*?
    In which repository should they be created and managed? Or are they "owned" by some glue application that sits between the tools (the "ALM" process)?
    How are we going to baseline/version endpoint resources and the links between them?
    etc...

    I feel that OSLC is the best approach so far, but there are still some points we need to solve.

  2. The idea of a link repository or registry can have many drawbacks as well. It starts to head toward a central repository where all the links are managed. Instead, think of tools as simply being owners of resources that have relationships (links) to other resources, which may or may not be in other repositories. There may be certain applications that can provide a "baseline" resource, with links to the resources it contains.

    It would be good to define the key integration scenarios here and what techniques and approaches can be taken.
