The technical basis for accessing data through OSLC has its roots in Linked Data, which in turn grew out of Semantic Web technologies. Some worry about the cost of supporting all of this for simple integration scenarios, though OSLC depends on only a small portion of it. Those who have bought into the full stack of Semantic Web technologies for various domain solutions realize that it provides tremendous value, but that the value comes at a cost: reasoners, inferencing, search engines, RDF, and specialized repositories for dealing with these things.
OSLC takes a minimal, incremental approach, depending on only what is needed to satisfy the integration scenarios. So far that has led us to a simple, standard way of describing resources using RDF. That is about where the Semantic Web technology dependency ends. We leverage a few terms from RDF Schema to help define our own vocabulary terms, but we do not go beyond that, as doing so might imply that clients need to process inference rules against the resource representations they receive in order to learn more.
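To illustrate the kind of lightweight RDF Schema usage described above, here is a sketch of a vocabulary definition in Turtle. The `ex:` namespace and the term names are hypothetical, for illustration only; the point is that only `rdfs:Class`, `rdf:Property`, `rdfs:label`, and `rdfs:comment` are needed, with no inference expected of clients.

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.com/vocab#> .

# A hypothetical resource type, described with plain RDF Schema terms.
ex:ChangeRequest a rdfs:Class ;
    rdfs:label   "Change Request" ;
    rdfs:comment "A request for a change, such as a defect or enhancement." .

# A hypothetical property on that type.
ex:status a rdf:Property ;
    rdfs:label   "status" ;
    rdfs:comment "The current workflow state of the change request." .
```

Clients can read the labels and comments to display human-friendly names, but nothing here requires a reasoner to interpret.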
Since a primary goal of OSLC is not to reinvent but to leverage standards-based approaches that meet our requirements, I can see cases where it might make sense to adopt more Semantic Web technologies. To be clear, though: for tools to get value out of OSLC-based integrations, only some RDF syntax readers and writers are needed. There is no need for tools to be rehosted or rewritten onto a new technology base; they can simply adapt their solution with a simple facade, or update their existing REST APIs to provide this support.
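The facade approach can be sketched in a few lines: an existing tool keeps its native payloads and adds a thin layer that renders them as RDF. This is a minimal illustration, not a real OSLC implementation; the vocabulary URI, field names, and mapping below are all hypothetical assumptions.

```python
# Sketch of a "facade" that adds an RDF (Turtle) representation on top of
# an existing REST API's native payload, without rehosting the tool.
# The ex: vocabulary and the field names are hypothetical.

def to_turtle(resource_uri, payload):
    """Render an existing API payload (a dict) as Turtle by mapping each
    native field name to an RDF property."""
    prefixes = (
        "@prefix dcterms: <http://purl.org/dc/terms/> .\n"
        "@prefix ex: <http://example.com/vocab#> .\n\n"
    )
    # Map the tool's native field names to RDF properties (assumed mapping).
    field_map = {"title": "dcterms:title", "status": "ex:status"}
    triples = [
        '<%s> %s "%s" .' % (resource_uri, prop, payload[field])
        for field, prop in field_map.items()
        if field in payload
    ]
    return prefixes + "\n".join(triples) + "\n"

# Example: the tool's existing JSON-style payload, left unchanged.
bug = {"title": "Crash on save", "status": "Open"}
print(to_turtle("http://example.com/bugs/42", bug))
```

In practice the facade would also handle content negotiation (serving Turtle or RDF/XML only when the client asks for it), so existing clients of the REST API are unaffected.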