The Semantic Web (Web 3.0 to some) is difficult to implement: scientists and linguists have not agreed on a methodology, and technical experts have thus far implemented it poorly. There are no answers, and for the moment the debate belongs in academia.
This is annoying. A semantic web, a web of context-relevant metadata, is tangibly near. Look at what we have:
- converted tag soup to XHTML within a few years, with general adherence to standards
- the separation of structure, content, and design baked into all current web software
- user-tagging of content
- context-based translation
- metadata about metadata about metadata
- other browser software that can intelligently order content
- mash-ups, with tagging not only to make sense of content but to intertextualise it
In my field, investor relations, analysts and market investors thrive on data: not only mealy-mouthed press releases but actual figures and bottom lines. Their analysis directly impacts the market and other business decisions.
Problems with comparing figures from different accounting jurisdictions (countries) skew analysis. That's why XBRL was proposed. Based on XML, it tags line items with context and metadata. Adoption has been patchy so far, but it's a solution with buy-in and with a schema. I contrast this market-driven approach with the academic one deliberately.
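To make the idea concrete, here is a minimal sketch of what "tagging line items with context and metadata" buys you. The XML fragment is simplified and the element names are hypothetical, not a real XBRL taxonomy; the point is that each figure carries its entity, period, and currency, so figures from different jurisdictions can be compared programmatically.

```python
import xml.etree.ElementTree as ET

# Simplified, XBRL-style report: element names are illustrative,
# not from a real taxonomy. Each fact references a context (who, when)
# and declares a unit (currency).
DOC = """
<report>
  <context id="FY2006">
    <entity>ExampleCorp</entity>
    <period>2006</period>
  </context>
  <fact name="Revenue" contextRef="FY2006" unit="EUR">1200000</fact>
  <fact name="NetIncome" contextRef="FY2006" unit="EUR">150000</fact>
</report>
"""

root = ET.fromstring(DOC)

# Resolve each context id to its reporting entity.
contexts = {c.get("id"): c.findtext("entity") for c in root.findall("context")}

# Each line item becomes a machine-comparable (value, unit, entity) triple.
facts = {
    f.get("name"): (int(f.text), f.get("unit"), contexts[f.get("contextRef")])
    for f in root.findall("fact")
}

print(facts["Revenue"])  # → (1200000, 'EUR', 'ExampleCorp')
```

Once figures are structured like this, an analyst's tool can normalise currencies or periods before comparing companies, instead of scraping numbers out of prose.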
I feel the academic community won't accept this. If anything, you can see from that list of technologies that they are crying out for someone to tie them all together, perhaps with a natural-language layer and an API or two.
In the spirit of the 90s: if your HTML is not pushing the limits, we'll create messy HTML that will. If business wants a v3.0 with the benefits of refined metadata, business will get a v3.0. The W3C is about standards; it needs to wrest this issue from the academics and sandbox it for the developers who are defining the web anew. Daily.