<HTML>
<HEAD>
<TITLE>Indexing and retrieving Semantic Web resources: the RDFStore model</TITLE>
<LINK REL="meta" TYPE="application/rdf+xml" TITLE="FOAF" HREF="http://www.asemantics.net/presos/SWAD-E/SWADe-rdfstore.rdf" />
</HEAD>
<BODY BGCOLOR="#ffffff">
<CENTER>
<H1>Indexing and retrieving Semantic Web resources: the RDFStore model</H1>
<BR><CITE><a href="mailto:alberto@asemantics.com">Alberto Reggiori</a>, <a href="mailto:dirkx@asemantics.com">Dirk-Willem van Gulik</a>, <a href="mailto:z@asemantics.com">Zavisa Bjelogrlic</a><BR><BR><a href="http://www.asemantics.com">Asemantics</a> S.r.l., Milan, Rome - Italy, Leiden - Netherlands<BR>{alberto, dirkx, z}@asemantics.com</CITE>
</CENTER>

<H2>Abstract</H2>

The Semantic Web is a logical evolution of the existing Web. It is based on a common conceptual data model of great generality that allows both humans and machines to work with interrelated, but disjoint, information as if it were a single global database. The design and implementation of a general, scalable, federated and flexible data storage and indexing model, which corresponds to the data model of the Semantic Web, is fundamental to the success and deployment of such a system. The generality of the RDF data model presents unique challenges to efficient storage, indexing and querying engines. This paper presents our experience and work on RDFStore, which implements a new flexible indexing and query model. The model is tailored to RDF data and is designed around the Semantic Web from the ground up. The paper describes the underlying indexing algorithm, together with comparisons to other existing RDF storage and query strategies.


<H2>Towards a lightweight database architecture</H2>

The generality of the RDF data model presents unique challenges to efficient storage, indexing and querying software. Even though the relational data model <a href="#1">[1]</a> is the dominant technology for database management systems today, it has limitations in modeling RDF constructs.<BR><BR>
RDF being unbounded, the resulting data structures are irregular, expressed at different levels of data granularity, deeply nested or even cyclic. As a consequence, it is not possible to easily fix the "structural view" of a piece of information (object), which is instead one of the foundations of traditional RDBMS systems: these try to be as narrow and precise as possible, and an update not conforming to a single static schema is rejected. Database systems also optimize data storage and retrieval by knowing ahead of time how records are structured and interrelated, and tend to fall back on very inefficient nested SQL SELECT statements to process nested and cyclic structures. <BR>
All this is too restrictive for RDF data. Like most semi-structured formalisms <a href="#2">[2]</a><a href="#3">[3]</a> RDF is self-describing. This means that the schema information is embedded with the data, and no a priori structure can be assumed, giving a lot of flexibility to manage any data and deal with changes in the data's structure seamlessly at the application level. The only basic structure available is the RDF graph itself, which allows describing RDF vocabularies as groups of related resources and the relationships between these resources <a href="#4">[4]</a>. All new data can be "safely" accepted, possibly at the cost of tailoring the queries to the data. On the other hand, RDF data management systems must be much more generic and polymorphic, like most dynamically-bound object-oriented systems <a href="#5">[5]</a>; changes to the schema are expected to be as frequent as changes to the data itself and could happen while the data is being processed or ingested.<BR><BR>

A drawback of RDF heterogeneity is that the schema is relatively large compared to the data itself <a href="#6">[6]</a>; this in contrast to traditional RDBMSes, where the data schema is generally several orders of magnitude smaller than the data. This also implies that RDF queries over the schema information are as important as queries over the data. Another problem is that most RDF data (e.g. metadata embedded into an HTML page or an RSS1.0 news feed) might exist independently of the vocabulary schemas used to mark up the data, further complicating data structure and semantics "validation" (RDF Schema validation). This de-coupling aspect also makes data "de-normalization" more difficult <a href="#7">[7]</a><a href="#8">[8]</a><a href="#9">[9]</a>. "De-normalization" is needed in RDBMSes to overcome query performance penalties caused by very general "normalized" schemas. De-normalization must be done taking into account how the database will be used and how the data is initially structured. In RDF this is not generally possible, unless all the RDF Schema definitions of the classes and properties used are known a priori and available to the software application. Even if that might sometimes be the case, it is not a general rule, and relying on it would be too restrictive and make RDF applications extremely fragile. In the simplest and most general case, RDF software must associate the semantics of a given property exclusively with the unique string representation of its URI. This of course will not stop more advanced and intelligent software from going a step further and retrieving, if available, the schema of the associated namespace declarations for validation, optimization or inference purposes. <BR>
It is interesting to point out that a large part of the queries foreseen for Web applications are information discovery and retrieval queries (e.g. Google-style) that can "ignore" the data schema taxonomy. Simply browsing through the RDF data itself, searching for some sub-string within literals, or using common patterns is generally enough for a large family of RDF applications.<BR>
On the other hand, we strongly believe that RDBMS has proven to be a very effective and efficient technology to manage large quantities of well-structured data. This will continue to be true for the foreseeable future. We thus see RDF and similar less rigid, or semi-structured data technologies as complementary to traditional RDBMS systems. We expect to see RDF increasingly appear in the middle layer where lightweight systems that focus on interoperability, flexibility and a certain degree of decoupling of rigid formats are desired.<BR>
We believe that a fundamentally different storage and query architecture is required to support the efficiency and the flexibility of RDF and its query languages.<BR><BR>

At a minimum, such a storage system needs to:
<UL>
<LI>Be lightweight</LI>
<LI>Implement the graph natively</LI>
<LI>Be fundamentally independent of the data structure</LI>
<LI>Allow for very wide ranges in value sizes, where the size distribution is not known in advance, is most certainly not Gaussian, and will fluctuate wildly.</LI>
<LI>Be efficient - it should not be necessary to retrieve very large volumes of data in order to reconstruct part of the graph.</LI>
<LI>Have built-in support for arbitrarily complex regular-path-expressions on the graph, to match the triple-patterns of RDF queries such as RDQL <a href="#50">[50]</a> statements.</LI>
<LI>Have some free-text support</LI>
<LI>Support context/provenance/scope or flavoring of triples</LI>
</UL>

Furthermore, given that RDF and the Semantic Web are relatively new and will require significant integration and experimentation, it is important that their technology matches that of the Internet:<BR>

<UL>
<LI>Easy to interface to C, Perl and Java at the very least. Ruby, Python, Visual Basic and .NET are a plus.</LI>
<LI>Easy to distribute (part of) the solution across physical machines or locations, in order to match the scaling and operational habits of existing key Internet infrastructure.</LI>
<LI>Very resistant to "missing links" and other noise.</LI>
</UL>

<H2>Contexts and provenance</H2>

An RDF statement represents a fact that is asserted as true in a certain context: space, time, situation, scope, etc. The circumstances under which the statement has been stated represent its "contextual" information <a href="#10">[10]</a><a href="#11">[11]</a>.<BR>

For example, it may be useful to track the origin of triples added to the graph, e.g. the URI of the source where the triples are defined (such as an RDF/XML file), when and by whom they were added, and the expiration date (if any) for the triples. Such context and provenance information can be thought of as an additional and orthogonal dimension to the other components of a triple; in the literature the concept is known as "statement reification". Context and provenance are currently not included in the RDF standardisation process <a href="#48">[48]</a><a href="#49">[49]</a>, but will hopefully be addressed in a next release of the specification.<BR>

From the application developer's point of view there is a clear need for such primitive constructs to layer different levels of semantics on top of RDF which cannot be represented in the RDF triples space. Applications normally need to build meta-levels of abstraction over triples to reduce complexity and provide incremental and scalable access to information. For example, if a Web robot is processing and syndicating news coming from various on-line newspapers, there will be overlap. An application may decide to filter the news based not only on a timeline or some other property, but perhaps select only sources providing certain information with unique characteristics. This requires flagging triples as belonging to different contexts and then describing in the RDF itself the relationships between the contexts. At query time such information can then be used by the application to define a search scope to filter the results.<BR>

There have been several attempts <a href="#12">[12]</a><a href="#13">[13]</a><a href="#14">[14]</a><a href="#15">[15]</a><a href="#53">[53]</a><a href="#54">[54]</a> to formalize and use context and provenance information in RDF, but a common agreement has not been reached yet. However, the need for context and provenance information comes up as soon as a real application is built using RDF. Some first examples are presented below.<BR><BR>

Our approach to modeling contexts and provenance has been simpler, and is motivated by real-world RDF applications we have developed <a href="#16a">[16a]</a><a href="#16b">[16b]</a><a href="#16c">[16c]</a>. We found that an additional dimension to the RDF triple can be useful or even essential. Given that the usage of full-blown RDF reification is not feasible due to its verbosity and inefficiency, we developed a different modeling technique that flags or marks a given statement as belonging to a specific context.

The first example considers subjective assertions. The Last Minute News (LMN) <a href="#16b">[16b]</a> and The News Blender (NB) <a href="#16c">[16c]</a> demos allow a user to rate and qualify the source - a newspaper. The user can "say" that a newspaper is "liberal" or "conservative". Of course, two users, X and Y, may express two different opinions. Without considering the context, this will result in two triples:

<PRE>
Newspaper A -> Quality -> "liberal"
Newspaper A -> Quality -> "conservative"
</PRE>

These will later be interpreted as two quality properties assigned to the same newspaper. In reality, these triples were defined in different contexts (which can also be expressed as triples):

<PRE>
Quality -> Defined by -> User X : Newspaper A -> Quality -> "liberal"
Quality -> Defined by -> User Y : Newspaper A -> Quality -> "conservative"
</PRE>

The next example of context needed in a Semantic Web application comes from the Image ShowCase (ISC) <a href="#16a">[16a]</a>, where RDF/XML descriptions of resources are used. RDF files are created by parsing free-formatted HTML pages. Once triples are created, some turn out to be wrong (or incomplete) and need to be edited and corrected. After the correction of the original RDF description A (producing e.g. A1), a new set of triples is created, but these are added to the old ones created from A. This results, e.g. after a correction of the Track attribute, in something like:

<PRE>
Image X -> Track -> 3333
Image X -> Track -> 3334
</PRE>

A more correct representation considers the source from which the triples are derived:

<PRE>
Source RDF -> is -> A   : Image X -> Track -> 3333
Source RDF -> is -> A1  : Image X -> Track -> 3334
</PRE>

This type of context allows removing (or ignoring) all triples created from the source RDF file A and replacing them with the ones from source file A1.<BR><BR>

The last example is again taken from the Last Minute News (LMN) demo system <a href="#16b">[16b]</a>. One of the newspapers reuses URLs of articles day after day, so an article - a resource - disappears after 24 hours. Triples created for an article A, identified by URLa, will be mixed with triples for article B, published a day later and identified by the same URLa. In this case we can say the original resource expires after 24 hours, and we can decide, e.g., to make triples expire after some time. Of course, different solutions can be used here, like referencing the article "by description" instead of using its URI, or saving old articles; but in any of these solutions we will need information about the time when triples were defined and their expiration date. It should be noted that this is not an isolated case: in any realistic application the time component must be considered. Even if assertions about a resource can be "eternal truths", the subject - the resource described - may expire or disappear after some time.<BR><BR>
A simpler example, with fewer theoretical implications, is simple house-keeping, where knowledge about a domain can be refreshed from time to time. In this case the existence of an older triple must be considered, to avoid mixing the same triple defined at different times.<BR>
Another important example of the usage of provenance and contextual information is the digital signing of RDF triples to provide a basic level of trust over the Semantic Web. In that case triples could be flagged, for example, with a PGP key to uniquely identify the source and its properties.

<H2>Related work</H2>

Several groups have developed technology to store RDF nodes, arcs and labels in database management systems like PostgreSQL, MySQL, Oracle, IBM DB2, Interbase (and many others) and significant progress has been made. Examples are: <a href="#17">[17]</a><a href="#18">[18]</a><a href="#19">[19]</a><a href="#20">[20]</a><a href="#21">[21]</a><a href="#22">[22]</a><a href="#23">[23]</a><a href="#24">[24]</a><a href="#25">[25]</a><a href="#26">[26]</a><a href="#27">[27]</a><a href="#28">[28]</a>. Each design strikes a careful balance between flexibility, scalability, query facilities, efficiency and optimization. The major drawback of these systems is that they force RDF data into a few tables with a lot of rows, resulting in multiple joins and slow retrieval times. Even though joins are relatively cheap in modern databases, the number of disk operations and query requests remains very high, even for the simplest of requests. The number of 'sub queries' needed to satisfy a single RDF query is often several orders of magnitude larger than commonly seen in RDBMS applications. Also, to save space, DBAs very often design tables using a significant number of indirect references, deferring to the application, or a stored-procedure layer, the expansion of the operation into large numbers of additional joins just to store, retrieve or delete a single atomic statement from the database while maintaining consistency. Existing RDF implementations using object-relational database models have partially overcome this limitation by de-normalizing tables into more domain-specific objects using RDF Schema taxonomy information. Such models are generally easier and more efficient to store and query, but less flexible <a href="#20">[20]</a><a href="#29">[29]</a>. The key limitation of such an approach is the requirement to bundle the complete machine-readable schema definition (RDF Schema), a priori, with the data in order to best structure the tables. As already explained, this will rarely be the case in the RDF world.<BR>
Due to the fact that most data is already stored in traditional RDBMSes, other approaches simply turn the problem of storing RDF efficiently upside down by producing various mapping schemas to export SQL data into RDF <a href="#30">[30]</a><a href="#31">[31]</a><a href="#32">[32]</a>. Other, more XML-oriented systems focus on providing document-centric views of RDF/XML from relational tables <a href="#33">[33]</a>. Recently, so-called "query rewriters" have started to appear; these function by directly mapping RDF graph queries inside applications (e.g. RDQL statements) to SQL SELECT statements on the native RDBMS, and as such leave the problem of storage efficiency, query optimization heuristics and performance to the RDBMS - leveraging existing database experience. Significant progress has been made in this area <a href="#34">[34]</a><a href="#35">[35]</a><a href="#36">[36]</a><a href="#37">[37]</a>. The SWAD-Europe project deliverable from Dave Beckett <a href="#38">[38]</a> is a good introduction.<BR>
BerkeleyDB <a href="#39">[39]</a> is a good example of a lightweight, portable and fast data management solution; it is suited for applications having to deal with a large amount of unbounded data of any type and of arbitrary complexity. Having no notion of schema, unlike traditional relational databases, BerkeleyDB is ideal for applications that cannot predict in advance the access and query patterns to the actual data, while needing high performance in the storage of records. The key/value paradigm at the base of the database library also turns out to be quite natural for storing and retrieving RDF descriptions of resources having properties with specific property values. The most common implementations use multiple B-Tree tables with duplicates, containing pre-canned indexes of outbound nodes from a resource with a given arc, inbound nodes with a given arc and destination, and the arcs between two given nodes; some implementations have additional indexes for inbound and outbound arcs of particular nodes or statement contexts <a href="#18">[18]</a><a href="#40">[40]</a><a href="#41">[41]</a><a href="#42">[42]</a>. Even though such indexes are quite fast to access and to write, they support only a small subset of the possible index combinations, and some queries cannot be answered; such indexes also use a flat storage layout which is in general not compact, wasting space by storing repeated strings. Free-text indexing of literal values is not trivial to implement unless an off-the-shelf database or free-text indexing software is used. <BR>
Other approaches provide very efficient and scalable RDF storage support by using memory-mapped files on disk and ad-hoc designed in-memory data structures to hash properties and their values <a href="#43">[43]</a><a href="#44">[44]</a>.  Such systems can index data quite efficiently and the size of the database can grow to the maximum address space without consuming swap space. Possible queries are limited and free-text search can only be obtained by installing third party packages.

<H2>Our solution</H2>

We have been developing a new BerkeleyDB-based hashed storage which, unlike most other hashed indexers and SQL approaches, uses various compressed multi-dimensional sparse-matrix inverted indexes instead of B-Trees. Such indexes map the RDF nodes, contexts and free-text words contained in the literals to statements. There are several advantages to this approach. First, the use of a hybrid run-length and variable-length encoding to compress the indexes makes the resulting data store much more compact. Second, the use of bitmaps and Boolean operations allows matching arbitrarily complicated queries with conjunction, disjunction and free-text words without resorting to backtracking and recursion techniques. Third, this technique gives fine-grained control over the actual database content.<BR>
Although our implementation of RDF indexing and storage uses custom developed software, this does not limit the potential use of the algorithm in other storage software or commercial RDBMSes. In fact our technique does not dictate a particular storage architecture beyond requiring (efficient) key/arbitrary-length-value pair operations such as GET, STORE, INCREMENT, DECREMENT and DELETE on a small number of tables.

<H2>Modeling the RDF graph data</H2>

An RDF Directed-Labeled-Graph (DLG) consists of nodes that are either resources or literals (valid Unicode strings); resources can be URI references or anonymous resources (bNodes). Nodes in a graph are connected via named arcs, each labeled by a resource with a valid URI drawn from one or more vocabularies. A statement or triple (or fact) consists of a subject node (URI or bNode), a predicate node (URI) and an object (URI, bNode or string). Optionally, even if it is outside the RDF model, a context node (URI or bNode) might be associated with a triple.<BR><BR>

In this section we present a complete example taken from the RDF/XML Syntax Specification (Revised) <a href="#45">[45]</a> and show how it can be stored and indexed in a generic hashed storage using ad-hoc designed tables to store the RDF information. By generic hashed storage we mean a system supporting some hash-addressing algorithm which provides direct retrieval of a specific row of a table or matrix. BerkeleyDB systems support hashed storage natively. Other traditional database management systems, like Oracle, allow structuring ad-hoc database structures as hashed storage. Most programming languages such as Perl, Java and C++ also support hash tables, which could be used to store the RDF as an in-memory structure that can be saved to secondary storage with one of several available serialisation techniques. Another option might be to use some XML-based vocabulary to mark up an ad-hoc hash table and use an XML toolkit to manage the structure. In particular, we will use one extra table to support free-text indexing of the Unicode strings contained in RDF literal values.<BR><BR>
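As a concrete illustration, the minimal Perl sketch below ties a hash to an on-disk BerkeleyDB hash table via the standard DB_File module; the table and file names are illustrative and not RDFStore's actual schema, but the STORE/GET/DELETE operations are exactly the kind of key/value primitives the model requires.<BR>

<PRE>
use strict;
use warnings;
use DB_File;
use Fcntl qw(O_RDWR O_CREAT);

# tie a Perl hash to an on-disk BerkeleyDB hash table ("generic hashed storage")
my %statements;
tie %statements, 'DB_File', 'statements.db', O_RDWR|O_CREAT, 0666, $DB_HASH
    or die "cannot tie statements.db: $!";

$statements{'st0'} = 0;          # STORE: statement hash code -> st_num
print $statements{'st0'}, "\n"   # GET: direct retrieval by key, no joins
    if exists $statements{'st0'};
delete $statements{'st0'};       # DELETE: remove a single entry

untie %statements;
</PRE>
<BR>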

Here are the RDF/XML <a href="#45">[45]</a> and N-Triples <a href="#46">[46]</a> renditions, and the graphical representation, of Example 7 from the RDF/XML Syntax Specification (Revised) <a href="#45">[45]</a>:

<PRE>
&lt;?xml version="1.0"?>
&lt;rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:ex="http://example.org/stuff/1.0/">
  &lt;rdf:Description rdf:about="http://www.w3.org/TR/rdf-syntax-grammar"
		   dc:title="RDF/XML Syntax Specification (Revised)">
    &lt;ex:editor>
      &lt;rdf:Description ex:fullName="Dave Beckett">
	&lt;ex:homePage rdf:resource="http://purl.org/net/dajobe/" />
      &lt;/rdf:Description>
    &lt;/ex:editor>
  &lt;/rdf:Description>
&lt;/rdf:RDF>
</PRE>

This gives the following N-Triples:<BR>

<PRE>
&lt;http://www.w3.org/TR/rdf-syntax-grammar> &lt;http://purl.org/dc/elements/1.1/title> "RDF/XML Syntax Specification (Revised)" .
_:genid1 &lt;http://example.org/stuff/1.0/fullName> "Dave Beckett" .
_:genid1 &lt;http://example.org/stuff/1.0/homePage> &lt;http://purl.org/net/dajobe/> .
&lt;http://www.w3.org/TR/rdf-syntax-grammar> &lt;http://example.org/stuff/1.0/editor> _:genid1 .
</PRE>

See the picture below for a graphical representation of the resulting RDF graph:<BR><BR>

<img src="figure1.png"><BR>

<BR>
<TABLE BORDER="1">
<TR><TH ALIGN="center">st_num</TH><TH>statement</TH></TR>
<TR><TD ALIGN="center">0</TD><TD>&lt;http://www.w3.org/TR/rdf-syntax-grammar> &lt;http://purl.org/dc/elements/1.1/title> "RDF/XML Syntax Specification (Revised)" .</TD></TR>
<TR><TD ALIGN="center">1</TD><TD>_:genid1 &lt;http://example.org/stuff/1.0/fullName> "Dave Beckett" .</TD></TR>
<TR><TD ALIGN="center">2</TD><TD>_:genid1 &lt;http://example.org/stuff/1.0/homePage> &lt;http://purl.org/net/dajobe/> .</TD></TR>
<TR><TD ALIGN="center">3</TD><TD>&lt;http://www.w3.org/TR/rdf-syntax-grammar> &lt;http://example.org/stuff/1.0/editor> _:genid1 .</TD></TR>
</TABLE>

<BR>

Each statement gets a sequential number, starting from zero; each statement number then has an associated bit number (offset), which acts as a kind of unique key for the given component. It turns out to be handy to interpret such offsets in different ways in different tables. A statement context (statement group) is represented by a resource with a specific URI or bNode nodeID.<BR><BR>

In the example above we assume that each statement has some provenance URI (statement group or context), which for simplicity we take to be 'http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf' - the above N-Triples then map to Quads <a href="#13">[13]</a> as follows:<BR>

<BR>
<TABLE BORDER="1">
<TR><TH ALIGN="center">st_num</TH><TH>quad</TH></TR>
<TR><TD ALIGN="center">0</TD><TD>&lt;http://www.w3.org/TR/rdf-syntax-grammar> &lt;http://purl.org/dc/elements/1.1/title> "RDF/XML Syntax Specification (Revised)" &lt;http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf> .</TD></TR>
<TR><TD ALIGN="center">1</TD><TD>_:genid1 &lt;http://example.org/stuff/1.0/fullName> "Dave Beckett" &lt;http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf> .</TD></TR>
<TR><TD ALIGN="center">2</TD><TD>_:genid1 &lt;http://example.org/stuff/1.0/homePage> &lt;http://purl.org/net/dajobe/> &lt;http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf> .</TD></TR>
<TR><TD ALIGN="center">3</TD><TD>&lt;http://www.w3.org/TR/rdf-syntax-grammar> &lt;http://example.org/stuff/1.0/editor> _:genid1 &lt;http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf> .</TD></TR>
</TABLE>
<BR>

For efficient storage and retrieval of statements and their components, we assume there exist hash functions which generate a unique CRC64 (or similar) integer for a given MD5 or SHA-1 cryptographic digest of the statements and nodes of the graph, as follows:<BR>

<PRE>

st0 = get_statement_hashCode('&lt;http://www.w3.org/TR/rdf-syntax-grammar>
                               &lt;http://purl.org/dc/elements/1.1/title>
                               "RDF/XML Syntax Specification (Revised)"
                                &lt;http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf> .' )
s0 = get_node_hashCode( 'http://www.w3.org/TR/rdf-syntax-grammar' )
p0 = get_node_hashCode( 'http://purl.org/dc/elements/1.1/title' )
o0 = get_node_hashCode( 'RDF/XML Syntax Specification (Revised)' )

st1 = get_statement_hashCode('_:genid1
                               &lt;http://example.org/stuff/1.0/fullName>
                               "Dave Beckett"
                               &lt;http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf> .' )
s1 = get_node_hashCode( '_:genid1' )
p1 = get_node_hashCode( '&lt;http://example.org/stuff/1.0/fullName>' )
o1 = get_node_hashCode( '"Dave Beckett"' )

st2 = get_statement_hashCode('_:genid1
                               &lt;http://example.org/stuff/1.0/homePage>
                               &lt;http://purl.org/net/dajobe/>
                               &lt;http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf> .' )
s2 = get_node_hashCode( '_:genid1' ) = s1
p2 = get_node_hashCode( '&lt;http://example.org/stuff/1.0/homePage>' )
o2 = get_node_hashCode( '&lt;http://purl.org/net/dajobe/>' )
 
st3 = get_statement_hashCode('&lt;http://www.w3.org/TR/rdf-syntax-grammar>
                               &lt;http://example.org/stuff/1.0/editor>
                               _:genid1 
                               &lt;http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf> .' )
s3 = get_node_hashCode( '&lt;http://www.w3.org/TR/rdf-syntax-grammar>' ) = s0
p3 = get_node_hashCode( '&lt;http://example.org/stuff/1.0/editor>' )
o3 = get_node_hashCode( '_:genid1' ) = s1 = s2

c0 = get_node_hashCode( '&lt;http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf>' )
</PRE>
<BR>
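A minimal sketch of such a function, assuming a 64-bit Perl and taking the first 8 bytes of an MD5 digest as the integer key (RDFStore's actual get_node_hashCode may differ - the text only requires a CRC64-like code over an MD5 or SHA-1 digest):<BR>

<PRE>
use strict;
use warnings;
use Digest::MD5 qw(md5);

sub get_node_hashCode {
    my ($node) = @_;
    # first 8 bytes of the 16-byte binary MD5 digest, read as an
    # unsigned 64-bit integer ('Q' requires a 64-bit Perl)
    return unpack('Q', substr(md5($node), 0, 8));
}

printf "s0 = %u\n",
    get_node_hashCode('http://www.w3.org/TR/rdf-syntax-grammar');
</PRE>
<BR>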

Here is a list of the hash tables used, with a brief explanation of each:<BR>

<BR>
<TABLE BORDER="1">
<TR><TH ALIGN="center">table</TH><TH>description</TH><TH>notes</TH></TR>
<TR><TD ALIGN="center">MODEL</TD><TD>Used to store statement counters and other model misc options (free-text, source URI, compression algorithms and so on)</TD><TD></TD></TR>
<TR><TD ALIGN="center">STATEMENTS</TD><TD>Maps statements hash codes to sequential integer numbers of statements st(n)</TD><TD>It is generally needed to quickly check whether or not a statement is already stored in the db</TD></TR>
<TR><TD ALIGN="center">NODES</TD><TD>Contains all the actual content more or less like in the N-Triples<a href="#46">[46]</a>/Quads<a href="#13">[13]</a> syntax</TD><TD></TD></TR>
<TR><TD ALIGN="center">SUBJECTS</TD><TD>Maps a subject resource node to a certain st(n)</TD><TD>Sparse matrix with compression</TD></TR>
<TR><TD ALIGN="center">PREDICATES</TD><TD>Maps a predicate resource node to a certain st(n)</TD><TD>Sparse matrix with compression</TD></TR>
<TR><TD ALIGN="center">OBJECTS</TD><TD>Maps an object node to a certain st(n)</TD><TD>Sparse matrix with compression</TD></TR>
<TR><TD ALIGN="center">CONTEXTS</TD><TD>Maps resources representing contexts to st(n)</TD><TD>Sparse matrix with compression</TD></TR>
<TR><TD ALIGN="center">LANGUAGES</TD><TD>Maps an object node xml:lang to a certain st(n)</TD><TD>Sparse matrix with compression</TD></TR>
<TR><TD ALIGN="center">DATATYPES</TD><TD>Maps a resource node which is the rdf:datatype of an object node to a certain st(n)</TD><TD>Sparse matrix with compression</TD></TR>
<TR><TD ALIGN="center">S_CONNECTIONS</TD><TD>Maps a subject resource node to statements directly or indirectly connected to it</TD><TD>Sparse matrix with compression</TD></TR>
<TR><TD ALIGN="center">P_CONNECTIONS</TD><TD>Maps a predicate resource node to statements directly or indirectly connected to it</TD><TD>Sparse matrix with compression</TD></TR>
<TR><TD ALIGN="center">O_CONNECTIONS</TD><TD>Maps an object node to statements directly or indirectly connected to it</TD><TD>Sparse matrix with compression</TD></TR>
<TR><TD ALIGN="center">WINDEX</TD><TD>Maps Unicode case-folded representations of free-text words present into statement object literals and their stemming literals to st(n)</TD><TD>Sparse matrix with compression</TD></TR>
</TABLE>
<BR>

After parsing and ingesting the above RDF/XML example statements (quads), the storage hash tables look as follows:<BR>

<BR>

<TABLE BORDER="1">
<TR><TH COLSPAN="3">MODEL hash table</TH></TR>
<TR><TH ALIGN="center">KEY</TH><TH ALIGN="center">VALUE</TH><TH ALIGN="center">notes</TH></TR>
<TR><TD ALIGN="center">counter</TD><TD ALIGN="center">4</TD><TD>we have got 4 statements</TD></TR>
<TR><TD ALIGN="center">counter_removed</TD><TD ALIGN="center">0</TD><TD>none removed yet</TD></TR>
</TABLE>

<BR>

<TABLE BORDER="1">
<TR><TH COLSPAN="3">STATEMENTS hash table</TH></TR>
<TR><TH ALIGN="center">KEY</TH><TH ALIGN="center">VALUE</TH><TH ALIGN="center">notes</TH></TR>
<TR><TD ALIGN="center">st0</TD><TD ALIGN="center">0</TD><TD>MODEL->{counter} is counting statements from one but we start from zero anyway</TD></TR>
<TR><TD ALIGN="center">st1</TD><TD ALIGN="center">1</TD><TD></TD></TR>
<TR><TD ALIGN="center">st2</TD><TD ALIGN="center">2</TD><TD></TD></TR>
<TR><TD ALIGN="center">st3</TD><TD ALIGN="center">3</TD><TD></TD></TR>
</TABLE>

<BR>

<TABLE BORDER="1">
<TR><TH COLSPAN="11">NODES hash table (template)</TH></TR>
<TR><TH ALIGN="center">KEY</TH><TH ALIGN="center" COLSPAN="10">VALUE</TH></TR>
<TR><TH>&nbsp;</TH><TH ALIGN="center">C data type ---></TH> <TH>int</TH> <TH>int</TH> <TH>int</TH> <TH>int</TH> <TH>int</TH> <TH>int</TH> <TH>int</TH> <TH>char</TH><TH>(strings of content)</TH></TR>
<TR><TD>st_num</TD><TD></TD> <TD>s_len</TD> <TD>p_len</TD> <TD>o_len</TD> <TD>o_lang_len</TD> <TD>o_dt_len</TD> <TD>c_len</TD> <TD>st_res_len</TD> <TD>special_byte</TD> <TD>s p o olang odt c sr</TD> </TR>
</TABLE>

<PRE>

 where:
   s_len      = length in bytes of the subject resource identifier
   p_len      = length in bytes of the predicate resource identifier
   o_len      = length in bytes of the object resource identifier or literal string
   o_lang_len = length in bytes of the object xml:lang property value
   o_dt_len   = length in bytes of the object rdf:datatype property resource identifier
   c_len      = length in bytes of the context resource identifier
   st_res_len = length in bytes of the statement resource identifier

 NOTE: possible add ons: object_xmlencoding (e.g. if different from UTF-8)

                   bits
 special_byte =  01234567
                 ^^^^^^^^
                 ||||||||
                 |||||||+--- 128 (reserved for future use)
                 ||||||+---- 64  (reserved for future use)
                 |||||+----- 32  statement is reified
                 ||||+------ 16  context is bNode
                 |||+------- 8   object is bnode
                 ||+-------- 4   predicate is bNode
                 |+--------- 2   subject is bNode
                 +---------- 1   object is literal
</PRE>
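For example, the special_byte for statement 1 above (a bNode subject with a literal object) can be composed by OR-ing the flag values from the bit map; a small sketch with illustrative constant names:<BR>

<PRE>
use strict;
use warnings;

# flag values as in the bit map above
my $OBJECT_IS_LITERAL  = 1;
my $SUBJECT_IS_BNODE   = 2;
my $PREDICATE_IS_BNODE = 4;
my $OBJECT_IS_BNODE    = 8;
my $CONTEXT_IS_BNODE   = 16;
my $STATEMENT_REIFIED  = 32;

# statement 1: subject is a bNode and the object is a literal
my $special_byte = $OBJECT_IS_LITERAL | $SUBJECT_IS_BNODE;

printf "special_byte = %d\n", $special_byte;   # 3, as in the NODES table below
</PRE>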

<BR>

Then for the above four quads we have:<BR>

<BR>

<TABLE BORDER="1">
<TR><TH COLSPAN="2">NODES hash table (values for given example)</TH></TR>
<TR><TH ALIGN="center">KEY</TH><TH ALIGN="center">VALUE</TH></TR>
<TR><TD ALIGN="center">0</TD><TD>39, 37, 38, 0, 0, 53, 0, 1, "http://www.w3.org/TR/rdf-syntax-grammar>", "http://purl.org/dc/elements/1.1/title", "RDF/XML Syntax Specification (Revised)", "http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf"</TD></TR>
<TR><TD ALIGN="center">1</TD><TD>8, 37, 12, 0, 0, 53, 0, 3,  "_:genid1", "http://example.org/stuff/1.0/fullName", "Dave Beckett", "http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf"</TD></TR>
<TR><TD ALIGN="center">2</TD><TD>8, 37, 27, 0, 0, 53, 0, 2,  "_:genid1", "http://example.org/stuff/1.0/homePage", "http://purl.org/net/dajobe/", "http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf"</TD></TR>
<TR><TD ALIGN="center">3</TD><TD>39, 37, 8, 0, 0, 53, 0, 8,  "http://www.w3.org/TR/rdf-syntax-grammar", "http://example.org/stuff/1.0/editor", "_:genid1", "http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf"</TD></TR>
</TABLE>

<PRE>
 NOTE: the NODES hash table will eventually also store the rdf:datatype and xml:lang values (see the
       bits/bytes map explanation for the NODES table above)
</PRE>

<BR>

Adjacency matrices:<BR>

<BR>

<TABLE BORDER="0">
<TR>
<TD VALIGN="top">
<TABLE BORDER="1">
<TR><TH COLSPAN="13">SUBJECTS hash table</TH></TR>
<TR><TH></TH><TH></TH><TH COLSPAN="11">VALUE</TH></TR>
<TR><TH></TH><TH>st_num --></TH><TH>0</TH> <TH>1</TH> <TH>2</TH> <TH>3</TH> <TH>4</TH> <TH>5</TH> <TH>6</TH> <TH>7</TH> <TH>8</TH> <TH>...</TH></TR>
<TR><TH>KEY</TH><TH COLSPAN="12"></TH></TR>
<TR><TD>s0=s3</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>s1=s2=o3</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
</TABLE>
</TD>

<TD VALIGN="top">
<TABLE BORDER="1">
<TR><TH COLSPAN="13">PREDICATES hash table</TH></TR>
<TR><TH></TH><TH></TH><TH COLSPAN="11">VALUE</TH></TR>
<TR><TH></TH><TH>st_num --></TH><TH>0</TH> <TH>1</TH> <TH>2</TH> <TH>3</TH> <TH>4</TH> <TH>5</TH> <TH>6</TH> <TH>7</TH> <TH>8</TH> <TH>...</TH></TR>
<TR><TH>KEY</TH><TH COLSPAN="12"></TH></TR>
<TR><TD>p0</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>p1</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>p2</TD><TD></TD><TD>0</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>p3</TD><TD></TD><TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
</TABLE>
</TD>

<TD VALIGN="top">
<TABLE BORDER="1">
<TR><TH COLSPAN="13">OBJECTS hash table</TH></TR>
<TR><TH></TH><TH></TH><TH COLSPAN="11">VALUE</TH></TR>
<TR><TH></TH><TH>st_num --></TH><TH>0</TH> <TH>1</TH> <TH>2</TH> <TH>3</TH> <TH>4</TH> <TH>5</TH> <TH>6</TH> <TH>7</TH> <TH>8</TH> <TH>...</TH></TR>
<TR><TH>KEY</TH><TH COLSPAN="12"></TH></TR>
<TR><TD>o0</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>o1</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>o2</TD><TD></TD><TD>0</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>o3=s1=s2</TD><TD></TD><TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
</TABLE>
</TD>
</TR>

</TABLE>
<BR>

In the simple example considered, the LANGUAGES and DATATYPES hash tables are empty because the resulting RDF triples do not contain any xml:lang or rdf:datatype information - we could possibly default them to some sensible values, or set them according to the input RDF/XML syntax. The generated tables store values as sparse matrices with compression.<BR><BR>

So far we have not covered the statement context component, which can also be represented quite efficiently in matrix form. For example, if we assume that the above 4 statements have been stated in a context, e.g. 'http://www.w3.org/TR/rdf-syntax-grammar/example07.rdf' (from which a hash code c0 is generated), we can represent the contextual information of the statements in a separate sparse matrix as follows:<BR>

<BR>

<TABLE BORDER="1">
<TR><TH COLSPAN="14">CONTEXTS hash table</TH></TR>
<TR><TH></TH><TH></TH><TH COLSPAN="10">VALUE</TH><TH>notes</TH></TR>
<TR><TH></TH><TH>st_num --></TH><TH>0</TH> <TH>1</TH> <TH>2</TH> <TH>3</TH> <TH>4</TH> <TH>5</TH> <TH>6</TH> <TH>7</TH> <TH>8</TH> <TH>...</TH></TR>
<TR><TH>KEY</TH><TH COLSPAN="12"></TH></TR>
<TR><TD>c0</TD><TD></TD><TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD>all four triples are in the given context, i.e. they are quads</TD></TR>
</TABLE>

<BR>

The WINDEX hash table needs a little explanation. The purpose of the additional table indexing free-text words is to provide the flexibility needed to run very generic RDF queries over the database, using wild-card style regular expressions <a href="#51">[51]</a> to match arbitrary nodes in a graph. This is generally useful in real-life applications to find a starting point from which to navigate up or down the graph connections and extract meaningful data. The WINDEX matrix maps the pure raw UTF-8 case-folded text representation <a href="#52">[52]</a> of each word present in the literal part of each statement to the corresponding statement number. Case-folding is used to improve query recall. The stemmed prefixes of the first (and last) characters of each such word are also stored. The splitting algorithm for a given object literal splits the string into pieces, removing all blank spaces and special characters; each resulting lowercased word is indexed separately for each statement, as sketched below. Query precision is not jeopardized by this, as the full, non-case-folded value is stored separately in the NODES table.<BR><BR>
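A minimal sketch of this splitting and prefix-stemming step (lowercasing as a simple stand-in for full Unicode case-folding; the 5-character limit mirrors the example below, and the helper name is ours):<BR>

<PRE>
use strict;
use warnings;

sub windex_keys {
    my ($literal, $max_stem) = @_;
    $max_stem //= 5;                       # configurable stemming limit
    my %keys;
    for my $word (split /[^\w]+/, lc $literal) {   # split on special chars
        next unless length $word;
        $keys{$word} = 1;                  # the word itself
        for my $n (1 .. $max_stem) {       # leading prefixes: d, da, dav ...
            last if $n >= length $word;
            $keys{ substr($word, 0, $n) } = 1;
        }
    }
    return sort keys %keys;
}

print join(', ', windex_keys('Dave Beckett')), "\n";
# b, be, bec, beck, becke, beckett, d, da, dav, dave
</PRE>
<BR>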

The resulting WINDEX table for the above RDF/XML example is the following:<BR>

<BR>

<TABLE BORDER="1">
<TR><TH COLSPAN="14">WINDEX hash table</TH></TR>
<TR><TH></TH><TH></TH><TH COLSPAN="10">VALUE</TH><TH>notes</TH></TR>
<TR><TH></TH><TH>st_num --></TH><TH>0</TH> <TH>1</TH> <TH>2</TH> <TH>3</TH> <TH>4</TH> <TH>5</TH> <TH>6</TH> <TH>7</TH> <TH>8</TH> <TH>...</TH></TR>
<TR><TH>KEY</TH><TH COLSPAN="12"></TH></TR>
<TR><TD>dave</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD>the word 'dave' is present in the literal of st(1)</TD></TR>
<TR><TD>beckett</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>rdf</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>xml</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>syntax</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>specification</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>revised</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>d</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>da</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>dav</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>b</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>be</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>bec</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>beck</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>becke</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD>stemming up to a configurable limit of 5 chars</TD></TR>
<TR><TD>r</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>rd</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>x</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>xm</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>s</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>sy</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>syn</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>synt</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>synta</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>sp</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>spe</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>spec</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>speci</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>re</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>rev</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>revi</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
<TR><TD>revis</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD></TD></TR>
</TABLE>

<BR>

For simplicity, the above table does not show the reverse stemming (last 5 chars) from the end of each indexed word.<BR>

<H3>Querying RDF graph data</H3>

Using the hash tables described in the previous section it is possible to run very generic queries over the RDF storage, but these are generally limited to matching one set of statements at a time. Even so, the use of sparse matrices to represent the indexes of RDF statements allows basic triple-pattern queries to be run simply using boolean logic operations.<BR><BR>

For example, we could run a simple query to find the resource whose title is "RDF/XML Syntax Specification (Revised)", as follows:<BR>

<PRE>
find(?x, &lt;dc:title&gt;, "RDF/XML Syntax Specification (Revised)") # i.e. ?x = http://www.w3.org/TR/rdf-syntax-grammar
</PRE>

or a more generic free-text query like:<BR>

<PRE>
find(?x, &lt;dc:title&gt;, %"syn"%) # i.e. ?x = http://www.w3.org/TR/rdf-syntax-grammar
</PRE>

which searches for any title containing the string "syn".<BR><BR>

The result of such a query is generally stored in an iterator, which uses a sparse matrix to represent which statements matched the query - for example, either of the two queries above would return the following iterator structure:<BR>

<BR>

<TABLE BORDER="1">
<TR><TH>st_num --></TH><TH>0</TH> <TH>1</TH> <TH>2</TH> <TH>3</TH> <TH>4</TH> <TH>5</TH> <TH>6</TH> <TH>7</TH> <TH>8</TH> <TH>...</TH><TH>notes</TH></TR>
<TR><TH>result iterator</TH><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD>the query matched st_num 0</TD></TR>
</TABLE>

<BR>

This then allows scanning the result set (iterator) once again and returning the actual matching statements (or their parts) using the STATEMENTS and NODES tables. More complicated queries matching different bits of an RDF graph can be composed in a similar fashion. The whole graph would generate a resulting iterator with bits set for every statement actually stated in the RDF storage. Arbitrary triple-patterns can be run on RDF graphs using combinations of the above sparse matrices and specific boolean operations, as sketched below.<BR><BR>
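Here is a minimal sketch of such a triple-pattern match over the four example statements, assuming bitmaps stored as packed Perl bit strings (vec() sets individual bits, and the string-wise "&" operator performs the boolean AND; in practice rows are padded to a common length, since "&" truncates to the shorter operand):<BR>

<PRE>
use strict;
use warnings;

my (%SUBJECTS, %PREDICATES, %OBJECTS);

sub set_bit {                       # mark st_num in a packed bitmap row
    my ($row_ref, $st_num) = @_;
    $$row_ref = '' unless defined $$row_ref;
    vec($$row_ref, $st_num, 1) = 1;
}

# index the example statements (keys stand for the node hash codes)
set_bit(\$SUBJECTS{'s0'},   $_) for (0, 3);   # s0 = s3
set_bit(\$SUBJECTS{'s1'},   $_) for (1, 2);   # s1 = s2
set_bit(\$PREDICATES{'p0'}, 0);
set_bit(\$OBJECTS{'o0'},    0);

# find(?x, p0, o0): AND the predicate and object rows, leaving the
# subject position unconstrained
my $iterator = $PREDICATES{'p0'} & $OBJECTS{'o0'};

for my $st_num (0 .. 3) {
    print "matched st_num $st_num\n" if vec($iterator, $st_num, 1);
}
# prints: matched st_num 0
</PRE>
<BR>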

Even if the above queries can satisfy most simple RDF query application requirements, such as building simple Web pages or data conversion, it is not generally possible to combine solely the SUBJECTS, PREDICATES and OBJECTS tables to map all possible connections between different statements and nodes inside an RDF graph. In other words, it is not possible to search connected statements (or nodes) without using expensive and inefficient recursion algorithms. To make the RDF storage and query scalable, and to guarantee that any arbitrary query can be run efficiently over a stored RDF graph (which might span several billion entries), a set of additional tables is required. Such tables must make it possible to map RDF statement connections and process them purely using boolean operations.<BR><BR>

This problem generally appears as soon as an application needs to aggregate and query RDF statements using a more complete query language such as RDQL <a href="#50">[50]</a> or XPath <a href="#55">[55]</a>. In that case, arbitrary node and statement connections need to be tracked back to the application.<BR><BR>

For example, to run an RDQL query on the 4 statements above to get the full name of the editor of the document "http://www.w3.org/TR/rdf-syntax-grammar", we would write:<BR>

<PRE>

SELECT
	?fullname
WHERE
	( ?document, &lt;example:editor>,   ?editor ),
	( ?editor,   &lt;example:fullName>, ?fullname )
AND
	?document eq 'http://www.w3.org/TR/rdf-syntax-grammar'
USING
	example FOR &lt;http://example.org/stuff/1.0/>

</PRE>

Using only the basic set of tables described in the previous section, it is not generally possible to get the value of the '?fullname' variable without running the first query find( ?document, &lt;example:editor&gt;, ?editor ) and then recursively trying (backtracking) all possible values of the ?editor variable returned by the second query find( ?editor, &lt;example:fullName&gt;, ?fullname ), keeping track of the matching statements.<BR><BR>

What is needed instead is some way to efficiently run the above triple-pattern searches independently and "join" them using some kind of common logical AND operation, which would return all the statements representing the largest sub-graph matching the above query.<BR><BR>

To overcome this problem, the RDFStore model uses 3 additional tables called S_CONNECTIONS, P_CONNECTIONS and O_CONNECTIONS, which store the connections of specific RDF nodes (resources or literals) to other nodes in the graph in an arbitrary way. In fact, a certain node in a statement (being either subject, predicate or object) can be connected to other statements via any other node present in the statement itself. Only resource nodes can connect to other nodes (either resources or bNodes), while literals are generally only indirectly connected to other resources.<BR><BR>

The S_CONNECTIONS hash table maps all possible connections of a given resource node appearing as subject in a certain statement to all the other statements connected to the statement it belongs to, either via itself (e.g. if the same subject node is the predicate or the object of another statement), or via the predicate or the object (if it is a resource). Here is the S_CONNECTIONS table as stored for the above 4 statements:<BR>

<BR>

<TABLE BORDER="1">
<TR><TH COLSPAN="14">S_CONNECTIONS hash table</TH></TR>
<TR><TH></TH><TH></TH><TH COLSPAN="10">VALUE</TH><TH>notes</TH></TR>
<TR><TH></TH><TH>st_num --></TH><TH>0</TH> <TH>1</TH> <TH>2</TH> <TH>3</TH> <TH>4</TH> <TH>5</TH> <TH>6</TH> <TH>7</TH> <TH>8</TH> <TH>...</TH></TR>
<TR><TH>KEY</TH><TH COLSPAN="12"></TH></TR>
<TR><TD>s0=s3</TD><TD></TD><TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD>s0=s3 is (directly and indirectly) connected to all the other statements</TD></TR>
<TR><TD>s1=s2</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD><TD>s1=s2 does not connect to st(0)</TD></TR>
</TABLE>

<BR>

Similarly, we can generate the P_CONNECTIONS and O_CONNECTIONS tables to map the connections of the other two statement components: <BR>
<BR>


<TABLE BORDER="0">

<TR>

<TD VALIGN="top">
<TABLE BORDER="1">
<TR><TH COLSPAN="13">P_CONNECTIONS hash table</TH></TR>
<TR><TH></TH><TH></TH><TH COLSPAN="10">VALUE</TH></TR>
<TR><TH></TH><TH>st_num --></TH><TH>0</TH> <TH>1</TH> <TH>2</TH> <TH>3</TH> <TH>4</TH> <TH>5</TH> <TH>6</TH> <TH>7</TH> <TH>8</TH> <TH>...</TH></TR>
<TR><TH>KEY</TH><TH COLSPAN="12"></TH></TR>
<TR><TD>p0</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>p1</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>p2</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>p3</TD><TD></TD><TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
</TABLE>
</TD>

<TD VALIGN="top">
<TABLE BORDER="1">
<TR><TH COLSPAN="13">O_CONNECTIONS hash table</TH></TR>
<TR><TH></TH><TH></TH><TH COLSPAN="10">VALUE</TH></TR>
<TR><TH></TH><TH>st_num --></TH><TH>0</TH> <TH>1</TH> <TH>2</TH> <TH>3</TH> <TH>4</TH> <TH>5</TH> <TH>6</TH> <TH>7</TH> <TH>8</TH> <TH>...</TH></TR>
<TR><TH>KEY</TH><TH COLSPAN="12"></TH></TR>
<TR><TD>o0</TD><TD></TD><TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>o1</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>o2</TD><TD></TD><TD>0</TD> <TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
<TR><TD>o3</TD><TD></TD><TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>1</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>0</TD> <TD>000</TD></TR>
</TABLE>
</TD>

</TR>
</TABLE>
<BR>

Using the above 3 additional tables it is then possible to run RDQL (or XPath) queries spanning different connected statements very efficiently, on the same storage or even on different storages distributed over the Web as RDF/XML (or held in some kind of RDF storage). What the storage code has to do is logically AND the resulting bitmaps (bitmasks or iterators) for each triple pattern. The resulting sub-graph is then further processed to prune unwanted statements (using other logical operations on bitmaps) and to extract the values of the bound variables as requested by the application (or the specific DB interface). The same approach can also be used to run nested RDQL queries, disjunction (OR) queries, or to exclude specific RDF graph branches very efficiently. Coupled with free-text indexing and contexts, it has turned out to be a very good compromise for RDF storage and retrieval on the Semantic Web.<BR><BR>
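A hedged sketch of this join step on the four example statements (bitmaps as packed Perl bit strings, as before): each triple pattern yields an iterator, and the connection row of the shared ?editor node (s1=s2=o3) prunes the other pattern's matches with a single string-wise AND. RDFStore's actual pruning logic may differ in detail.<BR>

<PRE>
use strict;
use warnings;

sub bitmap {                        # build a packed bit string from st_nums
    my $bits = '';
    vec($bits, $_, 1) = 1 for @_;
    return $bits;
}

# pattern 1: ( ?document, &lt;example:editor>,   ?editor )   -> matches st(3)
my $pattern1 = bitmap(3);
# pattern 2: ( ?editor,   &lt;example:fullName>, ?fullname ) -> matches st(1)
my $pattern2 = bitmap(1);

# connection row for the shared node s1=s2=o3 (see S_CONNECTIONS above)
my $editor_connections = bitmap(1, 2, 3);

# join: keep only pattern-2 statements connected to the ?editor node
my $joined = $pattern2 & $editor_connections;

print vec($joined, 1, 1)
    ? "st(1) survives the join: ?fullname = \"Dave Beckett\"\n"
    : "no match\n";
</PRE>
<BR>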

Even if the graph is extremely large, it is generally possible to store the above (sparse) matrices efficiently by making use of some of their specific properties. We briefly discuss these, and the hybrid run-length/variable-length encoding algorithm used by the RDFStore software, in the next section.

<H2>The compression algorithm</H2>

Both the graph and the free-text word index are relatively sparsely populated, which makes simple compression possible. The bit arrays used in each can grow to very significant sizes, in the order of several, if not tens of, page multiples. Combined with the intensive use of unique keys and a hashed storage, a fair percentage of the data needed for any given query is unlikely to be needed often enough to rely on the natural caching of, for example, the disk subsystem and the operating system. This makes compression attractive, as it reduces the data volume that needs to be transferred from the relatively slow disk and/or networked storage, relying instead on the CPU working off main memory and the CPU-level cache. However, as one typically operates in a stateless manner on just a few rows, it is important to avoid global dictionary-based compression methods. And given that the rows are relatively simple and operated on a lot, care must be taken that any pattern search is of order(1) or better, and that a "normal" row can be (de)compressed at more than 1 Mbyte/second on the target hardware platform. The latter is to ensure that compression is still "worthwhile" versus disk-backed storage speed. It has, though, been observed that compression is effective even on memory-based systems. We suspect that this is due to compiler optimization with respect to the L2 or L3 cache(s) of the CPUs.<BR>
Initially a Run Length Encoding (RLE) method was used, with two small optimizations. The first optimization was early termination; i.e. if the remainder of the row would solely contain zeros, those were simply not listed explicitly. The second optimization was a bias towards sequences of 0's rather than 1's. With respect to the graph, a number of issues make the above method not ideal. <BR>
The first issue is that certain values, such as a reference to a schema or a common property, are disproportionately over-represented, by several orders of magnitude (e.g. the rdf:type property or contextual information). Secondly, certain other values, such as the unique reference to an item, appear only once. And the final issue is that (in this release) triples and their context, i.e. the 4 values, are stored sequentially; this means that not all bits are equal and certain patterns will occur more often than others.<BR>
For these reasons a variant of Variable Run Length encoding is used, along with part of the above RLE method. This method is still applicable to the word indexing, but adds the ability to recognize short patterns: the patterns which occur most often are coded with short tokens, an order to half an order of magnitude shorter than the pattern itself, and RLE is used for the remainder.<BR>
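The following Perl fragment sketches the idea only; the codebook, its tokens and the run format are illustrative assumptions, not RDFStore's actual encoding:

<PRE>
use strict;
use warnings;

# The most frequent short bit patterns (e.g. those produced by rdf:type
# or by repeated context values) get short tokens; anything else falls
# back to plain run lengths as in the RLE sketch above.
my %codebook = (
    '00000001' => 'A',     # hypothetical frequent 8-bit pattern
    '00010001' => 'B',
);

sub vrle_encode {
    my ($bits) = @_;       # bit string, e.g. from unpack 'b*'
    my @out;
    while ( length $bits ) {
        my $head = substr( $bits, 0, 8 );
        if ( exists $codebook{$head} ) {      # frequent pattern: 1 token
            push @out, $codebook{$head};
            $bits = substr( $bits, 8 );
        }
        else {                                # otherwise: one RLE run
            $bits =~ s/^((.)\2*)//;
            push @out, $2 . 'x' . length $1;  # bit value and run length
        }
    }
    return @out;
}

print join( ' ', vrle_encode('0001000100000001111') ), "\n";  # B A 1x3
</PRE>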
At this point in time (de-)compression is such that storage volumes are reasonable and transfer volumes are manageable, so we do not expect to give priority to work in this area. However, we expect to examine this issue again, looking at a variant of LZ77 and/or a replacement of the key pattern matching, as soon as we have access to a more varied range of datasets from a wider range of operational applications.

<H2>Conclusion: RDFStore</H2>

RDFStore <a href="#47">[47]</a> is a Perl/C toolkit to process, store, retrieve and manage RDF; it consists of a programming API, streaming RDF/XML and N-Triples parsers, and a generic hashed data storage which implements the indexing algorithm described in this paper. RDFStore implements the RDQL query language, which allows querying RDF repositories with an SQL-like syntax directly from standard database interfaces such as DBI, JDBC and ODBC. The storage sub-system allows transparent storage and retrieval of RDF nodes, arcs and labels, either from an in-memory hashed storage, from the local disk using BerkeleyDB, or from a very fast and scalable remote storage <a href="#56">[56]</a>. The latter is a fast networked TCP/IP based transactional storage library which uses multiple single-key hash-based BerkeleyDB files together with an optimized network routing daemon with a single thread/process per database. The API supports RDF contexts, reification, bNodes, and typed and multi-lingual literals.<BR>
<BR>
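As a usage illustration, an RDQL query can be issued through the standard Perl DBI interface; the DSN string and fetch style below are plausible assumptions for illustration rather than a definitive invocation (see the RDFStore documentation for the exact DBD driver syntax):

<PRE>
use strict;
use warnings;
use DBI;

# Connect to an RDFStore model through DBI (DSN syntax assumed here).
my $dbh = DBI->connect( 'DBI:rdfstore:database=mymodel', '', '' )
    or die $DBI::errstr;

# RDQL: SQL-like triple patterns over the RDF graph.
my $sth = $dbh->prepare( q{
    SELECT ?title
    WHERE  (?doc, &lt;dc:title&gt;, ?title)
    USING  dc FOR &lt;http://purl.org/dc/elements/1.1/&gt;
} );

$sth->execute;
while ( my $row = $sth->fetchrow_hashref ) {
    print $row->{title}, "\n";
}
$sth->finish;
$dbh->disconnect;
</PRE>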
RDFStore has been successfully used in the development of several Semantic Web applications <a href="#16a">[16a]</a><a href="#16b">[16b]</a><a href="#16c">[16c]</a>, as well as others which read/write and query RDF descriptions using RDQL.

<H2>References</H2>

<a name="1">[1]</a> "A Relational Model of Data for Large Shared Data Banks", E.F. Codd, Communications of the ACM, Vol. 13, No. 6, June 1970, pp. 377-387. <a href="http://www.acm.org/classics/nov95/toc.html">http://www.acm.org/classics/nov95/toc.html</a><BR>
<a name="2">[2]</a> P. Buneman, S. Davidson, G. Hillebrand and D. Suciu, "A query language and optimization techniques for unstructured data". In SIGMOD, San Diego, 1996<BR>
<a name="3">[3]</a> S. Abiteboul, D. Quass, J. McHugh, J. Widom and J. Wiener "The lorel query language for semistructured data" 1996 ftp://db.stanford.edu/pub/papers/lorel96.ps<BR>
<a name="4">[4]</a> Dan Brickley, R.V. Guha "RDF Vocabulary Description Language 1.0: RDF Schema" <a href="http://www.w3.org/TR/rdf-schema/">http://www.w3.org/TR/rdf-schema/</a><BR>
<a name="5">[5]</a> Grady Booch "Object-Oriented Analysis and Design with Applications" p. 71-72<BR>
<a name="6">[6]</a> Aimilia Magkanaraki, Sofia Alexaki, Vassilis Christophides, Dimitris Plexous "Benchmarking RDF Schemas for the Semantic Web" <a href="http://139.91.183.30:9090/RDF/publications/iswc02.PDF">http://139.91.183.30:9090/RDF/publications/iswc02.PDF</a><BR>
<a name="7">[7]</a> S. Abiteboul "Querying Semi-Structured Data" 1997 <a href="http://citeseer.nj.nec.com/abiteboul97querying.html">http://citeseer.nj.nec.com/abiteboul97querying.html</a><BR>
<a name="8">[8]</a> S. Abiteboul and Victor Vianu, "Queries and Computation on the Web" 1997 <a href="http://citeseer.nj.nec.com/abiteboul97queries.html">http://citeseer.nj.nec.com/abiteboul97queries.html</a><BR>
<a name="9">[9]</a> Dan Suciu, "An overview of semistructured data" <a href="http://citeseer.nj.nec.com/160105.html">http://citeseer.nj.nec.com/160105.html</a><BR>
<a name="10">[10]</a> Graham Klyne, 13-Mar-2002 "Circumstance, provenance and partial knowledge - Limiting the scope of RDF assertions" <a href="http://www.ninebynine.org/RDFNotes/UsingContextsWithRDF.html">http://www.ninebynine.org/RDFNotes/UsingContextsWithRDF.html</a><BR>
<a name="11">[11]</a> John F. Sowa, "Knowledge Representation:  Logical, Philosophical, and Computational Foundations", Brooks Cole Publishing Co., ISBN  0-534-94965-7<BR>
<a name="12">[12]</a> Graham Klyne, 18 October 2000 "Contexts for RDF Information Modelling" <a href="http://public.research.mimesweeper.com/RDF/RDFContexts.html">http://public.research.mimesweeper.com/RDF/RDFContexts.html</a><BR>
<a name="13">[13]</a> Seth Russel, 7 August 2002 "Quads" <a href="http://robustai.net/sailor/grammar/Quads.html">http://robustai.net/sailor/grammar/Quads.html</a><BR>
<a name="14">[14]</a> T. Berners-Lee, Dan Connoly "Notation3" <a href="http://www.w3.org/2000/10/swap/doc/Overview.html">http://www.w3.org/2000/10/swap/doc/Overview.html</a><BR>
<a name="15">[15]</a> Dave Beckett, "Contexts Thoughts" <a href="http://www.redland.opensource.ac.uk/notes/contexts.html">http://www.redland.opensource.ac.uk/notes/contexts.html</a><BR>
<a name="16a">[16a]</a> Asemantics S.r.l. "Image ShowCase (ISC)" <a href="http://demo.asemantics.com/biz/isc/">http://demo.asemantics.com/biz/isc/</a><BR>
<a name="16b">[16b]</a> Asemantics S.r.l. "Last Minute News (LMN)" <a href="http://demo.asemantics.com/biz/radio/">http://demo.asemantics.com/biz/radio/</a><BR>
<a name="16c">[16c]</a> Asemantics S.r.l. "The News Blender (NB)" <a href="http://demo.asemantics.com/biz/lmn/nb/">http://demo.asemantics.com/biz/lmn/nb/</a><BR>
<a name="17">[17]</a> GINF <a href="http://www-diglib.stanford.edu/diglib/ginf/">http://www-diglib.stanford.edu/diglib/ginf/</a><BR>
<a name="18">[18]</a> Jena <a href="http://www.hpl.hp.com/semweb/">http://www.hpl.hp.com/semweb/</a><BR>
<a name="19">[19]</a> Algae <a href="http://www.w3.org/1999/02/26-modules/User/Algae-HOWTO.html">http://www.w3.org/1999/02/26-modules/User/Algae-HOWTO.html</a><BR>
<a name="20">[20]</a> RDFSuite <a href="http://139.91.183.30:9090/RDF/">http://139.91.183.30:9090/RDF/</a><BR>
<a name="21">[21]</a> Wraf <a href="http://wraf.org/RDF-Service/doc/html/wraf.html">http://wraf.org/RDF-Service/doc/html/wraf.html</a><BR>
<a name="22">[22]</a> PARKA-DB <a href="http://www.cs.umd.edu/projects/plus/Parka/parka-db.html">http://www.cs.umd.edu/projects/plus/Parka/parka-db.html</a><BR>
<a name="23">[23]</a> RDFGateway <a href="http://www.intellidimension.com/pages/site/products/rdfgateway.rsp">http://www.intellidimension.com/pages/site/products/rdfgateway.rsp</a><BR>
<a name="24">[24]</a> 3Store <a href="http://sourceforge.net/projects/threestore/">http://sourceforge.net/projects/threestore/</a><BR>
<a name="25">[25]</a> TAP <a href="http://tap.stanford.edu/">http://tap.stanford.edu/</a><BR>
<a name="26">[26]</a> Inkling <a href="http://swordfish.rdfweb.org/rdfquery/">http://swordfish.rdfweb.org/rdfquery/</a><BR>
<a name="27">[27]</a> RubyRDF <a href="http://www.w3.org/2001/12/rubyrdf/intro.html">http://www.w3.org/2001/12/rubyrdf/intro.html</a><BR>
<a name="28">[28]</a> 4RDF <a href="http://Fourthought.com/">http://Fourthought.com/</a><BR>
<a name="29">[29]</a> Sesame <a href="http://sesame.aidministrator.nl/">http://sesame.aidministrator.nl/</a><BR>
<a name="30">[30]</a> KAON REVERSE <a href="http://kaon.semanticweb.org/alphaworld/reverse/view">http://kaon.semanticweb.org/alphaworld/reverse/view</a><BR>
<a name="31">[31]</a> D2R <a href="http://www.wiwiss.fu-berlin.de/suhl/bizer/d2rmap/D2Rmap.htm">http://www.wiwiss.fu-berlin.de/suhl/bizer/d2rmap/D2Rmap.htm</a><BR>
<a name="32">[32]</a> DBVIEW <a href="http://www.w3.org/2000/10/swap/dbork/dbview.py">http://www.w3.org/2000/10/swap/dbork/dbview.py</a><BR>
<a name="33">[33]</a> Virtuoso <a href="http://www.openlinksw.com/virtuoso/">http://www.openlinksw.com/virtuoso/</a><BR>
<a name="34">[34]</a> Federate <a href="http://www.w3.org/2003/01/21-RDF-RDB-access/">http://www.w3.org/2003/01/21-RDF-RDB-access/</a><BR>
<a name="35">[35]</a> Triple querying with SQL <a href="http://www.picdiary.com/triplequerying/">http://www.picdiary.com/triplequerying/</a><BR>
<a name="36">[36]</a> Squish-to-SQL <a href="http://rdfweb.org/2002/02/java/squish2sql/intro.html">http://rdfweb.org/2002/02/java/squish2sql/intro.html</a><BR>
<a name="37">[37]</a> Jena2 Database interface <a href="http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/*checkout*/jena/jena2/doc/DB/index.html">http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/*checkout*/jena/jena2/doc/DB/index.html</a><BR>
<a name="38">[38]</a> Mapping Semantic Web data with RDBMSes <a href="http://www.w3.org/2001/sw/Europe/reports/scalable_rdbms_mapping_report/">http://www.w3.org/2001/sw/Europe/reports/scalable_rdbms_mapping_report/</a><BR>
<a name="39">[39]</a> BerkeleyDB Sleepycat <a href="http://www.sleepycat.com/">http://www.sleepycat.com/</a><BR>
<a name="40">[40]</a> rdfdb <a href="http://www.guha.com/rdfdb/">http://www.guha.com/rdfdb/</a><BR>
<a name="41">[41]</a> Redland <a href="http://www.redland.opensource.ac.uk/">http://www.redland.opensource.ac.uk/</a><BR>
<a name="42">[42]</a> rdflib <a href="http://rdflib.net/">http://rdflib.net/</a><BR>
<a name="43">[43]</a> DAML DB <a href="http://www.daml.org/2001/09/damldb/ ">http://www.daml.org/2001/09/damldb/ </a><BR>
<a name="44">[44]</a> ODP Search <a href="http://dmoz.org/ODPSearch/">http://dmoz.org/ODPSearch/</a><BR>
<a name="45">[45]</a> Dave Beckett "RDF/XML Syntax Specification (Revised)" <a href="http://www.w3.org/TR/rdf-syntax-grammar/">http://www.w3.org/TR/rdf-syntax-grammar/</a><BR>
<a name="46">[46]</a> Jan Grant, Dave Beckett "RDF Test Cases" <a href="http://www.w3.org/TR/rdf-testcases/">http://www.w3.org/TR/rdf-testcases/</a><BR>
<a name="47">[47]</a> Alberto Reggiori, Dirk-Willem van Gulik, RDFStore, <a href="http://rdfstore.sourceforge.net">http://rdfstore.sourceforge.net</a><BR>
<a name="48">[48]</a> Graham Klyne, Jeremy J. Carroll "Resource Description Framework (RDF): Concepts and Abstract Syntax" <a href="http://www.w3.org/TR/rdf-concepts/">http://www.w3.org/TR/rdf-concepts/</a><BR>
<a name="49">[49]</a> Patrick Hayes "RDF Semantics" <a href="http://www.w3.org/TR/rdf-mt/">http://www.w3.org/TR/rdf-mt/</a><BR>
<a name="50">[50]</a> Miller L., Seaborne A., Reggiori 'Implementations of SquishQL, a simpler RDF Query Language", 1st International Semantic Web Conference, Sardinia, 2002<BR>
<a name="51">[51]</a> Mastering Algorithms with Perl By JonOrwant ,Jarkko Hietaniemi ,JohnMacdonald 1st Edition August 1999 ISBN 1-56592-398-7 p.287<BR>
<a name="52">[52]</a> Unicode Caseless Matching <a href="http://www.unicode.org/unicode/reports/tr21/#Caseless_Matching">http://www.unicode.org/unicode/reports/tr21/#Caseless_Matching</a><BR>
<a name="53">[53]</a> Robert MacGregor and In-Young Ko 'Representing Contextualized Data using Semantic Web Tools' <a href="http://km.aifb.uni-karlsruhe.de/ws/psss03/proceedings/macgregor-et-al.pdf">http://km.aifb.uni-karlsruhe.de/ws/psss03/proceedings/macgregor-et-al.pdf</a><BR>
<a name="54">[54]</a> Eric Prud'hommeaux 'Attributions' <a href="http://www.w3.org/2001/12/attributions/">http://www.w3.org/2001/12/attributions/</a><BR>
<a name="55">[55]</a> J. Clark, S. DeRose 'XML Path Language (XPath) Version 1.0' <a href="http://www.w3.org/TR/xpath">http://www.w3.org/TR/xpath</a><BR>
<a name="56">[56]</a> Dirk-Willem van Gulik, August 1999, 'The DB engine' <a href="http://rdfstore.sourceforge.net/dbms.html">http://rdfstore.sourceforge.net/dbms.html</a><BR>

<BR>
<CITE>Last modified 2003/11/07</CITE>
</BODY>
</HTML>