Chris Mungall > DBIx-DBStag-0.10 >


Latest Release: DBIx-DBStag-0.12

NAME

stag-ir.pl - information retrieval using a simple relational index

SYNOPSIS

  stag-ir.pl -r person -k social_security_no -d Pg:mydb myrecords.xml
  stag-ir.pl -d Pg:mydb -q 999-9999-9999 -q 888-8888-8888


DESCRIPTION

Indexes stag nodes (XML elements) in a simple relational database structure - keyed by ID, with an XML blob as the value.

Imagine you have a very large file of data in a stag-compatible format such as XML, and you want to index all the elements of type person. Each person can be uniquely identified by social_security_no, which is a direct subnode of person.

The first thing to do is to build the index, which will be stored in the database mydb:

  stag-ir.pl -r person -k social_security_no -d Pg:mydb myrecords.xml

You can then use the index "person-idx" to retrieve person nodes by their social security number:

  stag-ir.pl -d Pg:mydb -q 999-9999-9999 > some-person.xml

You can export using different stag formats:

  stag-ir.pl -d Pg:mydb -q 999-9999-9999 -w sxpr > some-person.xml

You can retrieve multiple nodes (although these need to be rooted to make a valid file):

  stag-ir.pl -d Pg:mydb -q 999-9999-9999 -q 888-8888-8888 -top personset

Or you can use a list of IDs from a file (newline-delimited):

  stag-ir.pl -d Pg:mydb -qf my_ss_nmbrs.txt -top personset
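The indexing scheme described above - extract each node of the indexed type, key it by its unique subnode, and store the serialized node as a blob - can be sketched as follows. This is an illustrative Python re-implementation of the technique, not stag-ir.pl itself; sqlite3 stands in for the Pg database, and the table and column names are assumptions made for the demo.

```python
# Sketch of the ID -> XML-blob index: sqlite3 stands in for Pg,
# table/column names are illustrative, not the script's actual schema.
import sqlite3
import xml.etree.ElementTree as ET

records = """
<personset>
  <person><social_security_no>999-9999-9999</social_security_no><name>Alice</name></person>
  <person><social_security_no>888-8888-8888</social_security_no><name>Bob</name></person>
</personset>
"""

db = sqlite3.connect(":memory:")
# roughly what -c -r person sets up: one relation per indexed node type
db.execute("CREATE TABLE person (id TEXT PRIMARY KEY, xml TEXT)")

# index every <person> node, keyed by its social_security_no subnode (-k)
for node in ET.fromstring(records).findall("person"):
    key = node.findtext("social_security_no")
    db.execute("INSERT INTO person (id, xml) VALUES (?, ?)",
               (key, ET.tostring(node, encoding="unicode")))

# query by unique key (-q) and get the original XML blob back
(blob,) = db.execute("SELECT xml FROM person WHERE id = ?",
                     ("999-9999-9999",)).fetchone()
print(blob)
```

The point of the design is that retrieval never parses the large input file again: a single indexed lookup returns the stored fragment verbatim.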



ARGUMENTS

-d DBNAME

This database will be used for storing the stag nodes.

The name can be a logical name, a DBI locator, or DBStag shorthand - see DBIx::DBStag.

The database must already exist.


-clear

Deletes all data from the relation type (specified with -r) before loading.


-insertonly

Does not check whether the ID in the file already exists in the db - it will always attempt an INSERT (and will fail if the ID already exists).

This is the fastest way to load data (only one SQL operation per node rather than two), but it is only safe if there is no existing data.

(The default is clobber mode - existing data with the same ID will be replaced.)


-noupdate

If there is already data in the specified relation in the db, and the XML being loaded specifies an ID that is already in the db, then this node will be ignored.

(The default is clobber mode - existing data with the same ID will be replaced.)


A commit will be performed every n INSERTs/UPDATEs (and at the end).

The default is autocommit.

Note that if you are using -insertonly together with transactions, and the input file contains an ID already in the database, the transaction will fail because this script will try to insert a duplicate ID.
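The duplicate-ID failure mode can be seen in miniature below. This is a hedged sketch of the general mechanism, not the script's actual code: sqlite3 stands in for the DBI database, and INSERT OR REPLACE is SQLite's spelling of "clobber".

```python
# Why -insertonly fails on duplicates: a bare INSERT of an existing
# primary key raises an integrity error, aborting the current batch.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE person (id TEXT PRIMARY KEY, xml TEXT)")
db.execute("INSERT INTO person VALUES ('999-9999-9999', '<person/>')")

# insert-only mode: no existence check, just a straight INSERT
duplicate_failed = False
try:
    db.execute("INSERT INTO person VALUES ('999-9999-9999', '<person/>')")
except sqlite3.IntegrityError:
    duplicate_failed = True  # inside a transaction, the whole batch fails

# clobber mode pays for an extra operation per node but tolerates
# existing IDs (SQLite spelling; the mechanism varies per database)
db.execute("INSERT OR REPLACE INTO person VALUES ('999-9999-9999', '<p/>')")
print(duplicate_failed)
```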


-r RELATION-NAME

This is the name of the stag node (XML element) that will be stored in the index; for example, you may want to use the node name person and the unique key id.

This flag should only be used when you want to store data.


-k UNIQUE-KEY

This node will be used as the unique/primary key for the data.

This node should be nested directly below the node that is being stored in the index - if it is more than one level below, specify a path.

This flag should only be used when you want to store data.


-u UNIQUE-KEY

Synonym for -k.


-c

If specified, this will create a table for the relation name specified with -r; you should use this option the first time you index a given relation.

-idtype TYPE


This is the SQL datatype for the unique key; it defaults to VARCHAR(255)

If you know that your id is an integer, you can specify INTEGER here

If your id is always an 8-character field you can do this:

  -idtype 'CHAR(8)'

This option only makes sense when combined with the -c option
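The effect of -idtype on the created relation can be sketched as below. The two-column shape (key plus XML blob) follows from the description above, but the exact DDL the script emits is an assumption here; sqlite3 again stands in for the real database.

```python
# Sketch: -idtype substitutes the key column's SQL type when -c
# creates the relation (illustrative DDL, not the script's exact output).
import sqlite3

idtype = "CHAR(8)"  # from: -idtype 'CHAR(8)'; the default is VARCHAR(255)
ddl = f"CREATE TABLE person (id {idtype} PRIMARY KEY, xml TEXT)"

db = sqlite3.connect(":memory:")
db.execute(ddl)  # sqlite3 accepts the type name, though it is lax about types
print(ddl)
```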


-p PARSER

This can be the name of a stag-supported format (xml, sxpr, itext) - xml is assumed by default.

It can also be a module name - this module is used to parse the input file into a stag event stream; see Data::Stag::BaseGenerator for details on writing your own parsers/event generators.

This flag should only be used when you want to store data.


-q QUERY-ID

Fetches the relation/node whose unique key value equals QUERY-ID.

Multiple IDs can be passed by specifying -q multiple times.

This flag should only be used when you want to query data.


-top NODE-NAME

If this is specified in conjunction with -q or -qf, then all the query result nodes will be nested inside a node with this name (i.e. this provides a root for the resulting document tree).
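The rooting step is simple enough to show directly; this is an illustrative Python sketch of the idea (element names taken from the SYNOPSIS example), not the script's own code:

```python
# Multiple query results are XML fragments; nesting them under one
# root node (-top personset) yields a single well-formed document.
import xml.etree.ElementTree as ET

# two result nodes, as fetched by two separate -q arguments
results = ["<person><name>Alice</name></person>",
           "<person><name>Bob</name></person>"]

root = ET.Element("personset")  # from: -top personset
for frag in results:
    root.append(ET.fromstring(frag))

doc = ET.tostring(root, encoding="unicode")
print(doc)
```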


-qf QUERY-FILE

This is a file of newline-separated IDs; it is useful for querying the index in batch.


-keys

This will write a list of all primary keys in the index.



SEE ALSO

For more complex stag-to-database mapping, see DBIx::DBStag and the scripts that come with it. A related script uses file DBM indexes instead of a relational database; DBIx::DBStag itself is for storing fully normalised stag trees.

