
Atlas of Living Australia

National Species Lists (NSL): Linked data services 

Prototype services delivering Name, Taxon and Reference data sourced from the Australian Faunal Directory (AFD), the Australian Plant Name Index (APNI) and the Australian Plant Census (APC), aggregated within the Atlas of Living Australia National Species Lists (ALA-NSL) repository. Data from AUSMOSS, ICAF, AMANI and ABRS Lichens are at the loading stage.

The services comply, wherever possible, with the recommendations and principles for publishing Linked Data [1], using dereferenceable 303 URIs to uniquely "name" NSL data objects and making them actionable through content negotiation to a range of resources that include HTML, RDF, XML, CSV and JSON document types. The URI gives each NSL record a unique identifier that, when reused in external data sets, provides a persistent and resolvable reference back to the NSL name or taxon object.
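
The 303 pattern described above can be sketched as follows: the abstract object URI never returns data directly; the server inspects the Accept header and redirects (HTTP 303 See Other) to a format-specific document URL. This is a minimal illustration only; the base URI, suffixes and default below are assumptions, not the real NSL endpoints.

```python
# Map of media types to the format suffixes the page lists (html, rdf, xml, csv, json).
FORMATS = {
    "text/html": "html",
    "application/rdf+xml": "rdf",
    "application/xml": "xml",
    "text/csv": "csv",
    "application/json": "json",
}

def see_other_location(object_uri: str, accept_header: str) -> str:
    """Return the document URL a 303 redirect would point at for this Accept header."""
    for media_type in accept_header.split(","):
        media_type = media_type.split(";")[0].strip()  # drop any q-value parameters
        if media_type in FORMATS:
            return f"{object_uri}.{FORMATS[media_type]}"
    return f"{object_uri}.html"  # assumed default representation
```

A client asking for `application/rdf+xml` would thus be redirected from the abstract URI to the `.rdf` document, while a browser sending `text/html` lands on the HTML page.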

As a contribution to the ongoing and collaborative evolution of an open 'TDWG ontology', output formats are based on the TDWG vocabularies published at

Within NSL: names, taxa and classifications are found in references; taxa re-use names; synonymy, misapplication, etc. are all taxon-taxon relationships; and classifications are hierarchical arrangements of taxa.


To ensure identifiers are unique at the aggregate level, data are published within named collections to establish provenance for local identifiers and maintain the "unique within" relationship.   Seven collections are exposed at this time:








These collections are published as linked data using unique, abstract URIs of the form: <collection>/<local-identifier>

which constitute resolvable, unique identifiers for the underlying collection objects.
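
A minimal sketch of constructing and parsing identifiers in that <collection>/<local-identifier> form. The base URL is an assumption, and only the afd, apni and apc collection names come from this page; the remaining collections are not listed here.

```python
# Assumed base URL for illustration only; the page names afd, apni and apc
# explicitly, so only those are registered in this sketch.
BASE = "http://example.org"
KNOWN_COLLECTIONS = {"afd", "apni", "apc"}

def object_uri(collection: str, local_id: str) -> str:
    """Build the abstract object URI for a record in a named collection."""
    if collection not in KNOWN_COLLECTIONS:
        raise ValueError(f"unknown collection: {collection}")
    return f"{BASE}/{collection}/{local_id}"

def parse_object_uri(uri: str):
    """Split an object URI back into (collection, local identifier)."""
    collection, local_id = uri[len(BASE) + 1:].split("/", 1)
    return collection, local_id
```

Scoping local identifiers by collection is what preserves the "unique within" relationship once data from several sources are aggregated.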

AFD identifies records using UUIDs.

APNI uses legacy unique integer sequences within a collection (guaranteed persistent since 1990).
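
The two identifier schemes can be distinguished mechanically, as this sketch shows. Treating APC identifiers as integers like APNI's is an assumption; the page only states the schemes for AFD (UUIDs) and APNI (integer sequences).

```python
import re

# Standard 8-4-4-4-12 hexadecimal UUID layout, case-insensitive.
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", re.I)

def valid_local_id(collection: str, local_id: str) -> bool:
    """Check a local identifier against its collection's stated scheme."""
    if collection == "afd":
        return bool(UUID_RE.match(local_id))          # AFD: UUIDs
    if collection in ("apni", "apc"):
        return local_id.isdigit()                     # APNI: integer sequences (APC assumed)
    return True  # other collections: scheme not specified on this page
```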

Linked data compliance can be tested at:



The NSL classification is essentially a grafted point-in-time tree with each source data set contributing branches that are named accordingly.  




The current NSL hierarchy is taken from the AFD and APC classifications.
Within the botanical collections there are also many "reference" trees presenting "as published" arrangements of taxa from individual works.  

Branch-from-node services have URI names based on the taxon identifier of the subtending node. These URIs dereference to a URL extending the name with an output format type.

For now only the current tree is available but there is provision for branch names that provide for point-in-time recovery of a taxon's children. 

http://<host>/branch/<branch-name>/<TaxonIdentifier>
http://<host>/branch/<branch-name>/<TaxonIdentifier>.<format>

They deliver a branch of the tree beginning at the node identified by <TaxonIdentifier>.  

Available formats are html, csv, xml, and json.

The branch service offers three optional parameters:

  • synonyms=yes|no
  • stoprank=<a rank abbreviation or name>
  • levels=<any non-negative integer>
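
Putting the URL template and the three optional parameters together, a branch-service request can be assembled like this. The host and branch name are placeholders; only the path shape and parameter names come from this page.

```python
from urllib.parse import urlencode

def branch_url(host, branch, taxon_id, fmt="json",
               synonyms=None, stoprank=None, levels=None):
    """Build http://<host>/branch/<branch-name>/<TaxonIdentifier>.<format>
    with any of the three optional query parameters that were supplied."""
    url = f"http://{host}/branch/{branch}/{taxon_id}.{fmt}"
    params = {k: v for k, v in [("synonyms", synonyms),
                                ("stoprank", stoprank),
                                ("levels", levels)] if v is not None}
    if params:
        url += "?" + urlencode(params)
    return url
```

For example, limiting a subtree to two levels of children would add `?levels=2` to the `.csv` document URL for the subtending taxon.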

Name strings

There are two services based on name "strings":

A name lookup service to facilitate client access to name objects, using a URI of the form

A name-to-taxon resolution service returning the current taxon for a given name, according to AFD/APC.

The name searches also support content negotiation, but note that the constructed URL cannot be guaranteed to return a single object, and taxon name resolution may return entirely different taxa at different times.

All names are case sensitive, and all AFD names above genus currently use UPPER case. For fuzzy matching and case-insensitive parameterised searching, see

LSID equivalents for all URIs are also supported, which can be dereferenced using the TDWG LSID resolver.
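
An LSID is a URN of the form urn:lsid:<authority>:<namespace>:<object>. This sketch pairs a collection record with an LSID equivalent and prefixes it with a resolver URL; the authority, namespace pattern and resolver address are illustrative assumptions, not the documented NSL values.

```python
# Assumed resolver prefix and authority for illustration only.
TDWG_RESOLVER = "http://lsid.tdwg.org/"

def lsid_for(collection, local_id, authority="biodiversity.org.au"):
    """Construct a hypothetical LSID equivalent for a collection record."""
    return f"urn:lsid:{authority}:{collection}.name:{local_id}"

def resolver_url(lsid):
    """Turn an LSID URN into a dereferenceable resolver URL."""
    if not lsid.startswith("urn:lsid:"):
        raise ValueError("not an LSID")
    return TDWG_RESOLVER + lsid
```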

The TAPIR implementation based on the TCS schema has been removed.


Quick Start


To find a name or accepted taxon by case-sensitive exact match, construct a URL of the form:

where "+" or "_" can substitute for [space] in taxon names if necessary.
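
Preparing a name string for that exact-match URL amounts to substituting the separator while preserving case, since the match is case sensitive (and AFD names above genus are upper case). A small sketch, using "Dodonaea viscosa" purely as an illustrative name:

```python
def name_path(name: str, sep: str = "+") -> str:
    """Encode a taxon name for the exact-match lookup URL.
    Spaces may be written as "+" or "_"; case is preserved."""
    if sep not in ("+", "_"):
        raise ValueError("separator must be '+' or '_'")
    return name.strip().replace(" ", sep)
```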

Quick Start: Bulk Data

Complete, point-in-time extracts in XML, RDF and CSV formats are available for each collection at

Quick Start: SPARQL

Example interfaces using SPARQL endpoints can be found at and
a fast, basic Taxon Name Resolution Service for up to 1000 names at:
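
A SPARQL lookup against such an endpoint might be issued as a simple GET request. The endpoint URL and the dwc:scientificName predicate (from the TDWG/Darwin Core vocabularies) are assumptions here; the actual NSL vocabulary terms may differ.

```python
from urllib.parse import urlencode

# Assumed endpoint address for illustration only.
ENDPOINT = "http://example.org/sparql"

# Hypothetical query: find subjects carrying a given scientific name.
QUERY = """
PREFIX dwc: <http://rs.tdwg.org/dwc/terms/>
SELECT ?s WHERE { ?s dwc:scientificName "Dodonaea viscosa" } LIMIT 10
"""

def sparql_get_url(query: str, fmt: str = "application/sparql-results+json") -> str:
    """Build a GET-style SPARQL request URL with the query percent-encoded."""
    return ENDPOINT + "?" + urlencode({"query": query, "format": fmt})
```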



        1. For name searches, we have a TAPIR interface courtesy of Matt Hand. The name-literal-as-pseudo-id is a useful entry point, but it doesn't actually return a document with an "about" of the id you asked for.

          For other entry points, you can use APNI. Any URL of the form is equivalent to the URI

          AFD is a little sneakier. If you navigate to an AFD profile page and look at the source, you will find an afdTaxonGUID and an afdNameGUID div. URIs can be constructed from
          them, such as

          Of course, once it's all working acceptably we should link the human-friendly webapps back to the linked data with proper HTML links. Alternatively, we may alter the page so that the hyperlinks are present, but hidden. That way, web spiders can index them, but it won't confuse people who want to see only human-readable pages.

        2. Pete,

          The plus ("+") in the offending URL was changed to a space, which works.

    1. specificEpithet: fixed (copy & paste bug)
      use of publishedIn: fixed (need a better diagram of the ontology)

    1. The RDF MIME type is application/rdf+xml (RFC 3870).

      On the back end, both the URI and the LSID resolver look at the Accept header and attempt to find a match.

      I'll have to find Kevin's contact details - that's the main problem with this anonymous business. Landcare NZ? done, but our email server is not happy at the moment.

      1. ... Yes, this seems to be correct. This is a bit of a problem: I was hoping to represent XML array-style elements as a combination of an array (for the elements) and properties (for the attributes). You would be able to access the nested elements like so, and also get named properties.

        But, it's simply not valid. Drat. If I fix it, it will break the code of everyone that is currently using it.

        I'll see what I can do.