This page offers recommendations and tips for using changesets in automated business processes.

If you regularly use changesets to keep your copy of LINZ’s data as up to date as possible, you may want to establish automated processes to request the data. Below we provide advice on specifying timestamps and scheduling tasks, and share a simple processing logic example.

Grab the GetFeature URL

For scripted processes you will need the Changeset API GetFeature URL. See our guide, How to use the Changeset generator, for instructions on accessing this URL.
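As an illustration, the sketch below assembles a changeset GetFeature URL in Python. The host path, API key, layer id, and parameter names used here are illustrative assumptions only; in practice, copy the exact GetFeature URL produced by the Changeset generator rather than hand-assembling it:

```python
import urllib.parse

def changeset_url(api_key, layer_id, from_ts, to_ts):
    # Illustrative base URL and parameters; use the URL from the
    # Changeset generator as the authoritative source.
    base = f"https://data.linz.govt.nz/services;key={api_key}/wfs"
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": f"{layer_id}-changeset",
        "viewparams": f"from:{from_ts};to:{to_ts}",
        "outputFormat": "json",
    }
    return base + "?" + urllib.parse.urlencode(params)
```

Your script can then fetch this URL with any HTTP client and apply the returned changeset to your local copy.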

Specifying the WFS version

If you need to change the default WFS version (2.0) to use the Changeset API in your client, see our information on web service versioning and coordinate axis order to avoid any version-related issues.

Specifying timestamps

To avoid missing or duplicating data, we recommend you: 

  • Be consistent in your timestamp specification. For example, if you establish your process using date-only precision (e.g. 2015-06-20), continue on that basis
  • Keep track of your TO timestamp and use it as the FROM timestamp for your next update
  • Remember that the FROM timestamp is exclusive if it is identical to a revision timestamp

See the Dates and Timestamp reference document for information on these elements.

Scheduling your request

In case there is a delay in the publishing of LINZ data on the LDS, we recommend that you:

  • build a time buffer into your scheduled request so it happens after the scheduled publication time
  • schedule a second task to fetch data at a later time/date if the initial request returns no data
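The second recommendation can be sketched as a simple retry wrapper. Here `fetch_changeset` is a placeholder for your own request function; the retry count and wait interval are illustrative values you would tune to your publication schedule:

```python
import time

def fetch_with_retry(fetch_changeset, retries=1, wait_seconds=6 * 3600):
    # Try the scheduled fetch; if it returns no data (e.g. because
    # publication was delayed), wait and try again a limited number
    # of times before giving up until the next scheduled run.
    for attempt in range(retries + 1):
        features = fetch_changeset()
        if features:
            return features
        if attempt < retries:
            time.sleep(wait_seconds)
    return []
```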

Our largest and most regularly updated dataset, Landonline survey and title data, is updated weekly. The updated data is published on the LDS every weekend as a snapshot covering the previous 7 days (Saturday to Friday). To account for any unforeseen delays in publishing, we recommend that you schedule tasks to extract the changeset data on Monday and Tuesday nights, following the logic below.

Processing logic example

When setting up a process to use the Changeset API, we recommend you use the following high-level logic:

1. Create an initial copy of our data, and record the corresponding LDS revision date/time of your local version

  • To replicate our data, either download the full dataset or have the data delivered by courier
  • CSV is our recommended format for importing into a database
  • We recommend using the VRT file in your zip download to keep the data format consistent with the LDS
  • Open the TXT file in your zip download to view the latest update date for the dataset (e.g. 2018-09-09), or note the date/time using the precise time of the changeset (e.g. 2018-09-08T10:34:27.542927Z)

2. When you make a request for the changeset, use the date/time of your last local revision as the FROM timestamp, and the date/time of the start of your processing script as the TO timestamp

  • Record this information in a database table or similar

3. Once the changeset has been successfully applied, update the revision date of your local version to the start date/time of the processing script you specified as the TO timestamp in the step above. This will then become the FROM timestamp for the next changeset request you make.

  • Note: It is legitimate for a changeset to contain no records. As long as the request doesn’t return an error, it simply means no changes occurred in the specified time span. A revision summary is also available to view in the history tab.
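The three steps above can be sketched as follows. `request_changeset` and `apply_changeset` stand in for your own request and load code, and the SQLite table is just one illustrative way to record the revision timestamp in "a database table or similar":

```python
import sqlite3
from datetime import datetime, timezone

SCHEMA = "CREATE TABLE IF NOT EXISTS revision (id INTEGER PRIMARY KEY, ts TEXT)"

def record_initial_revision(conn, ts):
    # Step 1: after loading the full dataset, record its LDS
    # revision date/time as the starting point.
    conn.execute(SCHEMA)
    conn.execute("INSERT OR REPLACE INTO revision (id, ts) VALUES (1, ?)", (ts,))
    conn.commit()

def run_update(conn, request_changeset, apply_changeset):
    # Step 2: FROM = last recorded local revision,
    #         TO   = start time of this processing run.
    from_ts = conn.execute("SELECT ts FROM revision WHERE id = 1").fetchone()[0]
    to_ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    features = request_changeset(from_ts, to_ts)
    # An empty changeset is legitimate: no changes in the time span.
    if features:
        apply_changeset(features)
    # Step 3: only after a successful apply does TO become the
    # FROM timestamp for the next request.
    conn.execute("UPDATE revision SET ts = ? WHERE id = 1", (to_ts,))
    conn.commit()
    return from_ts, to_ts
```

Keeping the revision update as the final step means a failed run leaves the old FROM timestamp in place, so the next run simply requests the same window again.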
Reference Category: Changesets
Reference Tags: Script, Processing, Automation, Schedule update
Last Updated: 2 October 2018