We support multiple types of data uploads. You can upload data manually via the data management interface, or set up real-time data connections using the API. We also provide support for both manual and automatic data uploads.


Please note that Lizard assumes all data to be in UTC.


Your raster data has to be a single-band, georeferenced TIFF (GeoTIFF) that meets the following requirements:

  • The GeoTIFF should have a valid projection, including a transformation (EPSG code). All projections supported by proj4 are supported.

  • The GeoTIFF should have a NODATA value.

  • The GeoTIFF should be single band. RGB or multi-band rasters are not supported.

  • Temporal raster datasets with multiple timesteps should be supplied as a single GeoTIFF per timestamp.
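These requirements can be checked before uploading. The sketch below only shows the checking logic; the metadata values (band count, NODATA, EPSG code) would typically come from a reader such as rasterio or GDAL, which is not assumed here:

```python
def check_raster_requirements(band_count, nodata, epsg_code):
    """Return a list of violations of the GeoTIFF requirements above."""
    problems = []
    if band_count != 1:
        problems.append("must be single band (RGB/multi-band is not supported)")
    if nodata is None:
        problems.append("must define a NODATA value")
    if epsg_code is None:
        problems.append("must have a valid projection (EPSG code)")
    return problems

# A single-band raster with a NODATA value and EPSG:4326 passes all checks.
ok = check_raster_requirements(1, -9999.0, 4326)
# An RGB raster without a NODATA value fails two checks.
bad = check_raster_requirements(3, None, 28992)
```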

Creating and editing a Raster Store

The first step in uploading your raster datasets is to create a Raster Store. This can easily be done using our Data Management app. Follow this step-by-step tutorial to upload a raster dataset:


The Data Management interface is available at: “www.{your_organisation}”.

After landing on this page, please click on ‘Data Management’, then ‘Rasters’. Click on “New Raster” to open the form for new Rasters, or choose an existing raster to edit.

  1. Choose the organisation you’re supplying data for.

  2. Choose the organisations you want to share this dataset with.

  3. Choose the preferred authorisation type (read more).

  4. Give the dataset a name.

  5. Describe your dataset. Make sure to name the source and describe the analysis that resulted in this dataset. Users can read this description in the Lizard Catalog.

  6. Choose how your data should be aggregated. This functionality is only needed when you want to use the Region Analysis mode in Lizard Portal or Lizard API.

  7. Choose the observation type of this dataset.

  8. Choose a preferred color map. Choose “Rescalable” if you want to be able to rescale the color map in Lizard Portal.

  9. Fill in the supplier name. We use your username by default.

  10. You can fill in a supplier code for your own administration.

  11. If you’re supplying a temporal dataset, choose “Raster Series”. Next, fill in the interval of the dataset.

  12. Click submit. You have now created the Raster Store and are all set up to supply your GeoTIFFs using the upload button.
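The form steps above roughly correspond to fields on the rasters endpoint of the Lizard API. The sketch below only assembles a request payload; the field names used here (name, description, observation_type, temporal, interval, supplier_code) are illustrative assumptions, so check your portal's API schema for the exact names:

```python
def build_raster_store_payload(name, description, observation_type,
                               temporal=False, interval=None,
                               supplier_code=None):
    """Sketch: build a JSON payload for creating a Raster Store via the API.

    Field names are assumptions based on the form steps above, not a
    definitive schema.
    """
    payload = {
        "name": name,                          # step 4
        "description": description,            # step 5
        "observation_type": observation_type,  # step 7
        "temporal": temporal,                  # True for a "Raster Series" (step 11)
    }
    if temporal and interval:
        payload["interval"] = interval         # e.g. "00:05:00" for 5-minute data
    if supplier_code:
        payload["supplier_code"] = supplier_code  # step 10, optional
    return payload

payload = build_raster_store_payload(
    "Rainfall radar", "5-minute rainfall, source: radar composite",
    observation_type=233, temporal=True, interval="00:05:00")
```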


You can supply your GeoTIFFs in multiple ways:

  • Use the Data Management App

  • Use the Lizard API

  • Use the Lizard FTP

Use of the Data Management App is fairly straightforward and is built upon our API. If you want to upload larger raster datasets, please make use of our FTP server. Questions about FTP server access can be sent to

Using the Data Management App

Once you have successfully created a Raster Store, you will see the pop-up below.


Choose upload data to browse for your GeoTIFFs. When you want to add data to an existing Raster Store, click on the upload icon in the list of existing Raster Stores.

You can supply multiple rasters; Lizard will blend them together! Click “Save all Files” to start uploading your data. Your GeoTIFFs will be uploaded in a task. You can follow the status of the task by clicking “show asynchronous task”.


Using the Lizard API

Below you find an example of how to upload a temporal GeoTIFF in Python:

import requests

from localsecret import username, password

headers = {"username": username, "password": password}


def post_geotiff_to_lizard(endpoint, file_path, timestamp=None):
    """Post a (temporal) geotiff to a Lizard raster endpoint."""
    url = "{}data/".format(endpoint)
    file = {"file": open(file_path, "rb")}
    if timestamp:
        data = {"timestamp": timestamp}
        r = requests.post(url, data=data, files=file, headers=headers)
    else:  # use this branch to send data to non-temporal endpoints
        r = requests.post(url, files=file, headers=headers)
    return r


uuid = "b73189fc-058d-4351-9b20-2538248fae4f"
# The base URL below is an example; replace it with your own portal's API URL.
endpoint = "https://demo.lizard.net/api/v4/rasters/{}/".format(uuid)
file_path = "local_geotiff.tif"
timestamp = "2020-01-01T00:00:00Z"

response = post_geotiff_to_lizard(endpoint, file_path, timestamp)

Time Series


Time series should always be linked to one of the vector data models listed here.

Time series can be imported manually, by uploading a csv file to

Time series can be uploaded through a 4-column CSV. Select both the organisation you want to upload to and the asset type the time series belongs to (e.g. groundwater station). The CSV should not contain a header.

Example with headers

  timestamp,unit id/name,value,asset id
  2012-10-26T09:22:35Z,waterheight,1.20,GWS-001
  2012-10-26T09:32:35Z,waterheight,1.25,GWS-001

(values are illustrative)
The columns should contain:

  • timestamp: a timestamp in iso8601 format.

  • unit id/name: the name of the observation type; this will also become the time series name.

  • value: value as either a float or integer number.

  • asset id: either an asset uuid or a supplier_code (an identifier for an asset, unique within your organisation), in case the assets have been added with a code under the [columns] section.

Since a csv should not contain a header, your csv should look like this:

Example without headers

  2012-10-26T09:22:35Z,waterheight,1.20,GWS-001
  2012-10-26T09:32:35Z,waterheight,1.25,GWS-001
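A headerless 4-column CSV like this can be produced with Python's standard csv module; a minimal sketch with illustrative names and values:

```python
import csv
import io

# Rows of (timestamp in ISO 8601, unit id/name, value, asset id).
# Names and values are illustrative.
rows = [
    ("2012-10-26T09:22:35Z", "waterheight", "1.20", "GWS-001"),
    ("2012-10-26T09:32:35Z", "waterheight", "1.25", "GWS-001"),
]

buf = io.StringIO()
writer = csv.writer(buf)  # comma field separator, no header row
writer.writerows(rows)
csv_text = buf.getvalue()
```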

New SFTP users need to generate a Personal API Key with “FTP” scope to authenticate. Provide your username and use the API Key as password.

Users that existed before January 2021 can keep using their username and password. Future changes to the passwords will not be reflected in the FTP password.

Supported data formats

Via SFTP we support the CSV format.

Every supplier has its own directory on the SFTP. It can be accessed by logging in with the Lizard credentials.

As soon as a new file is uploaded to the SFTP server, it will be automatically recognised and processed by Lizard. After processing the file is moved to a backup for a limited period of time.

When a file is rejected, the supplied file is moved to the directory ‘rejected’ and a message is sent to the supplier’s Inbox. In the Inbox, a supplier can see the status of their supplied files.


Use CSV for supplying timeseries data with numerical or textual values according to the following format:

<timestamp>,<timeseries_supplier_code or uuid>,<value>[\n]
<timestamp>,<timeseries_supplier_code or uuid>,<value>[\n]
<timestamp>,<timeseries_supplier_code or uuid>,<value>[\n]


  • timestamp: time in UTC in ISO 8601 format, for example 2012-10-26T09:22:35Z. Supplying timestamps in a different timezone is only allowed when the UTC offset is added to the timestamp according to ISO 8601, for example 2012-10-26T07:22:35+02.

  • timeseries_supplier_code or UUID: supplier_code attribute of timeseries as registered by administrator/supplier or the UUID of the timeseries object.

  • value: numerical or textual value.

  • [\n]: newline character.
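A minimal sketch of formatting rows according to this specification (the supplier code is illustrative):

```python
from datetime import datetime, timezone

def format_row(timestamp, code, value):
    """Render one CSV line: <timestamp>,<timeseries_supplier_code or uuid>,<value>."""
    # ISO 8601 timestamp in UTC with a trailing 'Z'.
    ts = timestamp.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return "{},{},{}\n".format(ts, code, value)

row = format_row(datetime(2012, 10, 26, 9, 22, 35, tzinfo=timezone.utc),
                 "GWmonitoring_04", 1.23)
```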

CSV requirements:

  • CSV file size may not exceed 100 MB. For one timeseries with a measuring frequency of 1 second that would be around 1 month of data.

  • Every supplied file should contain new measurements. It is not allowed to add measurements to a previously supplied file.

  • Use the standard CSV format where the field separator is a comma (,) and the decimal separator a period (.).

Error handling

When a file is in the wrong format, authorisation fails, or a value type is not valid:

  • File is moved to ‘rejected’ directory of supplier

  • An error is logged

  • A message is sent to the Inbox of the supplier


Timeseries data can be supplied with a POST request to the timeseries data endpoint in the API (<baseurl>/api/v4/timeseries/{uuid}/data/). Interaction with the API can be done from e.g. Postman or Python. User credentials should be included in the header and the data in the payload of the request.

Value based timeseries

This type of timeseries consists of integers, floats, float arrays or text. The body of the request is a JSON object with timestamps and values:

{
    "data": [
        {"datetime": "2019-07-01T01:30:00Z", "value": 40.7},
        {"datetime": "2019-07-01T02:00:00Z", "value": 39.1}
    ]
}
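Assembling and posting such a body in Python might look like the sketch below; the URL and credentials are placeholders:

```python
def build_body(pairs):
    """Turn (datetime, value) pairs into the JSON body for the data endpoint."""
    return {"data": [{"datetime": d, "value": v} for d, v in pairs]}

body = build_body([("2019-07-01T01:30:00Z", 40.7),
                   ("2019-07-01T02:00:00Z", 39.1)])

# With the requests library (credentials in the header, data in the payload):
# requests.post("<baseurl>/api/v4/timeseries/{uuid}/data/", json=body,
#               headers={"username": "...", "password": "..."})
```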

File based timeseries

This type consists of images, movies or files. A single file is posted at a certain datetime, which is included in the header of the request.

An example of an upload of an image using requests in Python:

import requests
import datetime as dt

now = dt.datetime.utcnow()
uuid = '385c08c5-a0cf-4097-a98f-b6f053ef32c6'
# The base URL below is an example; replace it with your own portal's API URL.
url = 'https://demo.lizard.net/api/v4/timeseries/{}/data/'.format(uuid)
data = open('./x.png', 'rb').read()
res = requests.post(url, data=data, headers={
    'Content-Type': 'image/png',
    'datetime': now.strftime('%Y-%m-%dT%H:%M:%S.%fZ'),
    'username': 'jane.doe',
    'password': 'janespassword'
})


We support vector synchronisation. This type of data feed has to be configured per customer. Changes in location names, coordinates and new locations can be seen in Lizard as soon as the following day.

Upload vectors as a shapefile

Assets can be uploaded to Lizard with shapefiles via the import form at <base-url>/import. These shapefiles contain information about assets or assets together with their nested assets (e.g. GroundwaterStations and their Filters).

A shapefile can be uploaded as a zipped archive. The zipfile should contain at least a .dbf, .shp, .shx and a .ini file. In case of nested assets, these should be found in the same shapefile record (row) as their parent assets. The following section provides an example of an .ini file for groundwater stations.
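Packaging these files into a zip archive can be done with Python's standard library; a small sketch with illustrative file names and placeholder contents (in practice you would read your real shapefile parts from disk):

```python
import io
import zipfile

# Illustrative file names and placeholder contents.
parts = {
    "stations.shp": b"...",  # geometries
    "stations.shx": b"...",  # shape index
    "stations.dbf": b"...",  # attribute table
    "stations.ini": b"[general]\nasset_name = GroundwaterStation\n",
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    for name, content in parts.items():
        zf.writestr(name, content)

# Inspect the archive we just built.
with zipfile.ZipFile(buf) as zf:
    names = set(zf.namelist())
```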

Assets without nested assets

An .ini file is used to map shapefile attributes to Lizard database tables, organisations and attributes. An .ini file consists of three sections:

  • [general]: indicates asset name to upload to and optionally organisation uuid.

  • [columns]: maps lizard columns to shapefile columns

  • [default]: optionally provide default values for columns
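Put together, a minimal .ini for assets without nested assets might look like this (reconstructed from the column names used in this section; adjust the shapefile column names to your own data):

```ini
[general]
asset_name = GroundwaterStation

[columns]
code = ID_1
name = NAME
surface_level = HEIGHT

[default]
frequency = daily
scale = 1
```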

This example .ini creates a new asset from each record of the shapefile, with:

  • A code taken from the ID_1 column of the shapefile;

  • A name taken from the NAME column of the shapefile;

  • A surface_level taken from the HEIGHT column of the shapefile;

  • A frequency that defaults to daily;

  • A scale that defaults to 1, which means this asset can be seen at world scale, when the asset-layer in Lizard-nxt is configured accordingly.

Assets with nested assets

In case of nested assets another section should be added to the .ini file:

  • [nested]: maps lizard columns to shapefile columns, it is possible to add multiple nested assets for one asset.

A groundwater station with filters (its nested assets) would look like this:

[general]
asset_name = GroundwaterStation
nested_asset = Filter

[columns]
code = ID_1
name = NAME
surface_level = HEIGHT

[default]
frequency = daily
scale = 1

[nested]
first = 2_code
fields = [code, filter_bottom_level, filter_top_level, aquifer_confiment, litology]

The [nested] categories describe:

  • first: indicates the first column in the shapefile that maps lizard columns to shapefile columns. This column and all columns to its right configure nested assets. The number of these columns should be a multiple of the number of fields; the multiplier is the maximum number of nested assets per asset.

  • fields: lizard-nxt fields. Each column in the shapefile (starting with the ‘first’ column) is mapped to these fields in order, without considering the shapefile column names.

This example .ini creates (a) new nested asset(s) from each record of the shapefile, with:

  • A link to a parent asset that conforms to the asset described in Assets without nested assets.

  • A code taken from the 2_code column of the shapefile.

  • A filter_bottom_level taken from the column directly to the right of the 2_code column.

  • A filter_top_level taken from the column 2 columns to the right of the 2_code column.

  • An aquifer_confiment taken from the column 3 columns to the right of the 2_code column.

  • A litology taken from the column 4 columns to the right of the 2_code column.

Every next group of 5 columns (the number of fields) configures an additional nested asset in the same way.

You can copy-paste this code into your own .ini file and zip it together with the shapefile.
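The column-to-nested-asset mapping described above can be sketched in Python: starting at the ‘first’ column, every consecutive group of len(fields) columns becomes one nested asset. The record values below are illustrative:

```python
def split_nested_columns(row, first_index, fields):
    """Group the trailing shapefile columns into one dict per nested asset."""
    n = len(fields)
    nested = []
    trailing = row[first_index:]
    for i in range(0, len(trailing), n):
        chunk = trailing[i:i + n]
        # Skip empty groups: a record may have fewer nested assets than the maximum.
        if len(chunk) == n and any(v not in (None, "") for v in chunk):
            nested.append(dict(zip(fields, chunk)))
    return nested

fields = ["code", "filter_bottom_level", "filter_top_level",
          "aquifer_confiment", "litology"]
# Illustrative record: ID_1, NAME, HEIGHT, then two filters of five columns each.
row = ["GWS-001", "Station 1", 3.2,
       "F1", -10.0, -8.0, "confined", "sand",
       "F2", -20.0, -18.0, "confined", "clay"]
filters = split_nested_columns(row, first_index=3, fields=fields)
```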


SUF-HYD files can be imported manually, by uploading a file to

We do not yet support GWSW-Hyd.

The description of SUF-HYD files can be found here:

Data downloads


Downloading rasters via the Lizard portal is possible but limited. The current limit is 1 million by 1 million pixels per download. Export is only possible when you are zoomed in far enough, depending on the resolution of the specific raster.

Select a raster from the datalayers menu to the right. Zoom in to the required extent. Click the export button, and click on the Rasters tab in the Export Data window. Select the required projection and cell size. Click on Start Export. When the raster export is done, a download link will be supplied via the Lizard inbox.


Lizard supports two types of timeseries. There are timeseries connected to a location, and there are timeseries in the form of rasters.

Using the datalayers menu to the right, select your source for a timeseries. Select the point or points of which you want to download the timeseries. You can start the Export directly from the map view, or you can switch to the Graph view. After clicking on Export, a new window will pop up. Using the timeseries (or timeseries from raster) option you can select the period for which you want an export. If the selected point has more than one timeseries, you can select which one you want to export. Make your selection, and click on the Start Export button. When the export is finished, a download link will be supplied via the Lizard inbox.