
Query Language

To visualize data from Enapter Cloud you need to use a simple query language based on YAML.

This is what a typical request would be like:

telemetry:
  - device: YOUR_DEVICE
    attribute: YOUR_TELEMETRY
    granularity: $__interval
    aggregation: auto

H2 Sensor

Remember the fictional hydrogen sensor introduced in the blueprint tutorial? Let us query its readings.

Device

First, we need to know the UCM ID of the sensor. You can find it on the device page in the Enapter Cloud web interface. Usually a UCM ID looks like this: CB748D2BD5D044995A0FD76F551F1AABF7384858.

Requesting data for multiple devices is currently limited to 10 devices per request. If you need to graph data about more devices on a single panel, use multiple Grafana queries.

Telemetry

Second, we need to know the names of the metrics of the sensor. Blueprint developers describe device telemetry in manifest.yml, so let us take a look at the telemetry section of the sensor's manifest:

# Sensors data and internal device state, operational data
telemetry:
  # Telemetry attribute reference name
  h2_concentration:
    # Attribute type, one of: float, integer, string
    type: float
    # Unit of measurement
    unit: "%LEL"
    display_name: H2 Concentration

There is a single metric of type float called h2_concentration. Exactly what we need.

The fictional H2 sensor has only one metric, but other devices are likely to have many. You can request any number of metrics as long as they are declared in the manifest and there is relevant data in the Enapter Cloud.

Result

Now we have all the components required to build the request:

telemetry:
  - device: CB748D2BD5D044995A0FD76F551F1AABF7384858
    attribute: h2_concentration
    granularity: $__interval
    aggregation: auto

This should be enough for most use cases. If you would like to know what granularity and aggregation are, keep reading.

Granularity and Aggregation

Say we want to request one day of sensor readings. How much data will the response contain?

Time                 Value
2022-07-10 09:00:00  0.00005
2022-07-10 09:00:01  0.00006
2022-07-10 09:00:02  0.00006
...                  ...
2022-07-11 09:00:00  0.00005
2022-07-11 09:00:01  0.00005
2022-07-11 09:00:02  0.00004

If the sensor sends one data point per second, then there will be 24 * 60 * 60 = 86400 data points per day.

You probably do not need this many.

If the size of a point on a graph is 1 px and you have a 4K display, your computer cannot draw a horizontal line longer than 4096 points. Furthermore, people tend to view several graphs side by side, so panels are usually about half that width. This means you could request 86400 / (4096 / 2) ≈ 42 times fewer data points and still get a beautiful graph. And that is before accounting for the padding around panels!
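The arithmetic above can be sketched in a few lines. The display and panel widths are illustrative assumptions, not parameters of the query language:

```python
# Rough estimate of how many raw points a panel can usefully display.
# Assumes a 1 Hz sensor, a 4096 px wide 4K display, and panels half
# the screen wide; these numbers are illustrative only.

seconds_per_day = 24 * 60 * 60   # 86400 raw data points per day
panel_width_px = 4096 // 2       # half of a 4K display: 2048 px

reduction_factor = seconds_per_day / panel_width_px
print(round(reduction_factor))   # ≈ 42x fewer points, no visible loss
```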


To load dashboards faster, we need to reduce the amount of data processed and transferred over the network.

This can be done by:

  1. Grouping data points by some time interval (e.g. 1m);
  2. Applying an aggregation function to each group (e.g. max);
  3. Returning only the result of aggregation.
Time                 Max Value out of 60
2022-07-10 09:00:00  0.00006
2022-07-10 09:01:00  0.00005
2022-07-10 09:02:00  0.00006

Such a group of data points is called a time bucket.

granularity is a time interval that defines the size of a time bucket, and aggregation is the function applied to the data points whose timestamps fall into that interval.
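The bucketing and aggregation steps can be sketched as follows. This is a simplified illustration of the idea, not Enapter Cloud's actual implementation; it mirrors the max-over-60-seconds example above:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def bucket_max(points, granularity):
    """Group (timestamp, value) points into time buckets of the given
    granularity and keep only the max value of each bucket."""
    step = granularity.total_seconds()
    buckets = defaultdict(list)
    for ts, value in points:
        # Truncate the timestamp down to the start of its bucket.
        bucket_start = datetime.fromtimestamp(ts.timestamp() // step * step)
        buckets[bucket_start].append(value)
    return {start: max(values) for start, values in sorted(buckets.items())}

points = [
    (datetime(2022, 7, 10, 9, 0, 0), 0.00005),
    (datetime(2022, 7, 10, 9, 0, 30), 0.00006),
    (datetime(2022, 7, 10, 9, 1, 15), 0.00005),
]
print(bucket_max(points, timedelta(minutes=1)))
```

Three raw points collapse into two one-minute buckets, each carrying only its maximum.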

$__interval is a special value of granularity that means "use a time interval such that the number of returned data points is no more than the width of a panel in pixels".
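A minimal sketch of how such an interval could be derived from the time range and panel width. Grafana's actual logic additionally snaps to "nice" intervals (30s, 1m, 5m, ...), which is omitted here:

```python
import math

def interval_seconds(time_range_seconds, panel_width_px):
    """Smallest bucket width (in seconds) such that the whole time
    range fits into at most panel_width_px buckets."""
    return math.ceil(time_range_seconds / panel_width_px)

# One day of data on a 2048 px wide panel:
print(interval_seconds(86400, 2048))  # 43 — rounded up to e.g. 1m in practice
```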

Currently supported aggregation functions are:

  • avg — calculate arithmetic mean;
  • last — use last known value;
  • auto — select either avg or last depending on the data type.
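One plausible way auto could be resolved is sketched below. This is a hypothetical illustration based on the attribute types from the manifest; the real selection logic in Enapter Cloud may differ:

```python
def auto_aggregation(attribute_type):
    """Hypothetical 'auto' resolution: averaging makes sense for
    numeric attributes, while a string can only carry its last
    known value."""
    if attribute_type in ("float", "integer"):
        return "avg"
    return "last"  # e.g. string attributes

print(auto_aggregation("float"))   # avg
print(auto_aggregation("string"))  # last
```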

Gap Filling

Sometimes data sorted into time buckets can have gaps. This can happen if you have irregular sampling intervals, or you have experienced an outage of some sort. Gaps might make data analysis difficult, e.g. you cannot sum two timeseries if one of them contains a time bucket that has no data at all.

You can use gap filling to create additional rows of data in any gaps, ensuring that the returned rows are in chronological order, and contiguous.

Currently the only supported gap filling method is last observation carried forward (locf).

LOCF

locf fills the gaps using the last observed data point value:

[Figures: the same series rendered without LOCF and with LOCF applied]
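The carry-forward logic itself is simple; here is a minimal sketch operating on a list of aggregated buckets where None marks an empty bucket:

```python
def locf(buckets):
    """Fill empty time buckets (None) with the last observed value.
    Buckets are assumed chronological; leading gaps stay empty
    because there is nothing yet to carry forward."""
    filled, last_seen = [], None
    for value in buckets:
        if value is None:
            value = last_seen
        else:
            last_seen = value
        filled.append(value)
    return filled

print(locf([None, 0.00005, None, None, 0.00006]))
# → [None, 0.00005, 0.00005, 0.00005, 0.00006]
```

Note that the first bucket stays empty — exactly the situation the look_around parameter described below is designed to mitigate.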

Specify gap_filling.method in your query to enable locf:

telemetry:
  - device: CB748D2BD5D044995A0FD76F551F1AABF7384858
    attribute: h2_concentration
    granularity: $__interval
    aggregation: auto
    gap_filling:
      method: locf

Because locf relies on values before each time bucket to carry forward, it may not have enough data to fill the first bucket if that bucket contains no value. This can happen, for example, if the start of the query's time range falls in the middle of a gap.

To mitigate an empty first time bucket, you can use the optional look_around parameter, which configures how far into the past to look for values outside the specified time range.

telemetry:
  - device: CB748D2BD5D044995A0FD76F551F1AABF7384858
    attribute: h2_concentration
    granularity: $__interval
    aggregation: auto
    gap_filling:
      method: locf
      look_around: 10m
© 2022 Enapter