Gain Insights from Queries

The Tricloud Nexus Query Editor is a powerful tool designed to help users query, analyze, and visualize data effortlessly.

This guide will help you quickly get started with using the Query Editor of Tricloud Nexus to analyze and gain insights from data efficiently.


Introduction

The Tricloud Nexus Query Editor is a powerful tool designed to help users query and analyze data effortlessly. Built on the Azure Data Explorer Query UI, it uses KQL (Kusto Query Language) for efficient data access and exploration.

Learning Kusto Query Language (KQL)

Here are some links to help you get started learning the Kusto Query Language (KQL).
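
As a quick taste of the language, a KQL query is a pipeline: it starts from a table and chains operators together with the | character. Here is a minimal sketch (the table and column names follow the examples later in this guide):

```
// Start from a table, filter rows, and preview a sample of the result
Measurements
| where StartTimestamp > ago(1h)   // only rows less than an hour old
| take 10                          // preview 10 rows
```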


Accessing the Query Editor

  1. Log in to Tricloud Nexus:

    • Navigate to the Tricloud Nexus platform using your browser.

    • Enter your credentials and click Login.

  2. Open the Query Editor:

    • Click on the Query Editor tab located in the navigation menu.

  3. Select the Database:

    • In the Query Editor, your workspace is pre-configured to access the Azure Data Explorer database.

    • Ensure you have the necessary permissions to query the database.

    • Select the database as seen in the example screenshot above, and notice that the selected database is shown above the query editor using the convention cluster/databasename.

    • In the example above the selection is tciotadxcluster.westeurope/TimeSeriesSandbox02, which may differ depending on your installation.


Creating your first Query

  1. Familiarize yourself with KQL:

  2. Build Your Query:

    • Use the query editor interface to type your KQL commands.

    • Example:

    • This Query will show the names of all Asset Hierarchies that have been deployed to a device, sorted alphabetically by the name of the Asset Hierarchy:

      AssetHierarchy
      | distinct HierarchyName
      | order by HierarchyName asc
  3. Preview your Query:

    • Click Run to execute the query and preview the results in the Results panel.


Database Tables

The database hosts several tables that are crucial for exploring and analyzing your data. This section provides an overview of the most commonly used tables to help you get started.

You can view the database structure by expanding the nodes in the Database Explorer. Among the available tables, the most relevant ones include:

  • AssetHierarchy: This Table contains information from Asset Hierarchies that have been deployed to a Device. The Table details the organizational structure of deployed assets, featuring the latest version of each node, whether it is an Area or an Asset within any Asset Hierarchy.

  • AssetHierarchyMetadata: This Table contains metadata from Asset Hierarchies that have been deployed to a device. The Table provides metadata for all Areas/Assets and Tags that have been configured for Asset Hierarchies.

  • Measurements: Stores time-series measurements and events associated with your assets. The Table contains the actual metrics that have been collected from your devices.

All timestamps in any Timestamp column are represented in ISO 8601 format as UTC unless otherwise specified.
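
If you need timestamps in a local time zone for display, KQL can convert from UTC. A minimal sketch; the time zone name below is an example and should be replaced with your own:

```
Measurements
| take 10
| extend LocalTime = datetime_utc_to_local(StartTimestamp, 'Europe/Copenhagen')
```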


AssetHierarchy Table

This Table contains information from Asset Hierarchies that have been deployed to a Device. The Table details the organizational structure of deployed assets, featuring the latest version of each node, whether it is an Area or an Asset within any Asset Hierarchy.

Running this query gives you a list of the latest available Asset Hierarchies in the Database.

AssetHierarchy
| distinct HierarchyName, HierarchyId, HierarchyVersion
| order by HierarchyName asc

Beneath is an example result of running the query

Now that we can see all available Asset Hierarchies in the Table, we can refine our search to show only the nodes of a specific Asset Hierarchy. Running the following query will show the nodes of the Asset Hierarchy called "Odense Factory".

The Query only includes some of the available columns, using the project operator.

AssetHierarchy
| where HierarchyName == "Odense Factory"
| project HierarchicalName, Type, Description, IsDeployed, DeploymentTimestamp, DeviceId
| order by HierarchicalName asc

Here is an example result of running the query

Notice that ordering the query result by HierarchicalName displays the result exactly as it was modelled in Tricloud Nexus in that version. Also notice that you can see whether a node has been deployed, when it was deployed, and the device that was targeted for the deployment.


AssetHierarchyMetadata Table

This Table contains metadata from Asset Hierarchies that have been deployed to a device. The Table provides metadata for all Areas/Assets and Tags that have been configured for Asset Hierarchies.

All metadata for an Area/Asset or Tag that has been deployed to a device can be found by running the query beneath. The query displays all metadata Key/Value objects for all available Asset Hierarchies. It removes some less important columns from the result (Id, IngestionTime, DataType) and then orders the result by the Type of metadata.

AssetHierarchyMetadata
| project-away Id, IngestionTime, DataType
| order by Type asc

Here is an example result of running the query

Notice that the first 2 rows are metadata about an Area that seemingly sets a GPS coordinate for the Area. The rest of the rows are metadata about a Tag, such as uom (Unit of Measure), description, ranges, etc.
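
Building on the query above, you can filter the metadata down to a single key. The sketch below keeps only the uom (Unit of Measure) entries mentioned above; adjust the key to whatever metadata your hierarchies define:

```
AssetHierarchyMetadata
| where Key == "uom"
| project-away Id, IngestionTime, DataType
| order by Type asc
```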

You can combine an entire Asset Hierarchy with the available metadata for all Areas/Assets or Tags in a single query by joining the AssetHierarchy and AssetHierarchyMetadata Tables. The query combines all metadata available for a specific node into a JSON document (Key/Value) and stores it in the Metadata column.

AssetHierarchy
| join kind=leftouter AssetHierarchyMetadata on Id
| where HierarchyName == "Dallas Factory"
| project Id, Name, HierarchicalName, MetadataKey = Key, MetadataValue = Value, DeploymentTimestamp
| summarize Metadata = make_bag(pack(MetadataKey, MetadataValue)) by HierarchicalName
| order by HierarchicalName asc

Result of running Query


Measurements Table

The Measurements Table stores time-series measurements and events associated with your assets. The Table contains the actual metrics that have been collected from the devices.

The following Query will find all available metrics/measurements that have a StartTimestamp between December 17 and December 20 (UTC) for the Tag with the HierarchicalName OD.Printing.Line.Temperature, and then order the result by StartTimestamp descending:

Measurements
| where StartTimestamp between (datetime('2024-12-17T00:00:00') .. datetime('2024-12-20T00:00:00'))
| where HierarchicalName contains "OD.Printing.Line.Temperature"
| order by StartTimestamp desc

Result of running the Query

The result shows the structure of the Measurements Table. The Table has the following Columns:

  • Id - The Id of the Tag that governs measurements

  • HierarchicalName - ISA95 name for the Tag that governs the measurements

  • TagName - Shorthand name of the Tag for the measurements

  • TimeGenerated - The UTC time the Measurement was generated at the Data Collector

  • StartTimestamp - The UTC start time of the Measurement

  • EndTimestamp - The UTC end time of the Measurement. Many measurements do not have an end time specified, denoting that the measurement does not have a timespan.

  • Type - Measurement type. Can be either Analog, Digital or String. The Type depends on the Tag type that was configured in the Asset Hierarchy.

  • Value - The value of the measurement as a real data type. This will likely only be set for measurements of type Analog or Digital.

  • ValueDigital - The value of the measurement as a boolean data type. This will likely only be set for measurements of type Digital and Analog. The database will automatically try to convert the value of the measurement to a boolean, converting 1 to true and 0 to false.

  • ValueString - The value of the measurement as a string data type. This will likely only be set for measurements of type String. This is a very flexible data type and can be used for metrics that carry a complex data type such as JSON.

  • Quality - The quality of the measurement. The quality of each measurement is set by the data collector on the device that is used to collect the measurement from the data source. If a given measurement has a quality other than the value good, it is not recommended to use the measurement for training an AI model.
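
Since the value of a measurement lands in a different column depending on its Type, a query that mixes Tag types can coalesce them into one display column. A minimal sketch based on the column definitions above:

```
Measurements
| where StartTimestamp > ago(1d)
// String measurements carry their value in ValueString; Analog/Digital in Value
| extend DisplayValue = iif(Type == "String", ValueString, tostring(Value))
| project HierarchicalName, StartTimestamp, Type, DisplayValue, Quality
| take 100
```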

The Measurements Table is indexed on the StartTimestamp column, which significantly improves query performance for queries that start by filtering data on this column. Example:

Measurements
| where StartTimestamp > ago(2d)


Basic Queries

In this section, we'll cover some of the basic operators used in KQL queries to help you begin exploring data in Tricloud Nexus.


Where, order by, take operators

Here's a query example showing how to filter measurements using the where operator on both timespan and HierarchicalName. The order by operator sorts these measurements in ascending order based on StartTimestamp. The take operator limits the output to the first 1000 rows (alternatively, you can use the limit operator).

Measurements
| where StartTimestamp > ago(1d)
| where HierarchicalName has "DA.IM.CoolingStorage.Temp"
| order by StartTimestamp asc
| take 1000

Here is the result of running the query:

The query above filters data on time using the ago(1d) expression. The following examples are alternative ways of filtering by time:

| where StartTimestamp > ago(1h) // Include measurements less than an hour old
| where StartTimestamp > ago(1m) // Include measurements less than a minute old
| where StartTimestamp > datetime('2025-01-16T12:00:00.000000Z') // Include measurements newer than January 16th 2025 12:00 UTC

Project, project-away operators

This query example shows how to query Measurements, using the project operator to include only the columns HierarchicalName, TagName, StartTimestamp and Value in the result.

Measurements
| where StartTimestamp > ago(1d)
| where HierarchicalName has "DA.IM.CoolingStorage.Temp"
| order by StartTimestamp asc
| project HierarchicalName, TagName, StartTimestamp, Value
| limit 1000

Here is the result of running the query:

The project-away operator can be used to exclude columns from the result, instead of using the project operator to explicitly include columns.

Measurements
| where StartTimestamp > ago(1d)
| where HierarchicalName has "DA.IM.CoolingStorage.Temp"
| order by StartTimestamp asc
| project-away Id, TimeGenerated, EndTimestamp, Type, ValueDigital, ValueString
| limit 1000

Here is the result of running the query:


Extend operator

The extend operator is used below to generate new columns in the result and calculate row values for the extended columns.

Measurements
| where StartTimestamp > ago(1d)
| where HierarchicalName has "DA.IM.CoolingStorage.Temp"
| project StartTimestamp, HierarchicalName, Value
| extend AdxRowInsertTime = ingestion_time()
| extend DurationDifference = AdxRowInsertTime - StartTimestamp
| extend FormattedDuration = format_timespan(DurationDifference, 'ddd.h:mm:ss [fffffff]')
| take 1000

The query above starts by finding measurements less than a day old from the CoolingStorage.Temp sensor. It then projects only the StartTimestamp, HierarchicalName and Value columns. It extends the result with 3 new columns.

  • AdxRowInsertTime will contain the exact UTC timestamp at which the row was created in the Database.

  • DurationDifference subtracts the StartTimestamp from the AdxRowInsertTime and stores the resulting timespan in the column.

  • FormattedDuration formats the timespan data type of the DurationDifference column and stores a string representation of the timespan.

Here is the result of running the query:

Intermediate Queries

In this section, we'll cover some intermediate operators used in KQL queries to help you continue exploring data in Tricloud Nexus.


Let, join operators

The query below retrieves data related to a specific asset hierarchy node ("DA.IM.P103.Line.OrderCompletion") from the AssetHierarchy and Measurements Tables. It enriches the measurement data with metadata by joining it with the AssetHierarchyMetadata Table and organizes the metadata into a structured bag format. The result of the first part of the query is stored as a variable called metadata using the let operator. The second part then joins metadata with Measurements. The query filters measurements from the last 50 days, calculates the delivered items from dynamic properties, and selects relevant fields, limiting the output to 1,000 records sorted by the start timestamp. Using the todynamic() function lets you parse the contents as JSON and reference properties of the JSON, which is done in the query in the line toint(properties.deliverQuantity).

let metadata = AssetHierarchy
| where HierarchicalName has "DA.IM.P103.Line.OrderCompletion"
| join kind=leftouter AssetHierarchyMetadata on Id
| where HierarchicalName has "DA.IM.P103.Line.OrderCompletion"
| project Id, Name, HierarchicalName, MetadataKey = Key, MetadataValue = Value, Type
| summarize Metadata = make_bag(pack(MetadataKey, MetadataValue)) by HierarchicalName, Type
| order by HierarchicalName asc
| project HierarchicalName, Type, Metadata;
Measurements
| where StartTimestamp > ago(50d)
| where HierarchicalName has "DA.IM.P103.Line.OrderCompletion"
| join kind=leftouter metadata on HierarchicalName
| extend properties = todynamic(ValueString)
| extend DeliveredItems = toint(properties.deliverQuantity)
| project StartTimestamp, HierarchicalName, DeliveredItems, Quality, Metadata
| order by StartTimestamp asc
| take 1000

Here is the result of running the Query:


Summarize operator

The summarize operator can be used to produce a new table that aggregates certain values of the input table.

The following Query filters the Measurements to include all Measurements from the Line (regardless of Tag name); it then uses the summarize operator to group the measurements by HierarchicalName and perform a count() on each group.

Measurements
| where HierarchicalName startswith "DA.IM.P103.Line"
| summarize Count = count() by HierarchicalName

Here is the result of running the Query:
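
summarize can also compute several aggregates in a single pass. For example, counting the measurements per Tag together with the first and last time a measurement was seen:

```
Measurements
| where HierarchicalName startswith "DA.IM.P103.Line"
| summarize Count = count(), FirstSeen = min(StartTimestamp), LastSeen = max(StartTimestamp) by HierarchicalName
```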

The following Query produces a time chart showing the average number of items delivered every 1 day (grouped by HierarchicalName) between December 1, 2024, and December 20, 2024. It first defines parameters for the query using the let operator. It then selects all measurements for the Tag called "DA.IM.P103.Line.OrderCompletion" and filters its measurements in the timespan from startTime to endTime. It converts the payload of the Tag (ValueString) into a dynamic, to parse the value of the JSON property deliverQuantity into an integer. It uses the summarize operator to group all measurements into bins of 1 day, using the bin operator on the binning column StartTimestamp with the timespan of each group given by totimespan(resolution). Finally it projects the columns StartTimestamp and AverageItemsDelivered and rounds AverageItemsDelivered to 2 decimals. The render timechart operator draws the result as a time chart.

let startTime = datetime("2024-12-01 00:00:00");
let endTime = datetime("2024-12-20 00:00:00");
let resolution = 1d;
let tagName = "DA.IM.P103.Line.OrderCompletion";
Measurements
| where StartTimestamp > startTime and StartTimestamp < endTime
| where HierarchicalName has tagName
| extend properties = todynamic(ValueString)
| extend DeliveredItems = toint(properties.deliverQuantity)
| summarize AverageItemsDelivered = avg(DeliveredItems) by bin(StartTimestamp, totimespan(resolution)), HierarchicalName
| project StartTimestamp, AverageItemsDelivered = round(AverageItemsDelivered, 2)
| render timechart

Here is the result of running the Query:


Make-series operator

The following query uses the make-series operator to aggregate the average number of delivered items per day, and the minimum/maximum number of items delivered in an order each day, between October 1, 2024 and February 1, 2025. The main difference between the make-series operator and the summarize operator in the previous example lies in how they handle time-based grouping and gaps in data. The summarize operator does not assume or enforce continuity in time intervals, whereas the make-series operator always generates a continuous time series and fills any missing values with a user-defined default value. In the example below it defaults any missing data in a bin to 0 using the statement default=0.

let startTime = datetime("2024-10-01 00:00:00");
let endTime = datetime("2025-02-01 00:00:00");
let resolution = 1d;
let tagName = "DA.IM.P103.Line.OrderCompletion";
Measurements
| where StartTimestamp > startTime and StartTimestamp < endTime
| where HierarchicalName has tagName
| extend properties = todynamic(ValueString)
| extend DeliveredItems = toint(properties.deliverQuantity)
| make-series avg(DeliveredItems), min(DeliveredItems), max(DeliveredItems) default=0 on StartTimestamp from startTime to endTime step totimespan(resolution)
| project StartTimestamp, avg_DeliveredItems, min_DeliveredItems, max_DeliveredItems
| render timechart

Here is the result of running the Query:


Advanced Queries

In this section, we'll cover some advanced operators used in KQL queries to help you get more out of your data in Tricloud Nexus.


series_fit_line operator (Trend line)

The Query below demonstrates the usage of the series_fit_line operator. The series_fit_line operator performs a linear regression analysis on the time series data. It determines the direction and strength of a trend in the AvgValue series over time. The key output, the TrendLine, provides a smoothed linear approximation of the data, which helps identify patterns or predict future behavior.

let startTime = datetime("2024-10-01 00:00:00");
let endTime = datetime("2025-01-01 00:00:00");
let resolution = 1d;
let tagName = "DA.IM.P103.Line.OrderCompletion";
Measurements
| where StartTimestamp > startTime and StartTimestamp < endTime
| where HierarchicalName has tagName
| extend properties = todynamic(ValueString)
| extend DeliveredItems = toint(properties.deliverQuantity)
| make-series AvgValue = avg(DeliveredItems) default=0 on StartTimestamp from startTime to endTime step totimespan(resolution)
| extend (RSquare,Slope,Variance,RVariance,Interception,TrendLine)=series_fit_line(AvgValue)
| render timechart

Looking at the raw result of the query, it shows AvgValue and TrendLine as arrays of values that can be plotted to a time chart, showing the average and trend of the delivered items. However, series_fit_line also returns other metrics like RSquare, Slope, Variance and RVariance, as seen below:
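
If you only want those scalar metrics and not the series arrays, you can drop the array columns after the fit. A sketch continuing the query above:

```
// ... same query as above, then:
| extend (RSquare, Slope, Variance, RVariance, Interception, TrendLine) = series_fit_line(AvgValue)
| project RSquare, Slope, Variance, RVariance, Interception
```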

Here's an explanation of the key metrics from the query result:

RSquare (R²):

  • What it measures: The proportion of the variance in the observed data (AvgValue) that is explained by the trend line (TrendLine).

  • Value in the result: 0.7435 (approximately 74%).

  • Interpretation: About 74% of the variability in the daily average delivered items can be explained by the trend line. This indicates a relatively good fit but leaves room for unexplained variation due to other factors or noise.

Slope:

  • What it measures: The rate of change of the trend line over time.

  • Value in the result: 0.6919.

  • Interpretation: The daily average number of delivered items is increasing at a rate of approximately 0.69 units per day. This indicates a consistent upward trend in deliveries over the analyzed time period.

Variance:

  • What it measures: The variability or spread of the observed data (AvgValue).

  • Value in the result: 459.1032.

  • Interpretation: The observed data has a moderate level of variability, showing fluctuations in the daily delivery averages over time. A high variance typically indicates significant day-to-day changes in deliveries.

RVariance (Residual Variance):

  • What it measures: The variance of the residuals, which are the differences between the observed values (AvgValue) and the predicted values on the trend line (TrendLine).

  • Value in the result: 117.7715.

  • Interpretation: This is the portion of variance in the observed data that is not explained by the trend line. The residual variance is significantly smaller than the total variance, which supports the relatively good R² value and indicates that the trend line captures most of the data's pattern.

Interception:

  • What it measures: The y-intercept of the trend line, representing the predicted value of AvgValue when the x-axis (time) starts (e.g., the first day of the time period).

  • Value in the result: -12.6436.

  • Interpretation: The trend line predicts an initial value of approximately -12.64 for the first day, which is not physically meaningful in this context (as deliveries cannot be negative). This suggests that the trend line is more relevant for capturing the overall slope and pattern rather than precise starting values.
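
With the Slope and Interception you can read a predicted value straight off the trend line: the fitted value at point x is Slope * x + Interception. For example, at day 30 of the series the trend predicts roughly 0.6919 * 30 - 12.6436 ≈ 8.11 delivered items, which you can verify with a print statement:

```
print PredictedDay30 = 0.6919 * 30 - 12.6436
```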

Here is the result of running the Query and rendering it using the timeChart:


Series_decompose_forecast operator

In the following example the series_decompose_forecast operator is used to forecast the average value of delivered items of work orders 1 week into January 2025, based on average values from December 2024. In the make-series operator, we extend the end time by 7 days to make room for the forecasted points.

let startTime = datetime("2024-12-01 00:00:00");
let endTime = datetime("2024-12-31 00:00:00");
let resolution = 1d;
let tagName = "DA.IM.P103.Line.OrderCompletion";
let forecast_points=7;
Measurements
| where StartTimestamp > startTime and StartTimestamp < endTime
| where HierarchicalName has tagName
| extend properties = todynamic(ValueString)
| extend DeliveredItems = toint(properties.deliverQuantity)
| make-series AvgValue = avg(DeliveredItems) default=0 on StartTimestamp from startTime to (endTime + forecast_points*resolution) step totimespan(resolution)
| extend Forecast = series_decompose_forecast(AvgValue, forecast_points)
| render timechart

Here is the result of running the Query. The average DeliveredItems is shown per day, and is forecasted 7 days into January.

