Posts Tagged ‘SQL’

Interfacing with SQL Servers with zenon – zenon Analyzer

Monday, September 8th, 2014

The zenon Analyzer is a standalone product which provides dynamic production reporting.

Use Case

The zenon Analyzer compiles historical and online data from any desired source, from the automation infrastructure through to IT, into meaningful reports. The interactive report configuration, with drill-down and drill-through capabilities, as well as the export of reports into Office formats, make it easier to carry out individual adaptations and implement automatic dispatch.

zenon Analyzer supports benchmarking and quality control and allows the analysis of historical data to identify improvement potential. The clear display of key figures and KPIs such as OEE and reports on production efficiency, consumption management, downtime analyses etc. help to optimize production and contribute to a rapid return on investment.

How it Works

The system works both with centralized databases such as Microsoft SQL Server and by retrieving data directly from distributed production equipment, thus collecting online and offline data. In the latter case, no additional database is needed. Manual data input is also possible.

Both data structure and interfaces are well documented, enabling you to more easily integrate your own customer-defined features. The latest encryption technology ensures the highest level of data security.

A wizard to import metadata into zenon Analyzer is included, and the metadata can also be updated automatically through the wizard. Integrated connectors collect the necessary data. zenon Analyzer generates reports from many different types of data, e.g. alarm, shift-related, user-related, historical, real-time, and batch data. The advantage to you: a reduction in complexity, as only one system needs to be installed rather than multiple systems to acquire and convert these different types of data.

zenon Analyzer has a web-based interface. This means that no additional maintenance of clients is needed and no revalidation of the existing environment is necessary. This is particularly helpful for heavily regulated industries such as Food & Beverage or Pharmaceutical. The users have flexible access to reports from any workplace in the network.


The zenon Analyzer is licensed separately from the zenon Operator and zenon Supervisor. Please contact your local COPA-DATA Sales Representative for additional licensing information.

Interfacing with SQL Servers with zenon – zenon SQL Driver

Thursday, August 28th, 2014

The zenon SQL driver can interact with a SQL database via ODBC, as if the SQL database were a PLC itself. This driver requires the tables to be created and formatted in a specific way, but if properly configured it allows zenon to read and write tags defined in the SQL database.

Use Case

zenon is able to read data from the pre-defined table, placed there e.g. by third-party applications, and assign this data to PLC variables. In effect, zenon reads data from a table in a SQL Server database and writes that data to PLC variables using any of the many direct PLC drivers that come with zenon.

How it Works

The zenon SQL driver (sqldrv.exe) is a polling-based driver that communicates with specially defined send and receive tables in an SQL database. COPA-DATA also offers SQL scripts that create the needed tables and columns and assign the appropriate data types.

The designed use case for this driver requires separate tables for sending and receiving values. If these values need to be used in different applications or in different tables, a periodically executed stored procedure has to be created to transform the data into different formats and/or transfer it into different tables.
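As a rough sketch of the send/receive-table pattern (shown with SQLite via Python for portability; the table and column layout required by sqldrv.exe is specific and documented, so all names below are illustrative assumptions):

```python
import sqlite3

# Illustrative sketch only: the real sqldrv.exe tables have a specific,
# documented layout; table and column names here are assumptions.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE recv_table (   -- values zenon polls (SQL -> zenon)
    tag_name TEXT PRIMARY KEY, tag_value REAL, ts INTEGER)""")
con.execute("""CREATE TABLE send_table (   -- values zenon writes (zenon -> SQL)
    tag_name TEXT PRIMARY KEY, tag_value REAL, ts INTEGER)""")

# A third-party application places a setpoint for zenon to poll:
con.execute("INSERT INTO recv_table VALUES ('Setpoint_1', 42.5, 1409875200)")

# A periodically executed job (a stored procedure on SQL Server) could
# transfer written values into a different table/format:
con.execute("CREATE TABLE reporting (tag_name TEXT, tag_value REAL, ts INTEGER)")
con.execute("INSERT INTO send_table VALUES ('Motor_Speed', 1480.0, 1409875260)")
con.execute("INSERT INTO reporting SELECT tag_name, tag_value, ts FROM send_table")

print(con.execute("SELECT tag_name, tag_value FROM reporting").fetchall())
```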


This module must be licensed in the zenon Editor as well as the zenon Runtime. If this is a network project, it is only required to license this driver on the runtime server(s) which will actually communicate with the SQL database.

Interfacing with SQL Servers with zenon – zenon SQL Export Module

Thursday, August 21st, 2014

The zenon SQL Export Module is an optional module of the zenon Editor/Runtime which allows archive data, alarm data, and event data to be exported to an SQL database. This differs from the Historian SQL Server Interface in that the export is one-directional only: the exported data cannot be read back into zenon.

Use Case

This module can be useful if it is only necessary to export zenon historical data, alarm data, or event data to an SQL database.

How it Works

There are pre-defined zenon functions available to define the SQL Server, database, and tables to use for the SQL Export. zenon can automatically create the tables & columns needed in the SQL Server.
This data can be exported on demand (e.g. mouse click), on an event (e.g. a bool goes true), or automatically according to a set schedule (e.g. every day at 12:00 a.m.).


This module must be licensed in the zenon Editor as well as the zenon runtime. If this is a network project, it is possible to only license this module on the runtime server(s).

Interfacing with SQL Servers with zenon – zenon Supervisor Process Gateway – SQL Online

Wednesday, July 30th, 2014

The zenon Process Gateway is an optional module of the zenon Supervisor Editor/Runtime and operates in parallel to the zenon Runtime. The Process Gateway – SQL Online module uses a table in an SQL Server database to hold the current timestamp, value, and status of any zenon variables in a predefined format.

Use Case

The zenon Process Gateway in general is used to provide online tag values, derived from the many zenon drivers, to external systems or applications in a common format. The SQL Online module of the Process Gateway thus allows all current values of zenon variables to be shared with any application that can read from a SQL database.

How it Works

The zenon Process Gateway – SQL Online is configured with an OLE DB connection to the SQL Server database. Additionally, a dialog allows the user to choose the individual zenon variables to be exposed, making it possible to expose only the desired variables in the SQL table.

Finally, the user only needs to configure the refresh rate to determine how frequently zenon updates the SQL table with the current values. zenon automatically creates the proper tables and columns.

Once properly configured and running, the following information will be automatically updated (e.g. every 1 second) in the SQL table:

  • Variable ID
  • Variable Value (Strings)
  • Variable Value (Float)
  • Timestamp Second
  • Timestamp Millisecond
  • Status
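As a sketch of how an external application might read such a table (SQLite via Python for portability; the actual table and column names are generated by zenon, so the ones below are assumptions based on the fields listed above):

```python
import sqlite3

# Sketch of the kind of table the Process Gateway - SQL Online maintains.
# zenon generates the real table/column names; these are assumptions.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE online_values (
    var_id    INTEGER PRIMARY KEY,
    value_str TEXT,     -- Variable Value (Strings)
    value_flt REAL,     -- Variable Value (Float)
    ts_sec    INTEGER,  -- Timestamp Second
    ts_msec   INTEGER,  -- Timestamp Millisecond
    status    INTEGER)""")

# The gateway refreshes the row for each exposed variable cyclically
# (e.g. every 1 second); an external application just reads it:
con.execute("INSERT OR REPLACE INTO online_values VALUES (7, '21.4', 21.4, 1406700000, 250, 0)")
row = con.execute("SELECT value_flt, status FROM online_values WHERE var_id = 7").fetchone()
print(row)  # -> (21.4, 0)
```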


The zenon Process Gateway – SQL Online must be licensed on systems with the zenon runtime.

It is a common and recommended practice for the zenon Process Gateway to be configured and running directly on the runtime server(s). However, it is also capable of running on a normal runtime client.

Interfacing with SQL Servers with zenon – Historian SQL Server Interface

Wednesday, July 30th, 2014

The Historian SQL Server Interface can be thought of as an add-on to the zenon Supervisor Historian Module. It allows zenon archive data to be transparently evacuated to a centralized SQL Server. This offers the benefit of data being stored in a central place in an open format, where it can be queried with a simple SQL statement. Additionally, this data can still be seamlessly read back into zenon for use in the Extended Trend module, Reports, or the Archive Revision screen.

Additionally, this module allows Alarms and Events to be exported to a centralized SQL Server.

Use Case 1

Some companies prefer the integration of process data with other organizations in the corporate structure to take place over an open, flexible, and well-understood medium such as an SQL Server. Once process data is available in an SQL Server, it can easily be queried, reported, and analyzed by many people or third-party software components.

Use Case 2

Another common reason to send process data to a centralized SQL Server is long-term storage. Directly on the HMI server, there may not be an abundance of freely available disk space, or there may not be any standard backup strategy in place. However, if process data is incrementally evacuated to a high-availability SQL Server with redundant hard disk configurations (RAID), long-term data storage becomes more practical.

How it Works

The process of moving locally stored historian data to an SQL server is called “evacuation” in zenon, and is performed automatically once configured. This evacuation cycle can range from an immediate evacuation to evacuation e.g. at the end of a production cycle or after every month.

Note 1: after each change in the archive configuration, the tables need to be updated. This can be done conveniently from the zenon editor.

Note 2: when configuring the SQL OLE DB provider string through the dialog and using SQL Native Client 10 or 11 with SQL Server authentication, the password for the SQL user entered in the dialog is not taken over into the provider string. The provider string must be enhanced manually with the entry “password=”.

To evacuate historical (archived) data to an SQL Server, you must configure a time period after which older records will be automatically evacuated to an SQL database. During the configuration in zenon, you create the database connection and the credentials to be used, and point to the SQL Server and database which should be used. The necessary tables and formats are created automatically by zenon.

Once properly configured, the historian in the runtime will automatically evacuate the archived data, for which the configured time period has expired, to the SQL Server. This evacuated data can be transparently read back into the zenon Runtime via pre-defined screen types for the Extended Trend, Report Generator, Report Viewer, and the Archive Revision screen. Additionally, if for some reason the SQL Server cannot be reached by the zenon server, the historical data will be cached locally on the runtime server. If the SQL server again becomes available, the data which should have been evacuated will be automatically flushed.
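Once evacuated, the archive data sits in plain database tables and can be queried with a simple SQL statement, as mentioned above. A minimal sketch (SQLite via Python for portability; the actual table and column names generated by zenon differ and are assumed here):

```python
import sqlite3

# Illustrative only: zenon's Historian SQL evacuation creates its own
# table layout; the names below are assumptions for the sketch.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE archive_values (
    variable_name TEXT, ts INTEGER, value REAL, status INTEGER)""")
rows = [("TankLevel", 1406700000 + i * 60, 50.0 + i, 0) for i in range(5)]
con.executemany("INSERT INTO archive_values VALUES (?,?,?,?)", rows)

# "Queried with a simple SQL statement": e.g. the average tank level
# over a time window, without involving the HMI runtime at all.
avg = con.execute("""SELECT AVG(value) FROM archive_values
                     WHERE variable_name = 'TankLevel'
                       AND ts BETWEEN 1406700000 AND 1406700240""").fetchone()[0]
print(avg)  # -> 52.0
```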

If you have configured redundant zenon servers, then while both primary & standby servers are online, the primary server will evacuate the historical data to the SQL Server. If the primary server fails, the standby server will also automatically take over the responsibility of evacuating the historical data to the SQL server. Only after zenon confirms a successful SQL evacuation will the original archive files on the server be deleted. In case the SQL server cannot be reached (e.g. when the network or the database is down) the data is stored locally and evacuated on the next try.

The zenon Alarm Logs and Event Logs can also be exported to an SQL Server. The Alarm & Event exports are one-directional, meaning that zenon will export the records to separate SQL Tables, but will not read the alarms or events from the SQL Server.


This module requires the full zenon Historian Module and a zenon Supervisor Runtime License. For redundant systems, it is recommended to license the historian SQL Server interface on both the Primary & Standby servers. For the configuration of this module, it is also required to be licensed in the zenon Editor.

When this module is purchased from COPA-DATA, 1 CAL (client access license) is provided for the Microsoft SQL Server. However, COPA-DATA does not provide the actual license or installation media for the Microsoft SQL Server. This must be obtained from the system integrator or end-user.

Interfacing with SQL Servers with zenon – Introduction

Thursday, July 24th, 2014

In zenon, there are a number of ways to interface with SQL Servers. The exact requirements and goals will dictate which method is most advantageous for the user. It is also possible to use any combination of these options.

In all cases, COPA-DATA verifies compatibility with the currently released version of Microsoft SQL Server. Although it should be technically possible to use any SQL database, we cannot provide detailed information or support on the large number of SQL server manufacturers on the market. However, COPA-DATA will always support you to the best of our abilities and knowledge. If you are considering a non-Microsoft SQL Server, we suggest contacting your local COPA-DATA support representative for more information.

In my upcoming blog series, the following topics will be described:

  1. Historian SQL Server Interface
  2. Process Gateway – SQL Online
  3. SQL Export
  4. SQL Driver
  5. zenon Analyzer

Tips for well-performing SQL code

Friday, September 13th, 2013

When designing a SQL statement, there are some things that need to be considered to ensure that it runs at optimum performance. Factors that might affect SQL query performance are:

  • Connecting data tables
  • Filtering and converting data
  • Ordering data
  • Use of loops

Connecting data tables

When connecting two or more data tables, it is recommended to use the SQL JOIN statement. A FROM-WHERE construction might offer the same performance, but when one table contains optional information, OUTER JOIN statements generate far less complex SQL. In the case of optional connections, a FROM-WHERE construction requires nested SELECT statements.
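A minimal sketch of the optional-information case (SQLite via Python; table names are illustrative): the LEFT OUTER JOIN keeps rows that have no match, which a FROM-WHERE construction could only reproduce with a nested SELECT:

```python
import sqlite3

# Sketch: machines with OPTIONAL maintenance notes. Table names are
# illustrative assumptions, not from any zenon schema.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE machines (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE notes (machine_id INTEGER, note TEXT);
    INSERT INTO machines VALUES (1, 'Press'), (2, 'Mixer');
    INSERT INTO notes VALUES (1, 'bearing replaced');
""")

# The OUTER JOIN keeps 'Mixer' even though it has no note; a plain
# FROM machines, notes WHERE ... join would silently drop it.
rows = con.execute("""
    SELECT m.name, n.note
    FROM machines m
    LEFT OUTER JOIN notes n ON n.machine_id = m.id
    ORDER BY m.id""").fetchall()
print(rows)  # -> [('Press', 'bearing replaced'), ('Mixer', None)]
```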

Filtering and converting data

Filtering data should always be the first step when gathering data in an SQL statement, as it keeps the amount of data that has to be processed as low as possible. If the data would need to be converted in order to be comparable to the filter criteria, performance can be improved significantly by instead converting the filter criteria to be comparable to the data. This approach allows the query optimizer to potentially use an index on the filtered column. Reducing the number of calls to conversion functions also increases query performance.

This example shows how to apply the filtering and conversion best practices: a data table contains columns for variable name, timestamp as unixtime, and value. The SQL statement receives its time filter parameters as UTC timestamps and should return columns for variable name, timestamp as UTC timestamp, and value. Applying the best practices, these steps are necessary:

  1. Convert the filter criteria from UTC timestamp to unixtime before reading the table (2 conversion function calls)
  2. Filter the timestamp column of the source table based on the converted time filter criteria in the WHERE clause of the statement (no conversion function calls, an index on the timestamp column in the source table can potentially be used)
  3. Convert the timestamp column from unixtime to UTC timestamp in the SELECT clause of the statement (1 conversion function call for each row that matches the filter criteria)
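The three steps above can be sketched as follows (SQLite via Python for portability; the table layout and the conversion functions are illustrative assumptions):

```python
import sqlite3
from datetime import datetime, timezone

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (variable_name TEXT, ts INTEGER, value REAL)")
con.execute("CREATE INDEX ix_data_ts ON data (ts, variable_name)")
# One value per hour, starting 2013-09-12 12:00 UTC (unixtime 1378987200):
con.executemany("INSERT INTO data VALUES (?,?,?)",
                [("V1", 1378987200 + i * 3600, float(i)) for i in range(24)])

# Step 1: convert the filter criteria to unixtime ONCE (2 conversions),
# instead of converting the ts column for every row in the table:
t_from = int(datetime(2013, 9, 12, 12, 0, tzinfo=timezone.utc).timestamp())
t_to   = int(datetime(2013, 9, 12, 14, 0, tzinfo=timezone.utc).timestamp())

# Step 2: filter on the raw column, so the index on ts can be used.
# Step 3: convert ts back to UTC only for the rows that matched.
rows = con.execute("""
    SELECT variable_name, datetime(ts, 'unixepoch') AS utc_ts, value
    FROM data
    WHERE variable_name = 'V1' AND ts BETWEEN ? AND ?""",
    (t_from, t_to)).fetchall()
print(len(rows))  # -> 3 (12:00, 13:00, 14:00)
```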

Ordering data

Inside a SQL statement, the order of the data is not relevant (except when iterating through data in some kinds of loops; see next section). So ordering the data should always be the last part of a SQL statement, as this step can consume high amounts of CPU time and memory when there is much data that has to be ordered.
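A small sketch of this principle (SQLite via Python; the schema is illustrative): filter and aggregate first, and order only the small final result set.

```python
import sqlite3

# Illustrative schema: ORDER BY runs last, over the already-reduced
# result, not over the full 1000-row table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (variable_name TEXT, value REAL)")
con.executemany("INSERT INTO data VALUES (?,?)",
                [("V%d" % (i % 5), float(i)) for i in range(1000)])

rows = con.execute("""
    SELECT variable_name, AVG(value) AS avg_value
    FROM data
    WHERE variable_name IN ('V1', 'V2')   -- filter first
    GROUP BY variable_name                -- then aggregate
    ORDER BY avg_value DESC               -- order only the final result
""").fetchall()
print(rows)  # -> [('V2', 499.5), ('V1', 498.5)]
```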

Use of loops

Generally, the query optimizer cannot reorganize a loop to increase performance. So when possible, loops should always be avoided to gain optimum performance.

Here is an example: in an event list (a table or table variable), an entry is created when a machine switches into a certain operation mode; the entry contains a start-timestamp but no end-timestamp. For further calculation, an end-timestamp for every entry is required.

The first approach could look like this:

  1. Create a cursor for iterating through the data.
  2. Sort the data descending by the start-timestamp in the SELECT statement of the cursor.
  3. Create a local variable for storing a timestamp and initialize it with the end of the time filter.
  4. Iterate through the data in the cursor in a loop:

     a. Update the current row and set the end-timestamp to the value stored in the local variable.

     b. Update the local variable with the start-timestamp of the current row.

The following approach generates more complex SQL code, but offers improved performance:

  1. Create a second table or table variable that contains an additional column for the end-timestamp which is nullable.
  2. Create a SELECT statement that reads all rows out of the original data table (called “T1”) and makes an OUTER JOIN to the very same table (called “T2”). The JOIN condition would be that the start-timestamp of T2 is higher than the start-timestamp of T1.
  3. Every entry in T1 is now connected to all entries in T2 that have a higher timestamp (all operation mode changes that have been made after the operation mode change in the current row of T1). The last operation mode change has no such entry, which is why you should use an OUTER JOIN.
  4. Create a GROUP BY clause that groups on all columns read from T1.
  5. In the SELECT clause, include all columns of T1 and the minimum start-timestamp from T2.
  6. Add an INSERT clause to the statement and store the data in the table or table variable created in step 1.
  7. As the last operation mode change has NULL as end-timestamp, create an UPDATE statement for setting the end-timestamp in the table or table variable created in step 1 to the end of the time filter when the current value of that column is NULL.

So avoiding loops increases performance but decreases SQL code readability. To keep the SQL code readable, comments should be used in regions where the code is very complex for performance reasons.
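The loop-free steps above can be sketched as follows (shown with SQLite via Python rather than T-SQL table variables; table and column names are illustrative):

```python
import sqlite3

# Sketch of the set-based approach: derive each row's end-timestamp as
# the minimum LATER start-timestamp, via an OUTER self-join + GROUP BY.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (mode TEXT, start_ts INTEGER)")
con.executemany("INSERT INTO events VALUES (?,?)",
                [("setup", 100), ("run", 200), ("cleaning", 300)])

# Steps 1, 2, 4-6: second table, OUTER self-join on "later start",
# GROUP BY the T1 columns, MIN(T2.start_ts) as the end-timestamp.
con.execute("CREATE TABLE events_with_end (mode TEXT, start_ts INTEGER, end_ts INTEGER)")
con.execute("""
    INSERT INTO events_with_end
    SELECT t1.mode, t1.start_ts, MIN(t2.start_ts)
    FROM events t1
    LEFT OUTER JOIN events t2 ON t2.start_ts > t1.start_ts
    GROUP BY t1.mode, t1.start_ts""")

# Step 7: the last entry has NULL as end-timestamp; set it to the end
# of the time filter (assumed here to be 400).
con.execute("UPDATE events_with_end SET end_ts = 400 WHERE end_ts IS NULL")

rows = con.execute("SELECT * FROM events_with_end ORDER BY start_ts").fetchall()
print(rows)  # -> [('setup', 100, 200), ('run', 200, 300), ('cleaning', 300, 400)]
```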

Pros and Cons of Using SQL Server Table Indices

Thursday, September 5th, 2013

For almost every application, high performance and quick reaction times are customer demands. When working with zenon Analyzer and big databases full of SQL-evacuated zenon archives with gigabytes of entries per table, SQL Server table indices can have a huge influence on report performance. This blog entry contains some pros and cons of using such indices for zenon evacuated data, to help decide whether to create them or not.

Query and Stored Procedure Performance

If there is a usable index on the table, the query performance, and therefore the stored procedure performance, can be improved dramatically. The following scenario makes clear what improvement is possible. Let’s say there is an archive data table with data for dozens of variables over several years, and the table size on the hard disk is 10 gigabytes. A stored procedure needs data for one variable out of one month, which means that only a few megabytes need to be read. Without a proper index, the whole table (10 GB) has to be read to get the required data. With a clustered index on the timestamp and variable columns, only a few hundred kilobytes of index data have to be read to know where the required data resides, and then only the required data is read. It is clear that reading a few hundred kilobytes and then a few megabytes is much quicker than reading 10 gigabytes.
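The effect can be observed on a small scale with any SQL engine. A sketch with SQLite via Python (SQLite has no clustered indices, but the query plan shows the same principle: reading via an index instead of scanning the whole table; table and column names are illustrative):

```python
import sqlite3

# Sketch: an index on (timestamp, variable) lets the engine read only
# the rows in the requested time window instead of scanning everything.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE archive (ts INTEGER, variable_name TEXT, value REAL)")
con.executemany("INSERT INTO archive VALUES (?,?,?)",
                [(i, "V%d" % (i % 10), float(i)) for i in range(10000)])
con.execute("CREATE INDEX ix_archive ON archive (ts, variable_name)")

# The query planner now searches via the index for a time-window query:
plan = con.execute("""EXPLAIN QUERY PLAN
    SELECT value FROM archive
    WHERE ts BETWEEN 5000 AND 5100 AND variable_name = 'V1'""").fetchall()
print(plan[0][3])  # the plan detail references ix_archive
```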

Insert Performance

As insert operations in the table also have to update the index, the insert performance might be reduced after creating indices on the table. Concerning SQL evacuated zenon data, most insert operations occur with a timestamp higher than the highest existing timestamp in the table, which means that the impact on the insert performance will be minimized when the table has a clustered index on the timestamp column with ascending order. COPA-DATA can provide indexing best practices for the optimum SQL connector performance and minimal impact on the insert performance.

Space Requirements

As indices are additional information about a data table, they require some space to store the information. Basically, indices are a tree structure, so the level which consumes most disk space is the leaf level containing the pointers to the actual table rows. As clustered indices use the table itself as leaf level, additional space is only required for the branch and root levels of the tree, which is usually a fraction of the space consumed by the table.

Index Creation Performance

How long it takes to create an index depends on the size of the table to be indexed, on the hardware details of the database server and on other tasks running on the same server. If the database server has a constant flow of insert and query statements on the table to be indexed, indexing the table will need more time. Also creating a clustered index is more time consuming, as the complete data in the table has to be ordered according to the index settings.

Index Maintenance

Considering index maintenance, there are three facts to consider:

  • External fragmentation can have a negative influence on performance, as the index pages are spread over the hard disk and I/O operations become slower while seeking between the fragments. It is recommended to rebuild an index when its external fragmentation is higher than 5 percent.
  • Internal fragmentation can have a negative impact on performance, as free space in the index pages inflates the page count of the index and leads to more I/O operations. It is recommended to reorganize an index when its internal fragmentation is higher than 5 percent, and to rebuild it when its internal fragmentation is higher than 30 percent. On the other hand, internal fragmentation can have a positive impact on insert operations, as the free space in the index pages can be used when the index needs to be updated after an insert, instead of allocating new index pages (which might lead to external fragmentation).
  • Index statistics are used by the query optimizer to decide whether an index should be used and, if so, which one. SQL Server updates index statistics automatically; a rebuild of an index also updates its statistics.

How the zenon Analyzer connectors work and how they are used in practice

Tuesday, June 4th, 2013

The zenon Analyzer reports are based on process data from HMI/SCADA systems. In order to avoid redundancy, these data are not stored directly in the zenon Analyzer. When creating a report, the data are loaded into the zenon Analyzer using our so-called ‘connectors’. Depending on the location of the data, we provide two different connectors.

The SCADA Runtime Connector is used to load data directly from the SCADA runtime, whereas the SCADA SQL Connector is used to load the data directly from SQL databases.

For increased availability the SCADA Runtime Connector is capable of working in a redundancy environment and connects automatically to the standby server if required.

Connector types

Each connector works in a different way:

SCADA Runtime Connector:


This connector consists of two components: an extension of Microsoft SQL Server directly on the Analyzer Server (the so-called ‘ZRS Provider’), and the connector container, including the connector plug-in, installed on the SCADA runtime computer. When a report is requested, the ZRS Provider connects via the network to the connector container. Its plug-in contacts the runtime using the COM interface, and the runtime delivers the requested data, which is sent back to the ZRS Provider. The Analyzer Server can then access the data for further processing.

The big advantage of this connector is the possibility of using live data directly from the runtime.

The connector can access:

  • Current values of variables
  • Historical values of variables
  • Lots
  • Alarms
  • Chronological Event List


SCADA SQL Connector:


This connector directly accesses process data evacuated to databases. The database server used for storing these data only needs to be added once in the ‘zenon Analyzer Management Studio’ as a linked server, and it can be any type of SQL Server, Oracle server, or ODBC server.

The data are accessed directly with Table Valued Functions in order to ensure optimal performance.

The big advantage of this connector is the speed of the data connection and the independence from the HMI/SCADA runtime.

The connector can access:

  • Historical values from variables
  • Shifts
  • Lots
  • Alarms
  • Chronological Event List


Open data interface

All connector functions can be accessed directly and manually in the SQL Server of the Analyzer Server using Table Valued Functions. So you are free to develop your own reports.

Furthermore, you can develop customized connectors according to your data and your needs.

SCADA Runtime Connector:

If you need an individual, network-based connector, it is possible to develop your own connector plug-in as a DLL file.

SCADA SQL Connector:

For individual connections to databases, the SQL Connector has an open SQL interface. By using Table Valued Functions you can program your own connector accesses to any data format.


The connectors are an easy and convenient way of retrieving the data you need for reports, regardless of whether you use our predefined reports or develop your own. The interfaces allow the connectors to be customized and modified to suit the customers’ needs.