How to run OPC UA PubSub on real-time Linux and TSN using open62541
This quick start guide is a starting point for learning and evaluating OPC UA PubSub and TSN technologies for embedding into your products. It requires an x86 system with an Intel i210 Ethernet controller (this controller supports the hardware features needed for Time-Sensitive Networking) and leverages:
- The standard Debian 10 operating system and the real-time Linux kernel included in its package manager.
Note: In most cases, this kernel already provides the best hard real-time deterministic performance you can get without custom work. To improve further, you may have to tune the kernel configuration and recompile the kernel yourself.
- A recent version of the iproute2 package (with support for newer real-time socket options and an IEEE 802.1Qbv-like scheduler)
- A recent version of the LinuxPTP package (with support for IEEE 802.1AS gPTP configuration)
- The open62541 stack – an open-source C (C99) implementation of OPC UA, licensed under the Mozilla Public License v2.0 and available on GitHub (with support for the PubSub feature as specified in Part 14 of the OPC UA specification)
We recommend at least two Intel x86-based systems, each with four cores and an Intel i210 Ethernet controller. We used Intel Apollo Lake, Intel Whiskey Lake, and Intel Core i5 platforms during our tests.
We selected Debian 10 because it is one of the more popular distributions and provides a pre-compiled real-time kernel that can be installed through its package manager. You can download the Debian GNU/Linux 10.7 distribution image Debian-10.7.0-amd64-xfce-CD-1. As it is not practical to provide a variant of this quick start guide for every Linux distribution, we recommend that you use Debian for your initial tests and then switch to your chosen distribution. This is particularly important because our work focuses on high performance.
To set up the test architecture in this quick start guide, you need two nodes with at least two Ethernet interfaces each, one of which must be an Intel i210 Ethernet controller. Connect one of the two Ethernet interfaces to your office network so that you can download and install the required packages for this setup over the internet. The second interface (the Intel i210) can be connected peer-to-peer with the other node or via a TSN switch, as shown below. If you use a TSN switch between the two nodes, make sure all the gates in the switch are open for your initial tests. Note that we used peer-to-peer networking for most of our tests.
Environment setup

In this section, we will set up the environment:
- Install real-time Linux kernel
- Configure network interface
- Configure TSN parameters
Install real-time Linux kernel:
apt-get update && apt-get upgrade
apt-get install linux-image-rt-amd64 -y
After the installation completes, reboot and verify the running kernel:
uname -r
The appearance of the “rt” keyword in the output (for example, 4.19.0-13-rt-amd64) shows that we have booted into the right kernel:
Configure network interface:
In this section, we will set up a static IP address and VLAN configuration for the i210 interface. After the reboot, log in as the root user and open the /etc/network/interfaces file in an editor. Add static stanzas for the physical interface and its VLAN sub-interface:
iface enp2s0 inet static
iface enp2s0.8 inet static
In our setup, the interface name is enp2s0; replace it with your node's i210 interface name. Below are the screenshots of the interfaces files of the two nodes:
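Since the stanza headers above are only the skeleton of the configuration, here is a hedged example of what complete entries might look like. The addresses and netmask below are placeholders, not values from our setup – pick addresses that fit your own network plan and use a different host address on each node (the VLAN ID 8 comes from the sub-interface name enp2s0.8):

```
# Example only: replace the addresses with values for your network,
# and use a different host address on each of the two nodes.
auto enp2s0
iface enp2s0 inet static
    address 192.168.1.10
    netmask 255.255.255.0

auto enp2s0.8
iface enp2s0.8 inet static
    address 192.168.8.10
    netmask 255.255.255.0
    vlan-raw-device enp2s0
```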
Configure TSN parameters:
In this section, we will run the necessary scripts (setup.sh and application_dependencies.sh) to configure the TSN parameters.
The setup.sh script available in the package does the following:
- Updates the installed packages on the node
- Installs the iproute2 package matching kernel version 4.19
- Installs LinuxPTP version 2.0
- Checks whether the 8021q module is loaded and loads it if it is not
If you are interested in knowing more about each command, read the comments included in the setup.sh script for additional information.
The application_dependencies.sh script available in the package does the following:
- Configures traffic control parameters for transmit and receive
- Sets egress and ingress policies
- Tunes real-time behavior
- Runs the LinuxPTP daemons ptp4l and phc2sys
If you are interested in knowing more about each command, read the comments included in the application_dependencies.sh script for additional information.
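The authoritative commands are in application_dependencies.sh itself; purely as an illustration of the kind of traffic-control and time-synchronization setup such a script performs, it typically resembles the sketch below. The interface name, queue mapping, delta value, and config path here are placeholder assumptions, not the script's actual values:

```shell
# Illustrative sketch only -- read application_dependencies.sh for the
# real commands and values used in this guide.

# Map traffic classes onto the i210 hardware queues (placeholder mapping):
tc qdisc add dev enp2s0 parent root handle 100 mqprio \
    num_tc 3 map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
    queues 1@0 1@1 2@2 hw 0

# Attach a launch-time (ETF) qdisc to the high-priority queue so packets
# with a transmit timestamp leave the NIC at the scheduled time:
tc qdisc add dev enp2s0 parent 100:1 etf \
    clockid CLOCK_TAI delta 500000 offload

# Time synchronization: ptp4l on the i210 port (gPTP profile), and
# phc2sys to discipline the system clock from the NIC clock:
ptp4l -i enp2s0 -f /etc/linuxptp/gPTP.cfg -m > /var/log/ptp4l.log 2>&1 &
phc2sys -s enp2s0 -c CLOCK_REALTIME -w -m > /var/log/phc2sys.log 2>&1 &
```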
Extract the demo package and run the setup script:
tar -xvf demo_package.tar && cd demo_package
./setup.sh
After the setup.sh script completes successfully, run the application_dependencies.sh script available in the demo_package folder. As shown in the architecture diagram, configure one of the nodes as the PTP master and the other as the PTP slave.
On the PTP master node:
./application_dependencies.sh -i <IFACE> -m
On the PTP slave node:
./application_dependencies.sh -i <IFACE> -s
tail -f /var/log/ptp4l.log
If you have configured your node as the PTP master, your output log should be similar to the image below. The text “assuming the grand master role” confirms that the node has taken the PTP master role. Once you have seen this output, press Ctrl + C to return to the console.
If you have configured your node as the PTP slave, your output log should be similar to the image below. (Note: The master offset values are in nanoseconds (ns) and should be within the range of -1000 ns to +1000 ns.)
tail -f /var/log/phc2sys.log
Your phc2sys log should be similar to the image below. (Note: The sys offset values are in nanoseconds (ns) and should be within the range of -1000 ns to +1000 ns.)
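As a quick sanity check, you can also scan a log for offset samples that fall outside the ±1000 ns band instead of eyeballing it. The awk one-liner below is a small sketch: the two here-document lines are fabricated sample log entries for illustration; on a real node, point the same awk filter at /var/log/ptp4l.log or /var/log/phc2sys.log.

```shell
# Flag offset samples outside the +/-1000 ns band. The here-document
# lines are fabricated examples; on a real node run instead:
#   awk '...same program...' /var/log/ptp4l.log
awk '/offset/ {
    for (i = 1; i <= NF; i++)
        if ($i == "offset" && $(i+1) ~ /^-?[0-9]+$/) {
            off = $(i+1) + 0
            if (off < -1000 || off > 1000)
                print "out of range:", off
        }
}' <<'EOF'
ptp4l[1023.553]: master offset -42 s2 freq +1010 path delay 180
ptp4l[1024.553]: master offset 1500 s2 freq +1105 path delay 181
EOF
```

With the fabricated samples above, only the second line is reported, since -42 ns lies inside the band.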
Build OPC UA PubSub application
git clone https://github.com/open62541/open62541.git
cd open62541
git fetch origin pull/3996/head:local_branch && git checkout local_branch
mkdir build && cd build
cmake -DUA_BUILD_EXAMPLES=ON -DUA_ENABLE_PUBSUB=ON -DUA_ENABLE_PUBSUB_ETH_UADP=ON ..
make
Run OPC UA PubSub application in Performance Evaluation mode
After successfully building the OPC UA PubSub applications, we will run them with the flag that generates the performance measurement log file.
On node 2:
./bin/examples/pubsub_TSN_loopback -interface <IFACE> -enableBlockingSocket
On node 1:
./bin/examples/pubsub_TSN_publisher -interface <IFACE> -enableBlockingSocket -enableLatencyCsvLog
The -enableLatencyCsvLog flag makes the application compute the round-trip time (RTT) of a counter variable sent from one node to the other and looped back (more on this in the next section). By default, the application stops after collecting 1 million packets; you can terminate it earlier by pressing Ctrl + C. After the application terminates, the log file (latencyT1toT8.csv) is generated in the build folder of the pubsub_TSN_publisher application, containing the RTT along with missed and repeated counters.
The application on node 1 (where pubsub_TSN_publisher runs) encodes and publishes a packet containing a counter variable to node 2 every 250 microseconds; the counter is incremented at the start of each new 250 µs cycle. Node 2 (where pubsub_TSN_loopback runs) decodes the received packet and loops the counter variable back to node 1. The application on node 1 receives the counter variable and computes the round-trip time T8 – T1 (the time taken to publish and receive back the same counter value). This information is captured in the latencyT1toT8.csv file, which also records missed and duplicate counters – these can occur due to multi-threading issues or packets lost during transmission or reception.
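Before plotting, a quick numeric summary of the collected round-trip times can be useful. The sketch below assumes the CSV carries one RTT value in nanoseconds per line in its first column – check the actual header of your latencyT1toT8.csv and adjust the field accordingly – and runs on three fabricated sample values:

```shell
# Summarize RTT samples (min/max/mean in ns). The sample file is
# fabricated; run the awk command against your real latencyT1toT8.csv.
printf '1000100\n999800\n1000250\n' > /tmp/rtt_sample.csv
awk -F',' '
    NR == 1 || $1 < min { min = $1 }
    NR == 1 || $1 > max { max = $1 }
    { sum += $1 }
    END { printf "min=%d max=%d mean=%.1f\n", min, max, sum/NR }
' /tmp/rtt_sample.csv
# prints: min=999800 max=1000250 mean=1000050.0
```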
The below figure shows the flow of information and the timestamps at different stages (T1 to T8):
- T1 – Timestamp at which the counter variable was incremented (and handed over to the publisher on node 1)
- T2 – Ethernet packet outgoing timestamp in kernel space
- T3 – Ethernet packet incoming timestamp in kernel space
- T4 – Timestamp at which the counter variable is seen by the application on node 2
- T5 – Timestamp at which the counter variable was handed over to the publisher on node 2 for the loopback
- T6 – Ethernet loopback packet outgoing timestamp in kernel space
- T7 – Ethernet loopback packet incoming timestamp in kernel space
- T8 – Timestamp at which the counter variable is seen by the application on node 1 after the round trip
The generated csv file (latencyT1toT8.csv) can be imported into a statistical computing program such as R to visualize the data and evaluate performance. For this purpose, we used two R scripts, mkhisto.R and mkloghisto.R, which are available in the demo_package folder. The mkhisto.R script reads the latencyT1toT8.csv file and generates a histogram plot of the PubSub RTT latency. The mkloghisto.R script reads the latencyT1toT8.csv file, calculates the deviation of the measured RTT latency from the expected 1 ms latency (4 × the 250 µs cycle time), and generates a histogram plot of that deviation.
Rscript <FILE_PATH>/mkhisto.R latencyT1toT8.csv
Rscript <FILE_PATH>/mkloghisto.R latencyT1toT8.csv
Both the histogram plots will be generated as PDF files (latencyT1toT8.pdf for mkhisto.R and latencyT1toT8-log.pdf for mkloghisto.R) in the current directory.
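If R is not installed on your nodes, the quantity that mkloghisto.R visualizes can be approximated on the command line. The sketch below computes each sample's absolute deviation from the expected 1 ms (1,000,000 ns) RTT and reports the worst case, assuming one RTT value in nanoseconds per line; the three input values are fabricated for illustration:

```shell
# Worst-case deviation of RTT samples from the expected 1 ms (4 * 250 us).
# Sample values are fabricated; pipe in your real latencyT1toT8.csv column.
printf '1000100\n999800\n1000250\n' | awk '
    { dev = $1 - 1000000; if (dev < 0) dev = -dev; if (dev > worst) worst = dev }
    END { printf "worst-case jitter: %d ns\n", worst }
'
# prints: worst-case jitter: 250 ns
```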
From the above plots, we see that all the packets/samples fall within the desired 1 ms RTT range (latencyT1toT8.pdf) and that the RTT jitter (latencyT1toT8-log.pdf) is only a few microseconds. This shows that these nodes can run the PubSub application at a cycle time as low as 250 µs with deterministic behavior within the boundaries seen in the plots. As part of our testing, we further verified stability and deterministic behavior by running the setup for multiple days. If you wish to see latency plots for long-term tests, look at the long-term plots in the OSADL QA farm.
Optional step 1: If you wish to make your TSN environment persistent across reboots, append the command to /etc/rc.local and make the file executable (on the slave node, use -s instead of -m; on Debian 10, /etc/rc.local must start with a #!/bin/sh line for systemd's rc-local service to execute it):
echo ./<FILE_PATH>/application_dependencies.sh -i <IFACE> -m >> /etc/rc.local
chmod +x /etc/rc.local
Optional step 2: To see more usage options of the PubSub application,