Two distinct classifications for the Internet of Things (IoT) have started to emerge: the Industrial IoT and the Consumer IoT. Although there are differences in the environments for which they're best suited, the Industrial IoT (IIoT) and Consumer IoT (CIoT) share a basic need. Both derive their true value from ubiquitous information availability and, consequently, from the decisions that can be made from it.
A great example is a Smart City application, such as the Connected Boulevard initiative in Nice, France, where real-time access to information such as parking and traffic, street lighting, waste disposal and environmental quality is enabling smarter and faster decisions. The ubiquitous availability of data allows better optimization of the various city management functions. Likewise, in a Smart Grid application, real-time access to energy production and demand can help match production to demand, improve energy trading strategies, and allow micro-power generators to decide whether to sell or store their energy surplus.
As ubiquitous, timely and efficient information availability is key in IIoT and CIoT environments, the technical (and non-technical) community has been engaging in passionate online discussions around the technologies that can best provide the right data-sharing platform, the fabric, to deliver the right data to the right place at the right time. Across the various discussions and arguments, a common theme has emerged: the importance of a standards-based solution to ensure openness and interoperability across the variety of IIoT and CIoT environments.
In this post, I’ll analyze the requirements posed by the variety of data-sharing protocols found in a generic IoT system and then describe how the OMG Data Distribution Service standard is the best answer to serve as the fabric for IoT.
IoT Data-Sharing Requirements
IIoT and CIoT systems have demanding and varied data-sharing requirements. These requirements have to be addressed holistically, so as to keep the complexity of developing and deploying these systems low and the efficiency of running them high.
There are three standard classifications that represent the different data-sharing characteristics of IoT systems.
Device-2-Device. Device-2-Device (D2D) communication is required in several different use cases. This communication pattern is prevalent in edge systems such as industrial plants and vehicles, and it is slowly being exploited in further use cases; the latest notable example is FireChat, the infrastructure-less peer-to-peer chat. D2D is best served by broker-less peer-to-peer infrastructures that simplify deployment, foster fault-tolerant execution, and, for performance-sensitive applications, provide low-latency and high-throughput data-sharing.
Device-2-Cloud. Devices and sub-systems interact with cloud-based services and applications that mediate data-sharing and data collection. Device-2-Cloud (D2C) communication can feature vastly different needs and requirements depending on the application, the environment, and the type of data that needs to be shared. For instance, a remote surgery application has far more stringent temporal requirements than a smart city waste-disposal application. At the same time, however, the smart city application may have more stringent requirements with respect to efficient network and energy management of devices. Depending on the use case and environment, D2C communication needs to support high-throughput and low-latency data exchanges as well as operation over bandwidth-constrained links. Another key requirement is the ability of D2C communication to cope with intermittent connectivity and variable-latency links.
Cloud-2-Cloud. Currently, few systems are deployed across multiple IaaS instances or multiple IaaS regions (e.g., across the EC2 EU and US regions); however, it is becoming clear that it will be increasingly important to seamlessly and efficiently exchange data across clouds. For these applications, the data-sharing technology needs to support high throughput and low per-message overhead in order to keep the per-message cost under control.
In addition to the data-sharing patterns referenced above, there are a number of crosscutting concerns that a data distribution technology needs to address, including platform independence (i.e., the ability to run on embedded, mobile, enterprise and cloud platforms) and security.
OMG’s Data Distribution Service — the IoT Fabric
The Data Distribution Service (DDS) is an Object Management Group standard for ubiquitous, efficient, timely and secure data-sharing, independent of the hardware and software platform. DDS implementations are available today for sharing data across mobile, embedded, enterprise, cloud and web applications. DDS defines a wire protocol that allows for interoperability among multiple vendor implementations, as well as an API that allows applications to be easily ported across vendor products. The standard requires implementations to be fully distributed and broker-less, meaning that DDS applications communicate without any mediation; yet, when useful, DDS communication can be transparently brokered.
The basic abstraction at the foundation of DDS is the Topic. A Topic captures the information to be shared along with the Quality of Service (QoS) associated with it; this makes it possible to control the functional and non-functional properties of data-sharing. DDS provides a rich set of QoS policies that allow for the control of local resource usage, network utilization, traffic differentiation, as well as data availability for late joiners. In DDS, the production of data is performed through Data Writers while consumption is through Data Readers. For a given topic, data readers can further refine the information received through content as well as temporal filters. DDS is also equipped with a dynamic discovery service that allows applications to dynamically discover the information available in the system and match the relevant sources.
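To make these abstractions concrete, here is a minimal conceptual sketch in plain Python. This is not the actual DDS API; the class and field names are illustrative, and the "middleware" is reduced to a direct dispatch loop so the Topic/DataWriter/DataReader relationship and content filtering are easy to see.

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    # A Topic pairs a name with the QoS that governs data-sharing behavior.
    name: str
    qos: dict = field(default_factory=dict)  # e.g. {"history": "keep_last", "depth": 1}

class DataReader:
    # A reader can refine the data it receives with a content filter.
    def __init__(self, topic, content_filter=None):
        self.topic = topic
        self.content_filter = content_filter or (lambda sample: True)
        self.samples = []

    def on_data(self, sample):
        if self.content_filter(sample):
            self.samples.append(sample)

class DataWriter:
    # A writer publishes samples; matched readers on the same topic receive them.
    def __init__(self, topic, readers):
        self.topic = topic
        self.readers = readers

    def write(self, sample):
        for reader in self.readers:
            if reader.topic.name == self.topic.name:
                reader.on_data(sample)

# Usage: a smart-parking reader interested only in lots with free spaces.
parking = Topic("ParkingSpaces", qos={"history": "keep_last", "depth": 1})
reader = DataReader(parking, content_filter=lambda s: s["free"] > 0)
writer = DataWriter(parking, [reader])
writer.write({"lot": "A", "free": 0})   # filtered out
writer.write({"lot": "B", "free": 12})  # delivered
```

In real DDS the matching of writers to readers (and the evaluation of filters) is done by the middleware's discovery service, not by application code as above.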
The DDS Security standard provides an extensible framework for dealing with authentication, access control, logging and encryption. As an example, you may grant certain applications the right to read only certain topics, while granting others the right to read as well as write, and perhaps to create new topics. The rules that define access rights are very flexible and allow for very granular control over what applications are allowed to do. For securing communication, DDS Security takes an approach similar to that of SRTP, thus (1) allowing the use of multicast when possible, and (2) avoiding the in-line key re-negotiation issues posed by TLS/DTLS.
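The flavor of such per-topic access rules can be sketched as follows. This is a hypothetical illustration in Python, not the XML permissions-document format that DDS Security actually defines; the application names, topic names and the "*" wildcard are all assumptions made for the example.

```python
# Hypothetical per-application grants: which actions each application
# may perform on which topics ("*" stands for any topic).
PERMISSIONS = {
    "hvac_monitor":  {"TemperatureTopic": {"read"}},
    "plant_control": {"TemperatureTopic": {"read", "write"},
                      "*": {"create"}},
}

def is_allowed(app: str, topic: str, action: str) -> bool:
    # Combine the grants for the specific topic with any wildcard grants.
    grants = PERMISSIONS.get(app, {})
    topic_grants = grants.get(topic, set()) | grants.get("*", set())
    return action in topic_grants
```

With rules like these, `hvac_monitor` can read temperatures but not publish them, while `plant_control` can also write and create new topics.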
Applying the DDS Fabric
Among the standards that have been identified as relevant to IoT applications, DDS is the one that stands out with respect to the breadth and depth of coverage of data-sharing requirements. This does not come as a surprise to DDS aficionados, yet, those not familiar with the technology are astonished to learn of its many deployments in IoT systems. As you read below, you’ll see what makes DDS so special.
Device-2-Device. DDS provides a very efficient and scalable platform for D2D communication. DDS implementations can be scaled down to deeply embedded devices or up to high-end multi-core machines. In regard to efficiency, a DDS implementation can achieve latency as low as ~30 µs on Gbps Ethernet networks and throughput of several million messages per second. At the same time, DDS has an efficient binary wire protocol that makes it a viable solution in network-constrained environments. The broker-less, peer-to-peer nature of DDS makes it ideal for D2D communication, and at the same time the ability to transparently broker DDS communication, especially when devices communicate through multicast, eases the integration of sub-systems into IoT systems.
Device-2-Cloud. DDS supports multiple transport protocols, such as UDP and TCP, and takes advantage of multicast when available (support for Source-Specific Multicast is available through vendor extensions and will soon be included in the standard). The support for UDP/IP is extremely beneficial for applications that deal with interactive, soft real-time data, for which TCP/IP would introduce either too much overhead or head-of-line blocking. For deployments that can't take advantage of UDP/IP, DDS alleviates some of the head-of-line blocking challenges introduced by TCP/IP through its support for traffic differentiation and prioritization, along with selective down-sampling. Independent of the transport used, DDS supports three kinds of reliability: best effort, last-value reliability, and full reliability. Of these three, only the last behaves like "TCP/IP reliability"; the others allow DDS to drop samples to ensure that stale data does not delay new data.
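Last-value reliability is worth a closer look, since it has no TCP analogue. The idea can be sketched as a per-key cache in which a newer sample simply replaces the stale one, so a slow or intermittently connected reader always gets the freshest value per key instead of a backlog. This is a conceptual stdlib sketch, not how any DDS implementation is actually coded.

```python
from collections import OrderedDict

class LastValueCache:
    # Sketch of last-value reliability: per key, only the most recent
    # sample is retained, so stale data never delays new data.
    def __init__(self):
        self._latest = OrderedDict()

    def write(self, key, sample):
        self._latest[key] = sample      # a newer sample replaces the stale one
        self._latest.move_to_end(key)   # keep delivery order by recency

    def take(self):
        # Deliver (and clear) the latest value per key.
        samples, self._latest = list(self._latest.values()), OrderedDict()
        return samples

cache = LastValueCache()
cache.write("sensor-1", {"t": 20.5})
cache.write("sensor-2", {"t": 19.0})
cache.write("sensor-1", {"t": 21.0})  # overwrites the stale sensor-1 reading
latest = cache.take()                 # one sample per key, freshest values only
```

Under best effort, by contrast, nothing is cached or retransmitted at all, and under full reliability every sample would be queued until acknowledged, much like TCP.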
The efficient wire protocol, combined with the rich transport and reliability semantics, makes DDS an excellent choice for sharing both periodic data, such as telemetry, and data requiring high reliability. In addition, the built-in support for content filtering ensures data is only sent if there are consumers that share the same interest and whose filters match the data being produced.
Cloud-2-Cloud. The high throughput and low latency delivered by DDS make it a perfect choice for data-sharing across the big pipes connecting different data centers.
In summary, DDS is the standard that best addresses the most compelling data-sharing requirements presented by Industrial and Consumer IoT applications. DDS-based platforms, such as PrismTech's Vortex, address mobile, embedded, web, enterprise and cloud applications along with cloud messaging implementations, and allow for scaling and integration of devices and sub-systems at Internet scale. Solutions based on DDS are deployed today in Smart City, Smart Grid, Smart Transportation, finance and medical environments.
If you want to learn more about DDS, check out this tutorial or the many educational slides freely available on SlideShare.