The Department of Defense’s (DoD) next-generation “Third Offset” initiative will target promising technology areas, including robotics, autonomous systems, miniaturization, and big data, whilst also seeking to improve the military’s collaboration with innovative private sector enterprises.
The latest innovations in real-time, data-centric network edge computing based on the Data Distribution Service (DDS) standard will be central to delivering on several of these priorities.
A key DoD objective is to accelerate the adoption of cloud computing. The aim is to move from duplicated, cumbersome, and costly application silos to a much more cost-effective and agile service environment that can respond rapidly to changing mission requirements. Cloud computing can enhance battlefield mobility through device and location independence, while providing on-demand, secure, global access to mission data and enterprise services.
Cloud computing carries data back to a central server for storage and analysis, which raises the bandwidth, connectivity, and latency problems common to hostile environments. New fog computing and Tactical Cloudlet technologies instead enable real-time analytics and other functions to be performed at the tactical edge of a network, right at the data source – for example, with soldiers on the battlefield. This will help smart autonomous systems send, receive, and process information when and where it is needed, ensuring faster and better-informed mission-critical decision making.
Tactical Cloudlets are a means to make cloud services and processing available to mobile users by offloading computation to servers deployed on platforms closer to those users. Cloudlets leverage capabilities such as automatic discovery and VM-based provisioning, combined with peer-to-peer communications. Tactical Cloudlets, fog, and edge computing will change the way the defense community builds the next generation of C4ISR and related military simulation systems.
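The core cloudlet decision – run a task locally on the device or offload it to a discovered nearby server – can be sketched in a few lines. This is a minimal illustrative model, not any product's API; the `Cloudlet` fields, timings, and function names are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Cloudlet:
    """A discovered nearby offload server (hypothetical model for illustration)."""
    name: str
    round_trip_ms: float   # measured network round trip to the cloudlet
    speedup: float         # how much faster the cloudlet runs the task than the device

def should_offload(local_ms: float, payload_ms: float, cloudlet: Cloudlet) -> bool:
    """Offload only if remote compute plus transfer time beats local execution."""
    remote_ms = local_ms / cloudlet.speedup + cloudlet.round_trip_ms + payload_ms
    return remote_ms < local_ms

def pick_cloudlet(local_ms, payload_ms, cloudlets):
    """Return the best discovered cloudlet, or None to run the task locally."""
    viable = [c for c in cloudlets if should_offload(local_ms, payload_ms, c)]
    return min(
        viable,
        key=lambda c: local_ms / c.speedup + c.round_trip_ms + payload_ms,
        default=None,
    )
```

The point of the sketch is that the decision depends on link quality as much as on compute: a distant, high-latency datacenter can lose to local execution even when it is far faster, which is exactly why cloudlets sit close to the tactical edge.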
PrismTech’s Vortex is a proven DDS standards-based technology for efficient, ubiquitous, interoperable, secure and platform independent data sharing across network connected devices. Vortex naturally fits with the fog computing and Tactical Cloudlet paradigm and is the only fog-ready data-sharing infrastructure capable of meeting the needs of defense and aerospace companies – connecting soldiers, unmanned machines, devices and commanders in the field with the intelligence community, and helping to improve decision-making.
In Industrial Internet of Things (IIoT) application domains such as energy generation (including transmission and distribution), the co-ordination of operational systems (power plants, electrical power grids etc.) has traditionally been via back office centralized management infrastructure. Energy utility technologies and their data are often siloed, based on proprietary hardware from different vendors, and use many different communication protocols and telecommunication technologies to make the device data available to the centralized information systems.
Open Architecture IoT solutions offer the potential for the energy industry to move from single function and proprietary centralized managed systems to new multi-function distributed control systems. They can enable co-ordination between grid-edge technologies and with the centralized systems, improving grid efficiency, reducing integration costs, enabling vendors to improve their products and ultimately, customers to pay less for the electricity that they consume.
The next generation of energy grids will need to adopt new approaches for the integration of distributed grid-edge devices and equipment from many different manufacturers to realize operational benefits. Existing systems that were designed to support a small number of large generation facilities will be faced with the need to integrate an increasing number of Distributed Energy Resources (DERs) such as wind, solar and electricity storage into existing power generation and distribution networks.
In the current business climate, the power industry has recognized the need for change. Already in Europe the profits of the large power utilities who have invested heavily in fossil fuel plants are starting to fall. In fact, over the last five years, the top 20 utilities in Europe have lost half of their value (source: http://www.greentechmedia.com/articles/read/this-is-what-the-utility-death-spiral-looks-like). Many of their large customers are leaving the grid as they adopt renewable energy resources and become self-sufficient. This pushes up energy costs for the customers who stay and encourages more companies to become self-sufficient. These major shifts are forcing the utilities to re-assess their business models, driving them to greatly increase investment in wind power, solar, and distribution projects to connect renewables into their existing grids in an attempt to arrest the decline in their profits.
In the US the legal and commercial drivers for the adoption of renewables have not been as great up to this point. However, the US utilities have also recognized that things need to change. In 2013 US utility giant Duke Energy formed the “Coalition of the Willing” (COW), a consortium of grid technology vendors focused on the promotion and adoption of an Open Architecture approach to standardizing the way grid-edge technologies are integrated together.
The consortium is made up of communications, grid control systems, electronics, and software vendors. The initial COW member companies were Duke Energy, Accenture, Alstom Grid, Ambient Corporation, Echelon, S&C Electric, and Verizon. After the consortium successfully demonstrated in real-time how different grid devices could talk to each other without contacting a remote centralized management system – reducing the feedback control process from minutes to less than 10 seconds – the energy industry started to take real notice. The responsiveness that Duke and its partners demonstrated can enable a system to react dynamically to changes such as a sudden drop in the wind powering a farm of turbines. The distributed management system can automatically and in real-time (within seconds) switch in battery backup storage to ensure that a smooth voltage supply is maintained. This is something that is much harder to achieve if the process of communicating with a centralized system takes minutes.
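The wind-drop scenario above comes down to a local dispatch decision made at the edge, without waiting on a round trip to a central system. The sketch below is purely illustrative – the function name, units, and the simple shortfall rule are assumptions for the example, not Duke Energy's actual control logic.

```python
def dispatch(wind_kw: float, demand_kw: float, battery_kw_max: float) -> dict:
    """Edge-side dispatch: cover any wind shortfall from local battery storage.

    Because the decision is taken locally, it completes in seconds rather
    than the minutes a centralized feedback loop would need.
    """
    shortfall = max(0.0, demand_kw - wind_kw)
    from_battery = min(shortfall, battery_kw_max)
    return {
        "battery_kw": from_battery,
        # Any remainder would trigger a wider-area request to other resources.
        "unserved_kw": shortfall - from_battery,
    }
```

When the wind covers demand the battery stays idle; when it suddenly drops, the battery is switched in immediately and only an uncoverable remainder is escalated beyond the edge.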
As a result, the consortium has quickly grown to over 25 companies, with new members including ABB, Cisco, Itron, PrismTech, Schweitzer and others.
All COW members must implement interoperable communication protocols that conform to open standards. These protocols must also conform to the Common Information Model (CIM) utility standard. The protocols are used as the basis of a common communication backbone called the “Field Message Bus”, which is used to connect edge-devices via standardized nodes deployed by Duke Energy. The core communication protocol that must be supported is the Object Management Group’s Data Distribution Service for Real-time Systems (DDS) standard. DDS implementations including PrismTech’s Vortex are being used by the consortium to provide a high performance, fault tolerant, secure, real-time interoperable data connectivity layer between edge-grids and centralized management systems. DDS can be used to unify co-ordination for the edge-grid devices while at the same time making important real-time data available to the centralized systems, as well as feeding centralized control decisions back down to the edge-grids.
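The Field Message Bus pattern – devices publish typed samples on named topics, and both edge peers and centralized systems subscribe to the topics they care about – can be illustrated with a toy in-process bus. This is a conceptual stand-in, not the DDS API: the class and topic names are invented for the example, and real DDS adds discovery, typed data, and QoS on top.

```python
from collections import defaultdict

class FieldBus:
    """Toy in-process stand-in for a DDS-style topic bus (illustrative only)."""

    def __init__(self):
        self._readers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a reader callback for a named topic."""
        self._readers[topic].append(callback)

    def publish(self, topic, sample):
        """Deliver a sample to every reader of the topic.

        Publishers never address readers directly: an edge controller and a
        centralized system can both consume the same data without the
        device knowing either exists.
        """
        for cb in self._readers[topic]:
            cb(sample)
```

The decoupling is the point: a grid-edge sensor writes one “GridVoltage” sample, and a local peer controller and the central management system each receive it, with no point-to-point integration between any of them.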
The ability to process data at the edge and share control decisions in real-time across device networks that were previously isolated from each other is where the real value of the IIoT for the energy industry will be gained. DDS, as part of an Open Architecture for edge-grids, is a key enabler of this capability.
Today I’m excited to announce PrismTech’s new Vortex product, an Intelligent Data Sharing Platform for Business-Critical Internet of Things (IoT) Systems. Vortex provides efficient, secure and interoperable real-time Device to Device (D2D) and Device to Cloud information sharing. It is a key enabler for systems that have to reliably and securely deliver high volumes of real-time data with stringent end-to-end qualities-of-service (QoS).
Vortex enables system-wide data sharing for machines, devices and people. It allows users to leverage the growing proliferation of data in next generation intelligent devices to create new IoT solutions. Vortex helps users to harness the ever-increasing amounts of device generated data, process the data in real-time and act on events as quickly as they occur to drive smarter decisions, enable new services / revenue streams and reduce costs. Vortex simplifies the development, the deployment and the management of large scale IoT applications, so enabling users to bring their new products and solutions to the market more quickly.
The Vortex Intelligent Data Sharing Platform consists of Vortex Device and Vortex Cloud. The Vortex platform product bundles are designed to provide a range of capabilities that best suit the specific needs of a system:
Vortex Device enables device applications to securely share real-time data using different device platform and network configurations. This includes being able to support data sharing between devices (Device to Device) on the same Local Area Network (LAN), data sharing between devices and a Cloud-based datacenter (Device to Cloud) and between devices and a Web browser client. Vortex Device includes interoperable data sharing technologies that can support a broad range of Enterprise, Embedded and Handheld systems. Vortex Device also includes a suite of advanced tooling that helps users design, develop, test, debug, tune, monitor and manage deployed Vortex systems and systems of systems.
Vortex Cloud extends the capabilities of Vortex Device with support for data sharing over a Wide Area Network (WAN). This includes being able to share data seamlessly between applications running on different LANs via the Internet. Vortex Cloud can be used with Private, Public and Hybrid Cloud infrastructures.
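The deployment choice described above – peers on the same LAN exchange data directly, while peers on different LANs relay through a cloud-hosted service – can be reduced to a small routing rule. This is an illustrative sketch of the concept only; the function and route names are invented and do not reflect Vortex's actual routing logic.

```python
def route(sender_lan: str, receiver_lan: str) -> str:
    """Pick a data path for a sample (conceptual sketch, names are assumed).

    Same LAN: direct device-to-device delivery (Vortex Device scenario).
    Different LANs: relay via a cloud routing service (Vortex Cloud scenario).
    """
    if sender_lan == receiver_lan:
        return "device-to-device"
    return "via-cloud-router"
```

Keeping local traffic off the WAN preserves latency for device-to-device sharing, while the cloud relay makes the same data seamlessly available to applications on remote LANs.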
PrismTech recently announced that NASA had selected OpenSplice DDS to help make Star Trek inspired Holodeck technologies a reality. Following on from this, I would like to highlight in my blog another space-related project that is using OpenSplice DDS: IMPERA (Integrated Mission Planning using Heterogeneous Robots).
The main goal of IMPERA is the development of a multirobot planning and execution architecture with a focus on a lunar sample collection scenario in an unknown environment. For future lunar and other planetary missions, autonomy is mandatory. Current NASA missions deal mainly with the exploration of the Mars surface and the analysis of the surface consistency. The rovers Spirit, Opportunity, and Curiosity work as individual systems. Looking toward the future, a logical next step is to set up infrastructure and scientific components on Mars or on the moon. This infrastructure can consist of small stations measuring environmental conditions, units that are used to provide drill cores for subsurface analysis, or units for communication and energy supply.
Building infrastructure and interacting with infrastructure during a planetary mission calls for multirobot systems and coordination between multiple systems, each system having dedicated sensors and roles during a mission. If robots need to cooperate in a multirobot context, it is important to know how such a team of robots can exchange their knowledge about the world in terms of a world model, how to generate a mission plan, and how to execute a plan using robots with different abilities and configurations.
For the IMPERA project, PrismTech’s OpenSplice DDS implementation of the Data Distribution Service for Real-Time Systems standard was chosen as the communications middleware. OpenSplice DDS is ideally suited for the IMPERA project as it is based on the loosely coupled publish/subscribe paradigm and provides extensive quality of service (QoS) options, such as automatic reconnection and data buffering in the case of communication loss. The DDS communication middleware helps to ensure that each robot has the same knowledge about the internal status of each individual robot.
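The buffering-on-reconnection behavior mentioned above can be modeled with a writer that keeps a bounded history and replays it to readers that join or rejoin late – conceptually similar to DDS's KEEP_LAST history with transient-local durability. The sketch below is a simplified conceptual model, not the OpenSplice API; class names and the sample format are assumptions for the example.

```python
from collections import deque

class BufferedWriter:
    """Sketch of DDS-like bounded history with late-joiner replay
    (conceptual model only, not the OpenSplice API)."""

    def __init__(self, depth=10):
        self._history = deque(maxlen=depth)  # keep the last `depth` samples
        self._readers = []

    def write(self, sample):
        self._history.append(sample)
        for reader in self._readers:
            reader.append(sample)

    def connect(self, reader):
        """A (re)connecting reader first receives the buffered history,
        so a robot that lost communication still catches up on recent state."""
        reader.extend(self._history)
        self._readers.append(reader)
```

This is the property that keeps a team of robots consistent: a rover that drops off the network and reconnects still receives the most recent status samples instead of a silent gap.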
For further details about the IMPERA project, a Journal of Field Robotics paper titled “Towards Coordinated Multirobot Missions for Lunar Sample Collection in an Unknown Environment”, authored by Markus Eich, Ronny Hartanto, Sebastian Kasperski, Sankaranarayanan Natarajan, and Johannes Wollenberg of DFKI, Robotics Innovation Center, Bremen, Germany, is now available.
One of the key differentiators of PrismTech’s OpenSplice Enterprise is that it provides a user with the ability to choose exactly how to deploy Data Distribution Service for Real-Time Systems (DDS) applications, i.e. there are different DDS system architecture deployment modes and also different networking service protocols. This allows a user to maximize both intra-nodal and inter-nodal performance based on requirements specific to their own use case.
When evaluating OpenSplice Enterprise it is very important to understand all of these features and benefits to ensure that the most appropriate combination is evaluated against your specific performance criteria. Once the performance figures have been observed the choice is usually clear.
Every customer use case and set of requirements is different, so to help with your evaluation and benchmarking, PrismTech has produced a new guide, “Evaluating and Benchmarking OpenSplice Enterprise”. You can access the guide here.
The guide takes you through how to best deploy OpenSplice Enterprise so that it meets and exceeds your expectations. It explains how easy it is to get started with OpenSplice Enterprise and observe the excellent performance and scalability it provides. OpenSplice Enterprise is even shipped with dedicated performance tests that the user can build and run easily.
In my Spectra SDR blog today I am highlighting a new product which is being added to our Spectra DTP4700 Development and Test Platform. The Spectra DTP4700 Probes Toolbox is designed to reduce turnaround time for developing new waveforms on Spectra DTP4700 by providing a multi-processor debugging capability during integration.
The Probes Toolbox is a real-time debugging tool with the ability to prove and excite waveform elements, thus allowing the piecemeal integration of waveform components one-by-one, and the validation of component temporal, processor, and memory behaviors independently. Probe data is visualized either with the toolbox’s internal visualizer, ProbeViz, or through MATLAB/Simulink, thus eliminating the need for additional 3rd-party tooling licenses for data visualization.
The toolbox’s Data Probe allows monitoring or injection of real-time system flow data. The Resource Probe provides a graphical or textual representation of memory and CPU utilization and CPU resources, and allows memory peeks and pokes. The Latency Probe provides a graphical display of latency for user-defined probe points based on a uniform system time reference. The Traffic Probe captures and displays network traffic. The SCA Adapter Probe provides latency, traffic and data probes in an SCA-compliant environment. By using the Probes Toolbox, a developer or systems engineer can thus study real-time data in any connected waveform with ease.
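The Latency Probe's core computation – per-sample latency between two user-defined probe points stamped against one uniform system time reference – reduces to simple arithmetic once the timestamps are collected. The sketch below illustrates only that computation; the log format (sample id mapped to enter/exit timestamps in microseconds) and function name are assumptions, not the toolbox's actual data model.

```python
def latency_stats(probe_log):
    """Summarize latency between two probe points.

    `probe_log` maps a sample id to (t_enter, t_exit) timestamps taken
    against the same uniform time reference, in microseconds. Using one
    clock for both probe points is what makes the subtraction meaningful
    across processors.
    """
    vals = sorted(t_exit - t_enter for (t_enter, t_exit) in probe_log.values())
    return {
        "min_us": vals[0],
        "max_us": vals[-1],
        "avg_us": sum(vals) / len(vals),
    }
```

A visualizer like ProbeViz would plot the per-sample values over time; the min/avg/max summary is the kind of figure an engineer checks when validating a component's temporal behavior in isolation.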
In summary, using the Probes Toolbox throughout the waveform porting cycle increases engineer productivity, reduces porting and integration effort, decreases defect rates, reduces time-to-deploy, and lowers overall project cost and schedule risk.
For further information about the Spectra DTP4700 Probes Toolbox read: