In the age of IoT, think “Data Centric”!

Data Centricity helps create the situation awareness needed to manage and control complex systems and systems of systems in an Internet-centric world.

In a smart city, for example, it can help you build a system that manages green smart houses and smart buildings, traffic, parking, garbage collection, energy and so on.

Unlike messaging, in a data-centric system each entity in the real world is truly represented as a data object that has:

  • An identity
  • A structure
  • A state
  • A lifecycle, and
  • Metadata that characterises it and captures all of the above
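As a rough illustration, here is a hypothetical Python sketch of such a data object; this is not any real middleware API, and all names (`DataObject`, the lifecycle values, the metadata keys) are invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each real-world entity carries an identity, a
# structure, a state, a lifecycle phase, and metadata describing it.

@dataclass
class DataObject:
    key: str                 # identity: unique key of this instance
    structure: dict          # structure: named fields and sub-objects
    state: str = "ALIVE"     # lifecycle: e.g. ALIVE, NOT_ALIVE, DISPOSED
    metadata: dict = field(default_factory=dict)  # attached qualities

car = DataObject(
    key="VIN-123",
    structure={"wheels": 4, "engine": {"status": "running"}},
    metadata={"persistency": "durable", "consistency": "eventual"},
)
print(car.key, car.state)
```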

Data qualities (such as security, persistency, consistency, etc.) can even be attached to the data objects.

Take a car as an example: a car is a fairly complex system that can live, circulate and evolve within other systems, such as a traffic management system or a smart city system.

A car has:

  • An identity
  • A status:

– It can be moving

– It can be broken

– It can be in maintenance

– It can be parked

– Or… it can be scrapped

A Data Centric system can instantaneously tell you which “new” cars have been observed and which have left or been “disposed” from the system. It does all this with no extra effort, because a Data Centric system “knows” each data entity individually and automatically asserts its liveliness.
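This “new vs disposed” bookkeeping can be illustrated with a minimal sketch, assuming instances are identified by a unique key such as a VIN (all keys below are invented):

```python
# Hypothetical sketch: a data-centric store can report which instances
# are new and which are gone simply by comparing keyed snapshots.

def diff_instances(previous: set, current: set):
    """Return (new, disposed) instance keys between two observations."""
    return current - previous, previous - current

seen_before = {"VIN-1", "VIN-2", "VIN-3"}
seen_now    = {"VIN-2", "VIN-3", "VIN-4"}

new_cars, disposed_cars = diff_instances(seen_before, seen_now)
print(new_cars)       # the freshly observed car(s)
print(disposed_cars)  # the car(s) that left the system
```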

  • A Lifecycle:

Being aware of the data lifecycle allows you, for instance, to automate your system's resource management: when a data object goes away, you can release all the other objects associated with it and free the memory, threads, files, etc. that they are using.

Let’s now try to analyse a car's structure by looking at its anatomy, but more importantly by seeing it as an entire system.

  • A Structure

A car has a complex structure. Representing that structure is tough unless you use a powerful data description language that captures not only the subsystems it is made of but also the relationships between them.

A simple car is made of:

  • A bodywork
  • 4 wheels
  • Many, many sensors
  • An on-board computer
  • A brake system, and
  • An engine

The engine is a subsystem in itself, yet it is made of many other complex subsystems:

  • A Carburettor
  • Pistons
  • Pushrods
  • Fan
  • Timing chain
  • Etc.

Each of those subsystems can be captured as an individual data object.

Your data representation can be strongly structured, or it can be loosely or semi-structured.
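A hypothetical strongly structured representation of the car anatomy above might look like the following sketch; in a real system this would be written in a data description language such as IDL, and all type and field names here are invented:

```python
from dataclasses import dataclass

# Hypothetical typed sketch of the car anatomy: the engine is itself a
# structured sub-object, nested inside the car.

@dataclass
class Engine:
    carburettor_ok: bool
    pistons_ok: bool
    fan_ok: bool

@dataclass
class Car:
    vin: str        # identity
    wheels: int
    engine: Engine  # nested subsystem

car = Car(vin="VIN-123", wheels=4, engine=Engine(True, True, True))
print(car.engine.pistons_ok)
```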

In a data-centric world you can apply and enforce business rules: for instance, the engine functions correctly only when all of its subsystems are in a steady state and working properly. Furthermore, data centricity can capture the relationships between data objects and entities.
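That engine business rule can be sketched in plain Python (a hypothetical illustration, not any real rule engine; the subsystem names and the "steady" state value are assumptions):

```python
# Hypothetical rule: the engine is functional only when every one of its
# subsystems reports a steady state.

def engine_functional(subsystem_states: dict) -> bool:
    return all(state == "steady" for state in subsystem_states.values())

states = {"carburettor": "steady", "pistons": "steady", "timing_chain": "steady"}
print(engine_functional(states))   # all steady -> engine functional

states["pistons"] = "failing"
print(engine_functional(states))   # one subsystem down -> not functional
```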

Imagine you are the driver of one of those vehicles!

A driver is first of all a person, with an identity, a state and a structure. He can own zero, one or several vehicles, but unfortunately, for the time being, he can only drive one vehicle at a time!

All those data entities and their relationships can be captured naturally in a Data Centric system, where they live in harmony in a distributed Global Data Space (GDS).
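A minimal sketch of such relationships in a shared data space, with invented keys and names, could look like this (the "owns" relation is one-to-many, while "drives" points to at most one vehicle at any instant):

```python
# Hypothetical global data space holding entities and their relationships.

global_data_space = {
    "persons":  {"P-1": {"name": "Alice"}},
    "vehicles": {"VIN-1": {}, "VIN-2": {}},
    "owns":     {"P-1": ["VIN-1", "VIN-2"]},  # a person owns 0..n vehicles
    "drives":   {"P-1": "VIN-1"},             # but drives at most one
}

# The business rule: you can only drive a vehicle you own.
assert global_data_space["drives"]["P-1"] in global_data_space["owns"]["P-1"]
print("Alice drives", global_data_space["drives"]["P-1"])
```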

Nowadays big and sensitive data comes from everywhere; data producers and consumers are distributed over local-area, wide-area and mobile networks, over the Internet, or even over ad-hoc networks. In a nutshell, the network and the Internet are becoming a tremendous data space, and certainly a huge gold deposit!

Let’s say you are the owner of a moving company.

To better manage your fleet of vehicles and trucks, you decide to install on each vehicle a GPS tracking device coupled with an “engine diagnostic box”, provided by a cutting-edge automotive supplier, that enables remote diagnostics and assistance.

Such a system would not only let you know where your drivers are in real time, it would also allow you to assist them in case of an accident or a breakdown.

A scenario where Data Centricity Excels over Messaging

Let’s take the case of one driver who, on a beautiful and sunny day, takes his truck out to deliver some goods. While he is driving, his truck breaks down in the middle of nowhere. A disaster!

  1. Fortunately, the truck's position is constantly reported by the GPS device, which from time to time also reports the engine status, compressed, over mobile networks.
  2. When the engine breaks, its diagnostic box urgently updates the engine data graph that has been built over time and changes its status to “broken”. Instantaneously, all interested parties are notified of the “engine status change”. Some diagnostic applications immediately start analysing the data to identify the broken subsystem (c.f. the figure below), while others are in charge of alerting you, the boss of the company, by SMS, seeking your assistance in case there are extra fees to pay. Some of these diagnostic and analysis applications need to assess the overall engine data object, while others, more specialised, only look at a subset of it. What makes a Data Centric system unique compared to other design paradigms is that each application can have its own view on the data.
    In the very near future, such diagnostics and analysis will likely happen in the truck provider's “private cloud”. Such a data-centric system requires a technology and platform that maintains what is virtually a single, unique data space, even though that data space is necessarily physically distributed.
  3. If the failure cannot be fixed remotely (for instance by restarting the engine in a degraded mode of operation, disabling some automatic assistance systems such as the shield safety system, or restarting the on-board computer), a human expert with the required expertise gets involved for further remote analysis.
  4. Let’s assume it is the alternator that is broken: a spare part is immediately ordered, and
  5. an assistance vehicle brings it to the broken truck, minimising the overall response time!
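The notification step of this scenario, where each application has its own view on the shared data, can be sketched as follows. This is a hypothetical plain-Python illustration, not a real middleware API; the subscriber names and sample fields are invented:

```python
# Hypothetical sketch: when the engine status changes, every subscriber
# is notified, but each receives only the fields its own filter selects
# -- its private "view" on the shared data.

subscribers = []

def subscribe(name, wanted_fields):
    inbox = []
    subscribers.append((name, wanted_fields, inbox))
    return inbox

def publish(sample: dict):
    for name, wanted, inbox in subscribers:
        inbox.append({k: v for k, v in sample.items() if k in wanted})

diagnostics = subscribe("diagnostics", {"engine", "subsystems"})
sms_alerts  = subscribe("sms",         {"engine", "position"})

publish({"engine": "broken",
         "subsystems": {"alternator": "failed"},
         "position": (43.7, 7.26)})

print(diagnostics[0])  # full engine graph, for analysis
print(sms_alerts[0])   # just status and position, for the SMS alert
```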

Messaging-Oriented vs Data Centric Systems: the Conclusion

With a message-centric system, things are more complex to build because, fundamentally, a message is just an information container, usually with a header and some payload. Messaging breaks up the overall data representation of the system, and you lose the relationships between its entities. With a messaging-based system, application logic needs to rebuild the full picture of the system, each time, from the unrelated individual pieces of basic information captured within messages, and to manually ensure the consistency of that picture. This process can be terribly complex, time consuming and error prone.

On the other hand, Data Centricity inherently rebuilds and guarantees the consistency of the overall picture. It gives you the system situation awareness you are looking for, for free.

Data centricity is a very powerful paradigm that supersedes messaging, as it can at the same time:

  1. Model real-world entities as they are, with their unique identity, state, structure, metadata and lifecycle
  2. Capture and represent the relationships between data entities
  3. Help implement decoupled systems where applications interact only by sharing data and exchanging information.
    • A Data Centric platform is in charge of maintaining the state of the overall system, even in case of failure, so that the latest consistent state of the system is always known and always available to late-joining applications, whatever their access point to the system.
  4. Be generic and polymorphic enough to model other interaction paradigms, as it can also model:
    • Event- and notification-based communication
    • Conversational protocols, such as request/reply protocols, in an abstract and efficient way; this helps part of the system, if needed, to also be service-centric (to implement SOA-based systems too)
    • Lightweight transactional communication based on so-called coherent sets and eventually consistent models
    • Message-oriented communication, if needed by a subsystem
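For instance, the request/reply pattern in the list above can be modelled on top of pub/sub with two topics correlated by a request id. The following is a hypothetical sketch of that idea in plain Python, not any real middleware API:

```python
import itertools

# Hypothetical sketch: request/reply modelled as a request topic and a
# reply topic, with replies correlated to requests by an id.

request_topic, reply_topic = [], []
_ids = itertools.count(1)

def send_request(payload):
    req_id = next(_ids)
    request_topic.append({"id": req_id, "payload": payload})
    return req_id

def serve():
    # The replier reads pending requests and publishes correlated replies.
    for req in request_topic:
        reply_topic.append({"id": req["id"], "result": req["payload"].upper()})

def take_reply(req_id):
    return next(r["result"] for r in reply_topic if r["id"] == req_id)

rid = send_request("status?")
serve()
print(take_reply(rid))
```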

Data Centricity is nowadays backed by an extremely powerful real-time publish/subscribe standard and interoperable middleware technology (http://portals.omg.org/dds/) that is able to automatically discover all the data entities, whenever and wherever they are. You can now build, share and monitor data distribution everywhere, from embedded devices to machines and servers, in private and public clouds or in internal company domains.

Data Centric middleware connects people at work, wherever they are, using their workstations, smartphones or tablets, to the heart of any mission-critical system: securely, efficiently, in any circumstances.

So, in a nutshell, the key lesson here is this: if you hear someone comparing Data Centric platforms based on the standard Data Distribution Service (DDS) to Message-oriented Middleware (MoM), tell them it is like comparing a database system to a file system, or more precisely, comparing a distributed database to a network file system!

In the end, always remember that in the IoT age, data is certainly becoming your strategic economic value; it is your key factor of growth and success! Treat it appropriately!

Object Management Group (OMG) Technical Meeting Reston Virginia 2015

Date: 23rd March 2015 to 27th March 2015
Location: Reston, VA, USA

PrismTech is a Contributing Member of the Object Management Group (OMG) and has been a member since 1993. PrismTech’s CEO, Keith Steele, serves on the OMG’s Board of Directors. PrismTech’s OpenSplice DDS CTO, Dr. Angelo Corsaro, co-chairs the OMG DDS Special Interest Group and serves on the architecture board.

PrismTech will be actively participating in this forthcoming OMG Technical Meeting. PrismTech’s products implement OMG standards including our Vortex OpenSplice Data Distribution Service (DDS) product suite.

Further information about the OMG’s Technical Meetings is available from their website at:
http://www.omg.org/news/schedule/upcoming.htm


OMG’s Data Distribution Service: the Internet of Things Fabric

Two distinct classifications for the Internet of Things (IoT) have started to emerge sharply: the Industrial IoT and the Consumer IoT. Although there are differences in the environments for which they’re best suited, the Industrial IoT (IIoT) and Consumer IoT (CIoT) share a basic need. Both derive their true value from ubiquitous information availability, and consequently, the decisions that can be made from it.

A great example is a Smart City application, such as the Connected Boulevard initiative in Nice, France, where real-time access of information such as parking and traffic, street lighting, waste disposal and environmental quality is enabling smarter and faster decisions. The ubiquitous availability of data provides better optimization of the various city management functions. Likewise, in a Smart Grid application, real-time access to energy production and demand can help match production to demand, improve energy trading strategies, and allow micro-power generators to decide whether to sell or store their energy surplus.

As ubiquitous, timely and efficient information availability is key in IIoT and CIoT environments, the technical (and non-technical) community has been engaging in passionate online discussions around the technologies that can best provide the right data-sharing platform — the fabric — to deliver the right data to the right place at the right time. Across the various discussions and arguments, a common theme has emerged: the importance of a standards-based solution to ensure openness and interoperability across the variety of IIoT and CIoT environments.

In this post, I’ll analyze the requirements posed by the variety of data-sharing protocols found in a generic IoT system and then describe how the OMG Data Distribution Service standard is the best answer to serve as the fabric for IoT.

IoT Data-Sharing Requirements

IIoT and CIoT systems have very articulate data-sharing requirements. As such, these requirements have to be addressed holistically, so as to keep the complexity of developing and deploying these systems low and the efficiency of running them high.

There are three standard classifications to represent the different data-sharing characteristics of IoT systems.

Device-2-Device. Device-2-Device (D2D) communication is required in several different use cases. This communication pattern is prevalent on edge systems such as industrial plants and vehicles. That said, it is slowly being exploited in more use cases. The latest notable example is FireChat, the infrastructure-less peer-to-peer chat. D2D is facilitated by broker-less peer-to-peer infrastructures that simplify deployment, foster fault-tolerant execution, and can provide performance-sensitive applications with low-latency and high-throughput data-sharing.

Device-2-Cloud. Devices and sub-systems interact with cloud-based services and applications for mediating data-sharing and data collection. Device-2-Cloud (D2C) communications can feature vastly different needs and requirements based on the application, the environment and the type of data that needs to be shared. For instance, a remote surgery application has far more stringent temporal requirements than a smart city waste disposal application. At the same time, however, a smart city application may have more stringent requirements with respect to efficient network and energy management of devices. Depending on the use case and environment, D2C communication needs to be able to support high-throughput and low-latency data exchanges as well as operation over bandwidth-constrained links. Another key requirement is the ability of D2C communication to support intermittent connectivity and variable-latency links.

Cloud-2-Cloud. Currently there are few systems being deployed across multiple IaaS instances or multiple IaaS regions (e.g. deployed across EC2 EU and US regions); however, it is becoming clear that it will be increasingly important to seamlessly and efficiently exchange data across clouds. For these applications, the data-sharing technology needs to support high throughput and low per-message overhead in order to keep the per-message cost under control.

In addition to the data-sharing patterns referenced above, there are a number of crosscutting concerns that a data distribution technology needs to support, including platform independence (e.g. the ability to run on embedded, mobile, enterprise and cloud apps) and security.

OMG’s Data Distribution Service — the IoT Fabric

The Data Distribution Service (DDS) is an Object Management Group standard for ubiquitous, efficient, timely and secure data-sharing, independent from the hardware and the software platform. DDS implementations are available today for sharing data across mobile, embedded, enterprise, cloud and web applications. DDS defines a wire protocol that allows for interoperability among multiple vendor implementations, as well as an API that allows applications to be easily ported across vendor products. The standard requires implementations to be fully distributed and broker-less, meaning that DDS applications communicate without any mediation, yet when useful, DDS communication can be transparently brokered.

The basic abstraction at the foundation of DDS is the Topic. A Topic captures the information to be shared along with the Quality of Service (QoS) associated with it — this makes it possible to control the functional and non-functional properties of data-sharing. DDS provides a rich set of QoS policies that allow for the control of local resource usage, network utilization, traffic differentiation, as well as data availability for late joiners. In DDS, the production of data is performed through Data Writers while consumption is through Data Readers. For a given topic, data readers can further refine the information received through content as well as temporal filters. DDS is also equipped with a dynamic discovery service that allows applications to dynamically discover the information available in the system and match the relevant sources.
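To make the Topic/QoS/writer/reader model concrete, here is a deliberately simplified sketch in plain Python. This is not the real DDS API; the class names mirror the DDS concepts, but the signatures, the QoS keys and the sample format are all invented for illustration:

```python
# Hypothetical sketch of the DDS model: a Topic pairs a name with QoS;
# writers publish samples, and readers receive those samples that pass
# their content filter.

class Topic:
    def __init__(self, name, qos=None):
        self.name, self.qos = name, qos or {}
        self.readers = []

class DataReader:
    def __init__(self, topic, content_filter=lambda s: True):
        self.samples, self.filter = [], content_filter
        topic.readers.append(self)

class DataWriter:
    def __init__(self, topic):
        self.topic = topic
    def write(self, sample):
        for reader in self.topic.readers:
            if reader.filter(sample):
                reader.samples.append(sample)

temperature = Topic("Temperature", qos={"history": "keep_last", "depth": 10})
hot_only = DataReader(temperature, content_filter=lambda s: s["value"] > 30)
writer = DataWriter(temperature)

writer.write({"sensor": "S1", "value": 25})  # filtered out
writer.write({"sensor": "S1", "value": 35})  # delivered
print(hot_only.samples)
```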

The DDS Security standard provides an extensible framework for dealing with authentication, access control, logging and encryption. As an example, you may give certain applications rights only to read certain topics, while granting others rights to read as well as write, and perhaps to create new topics. The rules that define access rights are very flexible and allow for very granular control over what applications are allowed to do. For securing communication, DDS Security takes an approach similar to that of SRTP, thus (1) allowing the use of multicast when possible, and (2) avoiding the in-line key re-negotiation issues of TLS/DTLS.

Applying the DDS Fabric

Among the standards that have been identified as relevant to IoT applications, DDS is the one that stands out with respect to the breadth and depth of coverage of data-sharing requirements. This does not come as a surprise to DDS aficionados, yet, those not familiar with the technology are astonished to learn of its many deployments in IoT systems. As you read below, you’ll see what makes DDS so special.

Device-2-Device. DDS provides a very efficient and scalable platform for D2D communication. DDS implementations can be scaled down to deeply embedded devices or up to high-end multi-core machines. In regard to efficiency, a DDS implementation can achieve latency as low as ~30 usec on Gbps Ethernet networks and throughput of several million messages per second. At the same time, DDS has a binary and efficient wire-protocol that makes it a viable solution in network-constrained environments. The broker-less and peer-to-peer nature of DDS makes it ideal for D2D communication, and at the same time, the ability to transparently broker DDS communication — especially when devices communicate through multicast — eases the integration of subsystems into IoT systems.

Device-2-Cloud. DDS supports multiple transport protocols, such as UDP and TCP, and when available takes advantage of multicast (support for Source Specific Multicast is available through vendor extensions and will soon be included in the standard). The support for UDP/IP is extremely beneficial for applications that deal with interactive, soft real-time data for which TCP/IP would introduce either too much overhead or head-of-line blocking issues. For deployments that can’t take advantage of UDP/IP, DDS alleviates some of the challenges introduced by TCP/IP vis-à-vis head-of-line blocking. This is achieved through its support for traffic differentiation and prioritization along with selective down-sampling. Independent of the transport used, DDS supports three types of reliability: best effort, last value reliability and reliability. Of these three, only the latter behaves like “TCP/IP reliability”; the others allow DDS to drop samples to ensure that stale data does not delay new data.
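The "last value reliability" idea — dropping stale samples rather than letting them delay new data — can be illustrated with a hypothetical keep-last cache sketch (plain Python, not a DDS implementation; the depth parameter loosely mirrors a history QoS):

```python
from collections import deque

# Hypothetical sketch: the reader cache keeps only the most recent
# sample(s) per instance key, so stale data is silently dropped instead
# of delaying new data as a strict TCP-like stream would.

class KeepLastCache:
    def __init__(self, depth=1):
        self.depth = depth
        self.by_key = {}

    def store(self, key, sample):
        history = self.by_key.setdefault(key, deque(maxlen=self.depth))
        history.append(sample)  # oldest sample is dropped when full

cache = KeepLastCache(depth=1)
for reading in [10, 11, 12, 13]:       # fast producer, slow consumer
    cache.store("sensor-1", reading)

print(list(cache.by_key["sensor-1"]))  # only the latest value survives
```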

The efficient wire-protocol, combined with the rich transport and reliability semantics, makes DDS an excellent choice for sharing both periodic data, such as telemetry, and data requiring high reliability. In addition, the built-in support for content filtering ensures data is only sent if there are consumers that share the same interest and whose filters match the data being produced.

Cloud-2-Cloud. The high throughput and low latency delivered by DDS makes it a perfect choice for data-sharing across the big pipes connecting different data centers.

In summary, DDS is the standard that best addresses the most compelling data-sharing requirements presented by Industrial and Consumer IoT applications. DDS-based platforms, such as PrismTech’s Vortex, address mobile, embedded, web, enterprise and cloud applications along with cloud messaging implementations, and allow for scaling and integration of devices and sub-systems at Internet scale. Solutions based on DDS are deployed today in Smart Cities, Smart Grids, Smart Transportation, Finance and Medical environments.

If you want to learn more about DDS check this tutorial or the many educational slides freely available on SlideShare.

Connected Boulevard — It’s What Makes Nice, France a Smart City

Known as the capital of the French Riviera, the city of Nice, France, is many things. It’s beautiful, it’s cosmopolitan and it’s vibrant. But it’s also something else — it’s possibly the smartest city in the world.

Among spectacular panoramic views, the rich culture, and all the shopping and nightlife opportunities is an underlying connectivity. It’s actually an intelligent data-sharing infrastructure that is enhancing the city’s management capabilities and is making daily life more efficient, enjoyable and easier for the more than 300,000 residents that call Nice home and the more than 10 million tourists who visit each year. It’s what makes this city smart… really smart.

Nice has been gaining much attention lately thanks to a series of innovative projects aimed at preserving the surrounding environment and enhancing quality of life through creative use of technology. Connected Boulevard is a great example of this.

The city launched the Connected Boulevard — an open and extensible smart city platform — as a way to continue to attract visitors while maintaining a high quality of life for its citizens. Connected Boulevard is used to manage and optimize all aspects of city management, including parking and traffic, street lighting, waste disposal and environmental quality.

A number of companies played a key role in the launch of Connected Boulevard, including Industrial Internet Consortium members Cisco, which is providing its Wi-Fi network, and PrismTech, which is providing its intelligent data-sharing platform, Vortex (based on the Object Management Group’s Data Distribution Service standard) at the core of the Connected Boulevard environment for making relevant data ubiquitously available.

Architecture Maximizes Extensibility and Minimizes Maintenance Costs

Think Global, an alliance of innovative start-ups and large companies, designed the Connected Boulevard architecture with an eye toward maximizing extensibility and minimizing maintenance costs. In a smart city environment, the main costs typically come from system maintenance, rather than initial development and launch efforts. A big part of these maintenance costs come from the replacement of sensor batteries. To help reduce these operating costs and maximize battery life, the Connected Boulevard project team made an interesting and forward thinking move — one which was in direct contrast with some of the latest thinking by those in the smart device and edge computing community.

Connected Boulevard relies on “dumb” sensors. These sensors typically are simply measuring physical properties such as temperature and humidity, magnetic field intensity, and luminosity. Once collected, these measurements are sent to signal processing algorithms within a cloud, where the data is then “understood” and acted upon. In the Connected Boulevard, magnetic field variation is used to detect parked cars, temperature and humidity levels are used to determine when to activate sprinklers, luminosity and motion detection are used to control street lighting.

The sensors in the Connected Boulevard rely on low-power protocols to communicate with aggregators that are installed throughout the road network. Powered from the mains, the aggregators use Vortex to convey the data into an Amazon EC2 cloud. The data is then analyzed by a series of analytics functions based on the Esper CEP platform. Finally, relevant information, statistics and insight gained through the data analysis are made available wherever they are needed within this connected ecosystem.

The applications within Connected Boulevard use caching features to maintain, in memory, a window of data over which real-time analytics are performed. The results of these analytics can be shared with applications throughout the overall system, where decisions are then made, such as what actions should take place. For example, the Nice City Pass application checks for free parking places and can also be used to reserve them. If a car is occupying a parking space that the driver has not paid for, a notification is sent to the police to ensure that the violating driver is fined.
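The parking check just described can be sketched as a simple rule: a magnetic-field variation marks a space as occupied, and an occupied, unpaid space raises a violation. The baseline and threshold values below are invented for illustration, not taken from the actual deployment:

```python
# Hypothetical sketch of the parking-violation rule described above.

BASELINE, THRESHOLD = 50.0, 8.0  # assumed sensor baseline and trigger delta

def space_occupied(field_strength: float) -> bool:
    # A large enough deviation from the baseline magnetic field means
    # a car is parked over the sensor.
    return abs(field_strength - BASELINE) > THRESHOLD

def check_violation(field_strength: float, paid: bool) -> bool:
    return space_occupied(field_strength) and not paid

print(check_violation(61.0, paid=False))  # occupied and unpaid -> notify police
print(check_violation(61.0, paid=True))   # occupied but paid   -> fine
print(check_violation(50.5, paid=False))  # space is free       -> nothing to do
```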

Significant Benefits

After the initial installation of Connected Boulevard a few years ago, traffic congestion was reduced by 30 percent, parking income increased by 35 percent and air pollution was reduced by 25 percent. It’s also anticipated that savings on street lighting will be at least 20 percent, but possibly as high as 80 percent. These are real, tangible results… and are clear examples of a smart city at work.

Building the Internet of Things with DDS

The real value of the Internet of Things (IoT) and the Industrial Internet (I2) is ubiquitous information availability and, consequently, the decisions that can be made from it. The importance of ubiquitous data availability has significantly elevated attention on standards-based data sharing technologies. In this post, I’ll analyze the data sharing requirement characteristics of IoT/I2 systems and describe how the Object Management Group (OMG) Data Distribution Service (DDS) standard ideally addresses them.

Data sharing in IoT/I2

Data sharing patterns within IoT/I2 systems can be classified as follows:

Device-2-Device. This communication pattern is prevalent on edge systems where devices or traditional computing systems need to efficiently share data, such as plants, vehicles, mobile devices, etc. Device-2-Device data sharing is facilitated by broker-less peer-to-peer infrastructures that simplify deployment, foster fault-tolerance, and provide performance-sensitive applications with low latency and high throughput.

Device-2-Cloud. Individual devices and subsystems interact with cloud services and applications for mediating data sharing as well as for data collection and analytics. Device-2-Cloud communication can have wildly different needs depending on the application and the kind of data being shared. For instance, a remote surgery application has far more stringent temporal requirements than a smart city application. On the other hand, the smart city application may have more stringent requirements with respect to efficient network and energy management of the device. Thus, depending on the use case, Device-2-Cloud communication has to be able to support high-throughput and low-latency data exchanges as well as operation over bandwidth-constrained links. Support for intermittent connectivity and variable-latency links is also quite important.

Cloud-2-Cloud. Although few systems are currently being deployed to span multiple IaaS instances or multiple IaaS regions (such as deploying across EC2 EU and U.S. regions), it will be increasingly important to be able to easily and efficiently exchange data across cloud “domains.” In this case, the data sharing technology needs to support smart routing to ensure that the best path is always taken to distribute data, provide high throughput, and deliver low per-message overhead.

Besides the data sharing patterns identified above, there are crosscutting concerns that a data distribution technology needs to properly address, such as platform independence – for example, the ability to run on embedded, mobile, enterprise and cloud apps, and security.

The Data Distribution Service (DDS)

DDS is an OMG standard for seamless, ubiquitous, efficient, timely, and secure data sharing – independent from the hardware and the software platform. DDS defines a wire protocol that allows multiple vendor implementations to interoperate, as well as an API that eases application porting across vendor products. The standard requires the implementation to be fully distributed and broker-less, meaning that DDS applications can communicate without any mediation, yet when useful, DDS communication can be transparently brokered.

The basic abstraction at the foundation of DDS is that of a Topic. A Topic captures the information to be shared along with the Quality of Service associated with it. This way it is possible to control the functional and non-functional properties of data sharing. DDS provides a rich set of QoS policies that control local resource usage, network utilization, traffic differentiation, and data availability for late joiners. In DDS the production of data is performed through Data Writers while the data consumption is through Data Readers. For a given Topic, Data Readers can further refine the information received through content and temporal filters. DDS is also equipped with a dynamic discovery service that allows the application to dynamically discover the information available in the system and match the relevant sources. Finally, the DDS Security standard provides an extensible framework for dealing with authentication, encryption, and access control.

Applying DDS to IoT and I2

Among the standards identified as relevant by the Industrial Internet Consortium for IoT and I2 systems, DDS is the one that stands out with respect to the breadth and depth of coverage of IoT/I2 data sharing requirements. Let’s see what makes DDS so special.

Device-2-Device. DDS provides a very efficient and scalable platform for Device-2-Device communication. DDS implementations can be scaled down to deeply embedded devices or up to high-end machines. A top-performing DDS implementation, such as PrismTech’s intelligent data sharing platform, Vortex, can offer latency as low as ~30 usec on Gbps Ethernet networks and point-to-point throughput of several million messages per second. DDS features a binary and efficient wire-protocol that also makes it a viable solution for Device-2-Device communication in network-constrained environments. The broker-less and peer-to-peer nature of DDS makes it an ideal choice for Device-2-Device communication. The ability to transparently broker DDS communication – especially when devices communicate through multicast – eases the integration of subsystems into IoT and I2 systems.

Device-2-Cloud. DDS supports multiple transport protocols, such as UDP/IP and TCP/IP, and when available can also take advantage of multicast. UDP/IP support is extremely useful in applications that deal with interactive, soft real-time data in situations where TCP/IP introduces either too much overhead or head-of-line blocking issues. For deployments that can’t take advantage of UDP/IP, DDS alleviates the problems introduced by TCP/IP vis-à-vis head-of-line blocking. This is done through its support for traffic differentiation and prioritization along with selective down-sampling. Independent of the transport used, DDS supports three different kinds of reliability: best effort, last value reliability, and reliability. Of these three, only the latter behaves like “TCP/IP reliability.” The others allow DDS to drop samples to ensure that stale data does not delay new data.

The efficient wire protocol, in combination with the rich transport and reliability semantics, makes DDS an excellent choice for sharing both periodic data, such as telemetry, and data requiring high reliability. In addition, the built-in support for content filtering ensures that data is only sent when there are consumers that share the same interest and whose filters match the data being produced.
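Content filtering can be sketched as a writer that evaluates each reader’s filter before sending anything. The reader names and predicates below are hypothetical, purely for illustration; in DDS this is expressed declaratively via content-filtered topics with SQL-like expressions:

```python
# Toy sketch of writer-side content filtering (not a real DDS API).
# A sample is delivered only to readers whose filter predicate matches it;
# if no filter matches, nothing goes on the wire at all.

readers = {
    "hot-alerts": lambda s: s["temp"] > 30.0,   # hypothetical reader filters
    "zone-42":    lambda s: s["zone"] == 42,
}

def publish(sample):
    """Return the readers this sample is actually delivered to."""
    return [name for name, match in readers.items() if match(sample)]

assert publish({"temp": 35.0, "zone": 7})  == ["hot-alerts"]
assert publish({"temp": 20.0, "zone": 42}) == ["zone-42"]
assert publish({"temp": 20.0, "zone": 7})  == []  # no interest: nothing sent
```

The key point is the last case: when no consumer’s filter matches, the sample is never transmitted, which is what makes content filtering a bandwidth optimization and not just a reader-side convenience.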

Cloud-2-Cloud. The high throughput and low latency delivered by DDS make it a perfect choice for data sharing across the big pipes connecting data centers.

In summary, DDS is the standard that best addresses the requirements of IoT/I2 systems. DDS-based platforms, such as PrismTech’s Vortex, provide device solutions for mobile, embedded, web, enterprise, and cloud applications along with cloud messaging implementations. DDS-based solutions are deployed today in smart cities, smart grids, smart transportation, finance, and healthcare.

If you want to learn more about DDS, check out this tutorial or the many educational slides freely available on SlideShare.

PrismTech Joins the Industrial Internet Consortium

Becomes the first (non-founder) industry member and plans to play an active role both in the IIC working groups and as a provider of its intelligent data-sharing platform for Industrial Internet systems

Reston, VA, USA – March 27, 2014 – PrismTech™, a global leader in software platforms for distributed systems, today announced that it has joined the Industrial Internet Consortium as its first (non-founder) industry member.

Quoting from today’s IIC launch press release, “AT&T, Cisco, GE, IBM and Intel today announce the formation of the Industrial Internet Consortium (IIC), an open membership group focused on breaking down the barriers of technology silos to support better access to big data with improved integration of the physical and digital worlds. The consortium will enable organizations to more easily connect and optimize assets, operations and data to drive agility and to unlock business value across all industrial sectors.”

The IIC will establish and influence common architectures, interoperability, and open standards that integrate devices and machines with people, processes and data. The Object Management Group (OMG), a non-profit trade association in Boston, MA, will manage the consortium.

“We’re excited to welcome PrismTech as one of the first members of the IIC,” said Dr. Richard Soley, Executive Director of the Industrial Internet Consortium. “As a leading contributor to the OMG Data Distribution Service standard – a protocol for the Industrial Internet – we look forward to leveraging PrismTech’s expertise in helping to identify the key enabling technologies for the Industrial Internet.”

Steve Jennis, SVP of Corporate Development at PrismTech added, “We are proud to be the first non-founder industry member of the IIC.  We believe our Vortex intelligent data-sharing platform provides a unique and key enabling technology for business-critical Industrial Internet systems, and the work of the IIC will just help us further develop and further differentiate Vortex.  As such, we look forward to working with the other IIC members to help deliver the huge economic potential of the Industrial Internet.”

PrismTech also announced details of Vortex today. Vortex is an efficient, secure and interoperable device-to-device and device-to-cloud real-time data-sharing platform with over 20 configurable end-to-end qualities of service. Vortex is designed for Industrial Internet solutions that connect devices, machines, enterprise systems, mobile users and cloud applications, delivering new levels of secure interoperability, control and real-time situational awareness.

Further information about PrismTech and its Vortex platform is available at http://www.prismtech.com/vortex.