The Network-Centric Computing Model
How can enterprise-level applications access and process information generated by embedded real-time systems?

System architects and engineering teams are designing increasingly complex embedded systems in order to satisfy their customers' stringent functionality and performance requirements. In addition, within tactical systems, it's not uncommon to require deterministic real-time behavior while moving large quantities of information over non-deterministic network transports. Many of these systems must now face the challenge of interfacing or "bridging" to the Global Information Grid (GIG) in a seamless fashion in order to: a) communicate information with the broader electronic community while b) not negatively impacting high performance mission-critical functionality.

As the Object Management Group's (OMG) Data Distribution Service (DDS) specification continues to gain market traction, particularly within the Department of Defense (DoD), the ability to seamlessly integrate DDS-based high-performance tactical systems with enterprise-based applications provides the potential for a viable, standards-based architectural strategy that specifically addresses the embedded to Enterprise (e2E) communication challenge.

The Network-Centric Computing Model
As network centricity becomes pervasive, and integrating with the GIG impacts the design and implementation of distributed real-time systems, system architects of real-time tactical systems must grapple with the design realities of bridging to/from enterprise applications. It's critical that the act of bridging not compromise the real-time deterministic performance of mission-critical embedded systems. Therefore, a bridging integration that exhibits "elastic" properties must be available to allow seamless, non-intrusive integration - thus the Network-Centric Computing Model (NCCM).

The NCCM is one that facilitates localized management of distributed data as an integral part of the real-time application while not relying on the traditional central server topology. The model's topology is peer-to-peer versus client/server, allowing system architectures to be designed, from the computing node's perspective, with no single point of failure (i.e., no central server). Network-centric computing is based on computing elements being loosely coupled such that high-performance middleware abstracts the operating system and the hardware specific details so that, ultimately, the application software design does not require intimate knowledge of the underlying network topology. This computing model facilitates the design of location transparent software, which directly benefits software module reuse.

Within the NCCM, a huge challenge is managing the distributed data to be shared between the tactical and enterprise applications. Real-time data must be captured, stored, retrieved, queried, and managed such that the proper information can be easily accessed by all interested enterprise participants. Conversely, enterprise information that must be pushed back to the tactical system must arrive in an efficient and timely manner. This shared data management capability can be viewed as, but is not limited to, a distributed real-time database where peer-to-peer (P2P) networking and in-memory relational database management systems (RDBMS) are leveraged to provide a solution that manages storage, retrieval, and distribution of fast-changing data in dynamic network environments. Figure 1 provides a simple illustration of this NCCM architecture. The benefit of the distributed database model is that it scales and guarantees continuous real-time availability of all information critical to the enterprise without compromising tactical system determinism or performance.

Another important aspect of the NCCM architecture is its ability to support network topologies that are bandwidth limited, lossy, and of an ad-hoc nature. Being able to federate applications is a key aspect of the NCCM and is precisely why DDS publish/subscribe middleware is leveraged as the communication backbone. Allowing systems to communicate large volumes of time-sensitive data, while operating over bandwidth-constrained, high-latency links that experience high bit-error rates is essential for operational success and system readiness requirements. The architecture has to be able to maintain overall system availability, even in the presence of individual sub-systems temporarily dropping out of the network, and quickly rejoining.

Leveraging Standards
As Figure 1 illustrates, the NCCM architecture is complemented by the support of the leading industry standards for application programming interfaces, data modeling, data manipulation, and high-performance, data-centric, publish and subscribe communication, such as ODBC, JDBC, SQL, and DDS. These familiar interfaces minimize the learning curve and facilitate quick time-to-market. In addition, the use of standards greatly simplifies integration with existing infrastructure solutions.

DDS
The OMG's DDS standard, which is now a mandated DoD technology within the DISR (previously JTA), is quickly gaining market traction due to its ability to abstract network complexities, facilitate the design of location-transparent applications, and provide application-level control of data Quality of Service (QoS) within a publish-subscribe communication paradigm. In fact, several DoD systems have been successfully deployed and are enjoying the benefits of DDS anonymous publish-subscribe middleware. By utilizing DDS technology for node-to-node communication, the complexity of managing a dynamic network environment, such as ad-hoc wireless networks, is removed from the application developer. It is imperative that a network-centric system accommodate network dynamics without adversely affecting the computing nodes comprising the overall distributed system.

SQL/ODBC/JDBC
Today's DBMS solutions utilize SQL along with the Open Database Connectivity (ODBC) and Java Database Connectivity (JDBC) APIs, all widely accepted standards within the data management community. SQL is used for both data definition and data manipulation, while ODBC and JDBC serve as Call Level Interfaces for the C/C++ and Java programming languages, respectively.
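The division of labor described above, SQL for definition and manipulation plus a call-level interface for the host language, can be sketched with Python's built-in sqlite3 module standing in for an ODBC/JDBC-style call-level interface (the table name and data here are illustrative, not from any deployed system):

```python
import sqlite3

# An in-memory database stands in for a real-time, in-memory RDBMS.
conn = sqlite3.connect(":memory:")

# SQL handles data definition...
conn.execute("CREATE TABLE track (id INTEGER PRIMARY KEY, lat REAL, lon REAL)")

# ...and data manipulation...
conn.execute("INSERT INTO track (id, lat, lon) VALUES (?, ?, ?)", (1, 36.85, -75.98))
conn.commit()

# ...while the call-level interface exposes the results to the host language.
row = conn.execute("SELECT lat, lon FROM track WHERE id = ?", (1,)).fetchone()
print(row)  # (36.85, -75.98)
```

The same pattern applies, with different driver plumbing, whether the call-level interface is ODBC from C/C++ or JDBC from Java.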

Network-Centric Data Management - A Real-Time Implementation
Within the NCCM, a daunting challenge is managing the distributed data, and facilitating localized management of that data. An architectural approach that addresses these requirements is commonly referred to as the distributed database. The benefit of the distributed database model is that it guarantees continuous real-time availability of all information critical to the enterprise, and facilitates the design of location transparent software, which directly impacts software module reuse.

The NCCM-based architecture addresses the aforementioned technical challenges by providing software intelligence that enables real-time information management. Furthermore, it implements a distributed shared database where fragments of the shared database are kept in the local data caches (i.e., local memory) of the hosts that comprise the network - on an as-needed basis.

As a result, software applications gain reliable, instant access across dynamic networks to information that changes in real-time. The architecture uniquely integrates peer-to-peer networking (DDS) and real-time, in-memory database management systems (DBMS) into a complete solution that manages storage, retrieval, and distribution of fast-changing data in dynamically configured network environments. It guarantees continuous availability in real-time of all information that is critical to the enterprise. DDS technology is employed to enable a truly decentralized data structure for distributed database management while DBMS technology is used to provide persistence for real-time DDS data.

The power of this model is that embedded applications do not need to know SQL or ODBC semantics, and enterprise applications are not forced to know publish-subscribe semantics. This is a critical point when building large systems: get the data to where it needs to go in a format that is native to the developers. Thus, the database becomes an aggregate of the data tables distributed throughout the system. When a node updates a table by executing a SQL INSERT, UPDATE, or DELETE statement on the table, the update is proactively pushed to other hosts that require local access to the same table via real-time publish-and-subscribe messaging. This architectural approach enables real-time replication of any number of remote data tables.
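The push-on-write behavior described above can be sketched as a thin wrapper around a database connection that notifies subscribers after each write statement. A plain callback list stands in for DDS publish-subscribe here, and all names are illustrative; a real bridge would publish a DDS sample over the network instead:

```python
import sqlite3

class PublishingConnection:
    """Wraps a DBMS connection and pushes each write to subscribers.

    The callback list is a toy stand-in for DDS publish-subscribe;
    a real NCCM bridge would publish a DDS sample instead.
    """
    def __init__(self):
        self._conn = sqlite3.connect(":memory:")
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def execute(self, sql, params=()):
        cur = self._conn.execute(sql, params)
        verb = sql.strip().split()[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE"):
            # Proactively push the change to all interested peers.
            for callback in self._subscribers:
                callback(verb, params)
        return cur

received = []
db = PublishingConnection()
db.subscribe(lambda verb, params: received.append((verb, params)))
db.execute("CREATE TABLE track (id INTEGER PRIMARY KEY, lat REAL)")
db.execute("INSERT INTO track VALUES (?, ?)", (1, 36.85))
db.execute("UPDATE track SET lat = ? WHERE id = ?", (36.90, 1))
```

Note that the DDL statement is not pushed; only data-manipulation statements generate publications, mirroring the INSERT/UPDATE/DELETE replication described above.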

The NCCM architecture consists of two technologies:

  1. Database synchronization technology that unifies the global data space offered by multiple DBMSs.
  2. DDS-DBMS integration technology that unifies the global data space offered by DDS and a DBMS, thus facilitating embedded-to-enterprise bridging.

Database Synchronization Technology
The NCCM architecture must provide an integral solution for data distribution and database management in the real-time applications space. By integrating these standard technologies in one framework, a service for efficient data distribution, storage, and retrieval is realized. In addition, it refines our view of the "database" within the net-centric computing model from being tied to a client/server central repository approach to a fully distributed architecture.

From a practical perspective, it is important to recognize that a distributed relational database capability can now be implemented on multiple computing nodes - which provide a multi-database vendor-independent solution. In fact, this new architecture views the combination of these distributed data tables as a single "distributed database." Figure 2 illustrates how the technology unifies the global data space for both embedded and enterprise applications.

DDS-DBMS Integration Technology - Embedded to Enterprise (e2E) Bridging
DDS-DBMS integration technology leverages both DDS and DBMS technologies to unify their global data spaces. It facilitates the bridging of embedded real-time systems with enterprise-based systems by allowing relational database table updates to be propagated, in real-time, to the embedded nodes. The embedded node utilizes the DDS API and subscribes to table updates. When the table is altered, either by an enterprise application (via SQL) or an application utilizing the DDS API, the local table is updated and update information is published (via DDS) for consumption by all interested DDS subscribers. This enables table updates to be seamlessly bridged from an enterprise application to an embedded real-time application.
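The propagation path described above can be sketched with two in-memory databases standing in for the enterprise DBMS and the embedded node's local data cache. The apply_update function plays the role of the bridge's DDS subscriber callback; all names are illustrative stand-ins, not a real DDS API:

```python
import sqlite3

# Two in-memory databases stand in for the enterprise DBMS and the
# embedded node's local data cache.
enterprise = sqlite3.connect(":memory:")
embedded = sqlite3.connect(":memory:")
for conn in (enterprise, embedded):
    conn.execute("CREATE TABLE track (id INTEGER PRIMARY KEY, lat REAL, lon REAL)")

def apply_update(conn, row):
    # Upsert the received sample into the local replica of the table,
    # as the bridge's subscriber callback would on the embedded node.
    conn.execute(
        "INSERT INTO track (id, lat, lon) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET lat = excluded.lat, lon = excluded.lon",
        row,
    )

# The enterprise application alters its table via SQL...
enterprise.execute("INSERT INTO track VALUES (?, ?, ?)", (7, 36.85, -75.98))

# ...and the bridge "publishes" each changed row to the embedded node.
for row in enterprise.execute("SELECT id, lat, lon FROM track"):
    apply_update(embedded, row)

replica = embedded.execute("SELECT id, lat, lon FROM track").fetchall()
```

In a real deployment the loop over changed rows would be driven by DDS samples arriving over the network rather than a direct query of the source table.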

The DDS-DBMS integration technology also allows DDS publications and/or subscriptions to be captured and logged into the in-memory database, in real-time, in order to capture and log all incoming or outgoing publish-subscribe activity. This allows live network traffic captures to be logged directly into RAM, thus facilitating the ability to post-process and analyze system communication activity. This logging capability provides the primitives for building distributed system debug and trace tools, message auditing, as well as design tools that can capture and play back messages in order to re-create original system activity for the purposes of lab testing and debug.

DDS-DBMS Integration - Architectural Details
The DDS-DBMS integration architecture is designed to provide application developers full control over the global data space using the DDS API and/or the SQL API.

Figure 3 illustrates the DDS-DBMS integration technology which consists of two bridge components: DDS-DBMS and DBMS-DDS.

DDS-DBMS Bridge
The DDS-DBMS Bridge monitors an application's published data and incoming (subscribed) data. It enables automatic storage of DDS topic data within the DBMS by mapping DDS topics to tables within the DBMS. As each topic instance is published, the topic instance is likewise inserted as a row in the table. This bridging provides the functionality necessary to support logging of both incoming and outgoing message traffic. When an in-memory RDBMS is employed, this logging can take place in real-time, without suffering the performance penalty typically associated with disk based databases.
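The topic-to-table mapping above can be sketched as a subscriber callback that inserts each arriving sample as a row. The "TrackTopic" name and its fields are hypothetical; in a real bridge the topic, its type, and the table schema would be derived from the IDL:

```python
import sqlite3

# An in-memory table logs every sample of a hypothetical "TrackTopic";
# using RAM avoids the latency penalty of a disk-based database.
log = sqlite3.connect(":memory:")
log.execute("CREATE TABLE TrackTopic (id INTEGER, lat REAL, lon REAL)")

def on_sample(sample):
    # Each published topic instance becomes a row in the mapped table.
    log.execute("INSERT INTO TrackTopic VALUES (?, ?, ?)", sample)

# Simulate three samples arriving from the network.
for sample in [(1, 36.85, -75.98), (1, 36.86, -75.97), (2, 40.71, -74.01)]:
    on_sample(sample)

count = log.execute("SELECT COUNT(*) FROM TrackTopic").fetchone()[0]
```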

DBMS-DDS Bridge
The DBMS-DDS Bridge manages the automatic publication of changes made to tables in the DBMS. It will also apply changes received via DDS to tables in the DBMS. This bridge allows table changes, whether made by an SQL enterprise application or by a DDS-enabled application, to be "pushed" to a pure DDS subscribing application, in real-time. This bridge component provides table event bridging from the enterprise application to the embedded application.

DDS-DBMS Integration Feature Summary
The DDS-DBMS integration technology provides application developers with a choice when accessing the global data space: updates made via the SQL API will be visible to DDS-based applications and updates made via the DDS API will be visible to DBMS user applications. These mechanisms offer a unique combination of features:

Storage of DDS Data in DBMS Tables
DDS-DBMS integration enables automatic storage of DDS data into DBMS tables. Changes made via the DDS API, as well as changes received from the network via DDS, are propagated to the associated DBMS. Once the data is propagated to the DBMS table, it can be accessed by a SQL user application via the SQL API.

Publication of DBMS Data via DDS
DDS-DBMS integration enables automatic publication of changes in specified DBMS tables. Changes made via the SQL API (i.e., INSERT and UPDATE statements) will be published into the network via DDS. SQL queries will report the user data changes received from the network via DDS.

Mapping Between IDL and SQL Data Types
DDS-DBMS integration provides automatic mapping between DDS data type representation and DBMS schema representation. This mapping is utilized to directly translate a DBMS table record to the DDS wire format representation and vice-versa.
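As a rough illustration, one plausible mapping between a few IDL primitive types and SQL column types might look like the following. The exact mapping is product-specific; these pairings are assumptions for the sketch, not a documented standard:

```python
# Hypothetical IDL-to-SQL type mapping; a real bridge defines its own.
IDL_TO_SQL = {
    "boolean": "SMALLINT",
    "short": "SMALLINT",
    "long": "INTEGER",
    "long long": "BIGINT",
    "float": "REAL",
    "double": "DOUBLE PRECISION",
    "string": "VARCHAR",
}

def column_ddl(field_name, idl_type):
    """Translate one IDL struct member into a SQL column definition."""
    return f"{field_name} {IDL_TO_SQL[idl_type]}"

ddl = column_ddl("latitude", "double")
print(ddl)  # latitude DOUBLE PRECISION
```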

Mapping Between DDS Data Samples and DBMS Table Updates
The DDS type metadata specified in an Interface Definition Language (IDL) file is mapped to a table schema in a DBMS. A DDS topic corresponds to a table in the DBMS, which may be named after the DDS topic name.

History
DDS-DBMS integration can automatically keep track of the history samples of a DDS topic instance. The number of history samples to store for an instance is specified as a configuration parameter of the DDS-DBMS Bridge. Normally, a topic instance is mapped to a single row in the associated table; when history is enabled, each sample of a topic instance will be stored as a separate row.
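The history behavior described above can be sketched as follows: each sample of a keyed instance is inserted as its own row, and rows beyond the configured depth are trimmed, oldest first. The table, key, and depth are illustrative assumptions:

```python
import sqlite3

HISTORY_DEPTH = 2  # configuration parameter of the (hypothetical) bridge

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE track (seq INTEGER PRIMARY KEY AUTOINCREMENT, "
             "id INTEGER, lat REAL)")

def store_sample(instance_id, lat):
    # With history enabled, each sample becomes its own row...
    conn.execute("INSERT INTO track (id, lat) VALUES (?, ?)", (instance_id, lat))
    # ...and rows beyond the configured depth are trimmed, oldest first.
    conn.execute(
        "DELETE FROM track WHERE id = ? AND seq NOT IN "
        "(SELECT seq FROM track WHERE id = ? ORDER BY seq DESC LIMIT ?)",
        (instance_id, instance_id, HISTORY_DEPTH),
    )

for lat in (36.85, 36.86, 36.87):
    store_sample(1, lat)

lats = [r[0] for r in conn.execute("SELECT lat FROM track WHERE id = 1 ORDER BY seq")]
print(lats)  # [36.86, 36.87]
```

With history disabled (depth of one), the same logic degenerates to the single-row-per-instance mapping described earlier.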

Conclusion
Real-time tactical systems face the challenge of interfacing or "bridging" to the Global Information Grid (GIG). This involves providing an efficient "data-path" that allows embedded applications to communicate efficiently with enterprise applications. By combining the data-centric technologies of both DDS and DBMS, a viable architectural strategy emerges. This architectural approach is being extended (refer to Figure 4) to facilitate data-centric communication with service-oriented architecture (SOA) messaging solutions such as JMS and Web Services.

With the emergence of the NCCM architecture, we now have answers to two key questions: "How can enterprise-level applications access and process information generated by embedded real-time systems?" and "How can an embedded system gain access and respond, in real-time, to data being managed by an enterprise-level application?"

About Mark Hamilton
Mark Hamilton has a strong background in embedded systems development and electrical engineering. He was involved in the design, development, and deployment of several embedded real-time systems targeting the intelligence and U.S. Naval communities. His professional experience includes high-speed digital system and circuit design, NTDS protocols, real-time simulations, and the design of a common reusable real-time simulation framework. He is an expert at the design and development of complex embedded systems. In his current role, Mark is responsible for field engineering for the South East Region of the U.S. He's actively involved in pre-sales technical activities and works closely with current and prospective customers. He's written white papers and spoken at various conferences concerning the emerging OMG DDS specification and RTI Data Distribution Service.
