The Patient Centric Medical System (PCMS)

SES has developed a secure data integration engine that integrates disparate data sources, from traditional relational databases to big data technologies, and can store virtually any data in any form. The cornerstone of the system is its integrated security and identity management model, which verifies that users are who they claim to be through a physical token combined with two-factor (2FA) and multi-factor authentication (MFA).
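
To make the model concrete, the following is a minimal sketch of a combined token-plus-one-time-code check in Python. It assumes a TOTP-style second factor; verify_hardware_token and the serial numbers are hypothetical placeholders rather than PCMS's actual mechanism, and the pyotp library supplies the one-time-password math.

```python
import pyotp  # widely used TOTP/HOTP library


def verify_hardware_token(token_serial: str) -> bool:
    """Hypothetical placeholder: confirm the physical token is enrolled."""
    return token_serial in {"TKN-0001", "TKN-0002"}  # illustrative registry


def authenticate(token_serial: str, totp_secret: str, user_code: str) -> bool:
    """Grant access only if both factors pass: the physical token and the code."""
    if not verify_hardware_token(token_serial):
        return False
    return pyotp.TOTP(totp_secret).verify(user_code)


# Usage: the secret would be provisioned per user at enrollment time.
secret = pyotp.random_base32()
print(authenticate("TKN-0001", secret, pyotp.TOTP(secret).now()))  # True
```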

The metadata database stores key data about the enterprise and maintains a catalog of standard and customizable data elements and data types, indexed by a search engine optimized for big data. The integration engine includes a scalable front end that allocates resources on demand, backed by a data lake data store that scales on demand to handle petabytes of data. The most important thing to understand about a data lake is not how it is constructed, but what it enables: a comprehensive way to explore, refine, and analyze petabytes of information constantly arriving from multiple data sources.
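
As an illustration, a catalog entry in such a metadata database might look like the sketch below. The field names are assumptions chosen for the example, not the actual PCMS schema.

```python
from dataclasses import dataclass, field


@dataclass
class DataElement:
    name: str                # e.g. "patient_id"
    data_type: str           # e.g. "string", "date", "decimal(10,2)"
    source_system: str       # originating system, useful for lineage
    is_custom: bool = False  # standard element vs. customer-defined
    tags: list[str] = field(default_factory=list)  # keywords for the search engine


catalog = [
    DataElement("patient_id", "string", "EHR", tags=["identity"]),
    DataElement("admission_date", "date", "ADT", tags=["encounter"]),
]

# Entries like these are what the big-data search engine would index.
identity_elements = [e for e in catalog if "identity" in e.tags]
```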

(A data lake is a centralized repository that allows you to store all your structured and unstructured data at any scale. You can store your data as-is, without having to first structure the data, and run different types of analytics—from dashboards and visualizations to big data processing, real-time analytics, and machine learning to guide better decisions.)
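
The "store as-is, structure later" pattern this definition describes can be as simple as landing raw files in object storage. The sketch below uses boto3, AWS's Python SDK; the bucket and key names are hypothetical.

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# A raw HL7 message lands untouched; no schema is imposed at write time.
s3.upload_file(
    Filename="adt_message.hl7",
    Bucket="pcms-data-lake-raw",        # hypothetical bucket name
    Key="hl7/2024/06/adt_message.hl7",  # keyed by arrival date for later partitioning
)
```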

As part of the integration engine we have implemented an Enterprise Service Bus (ESB) that communicates with any accessible endpoint, transforms data in transit, and enforces policies and procedures for identity management, security, data governance, and other essential activities.
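
The core transform-and-route step an ESB performs can be sketched as follows. The routing table, transform, and endpoint URL are illustrative stand-ins, not PCMS's actual bus configuration.

```python
import json
from typing import Callable

# Map each message type to a (transform, destination endpoint) pair.
ROUTES: dict[str, tuple[Callable[[dict], dict], str]] = {
    "lab_result": (
        lambda m: {**m, "unit": m["unit"].upper()},     # normalize units
        "https://lab-intake.example.internal/results",  # hypothetical endpoint
    ),
}


def dispatch(message: dict) -> tuple[str, str]:
    """Transform a message according to its type; return (destination, payload)."""
    transform, destination = ROUTES[message["type"]]
    return destination, json.dumps(transform(message))


dest, payload = dispatch({"type": "lab_result", "unit": "mg/dl", "value": 5.4})
```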

PCMS integrates with multiple cloud offerings, including Microsoft Azure and Amazon Web Services (AWS). It leverages these environments to provide elastic cloud storage, letting customers store virtually unlimited amounts of data in native formats and run analytics against that data with Apache Spark.
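
A representative analytics job over data held in the lake might look like the PySpark sketch below. The path and column names are assumptions; reading s3a:// paths additionally requires the hadoop-aws package and AWS credentials to be configured.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pcms-analytics").getOrCreate()

# Read records directly from the object store and aggregate on demand.
admissions = spark.read.parquet("s3a://pcms-data-lake-raw/admissions/")
daily = (
    admissions
    .groupBy(F.to_date("admitted_at").alias("day"))
    .agg(F.count("*").alias("admissions"))
)
daily.show()
```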

This architecture delivers high-performance analytics. Together, the cloud analytics layer, the data warehouse, and the cloud-based object store constitute our data lake, which provides near-unlimited capacity and scalability for storage and compute. A modern data lake dramatically simplifies the effort to derive insight and value from all of that data and ultimately produces faster business results.