
Plug & Play Data Lake

Published 28 May, 2021 (updated December 24, 2021)


A solution for Data-Led Migration.

What is Plug & Play Data Lake?

Nubiral's Plug & Play Data Lake automatically configures the foundational AWS services needed to easily tag, search, share, transform, analyze, and manage specific subsets of data across an entire company or with external users.
The solution is delivered as a phased project: first the client's current maturity level is assessed, and then a joint plan is developed to address the client's challenges from an infrastructure and database point of view.


Capture and store raw data at scale and at low cost.

Perform data transformations in the same place where the data is stored.

Provide quality data for real-time analysis.

Ensure availability, scalability, and versatility of data at all times, at a very competitive cost.

Allow access for users without technical or data-analysis skills, so everyone in the organization works from the same source of information.
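As an illustration of the first capability, raw data in a data lake is typically landed in a "raw zone" under a partitioned object-store prefix, which keeps storage cheap and makes later selective processing easy. A minimal sketch, assuming a hypothetical bucket layout and function name (not part of the Nubiral product):

```python
from datetime import date

def raw_zone_key(source: str, ingest_date: date, filename: str) -> str:
    """Build a partitioned S3-style key for the raw zone of a data lake.

    Partitioning by source system and ingest date means raw data can be
    captured at scale, and downstream tools can scan only the partitions
    they need instead of the whole bucket.
    """
    return (
        f"raw/source={source}/"
        f"ingest_date={ingest_date.isoformat()}/"
        f"{filename}"
    )

# Example: a file arriving from a hypothetical 'crm' source system.
# The resulting key would then be passed to an upload call such as
# boto3's s3.put_object(Bucket=..., Key=key, Body=...).
key = raw_zone_key("crm", date(2021, 5, 28), "contacts.json")
print(key)  # raw/source=crm/ingest_date=2021-05-28/contacts.json
```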

Success stories

About the client | The client, a leader in healthcare, wanted to modernize its data storage and centralization systems to meet its growing operational needs and contribute to cost savings. It held important and confidential data spread across 32 different data sources, in structured, semi-structured, and unstructured formats.

Needs | It was essential to centralize all data in one place and offer a data-driven decision-making system for the different sectors of the organization. The main challenges were legacy systems with a complex data structure that made reporting and analysis very difficult for end users, the variety of data sources, and a data volume so high that reports could not be consumed in real time.

Nubiral Solution | A data lake was implemented using AWS products and tools (data lake and data pipeline) to consolidate basic and critical data about the client's patients and their clinical examination results.
Based on the client's security and information-segmentation (classification) policies, a pipeline was created covering data ingestion, transformation, and presentation. A data catalog was created to unify the various sources and make them easier to visualize. The client also requested several data viewers for analysis, so the catalog was essential to preserve the relationships between sources.
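The ingestion-transformation-presentation pipeline and the unifying catalog described above can be sketched, in very simplified form, as follows. All names, the masking rule, and the in-memory catalog are illustrative assumptions; in the actual solution the catalog role is played by a managed AWS service rather than a Python dictionary:

```python
from typing import Any

# Illustrative in-memory "data catalog": maps a dataset name to its storage
# location and schema, so multiple data viewers can resolve the same datasets.
catalog: dict[str, dict[str, Any]] = {}

def ingest(source: str, records: list[dict]) -> list[dict]:
    """Ingestion: tag each raw record with the system it came from."""
    return [{**r, "_source": source} for r in records]

def transform(records: list[dict]) -> list[dict]:
    """Transformation: apply classification/segmentation policies.

    Here we only mask a hypothetical sensitive field, standing in for the
    security and information-segmentation rules mentioned above.
    """
    return [
        {k: ("***" if k == "patient_id" else v) for k, v in r.items()}
        for r in records
    ]

def present(name: str, location: str, records: list[dict]) -> None:
    """Presentation: register the curated dataset in the catalog."""
    schema = sorted(records[0].keys()) if records else []
    catalog[name] = {"location": location, "schema": schema}

# Example run for one hypothetical clinical-results source.
raw = ingest("lab_system", [{"patient_id": "P-1", "result": "negative"}])
curated = transform(raw)
present("clinical_results", "s3://lake/curated/clinical_results/", curated)
print(catalog["clinical_results"]["schema"])
# ['_source', 'patient_id', 'result']
```

The design point the sketch captures is that the catalog, not the pipeline, is what lets several independent viewers agree on where a dataset lives and what shape it has.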

Results | The data lake allowed users to start making data-driven decisions by centralizing and consolidating internal and external data in one place. It also added the ability to ingest and process any type of data: structured, semi-structured, and unstructured. The data, its processing, and its analysis are available 24/7 and in real time, aligned with the client's vision of uninterrupted operations. This made it easy to access data from across the organization without relying on multiple internal and/or external systems.
