Data Fabric: What It Is and How It Impacts Cybersecurity


The volume of data generated and used today is both impressive and daunting to manage. What is the best way to manage this mountain of dispersed and disparate data? A possible answer lies in the concept of a 'data fabric' as a means to unify data. This is an integrated layer of data and connecting processes that "utilizes continuous analytics over existing, discoverable and inferenced metadata assets to support the design, deployment and utilization of integrated and reusable data across all environments."


So what does that mean in everyday speech?


Think of the data fabric as a bedsheet spanning your data sources. Every source is connected, or woven, into the bedsheet through some mechanism (metadata, API services, security information, etc.), so each source can be linked to other storage and computing functions. Data silos come down because every source feeds into one large connective layer.


Seems straightforward. In a hybrid environment, the concept sounds appealing, too. The details, of course, are where implementation gets tricky.


How Does It Work? 


To begin, we need to understand that data fabric architectures do not happen overnight. Putting one in place is a journey, one that starts with knowing your existing data (or 'data mess'). Next, you need a plan to automate, harmonize and manage all sources with common connectors and components, removing the need for custom coding.


Remember when mobile phones had proprietary connectors? Nowadays, almost all of them use USB-C. Loosely, removing custom coding is a similar concept. Proprietary connectors, components and code have their uses, but if the goal is to connect your data sources, commonality is your friend. You need a data framework ..
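The "common connector" idea can be sketched in code. The following is a minimal illustration, not a real data fabric product: the class and method names (`DataConnector`, `DataFabric`, `register`, `catalog`) are hypothetical, invented here to show how one shared interface can replace per-source custom code, much like USB-C replaced proprietary plugs.

```python
from abc import ABC, abstractmethod


class DataConnector(ABC):
    """Common interface every data source implements, replacing custom code."""

    @abstractmethod
    def metadata(self) -> dict:
        """Describe the source so the fabric can discover and link it."""
        ...


class SQLConnector(DataConnector):
    """Hypothetical connector for a relational database."""

    def __init__(self, dsn: str):
        self.dsn = dsn

    def metadata(self) -> dict:
        return {"type": "sql", "location": self.dsn}


class APIConnector(DataConnector):
    """Hypothetical connector for a REST API source."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def metadata(self) -> dict:
        return {"type": "api", "location": self.base_url}


class DataFabric:
    """The 'bedsheet': one layer that weaves registered sources together."""

    def __init__(self):
        self.sources: dict[str, DataConnector] = {}

    def register(self, name: str, connector: DataConnector) -> None:
        self.sources[name] = connector

    def catalog(self) -> dict:
        # A unified metadata view across every source -- no silos.
        return {name: c.metadata() for name, c in self.sources.items()}


fabric = DataFabric()
fabric.register("orders", SQLConnector("postgres://orders-db"))
fabric.register("telemetry", APIConnector("https://telemetry.example.com"))
print(fabric.catalog())
```

Because every source speaks the same `DataConnector` interface, adding a new source is just another `register` call; nothing downstream has to be rewritten.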
