Now you’re ready to implement your data virtualization project. To start, you can try one of the following approaches:
- Data Blending: A DV solution can merge with your business intelligence (BI) tool’s semantic layer (the “universe” in some BI suites) or be added as a new module, combining multiple data sets to feed enterprise BI tools.
- Data Services Module: Typically offered by data integration suite or data warehouse vendors, this platform delivers robust data modeling, transformation and quality functions.
- SQLification Products: This emerging offering “virtualizes” underlying big data technologies and allows them to be combined with relational data sources and flat files — with querying done using standard SQL.
- Cloud Data Services: These offerings provide prepackaged integrations for SaaS (software as a service) applications, cloud databases and on-premise tools like Excel, and can be deployed on private enterprise clouds. They expose normalized APIs across cloud sources for easy data exchange in projects with small-to-medium data sets.
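To make the SQLification idea above concrete, here is a minimal sketch of presenting a flat file and a relational table behind one standard-SQL interface. It uses Python’s standard-library `sqlite3` as a stand-in for a virtualization engine, and the table names, columns and sample data are all hypothetical; real SQLification products federate queries to the underlying sources rather than copying rows into one database.

```python
import csv
import io
import sqlite3

# Hypothetical flat-file extract (in practice, read from disk or object storage).
CSV_ORDERS = """order_id,region,amount
1,EMEA,120.50
2,APAC,80.00
3,EMEA,45.25
"""

# An in-memory SQLite database stands in for the virtualization layer's SQL engine.
conn = sqlite3.connect(":memory:")

# A "relational" source: a regions reference table.
conn.execute("CREATE TABLE regions (region TEXT PRIMARY KEY, manager TEXT)")
conn.executemany("INSERT INTO regions VALUES (?, ?)",
                 [("EMEA", "Alice"), ("APAC", "Bob")])

# "Virtualize" the flat file: project it into a table so it is queryable with SQL.
conn.execute("CREATE TABLE orders (order_id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (:order_id, :region, :amount)",
                 csv.DictReader(io.StringIO(CSV_ORDERS)))

# One standard-SQL query now joins the relational table with the flat-file data.
for region, manager, total in conn.execute(
        """SELECT o.region, r.manager, SUM(o.amount)
           FROM orders o JOIN regions r ON o.region = r.region
           GROUP BY o.region ORDER BY o.region"""):
    print(region, manager, total)
```

The design point the bullet makes is visible here: the consumer writes ordinary SQL and never needs to know that one side of the join originated as a CSV file.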
In the end, data virtualization’s location-transparent architecture, coupled with large-scale analytics platforms, naturally supports applications in a hybrid cloud environment. It goes beyond tiered views and delegated query execution to support enterprise growth. Overall, implementing your own data virtualization approach will let you derive insights faster.
POST WRITTEN BY
Director, Digital Modernization | Principal Architect | Technology Evangelist for Sage IT Inc.