Optimizing OLAP Cube Construction by Improving Data Placement on Multi-node Clusters

The increasing volume of relational data calls for alternative ways to cope with it. The Hadoop framework, an open-source project based on the MapReduce paradigm, is a popular choice for big data analytics. However, the performance gained from Hadoop's features is currently limited by its default block placement policy, which takes no data characteristics into account. Indeed, the efficiency of many operations, including indexing, grouping, aggregation, and joins, can be improved by careful data placement.
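To make the intuition concrete, the following is a minimal toy sketch in Python, not the paper's actual policy and not Hadoop's real BlockPlacementPolicy API: if blocks are placed by the hash of a shared join/group key, the matching fact and dimension blocks land on the same node, so joins and aggregations on that key need no cross-node data movement. All names (NUM_NODES, the "fact"/"dim" tables) are invented for illustration.

    import random

    NUM_NODES = 4
    # Toy "blocks": (table, key) pairs; blocks with the same key must meet in a join.
    blocks = [(table, key) for table in ("fact", "dim") for key in range(100)]

    def default_placement(blks):
        # Default-style policy: ignores data characteristics, random node per block.
        return {b: random.randrange(NUM_NODES) for b in blks}

    def key_aware_placement(blks):
        # Characteristic-aware policy: same join key -> same node.
        return {b: hash(b[1]) % NUM_NODES for b in blks}

    def remote_joins(placement):
        # Count keys whose fact and dimension blocks ended up on different nodes.
        return sum(placement[("fact", k)] != placement[("dim", k)] for k in range(100))

    random.seed(0)
    print("remote joins, default placement  :", remote_joins(default_placement(blocks)))
    print("remote joins, key-aware placement:", remote_joins(key_aware_placement(blocks)))

With four nodes, roughly three quarters of the keys require remote access under random placement, versus none under the key-aware scheme; this gap is the opportunity a data-aware placement policy exploits.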

In this paper we propose a data warehouse placement policy to improve query performance on multi-node clusters, especially Hadoop clusters. We investigate the performance gain for an OLAP cube construction query with and without this data organization, varying both the number of nodes and the data warehouse size. The proposed data placement policy lowered the overall execution time for building OLAP data cubes by up to 20 percent compared to the default placement.
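For readers unfamiliar with the workload being measured, here is a hedged illustration of what an OLAP cube construction computes: one aggregate per subset of the dimensions (the cube lattice), equivalent to a GROUP BY ... WITH CUBE query. The fact table and column names below are invented for the example.

    from itertools import combinations

    # Toy fact table: (region, product, year, sales)
    facts = [
        ("EU", "tv", 2013, 100),
        ("EU", "pc", 2013, 250),
        ("US", "tv", 2014, 300),
        ("US", "pc", 2014, 150),
    ]
    dims = ("region", "product", "year")

    cube = {}
    for r in range(len(dims) + 1):
        for subset in combinations(range(len(dims)), r):  # one lattice node
            agg = {}
            for row in facts:
                group = tuple(row[i] for i in subset)
                agg[group] = agg.get(group, 0) + row[3]  # sum the sales measure
            cube[tuple(dims[i] for i in subset)] = agg

    for group_by, agg in cube.items():
        print(group_by or ("ALL",), agg)

With d dimensions the cube contains 2^d group-bys (8 here), which is why cube construction is data-movement heavy and sensitive to how blocks are placed across nodes.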