We are on the verge of the "industrial revolution of Big Data," which represents the next frontier for innovation, competition, and productivity. Big data is rich with promise but equally rife with challenges. It extends beyond traditional structured data to include unstructured data of all types; it is not only large in volume but also growing faster than Moore's law. This paper presents the Hadoop stack, a new paradigm required for big data storage and processing. It then describes how to optimize a Hadoop deployment using proven methodologies and tools, and examines the challenges and possible solutions encountered in real-world deployments.