Adaptive Computing's Moab HPC Suite and Moab Cloud Suite are an integral part of the Big Workflow solution, which unifies all data centre resources, optimizes the analysis process and guarantees services, shortening the time to discovery. The Big Workflow solution derives its name from its ability to solve Big Data challenges by streamlining the workflow to deliver valuable insights from massive quantities of data across multiple platforms, environments and locations.
While current solutions solve big data challenges with just Cloud or just HPC, Adaptive utilizes all available resources - including bare metal and virtual machines, technical computing environments (e.g., HPC, Hadoop), Cloud (public, private and hybrid) and even agnostic platforms that span multiple environments, such as OpenStack - as a single ecosystem that adapts as workloads demand.
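The idea of treating bare metal, HPC, Hadoop and cloud capacity as one adaptive ecosystem can be pictured with a toy placement routine. The pool names, capacities and workload kinds below are invented purely for illustration; this is a minimal sketch of the concept, not Moab's actual policy engine or API:

```python
# Toy illustration of treating a mixed data centre as one ecosystem.
# Pool names, capacities and workload kinds are invented for this sketch.
POOLS = {
    "bare_metal_hpc": {"cores_free": 512, "kinds": {"mpi", "batch"}},
    "hadoop_cluster": {"cores_free": 256, "kinds": {"mapreduce"}},
    "private_cloud":  {"cores_free": 128, "kinds": {"batch", "service"}},
    "public_cloud":   {"cores_free": 4096, "kinds": {"batch", "service"}},
}

def place(job_kind, cores_needed):
    """Route a job to the first pool that supports its workload kind and
    still has capacity, instead of confining it to one silo."""
    for name, pool in POOLS.items():
        if job_kind in pool["kinds"] and pool["cores_free"] >= cores_needed:
            pool["cores_free"] -= cores_needed
            return name
    return None  # no capacity anywhere: the job waits in the queue

jobs = [("mapreduce", 100), ("mpi", 400), ("service", 2000)]
placements = [place(kind, cores) for kind, cores in jobs]
# The large "service" job spills over into the public cloud because the
# smaller private cloud cannot hold it.
```

A real scheduler weighs far more than capacity and workload type (priorities, reservations, data locality, cost), but the sketch captures the core idea: workloads flow to whichever environment fits, rather than queuing behind a single silo.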
Traditional IT operates in a steady state, with maximum uptime and continuous equilibrium. Big Data interrupts this balance, creating a logjam to discovery. Big Workflow optimizes the analysis process to deliver an organized workflow that greatly increases throughput and productivity, and reduces cost, complexity and errors. Even with big data challenges, the data centre can still guarantee services that ensure SLAs, maximize uptime and prove services were delivered and resources were allocated fairly.
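The "prove resources were allocated fairly" part of a service guarantee can be pictured as a simple fair-share report that compares each group's delivered core-hours against its configured target share. The group names, shares and figures below are invented for illustration and are not Moab's real accounting model:

```python
# Toy fair-share report: deviation of actual usage from target share.
# Group names, target shares and core-hour totals are illustrative only.
shares = {"genomics": 0.50, "imaging": 0.30, "finance": 0.20}
delivered = {"genomics": 9_000, "imaging": 6_300, "finance": 2_700}  # core-hours

total = sum(delivered.values())
deviation = {
    group: round(delivered[group] / total - target, 3)
    for group, target in shares.items()
}
# Positive deviation = over-served, negative = under-served; a scheduler
# would raise priority for under-served groups in the next scheduling cycle.
```

In this toy run, imaging is 5 points over its share and finance 5 points under, so a fair-share policy would favour finance's queued jobs next.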
"The explosion of Big Data, coupled with the collisions of HPC and Cloud, is driving the evolution of Big Data analytics", stated Rob Clyde, CEO of Adaptive Computing. "A Big Workflow approach to Big Data not only delivers business intelligence more rapidly, accurately and cost effectively, but also provides a distinct competitive advantage. We are confident that Big Workflow will enable enterprises across all industries to leverage Big Data that inspires game-changing, data-driven decisions."
DigitalGlobe, a global provider of high-resolution Earth imagery solutions, uses Moab to dynamically allocate resources, maximize data throughput and monitor system efficiency to analyze its archived Earth imagery, which contains more than 4.5 billion square kilometers of global coverage. Each year, DigitalGlobe adds two petabytes of raw imagery to its archives and turns that into eight petabytes of new product.
With Moab at the core of its data centre, DigitalGlobe has been able to operate at global scale on the timelines its customers need by breaking down silos of isolated resources and increasing its maximum workflow capacity, helping decision makers better understand the planet in order to save lives, resources and time.
"Moab enables our responsiveness when disaster strikes", stated Jason Bucholtz, principal architect at DigitalGlobe. "With Big Workflow, we have been able to gain insights about our changing planet more rapidly - all without adding new resources to our existing infrastructure."
Adaptive Computing also launched Moab 7.5, which adds new features that make the software more robust in tackling stubborn Big Data challenges and enhance Big Workflow by unifying data centre resources, optimizing the data analysis process and guaranteeing services. These new Cloud and HPC features include:
According to a hands-on survey of more than 400 data centre managers, administrators and users that Adaptive Computing recently conducted at the Supercomputing, HP Discover and Gartner Data Center conferences, 91 percent of respondents believe that some combination of Big Data, HPC and Cloud is needed for a better Big Data solution. This finding underscores the intensifying collision between Big Data, HPC and Cloud, and is supported by the International Data Corporation (IDC) Worldwide Study of HPC End-User Sites.
Of the HPC sites included in IDC's 2013 study, 67 percent said they perform Big Data analysis on their HPC systems, with an average of 30 percent of available computing cycles devoted to Big Data analysis work. In addition, the proportion of sites using Cloud computing to address parts of their HPC workloads rose from 13.8 percent in 2011 to 23.5 percent in 2013, with public and private Cloud use about equally represented.
"Our 2013 study revealed that a surprising two thirds of HPC sites are now performing Big Data analysis as part of their HPC workloads, as well as an uptick in combined uses of Cloud computing and supercomputing", stated Chirag Dekate, Ph.D., research manager, High-Performance Systems at IDC. "As there is no shortage of Big Data to analyze and no sign of it slowing down, combined uses of Cloud and HPC will occur with greater frequency, creating market opportunities for solutions such as Adaptive's Big Workflow."