
Primeur weekly 2014-03-03

The Cloud

IBM to acquire Cloudant: Open, Cloud database service helps organisations simplify mobile, web app and Big Data development ...

SUSE Cloud 3 now available, based on OpenStack Havana release ...

Oracle buys BlueKai ...

IBM launches pay-as-you-go model to quickly get clients on Cloud storage ...

VMware vCloud Hybrid Service now generally available in Europe ...

VMware completes acquisition of AirWatch ...

Adaptive Computing introduces Big Workflow to accelerate insights that inspire data-driven decisions, giving the business a distinct competitive advantage ...

IBM invests $1 billion to deliver unique Platform-as-a-Service capabilities to connect enterprise data and applications to the Cloud ...

Fujitsu launches A5 for Windows Azure public Cloud service based on strengthened alliance with Microsoft ...

Desktop Grids

HTC aims to create a supercomputer with "HTC Power to Give" ...

BOINCCalculator 0.5.1 released ...

EuroFlash

PRACE 8th Call for Proposals closes with larger allocations on all systems ...

Gauss Centre for Supercomputing allocates ~440 million computing core hours to European research ...

IBM challenges mobile developers to bring the power of Watson to the palm of your hand ...

ISC'14 to launch Exhibitors Innovation Forum on June 24-25 ...

HP helps telcos deploy new offerings in minutes, rather than months, with network functions virtualization ...

Intel announces family of virtualization platforms for industrial systems ...

USFlash

Rogue Wave shows off new analysis toolchain for software risk mitigation ...

OpenMP ARB releases new Mission Statement ...

Fujitsu delivers new levels of in-memory computing performance and x86 mission-critical uptime with PRIMEQUEST 2000 series ...

New record set for data-transfer speeds ...

University of New Mexico gains supercomputer from the New Mexico Consortium ...

Nova Southeastern University will ramp up research with its first supercomputer ...

Gartner says 2013 worldwide server market grew 2.1 percent in shipments, while revenue decreased 4.5 percent for the year ...

NIH and George Washington University researchers partner to accelerate genomics research using Internet2's 100G network ...

Adaptive Computing introduces Big Workflow to accelerate insights that inspire data-driven decisions, giving the business a distinct competitive advantage


25 Feb 2014 Provo - Adaptive Computing, a company that powers many of the world's largest private/hybrid Cloud and technical computing environments with its Moab optimization and scheduling software, has launched Big Workflow, an industry term coined by Adaptive Computing for an approach that accelerates insights by more efficiently processing intense simulations and Big Data analysis.

Adaptive Computing's Moab HPC Suite and Moab Cloud Suite are an integral part of the Big Workflow solution, which unifies all data centre resources, optimizes the analysis process and guarantees services, shortening the time to discovery. Adaptive Computing's Big Workflow solution derives its name from its ability to solve Big Data challenges by streamlining the workflow to deliver valuable insights from massive quantities of data across multiple platforms, environments and locations.

While current solutions address Big Data challenges with Cloud alone or HPC alone, Adaptive Computing utilizes all available resources - including bare metal and virtual machines, technical computing environments (e.g. HPC, Hadoop), Cloud (public, private and hybrid) and even agnostic platforms that span multiple environments, such as OpenStack - as a single ecosystem that adapts as workloads demand.
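
To make the "single ecosystem" idea concrete, the minimal Python sketch below places a workload onto whichever resource can satisfy it, regardless of environment type. The names (Resource, Workload, place) and the two-pass policy are hypothetical illustrations of the concept, not Adaptive Computing's actual interfaces.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Resource:
        name: str
        kind: str        # "bare_metal", "vm", "hpc", "hadoop", "public_cloud", ...
        cores_free: int

    @dataclass
    class Workload:
        name: str
        cores_needed: int
        preferred_kinds: List[str]   # e.g. ["hadoop", "hpc"] for an analysis job

    def place(workload: Workload, pool: List[Resource]) -> Optional[Resource]:
        """Treat every environment as one pool: prefer the workload's
        natural home, then fall back to any resource with capacity."""
        # First pass: preferred environments (e.g. keep MPI jobs on HPC nodes).
        for r in pool:
            if r.kind in workload.preferred_kinds and r.cores_free >= workload.cores_needed:
                return r
        # Second pass: any environment with capacity - one adaptive ecosystem.
        for r in pool:
            if r.cores_free >= workload.cores_needed:
                return r
        # No local capacity left: a real system could burst to public Cloud here.
        return None

    pool = [
        Resource("cray-1", "hpc", cores_free=64),
        Resource("hadoop-7", "hadoop", cores_free=16),
        Resource("vmfarm-3", "vm", cores_free=128),
    ]
    job = Workload("imagery-analysis", cores_needed=32, preferred_kinds=["hadoop", "hpc"])
    print(place(job, pool).name)   # -> "cray-1" (hadoop-7 lacks the capacity)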

Traditional IT operates in a steady state, with maximum uptime and continuous equilibrium. Big Data interrupts this balance, creating a logjam to discovery. Big Workflow optimizes the analysis process to deliver an organized workflow that greatly increases throughput and productivity, and reduces cost, complexity and errors. Even with Big Data challenges, the data centre can still guarantee services that ensure SLAs, maximize uptime and prove that services were delivered and resources were allocated fairly.
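
The fairness guarantee in such systems generally rests on fair-share scheduling, in which a user's recent historical usage counts against the priority of their next jobs and older usage decays in weight. The Python sketch below shows that generic technique only; the decay factor and interval weighting are illustrative assumptions, not Moab's actual policy.

    # Generic fair-share: recent usage counts against a user, but its
    # influence decays over time, so past heavy use is not penalized forever.
    DECAY = 0.5   # illustrative per-interval decay factor

    def fairshare_factor(usage_per_interval, target_share):
        """usage_per_interval: fraction of the machine a user consumed in
        each past interval, most recent first. Positive result: the user is
        under-served and gains priority; negative: over-served."""
        effective = sum(u * DECAY ** i for i, u in enumerate(usage_per_interval))
        norm = sum(DECAY ** i for i in range(len(usage_per_interval)))
        return target_share - effective / norm

    # A user entitled to 25% who consumed 60%, 10% and 5% of the machine in
    # the last three intervals is over-served, so their next jobs wait longer:
    print(round(fairshare_factor([0.60, 0.10, 0.05], 0.25), 3))   # -> -0.129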

"The explosion of Big Data, coupled with the collisions of HPC and Cloud, is driving the evolution of Big Data analytics", stated Rob Clyde, CEO of Adaptive Computing. "A Big Workflow approach to Big Data not only delivers business intelligence more rapidly, accurately and cost effectively, but also provides a distinct competitive advantage. We are confident that Big Workflow will enable enterprises across all industries to leverage Big Data that inspires game-changing, data-driven decisions."

DigitalGlobe, a global provider of high-resolution Earth imagery solutions, uses Moab to dynamically allocate resources, maximize data throughput and monitor system efficiency to analyze its archived Earth imagery, which contains more than 4.5 billion square kilometers of global coverage. Each year, DigitalGlobe adds two petabytes of raw imagery to its archives and turns that into eight petabytes of new product.

With Moab at the core of its data centre, DigitalGlobe has been able to operate at global scale on the timelines its customers need by breaking down silos of isolated resources and increasing its maximum workflow capacity, helping decision makers better understand the planet in order to save lives, resources and time.

"Moab enables our responsiveness when disaster strikes", stated Jason Bucholtz, principal architect at DigitalGlobe. "With Big Workflow, we have been able to gain insights about our changing planet more rapidly - all without adding new resources to our existing infrastructure."

Adaptive Computing also launched Moab 7.5, which adds new features that make the software more robust in tackling demanding Big Data workloads and that enhance Big Workflow by unifying data centre resources, optimizing the data analysis process and guaranteeing services. These new Cloud and HPC features include:

1. Unify

  • Role-based Access Control: Portal-based multi-tenancy for secure resource sharing among different users, departments and customers (a generic sketch of the idea follows this list).
  • Cray Integration: Moab now integrates with Cray's Application Level Placement Scheduler (ALPS) via the BASIL 1.3 interface.
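
Following up on the access-control item above, here is a minimal, generic sketch of tenant-scoped role-based access control in Python. The roles, permissions and field names are invented for the example; they are not Moab's actual model.

    # Minimal tenant-scoped RBAC: a request passes only when the user's role
    # grants the action AND the resource belongs to the user's tenant.
    ROLE_PERMISSIONS = {
        "admin":   {"submit", "cancel", "view", "configure"},
        "user":    {"submit", "cancel", "view"},
        "auditor": {"view"},
    }

    def can(user, action, resource):
        return (action in ROLE_PERMISSIONS.get(user["role"], set())
                and user["tenant"] == resource["tenant"])

    alice = {"name": "alice", "role": "user", "tenant": "dept-geo"}
    job   = {"id": 42, "tenant": "dept-geo"}
    other = {"id": 43, "tenant": "dept-bio"}

    print(can(alice, "submit", job))    # True: same tenant, role allows it
    print(can(alice, "view", other))    # False: another tenant's resource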

2. Optimize

  • Hardened Power Management: Advanced power management policies for true resource power down (see the ipmitool sketch after this list). In addition, power scripts now comply with IPMI (Intelligent Platform Management Interface) green computing standards.
  • Message Bus Communication: Increased job-scheduling speed by delegating communication to the message bus, which allows Moab to stay focused on scheduling rather than on communication.
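
Power-down scripts of this kind typically drive each node's baseboard management controller with the standard ipmitool utility. The Python sketch below wraps a real ipmitool invocation; the host name, credentials and the policy around the call are placeholder assumptions, not Adaptive Computing's actual script.

    import subprocess

    def node_power(host, user, password, action):
        """Issue an IPMI chassis power command ("status", "on", "off" or
        "soft") to a node's BMC over the LAN using ipmitool."""
        cmd = ["ipmitool", "-I", "lanplus", "-H", host,
               "-U", user, "-P", password, "chassis", "power", action]
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    # Illustrative policy: soft power-off a node the scheduler marked idle.
    # node_power("node042-bmc.example.com", "admin", "secret", "soft")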

3. Guarantee

  • Moab Accounting Manager (MAM) Enhancements: Several new enhancements, including non-blocking accounting calls, High Availability connection, synchronization between Moab and MAM accounts and users, discrete allocations, simplified charge rate specification and additional tracking metrics.
  • Moab Viewpoint Upgrade: In addition to advanced dashboard notifications and gadgets, Moab Viewpoint now reveals lifecycle states to quickly diagnose the status of a job.
  • Custom Reporting Expansion Capabilities: Expanded reporting API to include accounting data for generating accounting reports.
  • Service Phase Transition: Reduced diagnosis time for error transparency throughout the service life cycle.
  • Standardized Logging: Moab logs are now Splunk-ready across all components of Moab and its web services (illustrated in the sketch below).
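
"Splunk-ready" generally means logs emitted as timestamped key=value pairs, which Splunk's field extraction turns into searchable fields without custom parsing rules. The Python illustration below is generic; the field names are invented for the example.

    import logging

    # key=value pairs let a Splunk search such as
    #   component=scheduler state=preempted
    # match fields directly, with no custom extraction rules.
    logging.basicConfig(format="%(asctime)s level=%(levelname)s %(message)s")
    log = logging.getLogger("moab-example")

    def log_job_event(component, job_id, state, wallclock_s):
        log.warning("component=%s job=%s state=%s wallclock=%ss",
                    component, job_id, state, wallclock_s)

    log_job_event("scheduler", "1234.head", "preempted", 842)
    # -> 2014-03-03 09:00:00,123 level=WARNING component=scheduler
    #    job=1234.head state=preempted wallclock=842s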

According to a hands-on survey of more than 400 data centre managers, administrators and users that Adaptive Computing recently conducted at the Supercomputing, HP Discover and Gartner Data Center conferences, 91 percent of respondents believe some combination of Big Data, HPC and Cloud should occur for a better Big Data solution. This finding underscores the intensifying collision between Big Data, HPC and Cloud, and is supported by the International Data Corporation (IDC) Worldwide Study of HPC End-User Sites.

Among the HPC sites included in IDC's 2013 study, 67 percent said they perform Big Data analysis on their HPC systems, with an average of 30 percent of the available computing cycles devoted to Big Data analysis work. In addition, the proportion of sites exploiting Cloud computing to address parts of their HPC workloads rose from 13.8 percent in 2011 to 23.5 percent in 2013, with public and private Cloud use about equally represented.

"Our 2013 study revealed that a surprising two thirds of HPC sites are now performing Big Data analysis as part of their HPC workloads, as well as an uptick in combined uses of Cloud computing and supercomputing", stated Chirag Dekate, Ph.D., research manager, High-Performance Systems at IDC. "As there is no shortage of Big Data to analyze and no sign of it slowing down, combined uses of Cloud and HPC will occur with greater frequency, creating market opportunities for solutions such as Adaptive's Big Workflow."
Source: Adaptive Computing
