"Scyld ClusterWare is ideal for managing HPC and Hadoop workloads for Big Data customers", stated Victor Gregorio, Vice President and General Manager of Cloud Services, Penguin Computing. "ClusterWare is the genesis of Linux-based supercomputing and represents the evolution of HPC using Hadoop to evaluate large data sets."
Scyld ClusterWare for Hadoop, which has been deployed in several organizations, runs on any x86-based hardware running Red Hat Enterprise Linux and is designed to integrate with all major Hadoop distributions. It is scalable, flexible, easy to use, and well suited to managing HPC and Hadoop workloads for Big Data customers. Scyld ClusterWare for Hadoop also offers: extremely fast cluster provisioning; a single-system-image architecture that guarantees configuration consistency; support for internal clouds and cloud bursting; a Web-service-based architecture for management and workflow submission from anywhere; seamless integration with directory services such as LDAP and Active Directory; worldwide commercial support from HPC experts; and qualification and optimization for Penguin Computing hardware.
Penguin Computing's years of HPC expertise allow ClusterWare to be highly tuned for HPC workloads, offering pre-bundled OS optimizations for cluster performance, a single system install for straightforward change management, Web GUIs for administrative dashboards and HPC workflow management, tools for predictive hardware failure analysis, and all the middleware needed to run a compute cluster effectively.