Liqid, provider of a composable disaggregated infrastructure (CDI) platform, today announced dynamic Slurm Workload Manager integration for its Liqid Matrix Software, giving HPC deployments a tool designed to optimize resource utilization for artificial intelligence (AI).
Liqid uses the Slurm integration to compose servers at job submission from pools of compute, storage and GPU resources via Liqid Matrix Software, delivering bare-metal configurations tailored to each HPC job. “With the resources composed to match the needs of the job, rather than a best fit of standard configurations, Liqid Matrix software frees stranded resources,” the company said in its announcement. “When a job is completed, the resources for the composed system are returned to the pools and can be automatically redeployed via Liqid Matrix for a future job.”
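The compose-on-submission and release-on-completion lifecycle described above can be sketched with a simple in-memory pool. All names here (`ResourcePool`, `compose`, `release`) are illustrative stand-ins, not the Liqid Matrix or Slurm API.

```python
# Hypothetical sketch of the compose/release lifecycle the announcement
# describes; the class and method names are illustrative only.

class ResourcePool:
    """In-memory stand-in for a pool of disaggregated resources."""

    def __init__(self, gpus, nvme_drives):
        self.gpus = gpus                  # free GPUs on the fabric
        self.nvme_drives = nvme_drives    # free NVMe drives on the fabric

    def compose(self, job_id, gpus_needed, nvme_needed):
        """Carve a bare-metal configuration sized to the job's request."""
        if gpus_needed > self.gpus or nvme_needed > self.nvme_drives:
            return None  # not enough free resources; the job would queue
        self.gpus -= gpus_needed
        self.nvme_drives -= nvme_needed
        return {"job": job_id, "gpus": gpus_needed, "nvme": nvme_needed}

    def release(self, composition):
        """Return a completed job's resources to the pool for reuse."""
        self.gpus += composition["gpus"]
        self.nvme_drives += composition["nvme"]


pool = ResourcePool(gpus=8, nvme_drives=16)
cfg = pool.compose("job-42", gpus_needed=4, nvme_needed=2)  # on submission
# ... job runs on the composed bare-metal server ...
pool.release(cfg)                                           # on completion
assert pool.gpus == 8 and pool.nvme_drives == 16            # nothing stranded
```

The point of the sketch is the round trip: resources are sized to the job rather than to a fixed server configuration, and everything returns to the shared pool when the job ends.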
Slurm integration with Liqid Matrix allows for the automation of backend operations, such as evolving hardware and server types. Slurm also gives data scientists and researchers visibility into the Liqid multiverse of all the possible systems that can be configured from available pooled resources, according to the company. Newly acquired, disaggregated resources are recognized and represented in the pool as they are deployed, without the downtime associated with traditional hardware upgrades.
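The "multiverse" visibility described above — every system that could be composed from the free pool — can be approximated by enumerating resource combinations. This is an illustrative sketch only; `possible_systems` and its limits are assumptions, not a Liqid or Slurm function.

```python
from itertools import product

def possible_systems(pool, max_gpus_per_server=8, max_nvme_per_server=8):
    """Enumerate the (gpus, nvme) configurations composable from the free
    pool, capped by hypothetical per-server fabric limits."""
    return [
        (g, n)
        for g, n in product(range(pool["gpus"] + 1), range(pool["nvme"] + 1))
        if g <= max_gpus_per_server and n <= max_nvme_per_server and (g or n)
    ]

# With 2 free GPUs and 1 free NVMe drive, 5 configurations are possible.
configs = possible_systems({"gpus": 2, "nvme": 1})
```

As new disaggregated resources join the pool, the enumeration grows automatically, which mirrors how newly deployed hardware appears to researchers without a traditional upgrade cycle.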
GPUs, NVMe storage, NICs, HBAs, FPGAs and storage-class memory can be aggregated and deployed in minutes via Liqid Matrix Software. Bare-metal resources can be shared and composed across intelligent fabrics in the ratios required for a given workload, at scale or down to the level of an individual component. In addition, with Liqid Matrix Software, technologies such as GPU peer-to-peer connectivity can be enabled as though the resources were local to the server, according to Liqid. “Tight integration and agility enable Liqid Matrix Software and Slurm to deliver industry-leading performance with the tightest possible physical footprint,” the company said.
“Composable infrastructure is critical when research needs across an organization vary in requirement and scale,” said Lance Long, senior research manager responsible for infrastructure solutions at the Electronic Visualization Laboratory (EVL) and for research support in the College of Engineering, both part of the University of Illinois at Chicago. “At EVL, we are developing programmable interfaces for controlling composable hardware, providing user-managed hardware and programmatic integrations. The new Slurm integration by Liqid helps better manage deployments of bare-metal and containerized workloads and services for the composable infrastructure community.”
“With Slurm integration, Liqid delivers unprecedented performance and architectural adaptivity for data center environments utilizing HPC configurations laced together with increasingly complex artificial intelligence to solve the world’s most urgent problems,” said Sumit Puri, CEO and cofounder of Liqid. “The ability to automate resources to rapidly scale as applications demand, regardless of where researchers are physically located, means the pace of global research can increase with the same available resources due to increased efficiency and flexibility, accelerating critical data analytics and time to market.”