High Performance Storage System
Developer(s) | IBM in conjunction with DOE National Labs
---|---
Stable release | 7.3.3 patch 4 / March 2012
Operating system | Cross-platform
Type | Hierarchical Storage Management
License | Proprietary
Website | hpss-collaboration
High Performance Storage System (HPSS) is a flexible, scalable, policy-based Hierarchical Storage Management product developed by IBM in collaboration with various DOE National Labs. It provides scalable hierarchical storage management (HSM), archive, and file system services using cluster, LAN and SAN technologies to aggregate the capacity and performance of many computers, disks, disk systems, tape drives and tape libraries.[1]
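To make "policy-based" concrete: a hierarchical storage manager decides, per file, which tier should hold it, typically by rules over access history. The following is a toy sketch of such a rule (the tier names, threshold, and record layout are invented for illustration; HPSS's actual policy and class-of-service machinery is far richer):

```python
from dataclasses import dataclass
from typing import Optional
import time

# Toy sketch of a policy-based tier decision in an HSM. The tiers,
# threshold, and FileRecord layout are invented for illustration;
# this is not HPSS code or its API.

@dataclass
class FileRecord:
    path: str
    size_bytes: int
    last_access: float  # Unix timestamp of the most recent read

DISK_TO_TAPE_AGE_SECS = 30 * 24 * 3600  # made-up policy: demote after 30 idle days

def choose_tier(f: FileRecord, now: Optional[float] = None) -> str:
    """Pick the storage tier a file should occupy under this toy policy."""
    now = time.time() if now is None else now
    if now - f.last_access > DISK_TO_TAPE_AGE_SECS:
        return "tape"  # cold data: migrate to the cheap, slow tier
    return "disk"      # hot data: keep in the fast disk cache

if __name__ == "__main__":
    cold = FileRecord("/archive/run42.dat", 10**9, time.time() - 90 * 24 * 3600)
    hot = FileRecord("/archive/run99.dat", 10**9, time.time())
    print(choose_tier(cold), choose_tier(hot))  # -> tape disk
```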
Architecture
HPSS supports a variety of methods for accessing and creating data, including FTP, parallel FTP, a virtual file system (VFS) interface on Linux, and a robust client API with support for parallel I/O.
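Because the FTP interface is standard, ordinary FTP tooling works against an HPSS archive. Below is a minimal sketch using Python's standard ftplib; the hostname, credentials, and file path are hypothetical placeholders, not a real HPSS site:

```python
from ftplib import FTP

# Fetch one file from an HPSS archive over its plain FTP interface.
# Host, login, and path below are hypothetical placeholders.
with FTP("hpss.example.org") as ftp:
    ftp.login(user="alice", passwd="secret")
    with open("run42.dat", "wb") as out:
        # RETR streams the file; the HSM stages it from tape first if needed.
        ftp.retrbinary("RETR /archive/run42.dat", out.write)
```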
The current 7.3 release of HPSS is fully supported on AIX and Linux; the HPSS data mover and client API are supported on AIX, Linux, and Solaris.[1]
The implementation is built around IBM's DB2, a scalable relational database management system.
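Since the metadata lives in DB2, it can in principle be inspected with ordinary database tooling. The sketch below uses IBM's real ibm_db Python driver, but the connection parameters and the table name are assumptions made for illustration, not a documented part of HPSS:

```python
import ibm_db  # IBM's Python driver for DB2

# Connection parameters and the NSOBJECT table name are assumptions
# for illustration; HPSS's internal DB2 schema is not documented here.
conn = ibm_db.connect(
    "DATABASE=hpssdb;HOSTNAME=core.example.org;PORT=50000;"
    "PROTOCOL=TCPIP;UID=hpss;PWD=secret", "", "")

stmt = ibm_db.exec_immediate(conn, "SELECT COUNT(*) FROM NSOBJECT")
row = ibm_db.fetch_tuple(stmt)
print("objects in namespace:", row[0])
ibm_db.close(conn)
```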
The HPSS collaboration
The collaboration which produced HPSS began in the fall of 1992 and involved IBM's Houston Global Services and five United States Department of Energy (DOE) National Laboratories (Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, and Sandia).[1] At that time, the DOE national laboratory and IBM HPSS design team recognized that a data storage explosion was coming: as computing power rose to teraops and petaops, the data stored in HSMs would grow to petabytes and beyond, transfer rates to and from the HSM would need to reach gigabytes per second and higher, and daily throughput with an HSM would reach tens of terabytes per day. The collaboration therefore set out to design and deploy a system that would scale by a factor of 1,000 or more and evolve from that base toward these expected targets and beyond.[2]
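As a quick sanity check on those targets (a back-of-the-envelope sketch with example rates, not figures from the collaboration), sustained transfer rates of around a gigabyte per second do indeed translate into tens of terabytes per day:

```python
SECS_PER_DAY = 86_400

def daily_throughput_tb(rate_gb_per_s: float) -> float:
    """Terabytes moved in one day at a sustained rate given in GB/s."""
    return rate_gb_per_s * SECS_PER_DAY / 1_000  # using 1 TB = 1,000 GB

for rate in (0.5, 1.0, 2.0):  # illustrative sustained rates in GB/s
    print(f"{rate} GB/s sustained -> {daily_throughput_tb(rate):.1f} TB/day")
# 0.5 GB/s -> 43.2 TB/day; 1.0 GB/s -> 86.4 TB/day; 2.0 GB/s -> 172.8 TB/day
```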
The HPSS collaboration is based on the premise that no single organization has the experience and resources to meet all the challenges represented by the growing imbalance between computing power and data collection capabilities on one hand, and storage system I/O, capacity, and functionality on the other. Over twenty organizations worldwide, including industry, the US Department of Energy (DOE), other federal laboratories, universities, National Science Foundation (NSF) supercomputer centers, and the French Commissariat à l'Énergie Atomique (CEA), have contributed to various aspects of this effort.
As of 2014, the primary HPSS development team consists of:
- IBM Global Business Services (Houston, TX)
- Los Alamos National Laboratory (Los Alamos, NM)
- Lawrence Livermore National Laboratory (Livermore, CA)
- Lawrence Berkeley National Energy Research Supercomputer Center (Berkeley, CA)
- Oak Ridge National Laboratory (Oak Ridge, TN)
- Sandia National Laboratories (Albuquerque, NM)
- Commissariat à l'Énergie Atomique, Direction des Applications Militaires (Bruyères-le-Châtel, France)
- Gleicher Enterprises (Tucson, AZ)
Notable achievements
- Two of the larger HPSS sites, LANL and LLNL, had 13 and 11.7 petabytes of data, respectively, stored within a single HPSS instance and namespace as of October 13, 2008.[3]
- On November 14, 2007, the San Diego Supercomputer Center along with IBM, DataDirect, and Brocade demonstrated a "Billion File" test which successfully backed up a billion files from GPFS into HPSS.[4]
- In May 2013 a 380 Petabyte HPSS installation entered service at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign.[5]
References
1. "Official HPSS Collaboration Website". IBM.
2. Largest HPSS Sites 1+ Petabytes.
3. HPSS Usage as of October 13, 2008.
4. HPCWire, November 15, 2007. Archived November 17, 2007, at the Wayback Machine.
5. "NCSA puts world's largest High Performance Storage System into production". 2013-05-30. Retrieved 2014-08-30.