© 2008 Pittsburgh Supercomputing Center
J-WAN: PSC Lustre-wan Efforts 2009-2010
Josephine Palencia, JRay Scott
Continuing and new development efforts in progress
Close tie-up with the Sun Lustre roadmap
Lustre roadmap
– Lustre 2.0: release mid-2009
– GSS/Kerberos functionality may ship disabled or unsupported in 2.0 so that other security features (newer capabilities, remote client handling) can be included in a later release
– GSS/Kerberos: public release with a 2.x version, or even 3.0; still being decided
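If the GSS/Kerberos code does ship in a 2.x release, per-filesystem RPC security flavors are expected to be configurable roughly as sketched below. The filesystem name jwan is hypothetical, and the exact parameter names may differ in the final release:

```shell
# Hedged sketch, assuming GSS/Kerberos srpc support lands in a 2.x release.
# "jwan" is a hypothetical filesystem name. Run on the MGS.

# Default all client<->server traffic to krb5p (authentication + privacy):
lctl conf_param jwan.srpc.flavor.default=krb5p

# Cheaper integrity-only protection (krb5i) can be chosen per network:
lctl conf_param jwan.srpc.flavor.tcp=krb5i
```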
Work on Kerberos-Lustre proceeds strictly on the developers' schedule
We're their testbed for Lustre Kerberos (and all other advanced features)
Work Outline 1
1. Centralized MGS server: mgs.teragrid.org, managed and hosted by PSC in the TERAGRID.ORG realm
o MGS on TERAGRID.ORG
o MDS and OSSs on PSC.EDU
o RP-site Lustre clients, MDSs, and OSSs at the remote sites
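The split above (MGS in TERAGRID.ORG, MDS/OSS at PSC, clients at RP sites) could be brought up roughly as follows. Apart from mgs.teragrid.org, the node roles, the filesystem name jwan, and the device paths are hypothetical:

```shell
# On mgs.teragrid.org (TERAGRID.ORG realm) -- standalone management server:
mkfs.lustre --mgs /dev/sda1
mount -t lustre /dev/sda1 /mnt/mgs

# On the PSC metadata server (PSC.EDU realm):
mkfs.lustre --fsname=jwan --mdt --mgsnode=mgs.teragrid.org@tcp /dev/sdb1
mount -t lustre /dev/sdb1 /mnt/mdt

# On a PSC object storage server (PSC.EDU realm):
mkfs.lustre --fsname=jwan --ost --mgsnode=mgs.teragrid.org@tcp /dev/sdc1
mount -t lustre /dev/sdc1 /mnt/ost0

# On a remote RP-site client:
mount -t lustre mgs.teragrid.org@tcp:/jwan /mnt/jwan
```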
2. Distributed OSS (OST Pools)
o Addition of OSTs contributed by remote sites to an 'OST pool'
o Remote RP sites contribute one or more OSTs, which PSC adds to the pool, increasing Lustre-wan storage
o OST pool capability will be part of the 2.0 release
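A sketch of how contributed OSTs might be grouped once OST pools arrive with 2.0; the filesystem name jwan, the pool name, and the OST indices are hypothetical:

```shell
# On the MGS: create a pool and add the RP-contributed OSTs to it.
lctl pool_new jwan.rp_contrib
lctl pool_add jwan.rp_contrib jwan-OST[4-5]
lctl pool_list jwan.rp_contrib

# On a client: direct new files in a directory onto the contributed OSTs.
lfs setstripe --pool rp_contrib /mnt/jwan/rp_data
```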
Work Outline 2
3. Clustered metadata MDS with multiple local failovers
a) Failovers
- For reliability, PSC will add one or more MDS failover nodes to its current MDS setup.
- This may be replicated at other RP sites.
- All MDSs reside in each local RP site's respective Kerberos realm.
- MDS failover will only be local, never remote.
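A local MDS failover pair could be declared at format time roughly as below; the backup node name mds2.psc.edu, the filesystem name jwan, and the device path are hypothetical. Both nodes must see the same shared MDT storage:

```shell
# Register a local backup MDS NID so clients can fail over within the site:
mkfs.lustre --fsname=jwan --mdt --mgsnode=mgs.teragrid.org@tcp \
            --failnode=mds2.psc.edu@tcp /dev/sdb1

# The primary mounts the MDT; on failure the backup mounts the same
# shared device and clients reconnect to the failnode NID.
mount -t lustre /dev/sdb1 /mnt/mdt
```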
Work Outline 3
b) Clustered Metadata Servers (CMS)
For scalability and better performance, we implement a CMS setup at both local and remote sites to distribute MDS operations evenly (or appropriately) among several MDS servers.
The CMD feature won't be ready in the 2.0 timeframe, but "technology preview" releases containing a basically working CMD feature will be available for us to try out.
We use third-party replication tools, as Lustre's native replication feature won't be ready in the 2.0 timeframe.
Work Outline 4
4. Make Lustre UID mapping operational for TeraGrid users
o Re-writing/implementing the UID mapping feature
o Ready for testing (not public release) within 2009
5. Have J-WAN appear on PSC's Speedpage
6. Test integration of J-WAN with the TeraGrid portal
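To illustrate the idea behind cross-site UID mapping (not Lustre's actual implementation or file format), a minimal sketch: each site publishes a table from (Kerberos principal, remote UID) to a local UID, and unknown users are squashed to nobody. All principals and UIDs below are made up:

```python
# Illustrative sketch only -- Lustre performs this translation inside the
# MDS; this shows just the mapping logic, with an invented table format.

def parse_idmap(text):
    """Parse lines of the form 'principal remote_uid local_uid'."""
    table = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        principal, remote_uid, local_uid = line.split()
        table[(principal, int(remote_uid))] = int(local_uid)
    return table

def map_uid(table, principal, remote_uid, squash_uid=65534):
    """Return the local UID for a remote user, squashing unknowns to nobody."""
    return table.get((principal, remote_uid), squash_uid)

idmap = parse_idmap("""
# principal            remote_uid  local_uid
josephine@PSC.EDU      501         12001
jray@TERAGRID.ORG      1000        12002
""")

print(map_uid(idmap, "josephine@PSC.EDU", 501))   # -> 12001
print(map_uid(idmap, "stranger@NCSA.EDU", 777))   # -> 65534 (squashed)
```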
Work Outline 5
All of the above fall under the umbrella of a Lustre-Kerberos implementation on all Lustre components, with bi-directional Kerberos authentication operational between RP sites and transitive Kerberos authentication through TERAGRID.ORG
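Transitive cross-realm authentication through TERAGRID.ORG can be expressed in each site's krb5.conf [capaths] section. PSC.EDU and TERAGRID.ORG are from the plan above; NCSA.EDU stands in for an arbitrary RP realm:

```ini
# Hedged sketch: route cross-realm trust between two RP realms through
# the TERAGRID.ORG hub realm, avoiding pairwise direct trusts.
[capaths]
    PSC.EDU = {
        NCSA.EDU = TERAGRID.ORG
    }
    NCSA.EDU = {
        PSC.EDU = TERAGRID.ORG
    }
```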
Reference
http://www.teragridforum.org/mediawiki/index.php?title=PSC%27s_Lustre-wan_efforts_2009-2010