
EGO Computing Center site report | EGO - Via E. Amaldi 56021 S. Stefano a Macerata - Cascina (PI) | Stefano Cortese | INFN Computing Workshop, 26-05-2004



Virgo-EGO LANs
General Windows PC network (30 switches)
6 TB near-line
70 TB of net RAID5 space
25 Infortrend FC-to-IDE arrays
4 Fibrenetix Zero-d SCSI-to-IDE arrays
Performance is good: 50–60 MB/s over a 1 TB RAID5 set (single array, 5400/7200 rpm disks)
The quality of the first releases of these products is often very poor, mainly due to firmware bugs or hardware tolerances that ultimately lead to hidden data corruption (i.e. corruption undetected by the storage controller or the operating system)
We developed a procedure for storage acceptance:
minimum performance requirements, based on a market survey and verified in demo testing
Tenders are split into two lots; the first lot serves as the acceptance test, and positive validation of the first lot is required before acceptance of the second
Acceptance Test:
Performances tested with “bonnie”
Data integrity checked with a continuous benchmark that reads, writes and deletes data, with 128-bit MD5 verification after each operation
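The slides do not include the benchmark itself; a minimal sketch of one cycle of the same idea (write random data, read it back, verify the 128-bit MD5 digest, delete) might look like this — file name and sizes are placeholders:

```python
import hashlib
import os

def integrity_cycle(directory, size=64 * 1024 * 1024, chunk=1024 * 1024):
    """One write/read/delete cycle with MD5 verification.
    Returns True if the data read back matches what was written."""
    path = os.path.join(directory, "integrity_test.bin")
    writer = hashlib.md5()                   # 128-bit digest, as on the slide
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            block = os.urandom(min(chunk, remaining))
            writer.update(block)
            f.write(block)
            remaining -= len(block)
    reader = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            reader.update(block)
    os.remove(path)                          # the delete step of the cycle
    return writer.hexdigest() == reader.hexdigest()
```

A continuous benchmark would run this in an endless loop across every array, logging any digest mismatch as a hidden-corruption event.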
Storage practices with IDE based arrays
Data Integrity test:
The data integrity test ends after processing about 30 TB (roughly 10 days), giving confidence that the BER is below 3×10⁻¹⁴
In our experience errors may appear even after a week of processing, and we have rejected many configurations
The test must be repeated at every new firmware release, even if only new features are introduced
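The quoted BER figure can be sanity-checked. Assuming the standard "rule of three" (after N error-free trials, the 95% CL upper limit on the error rate is about 3/N) and counting the ~30 TB processed as bits, a small calculation gives a bound of the same order as the slide's figure; the exact value depends on how the processed volume is counted:

```python
# Hedged sanity check of the BER bound using the "rule of three":
# after N error-free bit transfers, the 95% CL upper limit on the
# bit-error rate is approximately 3 / N.
def ber_upper_bound(terabytes_processed):
    bits = terabytes_processed * 1e12 * 8   # decimal TB -> bits
    return 3.0 / bits

bound = ber_upper_bound(30)                 # ~30 TB, as in the test
# bound is of order 10^-14, consistent with the slide's 3x10^-14
```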
All this is not enough:
Many firmware functions may only be exercised after the system has been running for a long time.
That is the case for block remapping after bad-block occurrences on the disks (this could only be tested using genuinely bad disks)
The storage must be periodically monitored for data integrity
The firmware must provide on-line low-level media verification, to be executed periodically to avoid the double-bad-block or bad-block-plus-disk-failure cases
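Where the firmware offers no such facility, a periodic surface scan can be scripted at the host level. A minimal sketch (device path and chunk size are placeholders) reads a device end to end so that a latent bad block is hit, and remapped, before a second fault makes it fatal:

```python
def scrub(path, chunk=4 * 1024 * 1024):
    """Read `path` end to end, returning byte offsets that failed to read.
    Run periodically, this touches every sector, so latent bad blocks are
    detected while the RAID set still has redundancy to recover them."""
    bad = []
    offset = 0
    with open(path, "rb", buffering=0) as dev:
        while True:
            try:
                block = dev.read(chunk)
            except OSError:
                bad.append(offset)        # unreadable region
                offset += chunk
                dev.seek(offset)          # skip past it and continue
                continue
            if not block:
                break
            offset += len(block)
    return bad

# e.g. scrub("/dev/sda") from cron; an empty list means a clean pass
```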
Storage practices: data integrity
IDE-based storage systems at 5000 €/TB are good for mass storage with fast access and high density compared to near-line disk-cache/tape systems, but availability is not guaranteed at all times
They do not offer the same level of reliability for critical tasks as more expensive disk-based storage; duplication or tape backup is still needed
Direct-attached arrays are preferable to NAS storage, so that tests can be run independently of the network
We also prefer arrays connected via standard buses (e.g. SCSI or FC) rather than on-server controllers, to avoid intermixing OS/driver/array problems
LVM and an automounter are required tools for mounting and serving about 100 file systems (currently using amd, planning to move to autofs on Linux)
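For the planned autofs setup, an indirect map keeps the ~100 file systems mounted on demand. A hypothetical fragment (mount points, map name and volume names are placeholders, not the site's actual configuration):

```
# /etc/auto.master -- hypothetical example
/storage  /etc/auto.storage  --timeout=300

# /etc/auto.storage -- one indirect-map entry per LVM volume
vol01  -fstype=ext3  :/dev/vg_array01/vol01
vol02  -fstype=ext3  :/dev/vg_array01/vol02
```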
Storage Conclusions
On-line Computing
Virgo detection channels are extracted from the raw data and processed to obtain the h-reconstructed signal, in which the gravitational-wave signal must be searched for
8 bi-processors
Intel Xeon 2.0GHz
The h-reconstructed signal (16–600 KB/s) is fed to the computing farms for the on-line search
Small Scale Test System (2002)
16 bi-processors Compaq W6000
On-line Computing:
The physical problem of "in-time" detection of coalescing binaries was estimated by Virgo to require a 300 GFlop system
A flat search, using matched filtering via FFT with templates of various lengths, depends strongly on the amount of RAM available for storing the templates, so naive sizing by CPU power alone is not enough
A Virgo/EGO benchmarking workgroup has been working since the beginning of the year to arrive at more precise specifications (benchmark provided by the Perugia group, tests performed by EGO)
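The matched-filtering step behind the flat search can be sketched in a few lines: cross-correlate the data stream with each stored template in the frequency domain, and take the peak as the candidate arrival time. This toy version assumes white noise and uses an illustrative chirp-like template, not the Virgo benchmark itself:

```python
import numpy as np

# Matched filtering via FFT: circular cross-correlation of the data
# with one template costs O(n log n) instead of O(n^2) per template.
rng = np.random.default_rng(0)

n = 4096
t = np.arange(256) / 256.0
template = np.sin(2 * np.pi * (10 * t + 40 * t * t))   # toy chirp template

data = rng.normal(0.0, 1.0, n)                         # white-noise stream
offset = 1500
data[offset:offset + template.size] += 5.0 * template  # injected signal

tpad = np.zeros(n)
tpad[:template.size] = template
corr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(tpad)), n)

best = int(np.argmax(corr))   # peak marks the template's arrival time
```

A real search repeats this for every template in the bank, which is why the bank must sit in RAM and the sizing is memory-bound rather than purely CPU-bound.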
Overall problem
Opteron shows the best speedup for SIMD-style problems where data are partitioned among processors: up to 60 MB/s of template floats processed per CPU on the Virgo benchmark
The maximum RAM supported by the platform has an impact on the number of CPUs needed
The overall Virgo problem, for a space of 200,000 templates (1.6 TB of RAM) to be processed in 256 s, would require about 200 Opterons with 8 GB/CPU or 130 Itaniums with 12 GB/CPU
Opteron has higher performance per rack-unit
Current tender is for 64 CPUs
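The node counts above follow directly from the RAM footprint; a quick check of the arithmetic (decimal units assumed, a sanity sketch rather than the workgroup's actual sizing model):

```python
import math

# 200,000 templates needing 1.6 TB of RAM in total; host count is set
# by RAM per CPU, not by CPU power (decimal units assumed).
total_ram_bytes = 1.6e12
per_template_mb = total_ram_bytes / 200_000 / 1e6   # ~8 MB per template

opterons = math.ceil(total_ram_bytes / 8e9)         # 8 GB per CPU -> 200
itaniums = math.ceil(total_ram_bytes / 12e9)        # 12 GB per CPU -> ~134,
# close to the "about 130" quoted on the slide
```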