HPL
From FarmShare
Revision as of 23:04, 11 July 2012
markland
Trying to run HPL on the markland cluster.
The CPUs are E5-2670s, which are supposedly ~155 GFLOPS per socket. [1] Another source says 330 GFLOPS. [2] Let's be conservative and call it 150 GFLOPS per CPU. In one trial run of xhpl with GotoBLAS, I get about 26 GFLOPS on one socket, roughly one sixth of the theoretical peak. Clearly not the best BLAS for this CPU.
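The peak figure can be sanity-checked with back-of-the-envelope arithmetic (a sketch; the 8 double-precision flops/cycle for AVX and the 2.6 GHz base clock are assumptions, not measured values for this cluster):

```python
# Rough theoretical peak for one E5-2670 socket.
# Assumed specs: 8 cores, 2.6 GHz base clock,
# 8 DP flops/cycle with AVX (4-wide add + 4-wide multiply).
cores = 8
clock_ghz = 2.6
flops_per_cycle = 8

peak_gflops = cores * clock_ghz * flops_per_cycle
measured_gflops = 26.0  # the GotoBLAS run above

print(f"peak       ~ {peak_gflops:.1f} GFLOPS")
print(f"efficiency ~ {measured_gflops / peak_gflops:.0%}")
```

At base clock this lands between the two quoted figures; the 330 GFLOPS number presumably counts both sockets or turbo clocks.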
Links
- run with mpirun: http://www.rocksclusters.org/rocks-documentation/4.3/running-linpack.html
- generate your HPL.dat: http://www.advancedclustering.com/faq/how-do-i-tune-my-hpldat-file.html
- get the right BLAS for Sandy Bridge: http://software.intel.com/en-us/articles/performance-tools-for-software-developers-hpl-application-note/
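The HPL.dat generator linked above essentially sizes the problem from available memory; the rule of thumb can be sketched as follows (a sketch, not this cluster's actual config: the 4-node/16 GiB memory figure, the 80% memory fraction, and NB=192 are all assumptions):

```python
import math

# Rule-of-thumb HPL.dat sizing: pick N so the N x N double-precision
# matrix fills ~80% of total cluster memory, rounded down to a
# multiple of the block size NB.
mem_bytes = 4 * 16 * 1024**3   # hypothetical: 4 nodes x 16 GiB each
nb = 192                       # typical NB values fall in 96-256

n_raw = math.sqrt(0.80 * mem_bytes / 8)  # 8 bytes per double
n = int(n_raw // nb) * nb                # round down to multiple of NB

print(f"N = {n}, NB = {nb}")
```

P and Q (the process grid) should multiply to the MPI rank count, with P <= Q and both as close to square as possible.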
References
- http://forums.anandtech.com/showpost.php?p=33585052&postcount=1
- http://novatte.com/blog/2012/03/how-to-calculate-theoretical-peak-performance-of-a-cpu-based-hpc-system/