A High Performance MPI for Parallel and Distributed Computing

Authors: Prabu D., Vanamala V., Sanjeeb Kumar Deka, Sridharan R., Prahlada Rao B. B., Mohanram N.

Abstract:

The Message Passing Interface (MPI) is widely used for parallel and distributed computing. MPICH and LAM are popular open-source MPI implementations available to the parallel computing community; there are also commercial MPI implementations that perform better than MPICH. In this paper, we discuss a commercial Message Passing Interface, C-MPI (C-DAC Message Passing Interface). C-MPI is an MPI optimized for CLUMPS. It is found to be faster and more robust than MPICH. We compare the performance of C-MPI and MPICH on a Gigabit Ethernet network.
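The performance comparison described above is typically done with a ping-pong microbenchmark of the kind the Pallas MPI Benchmark (PMB) runs: two ranks bounce a message back and forth and the round-trip time yields per-message latency at each size. The sketch below is illustrative only, using standard MPI calls; it is not the authors' C-MPI or PMB code, and the message-size range and repetition count are assumptions.

```c
/* Illustrative MPI ping-pong latency sketch (not C-MPI or PMB source).
 * Build: mpicc pingpong.c -o pingpong
 * Run:   mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define REPS 1000  /* repetitions per message size (assumed value) */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    /* Sweep message sizes from 1 byte to 1 MiB, doubling each step. */
    for (int bytes = 1; bytes <= (1 << 20); bytes <<= 1) {
        char *buf = malloc(bytes);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            if (rank == 0) {
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        /* Half the average round-trip time approximates one-way latency. */
        if (rank == 0)
            printf("%8d bytes: %.2f us one-way\n",
                   bytes, (t1 - t0) / (2.0 * REPS) * 1e6);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}
```

Running the same binary linked against each MPI library (here, C-MPI and MPICH) over the same Gigabit Ethernet fabric gives a like-for-like latency comparison across message sizes.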

Keywords: C-MPI, C-VIA, HPC, MPICH, P-COMS, PMB

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.1329851

