mpi

package module
v0.0.0-...-ddaae73
Published: Mar 26, 2021 License: MIT Imports: 5 Imported by: 0

README

mpi

MPI binding package for Go. It was created and tested for Open MPI and should work with all other MPI libraries. Only a small fraction of the MPI methods are implemented so far; more are coming if needed. The names are kept close to those of the MPI library and the functions do the same thing.

It is tested with go1.0.2, go1.0.3 and Open MPI v1.6.2; see

http://www.open-mpi.org/doc/v1.6/

Quick Usage

go get github.com/marcusthierfelder/mpi
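The snippet below is a minimal, hedged sketch of a program using this package, based only on the functions listed in the documentation further down (Init, Comm_rank, Comm_size, Finalize and the COMM_WORLD variable); it is not taken from the package's own examples.

package main

import (
	"fmt"

	"github.com/marcusthierfelder/mpi"
)

func main() {
	// initialize and shut down the MPI runtime
	mpi.Init()
	defer mpi.Finalize()

	// rank of this process and total number of processes in COMM_WORLD
	rank := mpi.Comm_rank(mpi.COMM_WORLD)
	size := mpi.Comm_size(mpi.COMM_WORLD)
	fmt.Printf("hello from rank %d of %d\n", rank, size)
}

Build it with go build and launch it with mpirun as described in the sections below.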

Detailed Usage (linux)

In order to generate the binding library you need to install MPI on your system. On Ubuntu/Debian use:

sudo apt-get install openmpi-dev openmpi-common

To install this library, cgo needs the location of the MPI header (mpi.h) and the MPI library (mpi.a). Sometimes the system already "knows" these locations. If not, you have to find them and export the paths. On my system I needed:

export C_INCLUDE_PATH=/usr/include/openmpi
export LD_LIBRARY_PATH=/usr/lib/openmpi/lib

On some machines the compiler does not use LD_LIBRARY_PATH; in that case try:

export LIBRARY_PATH=/usr/lib/openmpi/lib

To start a parallel job on 4 cores run:

mpirun -np 4 my_prog

Detailed Usage (mac osx)

You need gcc in order to compile MPI and bind it to Go. The easiest way to get gcc is to install Xcode and the command line tools (Xcode -> Preferences -> Downloads). Afterwards gcc should work.

To install mpi use this page:

https://sites.google.com/site/dwhipp/tutorials/installing-open-mpi-on-mac-os-x

Or, in short: download the newest version of Open MPI (currently 1.6.5), untar it somewhere, open a terminal, change into the folder and run:

./configure
make
sudo make install

Examples

simple:

A simple example which prints the rank of each process and the total number of MPI jobs.

alltoall:

A simple example which communicates a small number of integers between all processes.
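A hedged sketch of what such an all-to-all exchange could look like with the typed Alltoall_int wrapper listed in the documentation below; the buffer layout (one int per destination rank) is an assumption, not taken from the example itself.

package main

import (
	"fmt"

	"github.com/marcusthierfelder/mpi"
)

func main() {
	mpi.Init()
	defer mpi.Finalize()

	rank := mpi.Comm_rank(mpi.COMM_WORLD)
	size := mpi.Comm_size(mpi.COMM_WORLD)

	// assumed layout: element i of sendbuf goes to rank i,
	// element i of recvbuf comes from rank i
	sendbuf := make([]int, size)
	recvbuf := make([]int, size)
	for i := range sendbuf {
		sendbuf[i] = rank*100 + i
	}

	mpi.Alltoall_int(sendbuf, recvbuf, mpi.COMM_WORLD)
	fmt.Printf("rank %d received %v\n", rank, recvbuf)
}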

nonblocking:

A simple nonblocking send/receive example.
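A hedged sketch of a nonblocking ring exchange using Isend_float64, Irecv_float64 and Wait as listed in the documentation below; it illustrates the call pattern and is not the package's actual example.

package main

import (
	"fmt"

	"github.com/marcusthierfelder/mpi"
)

func main() {
	mpi.Init()
	defer mpi.Finalize()

	rank := mpi.Comm_rank(mpi.COMM_WORLD)
	size := mpi.Comm_size(mpi.COMM_WORLD)

	// each rank sends one float64 to its right neighbour and
	// receives one from its left neighbour
	dest := (rank + 1) % size
	source := (rank - 1 + size) % size

	sendbuf := []float64{float64(rank)}
	recvbuf := make([]float64, 1)

	var sendReq, recvReq mpi.Request
	var status mpi.Status

	mpi.Irecv_float64(recvbuf, source, 0, mpi.COMM_WORLD, &recvReq)
	mpi.Isend_float64(sendbuf, dest, 0, mpi.COMM_WORLD, &sendReq)

	// wait for both transfers to complete before using the buffers
	mpi.Wait(&recvReq, &status)
	mpi.Wait(&sendReq, &status)

	fmt.Printf("rank %d received %v from rank %d\n", rank, recvbuf[0], source)
}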

scalarwave:

A fancier scalar wave example in 3D with goroutines and MPI decomposition. You can choose between several integrators and orders of finite differencing. There are only simple boundary and output options. It does not work properly yet.

Note: this is not optimised, but it can be used to test clusters for scaling etc.

Errors/Problems

If something is wrong, not working, or missing, feel free to contact me or open an issue on GitHub.

Documentation

Index

Constants

This section is empty.

Variables

var (

	//communication structures
	COMM_WORLD C.MPI_Comm = C.get_MPI_COMM_WORLD()

	//datatypes
	INT     C.MPI_Datatype = C.get_MPI_Datatype(0)
	INT32   C.MPI_Datatype = C.get_MPI_Datatype(0)
	INT64   C.MPI_Datatype = C.get_MPI_Datatype(1)
	FLOAT32 C.MPI_Datatype = C.get_MPI_Datatype(2)
	FLOAT64 C.MPI_Datatype = C.get_MPI_Datatype(3)

	//operations
	MAX  C.MPI_Op = C.get_MPI_Op(0)
	MIN  C.MPI_Op = C.get_MPI_Op(1)
	SUM  C.MPI_Op = C.get_MPI_Op(2)
	PROD C.MPI_Op = C.get_MPI_Op(3)
)

Implementation details: the #define values within mpi.h cannot be accessed directly, so Go needs C wrappers. The values above are only a small subset and have to be extended if needed. (mth: I currently implemented only the things I need.)
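To illustrate the mechanism, here is a hedged sketch (not the package's actual source) of how such a C wrapper can look in cgo: small C helper functions return the #define'd MPI handles so that Go code can store them in variables. The mapping of the integer indices to concrete MPI datatypes and the #cgo link flag are assumptions.

package mpi

/*
#cgo LDFLAGS: -lmpi
#include <mpi.h>

// assumed mapping of indices to MPI datatypes
static MPI_Datatype get_MPI_Datatype(int i) {
	switch (i) {
	case 0: return MPI_INT;
	case 1: return MPI_LONG_LONG;
	case 2: return MPI_FLOAT;
	case 3: return MPI_DOUBLE;
	default: return MPI_DATATYPE_NULL;
	}
}

static MPI_Comm get_MPI_COMM_WORLD(void) {
	return MPI_COMM_WORLD;
}
*/
import "C"

// the exported Go variables then simply hold the handles returned by the wrappers
var (
	COMM_WORLD C.MPI_Comm     = C.get_MPI_COMM_WORLD()
	INT        C.MPI_Datatype = C.get_MPI_Datatype(0)
)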

Functions

func Abort

func Abort(comm C.MPI_Comm, errorcode int)

func Allreduce

func Allreduce(sendbuf, recvbuf interface{}, op C.MPI_Op, comm C.MPI_Comm)

Wrapper for all supported types (this is the variant that should normally be used, but it is slower than the typed variants).
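A hedged usage sketch for the typed variant Allreduce_float64: each rank contributes its rank number and all ranks receive the global sum. Only functions and variables listed on this page are used; the program itself is an illustration, not part of the package.

package main

import (
	"fmt"

	"github.com/marcusthierfelder/mpi"
)

func main() {
	mpi.Init()
	defer mpi.Finalize()

	rank := mpi.Comm_rank(mpi.COMM_WORLD)
	sendbuf := []float64{float64(rank)}
	recvbuf := make([]float64, 1)

	// global sum over all ranks; with n ranks the result is 0+1+...+(n-1)
	mpi.Allreduce_float64(sendbuf, recvbuf, mpi.SUM, mpi.COMM_WORLD)
	fmt.Printf("rank %d: sum of ranks = %v\n", rank, recvbuf[0])
}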

func Allreduce_float32

func Allreduce_float32(sendbuf, recvbuf []float32, op C.MPI_Op, comm C.MPI_Comm)

func Allreduce_float64

func Allreduce_float64(sendbuf, recvbuf []float64, op C.MPI_Op, comm C.MPI_Comm)

func Allreduce_int

func Allreduce_int(sendbuf, recvbuf []int, op C.MPI_Op, comm C.MPI_Comm)

func Allreduce_int32

func Allreduce_int32(sendbuf, recvbuf []int32, op C.MPI_Op, comm C.MPI_Comm)

func Allreduce_int64

func Allreduce_int64(sendbuf, recvbuf []int64, op C.MPI_Op, comm C.MPI_Comm)

func Alltoall

func Alltoall(sendbuf, recvbuf interface{}, comm C.MPI_Comm)

Wrapper for all supported types (this is the variant that should normally be used, but it is slower than the typed variants).

func Alltoall_float32

func Alltoall_float32(sendbuf, recvbuf []float32, comm C.MPI_Comm)

func Alltoall_float64

func Alltoall_float64(sendbuf, recvbuf []float64, comm C.MPI_Comm)

func Alltoall_int

func Alltoall_int(sendbuf, recvbuf []int, comm C.MPI_Comm)

func Alltoall_int32

func Alltoall_int32(sendbuf, recvbuf []int32, comm C.MPI_Comm)

func Alltoall_int64

func Alltoall_int64(sendbuf, recvbuf []int64, comm C.MPI_Comm)

func Barrier

func Barrier(comm C.MPI_Comm)

func Comm_rank

func Comm_rank(comm C.MPI_Comm) int

func Comm_size

func Comm_size(comm C.MPI_Comm) int

func Finalize

func Finalize()

func Init

func Init()

func Irecv

func Irecv(recvbuf interface{}, source, tag int, comm C.MPI_Comm, request *Request)

Wrapper for all supported types (this is the variant that should normally be used, but it is slower than the typed variants).

func Irecv_float32

func Irecv_float32(recvbuf []float32, source, tag int, comm C.MPI_Comm, request *Request)

func Irecv_float64

func Irecv_float64(recvbuf []float64, source, tag int, comm C.MPI_Comm, request *Request)

func Irecv_int

func Irecv_int(recvbuf []int, source, tag int, comm C.MPI_Comm, request *Request)

func Irecv_int32

func Irecv_int32(recvbuf []int32, source, tag int, comm C.MPI_Comm, request *Request)

func Irecv_int64

func Irecv_int64(recvbuf []int64, source, tag int, comm C.MPI_Comm, request *Request)

func Isend

func Isend(sendbuf interface{}, dest, tag int, comm C.MPI_Comm, request *Request)

Wrapper for all supported types (this is the variant that should normally be used, but it is slower than the typed variants).

func Isend_float32

func Isend_float32(sendbuf []float32, dest, tag int, comm C.MPI_Comm, request *Request)

func Isend_float64

func Isend_float64(sendbuf []float64, dest, tag int, comm C.MPI_Comm, request *Request)

func Isend_int

func Isend_int(sendbuf []int, dest, tag int, comm C.MPI_Comm, request *Request)

func Isend_int32

func Isend_int32(sendbuf []int32, dest, tag int, comm C.MPI_Comm, request *Request)

func Isend_int64

func Isend_int64(sendbuf []int64, dest, tag int, comm C.MPI_Comm, request *Request)

func Recv

func Recv(recvbuf interface{}, source, tag int, comm C.MPI_Comm, status *Status)

Wrapper for all supported types (this is the variant that should normally be used, but it is slower than the typed variants).

func Recv_float32

func Recv_float32(recvbuf []float32, source, tag int, comm C.MPI_Comm, status *Status)

func Recv_float64

func Recv_float64(recvbuf []float64, source, tag int, comm C.MPI_Comm)

func Recv_int

func Recv_int(recvbuf []int, source, tag int, comm C.MPI_Comm)

func Recv_int32

func Recv_int32(recvbuf []int32, source, tag int, comm C.MPI_Comm)

func Recv_int64

func Recv_int64(recvbuf []int64, source, tag int, comm C.MPI_Comm)

func Redirect_STDOUT

func Redirect_STDOUT(comm C.MPI_Comm)

If you want to redirect each process's stdout to a file in order to avoid a parallel stdout mess (not a standard MPI function).
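A hedged call-pattern sketch: Redirect_STDOUT is called once after Init, and from then on each process's output is expected to end up in its own file. How the files are named, and whether Go-level fmt output is captured as well as C-level stdout, is not documented here, so treat this purely as an illustration of where the call goes.

package main

import (
	"fmt"

	"github.com/marcusthierfelder/mpi"
)

func main() {
	mpi.Init()
	defer mpi.Finalize()

	// redirect this process's stdout (assumed: one file per rank)
	mpi.Redirect_STDOUT(mpi.COMM_WORLD)

	fmt.Printf("rank %d reporting\n", mpi.Comm_rank(mpi.COMM_WORLD))
}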

func Send

func Send(sendbuf interface{}, dest, tag int, comm C.MPI_Comm, status *Status)

Wrapper for all supported types (this is the variant that should normally be used, but it is slower than the typed variants).
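A hedged sketch of blocking point-to-point communication with the float32 variants, which according to the listings on this page take a *Status argument: rank 0 sends one value to rank 1, which receives and prints it. It assumes the job is started with at least two processes.

package main

import (
	"fmt"

	"github.com/marcusthierfelder/mpi"
)

func main() {
	mpi.Init()
	defer mpi.Finalize()

	rank := mpi.Comm_rank(mpi.COMM_WORLD)
	buf := make([]float32, 1)
	var status mpi.Status

	if rank == 0 {
		// rank 0 sends one value to rank 1 with tag 0
		buf[0] = 3.14
		mpi.Send_float32(buf, 1, 0, mpi.COMM_WORLD, &status)
	} else if rank == 1 {
		// rank 1 receives the value from rank 0
		mpi.Recv_float32(buf, 0, 0, mpi.COMM_WORLD, &status)
		fmt.Printf("rank 1 received %v\n", buf[0])
	}
}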

func Send_float32

func Send_float32(sendbuf []float32, dest, tag int, comm C.MPI_Comm, status *Status)

func Send_float64

func Send_float64(sendbuf []float64, dest, tag int, comm C.MPI_Comm)

func Send_int

func Send_int(sendbuf []int, dest, tag int, comm C.MPI_Comm)

func Send_int32

func Send_int32(sendbuf []int32, dest, tag int, comm C.MPI_Comm)

func Send_int64

func Send_int64(sendbuf []int64, dest, tag int, comm C.MPI_Comm)

func Wait

func Wait(request *Request, status *Status)

func Waitall

func Waitall()

Types

type Request

type Request C.MPI_Request

MPI also has some types, which are mapped directly.

type Status

type Status C.MPI_Status

Directories

Path Synopsis
example
