
RootParallelGui




What is RootParallelGui?

It is a graphical user interface that integrates the parallel computing technologies used in ROOT.
The idea is to ease the execution and development of parallel algorithms within complex environments
such as grids.



Design

  • The GUI is a main window with a panel (2) where you can find modules separated by area. The modules are "plugins" that run processes, monitor processes, configure the system, etc.
  • An output panel (1), where you can see messages from stdout and stderr, as well as information sent from modules.
  • Future modules such as proof and globus are in separate areas (3).
  • The main area is a tabbed widget, where you can see different modules open at the same time (4).





How to Install

  • The dependencies are Qt4, ROOT, QScintilla, CMake, and Git.

On Debian, you should have ROOT and RootMpi installed; see RootMpi/ROOT Installation(external link).
Install the remaining dependencies as a superuser.
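On a Debian-based system, the dependency installation might look like the following sketch. The exact package names depend on your distribution and release; the ones below are an assumption, so adjust them to your repositories.

```shell
# Hypothetical Debian package names for the dependencies listed above;
# ROOT and RootMpi are assumed to be installed separately as described.
sudo apt-get install cmake git libqt4-dev libqscintilla2-dev
```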

Module for mpi

The function of this module is to launch OpenMpi/RootMpi processes from a comfortable graphical interface, using either ROOT macros or binary code.
It exposes all mpirun(command) options and, if the code is a macro, ROOT's interpreter options as well.

Manual


In the main window you will find a series of tabs with the options grouped by area:
Processes, Environment, Nodes, ROOT, Others/Debug, and Editor (for macros).

Tab Processes


To specify the number of processes to launch:
Number of Processes (-np):
Run this many copies of the program on the given nodes. This option indicates that the specified file is an executable program and not an application context. If no value is provided for the number of copies to execute (i.e., neither the "-np" nor its synonyms are provided on the command line), Open MPI will automatically execute a copy of the program on each process slot (see below for description of a "process slot"). This feature, however, can only be used in the SPMD model and will return an error (without beginning execution of the application) otherwise.

npersocket
On each node, launch this many processes times the number of processor sockets on the node. The npersocket option also turns on the bind-to-socket option.

npernode
On each node, launch this many processes.

pernode
On each node, launch one process — equivalent to -npernode 1.
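The options above correspond to ordinary mpirun command lines. As a rough sketch, using a hypothetical binary name, the GUI would generate invocations like:

```shell
# Equivalent mpirun invocations for the process-count options above
# (./my_app is a hypothetical executable)
mpirun -np 4 ./my_app          # run four copies of the program
mpirun -npernode 2 ./my_app    # launch two processes on each node
mpirun -pernode ./my_app       # launch one process per node
```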


For process binding:
bycore
Associate processes with successive cores if used with one of the -bind-to-* options.

bysocket
Associate processes with successive processor sockets if used with one of the -bind-to-* options.

cpus-per-proc/rank
Use the number of cores per process if used with one of the -bind-to-* options.

bind-to-core
Bind processes to cores.

bind-to-socket
Bind processes to processor sockets.

bind-to-none
Do not bind processes. (Default.)

report-bindings
Report any bindings for launched processes.


To map processes to nodes:
loadbalance
Uniform distribution of ranks across all nodes. See more detailed description below.

nolocal
Do not run any copies of the launched application on the same node as orterun is running. This option will override listing the localhost with -host or any other host-specifying mechanism.

nooversubscribe
Do not oversubscribe any nodes; error (without starting any processes) if the requested number of processes would cause oversubscription. This option implicitly sets "max_slots" equal to the "slots" value for each node.

bynode
Launch processes one per node, cycling by node in a round-robin fashion. This spreads processes evenly among nodes and assigns ranks in a round-robin, "by node" manner.
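The binding and mapping options can be combined on one command line. A sketch, again with a hypothetical binary name:

```shell
# Map ranks round-robin by node, bind each to a core, and print the bindings
# (./my_app is a hypothetical executable; flag spellings follow the
# Open MPI series documented above)
mpirun -np 8 -bynode -bind-to-core -report-bindings ./my_app
```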


Tab Environment



To manage files and runtime environment:
path
Path that will be used when attempting to locate the requested executables. It is used prior to the local PATH setting.

prefix
Prefix directory that will be used to set the PATH and LD_LIBRARY_PATH on the remote node before invoking Open MPI or the target process. See the "Remote Execution" section, below.

preload-binary
Copy the specified executable(s) to remote machines prior to starting remote processes. The executables will be copied to the Open MPI session directory and will be deleted upon completion of the job.

preload-files
Preload the comma separated list of files to the current working directory of the remote machines where processes will be launched prior to starting those processes.

preload-files-dest-dir
The destination directory to be used for preload-files, if other than the current working directory. By default, the absolute and relative paths provided by --preload-files are used.

tmpdir
Set the root for the session directory tree for mpirun only.

wdir
Change to the directory dir before the user's program executes. See the "Current Working Directory" section for notes on relative paths.

Environment Variables
Export the specified environment variables to the remote nodes before executing the program. Only one environment variable can be specified per instance of this option. Existing environment variables can be re-exported, or new variables can be set with corresponding values. For example:
DISPLAY=:1
OFILE=/tmp/out
The parser for this option is not very sophisticated; it does not even understand quoted values. Users are advised to set variables in the environment.
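On the mpirun command line, this corresponds to Open MPI's -x option, one variable per flag. A sketch with a hypothetical binary:

```shell
# Export two environment variables to the remote nodes
# (./my_app is a hypothetical executable)
mpirun -np 2 -x DISPLAY=:1 -x OFILE=/tmp/out ./my_app
```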

Tab Nodes




The list of hosts on which to invoke processes, either loaded from a file or specified manually.

Tab ROOT

Options:
b : run in batch mode without graphics
n : do not execute logon and logoff macros as specified in .rootrc
q : exit after processing command line macro files
l : do not show splash screen
x : exit on exception
memstat : run with memory usage monitoring
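These map directly onto ROOT's command-line flags. For example, a common batch invocation (the macro name is hypothetical):

```shell
# Run a macro in batch mode without graphics (-b), skip the splash
# screen (-l), and exit after processing it (-q)
root -b -l -q mymacro.C
```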

NOTE: Message compression in serialization is not enabled yet, it is a future feature!

Tab Others/Debug

There are also other options:
aborted
Set the maximum number of aborted processes to display.

cartofile
Provide a cartography file.

hetero
Indicates that multiple app_contexts are being provided that are a mix of 32/64-bit binaries.

leave-session-attached
Do not detach OmpiRTE daemons used by this application. This allows error messages from the daemons as well as the underlying environment (e.g., when failing to launch a daemon) to be output.

ompi-server
Specify the URI of the Open MPI server, or the name of the file (specified as file:filename) that contains that info. The Open MPI server is used to support multi-application data exchange via the MPI-2 MPI_Publish_name and MPI_Lookup_name functions.

wait-for-server
Pause mpirun before launching the job until ompi-server is detected. This is useful in scripts where ompi-server may be started in the background, followed immediately by an mpirun command that wishes to connect to it. Mpirun will pause until either the specified ompi-server is contacted or the server-wait-time is exceeded.

server-wait-time
The max amount of time (in seconds) mpirun should wait for the ompi-server to start. The default is 10 seconds.
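A typical pattern combining these three options, sketched with hypothetical file and binary names:

```shell
# Start an Open MPI server in the background, writing its URI to a file,
# then launch a job that waits for and connects to that server
# (ompi-server.txt and ./my_app are hypothetical names)
ompi-server --report-uri ompi-server.txt &
mpirun --ompi-server file:ompi-server.txt --wait-for-server -np 2 ./my_app
```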

For debugging:
debug-devel
Enable debugging of the OmpiRTE (the run-time layer in Open MPI). This is not generally useful for most users.

debug-daemons
Enable debugging of any OmpiRTE daemons used by this application.

debug-daemons-file
Enable debugging of any OmpiRTE daemons used by this application, storing output in files.

launch-agent
Name of the executable that is to be used to start processes on the remote nodes. The default is "orted". This option can be used to test new daemon concepts, or to pass options back to the daemons without having mpirun itself see them. For example, specifying a launch agent of orted -mca odls_base_verbose 5 allows the developer to ask the orted for debugging output without clutter from mpirun itself.

noprefix
Disable the automatic --prefix behavior.


You can call the mpi module of RootParallelGui from CINT:
root [0] gROOT->LoadClass("TGQt");
** $Id: TGQt.cxx 36275 2010-10-11 08:05:21Z brun $ this=0x1f4a630                                                                                                           
Symbol font family found:  "Standard Symbols L" 
root [1] gSystem->Load("libParallelGuiMpi");
root [2] ROOT::ParallelGuiMpiLauncher mpigui;
root [3] mpigui.Show();

You can then run macros from this module.
