Schlumberger
- Description: Schlumberger ECLIPSE Industry-Reference Reservoir Simulator.
- Modulefiles: eclipse-sim/2020.3
Getting Access
Schlumberger products like ECLIPSE are only available to members of the Faculty of Engineering of Memorial University of Newfoundland (MUN), which holds a license for Schlumberger products. Therefore the software and configuration files are only available to members of the group an_soft_schlumberger.
If you are a member of MUN's Faculty of Engineering and want to be added to this group, please contact us at support@ace-net.ca. We will then confirm with MUN Engineering Computing Services (ECS) whether you are indeed eligible for access to Schlumberger products and, if so, add you to the group.
Notes
Loading the module eclipse-sim/2020.3 will set a number of important environment variables that can be used in the jobscript.
$EBROOTECLIPSEMINSIM
- This follows the convention of modules in the Compute Canada software stack: EB (for EasyBuild), ROOT (root/base directory for the module), followed by the name of the module in all caps, with the dash/minus replaced by MIN (eclipse-sim -> ECLIPSEMINSIM). This variable always points to the base directory of the module.
$EBVERSIONECLIPSEMINSIM
- This variable follows the same convention, but contains the version of the module/package.
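For illustration, these variables can be inspected after loading the module; a minimal sketch, assuming the module is available on the cluster:

```shell
#!/bin/bash
module load StdEnv/2020 eclipse-sim/2020.3

# Base directory of the eclipse-sim installation:
echo "ECLIPSE root:    ${EBROOTECLIPSEMINSIM}"

# Version of the loaded module (here: 2020.3):
echo "ECLIPSE version: ${EBVERSIONECLIPSEMINSIM}"

# For example, list the benchmark archives that ship with the module:
ls "${EBROOTECLIPSEMINSIM}/BENCHMARKS"
```

This way a jobscript can reference files inside the module without hard-coding installation paths.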
Running Eclipse Simulator
Generic e300 serial job
```shell
#!/bin/bash
#SBATCH --ntasks=1            # number of MPI ranks
#SBATCH --time=0-01:00:00     # time-limit: d-hh:mm:ss
#SBATCH --mem-per-cpu=2000M   # memory limit: use n * 1000M

# Load ECLIPSE module
module load StdEnv/2020 intelmpi/2019.7.217 eclipse-sim/2020.3

# prepare environment
source /opt/software/schlumberger/ecl/macros/@eclrunsetup.sh

eclrun e300 E300.DATA
```
Generic e300 parallel job
First make sure that the DATA file is set up to run in parallel. The example MM40.DATA has been configured to use 40 processors with:
```
[...]

PARALLEL
 40 /

NPROCX
 40 /

NPROCY
 1 /

[...]
```
Then the jobscript would look like this:
```shell
#!/bin/bash
#SBATCH --ntasks=40           # number of MPI ranks
#SBATCH --time=0-01:00:00     # time-limit: d-hh:mm:ss
#SBATCH --mem-per-cpu=2000M   # memory limit: use n * 1000M

module load StdEnv/2020 intelmpi/2019.7.217 eclipse-sim/2020.3
source /opt/software/schlumberger/ecl/macros/@eclrunsetup.sh

# generate machinefile for this job:
slurm_hl2hl.py --format MPIHOSTLIST > hostlist_${SLURM_JOBID}.txt

eclrun --machinefile hostlist_${SLURM_JOBID}.txt e300 MM40.DATA
```
Note that Siku has compute nodes with 40 and 48 cores each. To make sure that the simulation uses whole nodes, you can replace #SBATCH --ntasks=40 with #SBATCH --ntasks-per-node=40 or #SBATCH --ntasks-per-node=48.
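To check how many cores the nodes on the cluster have, you can query Slurm directly; a sketch using standard sinfo format fields (the exact partitions shown depend on the cluster):

```shell
# Summarize node types per partition:
#   %P = partition name, %D = number of nodes, %c = CPUs per node
sinfo --format="%P %D %c"
```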
Even when the cluster is quite busy, many of the 48-core nodes only run 40-core jobs, leaving 8 cores idle on each. Therefore you could run a 40-CPU job across 5 nodes with 8 CPUs per node:
```shell
#!/bin/bash
#SBATCH --nodes=5             # number of nodes
#SBATCH --ntasks-per-node=8   # number of MPI ranks per node
#SBATCH --time=0-01:00:00     # time-limit: d-hh:mm:ss
#SBATCH --mem-per-cpu=2000M   # memory limit: use n * 1000M

module load StdEnv/2020 intelmpi/2019.7.217 eclipse-sim/2020.3
source /opt/software/schlumberger/ecl/macros/@eclrunsetup.sh

slurm_hl2hl.py --format MPIHOSTLIST > hostlist_${SLURM_JOBID}.txt

eclrun --machinefile hostlist_${SLURM_JOBID}.txt e300 MM40.DATA
```
Running 2MM Benchmark on 40 cores-per-node
```shell
#!/bin/bash
#SBATCH --nodes=1              # one node
#SBATCH --ntasks-per-node=40   # number of MPI ranks per node
#SBATCH --time=0-01:00:00      # time-limit: d-hh:mm:ss
#SBATCH --mem-per-cpu=2000M    # memory limit: use n * 1000M

module load intelmpi eclipse-sim/2020.3
source /opt/software/schlumberger/ecl/macros/@eclrunsetup.sh

echo "extract 2MMbenchmark to scratch"
mkdir -p ~/scratch/eclipse_test
cd ~/scratch/eclipse_test
if [ -d ./2MMbenchmark ] ; then
    rm -Rf ./2MMbenchmark
fi
unzip $EBROOTECLIPSEMINSIM/BENCHMARKS/2MMbenchmark.zip
cd 2MMbenchmark/E300/

echo "run e300 parallel 2MM benchmark"
slurm_hl2hl.py --format MPIHOSTLIST > hostlist_${SLURM_JOBID}.txt
eclrun --machinefile hostlist_${SLURM_JOBID}.txt e300 MM40.DATA
```
Petrel
The Schlumberger Petrel software appears to be available only for Microsoft Windows. Therefore it cannot be installed on our Linux-based HPC clusters such as Siku.