PhasicFlow

Parallel, highly efficient code (CPU and GPU) for DEM and CFD-DEM simulations.

PhasicFlow is a parallel C++ code for performing DEM simulations. It can run on shared-memory, multi-core computational units such as multi-core CPUs or GPUs (for now, it works on CUDA-enabled GPUs). Parallelization relies mainly on loop-level parallelism on a shared-memory computational unit. You can build and run PhasicFlow in serial mode on regular PCs, in parallel mode on multi-core CPUs, or build it for a GPU device to off-load computations to the GPU. In its current state, you can simulate millions of particles (up to 11 million particles tested) on a single desktop computer. Performance tests of PhasicFlow are reported on the wiki page.
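
The loop-level approach means that a per-particle update is written once as a parallel loop and dispatched to whichever backend (Serial, OpenMP, or CUDA) the build enabled. The sketch below is a minimal, hypothetical illustration of this pattern using Kokkos, the portability layer listed under Supporting packages; the names pos, vel, and integratePositions are invented for this example and are not PhasicFlow's actual API.

```cpp
// Minimal sketch of loop-level parallelism with Kokkos: one loop body,
// runnable on host threads or a CUDA device depending on how Kokkos
// was configured at build time.
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int n = 1000000;
        // Particle data lives in the default memory space (host or device);
        // views are zero-initialized on allocation.
        Kokkos::View<double*> pos("pos", n);
        Kokkos::View<double*> vel("vel", n);
        const double dt = 1.0e-5;

        // Each particle index is processed concurrently by the enabled backend.
        Kokkos::parallel_for("integratePositions", n,
            KOKKOS_LAMBDA(const int i) { pos(i) += dt * vel(i); });
        Kokkos::fence();  // wait for the parallel loop to finish
    }
    Kokkos::finalize();
    return 0;
}
```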

How to build?

You can build PhasicFlow for CPU and GPU execution. A complete step-by-step procedure is available in doc/howToBuild.md.

How to use PhasicFlow?

Navigate to the tutorials folder in the phasicFlow folder to see some simulation case setups. If you need a more detailed description, visit the tutorials section of our wiki page.

Supporting packages

  • Kokkos from National Technology & Engineering Solutions of Sandia, LLC (NTESS)
  • CLI11 1.8 from the University of Cincinnati (a minimal usage sketch follows this list).
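
As a rough, hypothetical illustration of what CLI11 provides, the sketch below parses one command-line option in the style a small utility might use; the --case option and caseDir variable are invented for this example and are not part of PhasicFlow's actual interface.

```cpp
// Hypothetical sketch of CLI11 argument parsing; for illustration only.
#include <CLI/CLI.hpp>
#include <string>

int main(int argc, char** argv) {
    CLI::App app{"a PhasicFlow-style command-line utility"};

    std::string caseDir = ".";  // hypothetical option, invented for this sketch
    app.add_option("--case", caseDir, "path to the simulation case directory");

    CLI11_PARSE(app, argc, argv);  // parses argv, prints help/errors as needed

    // ... the utility would act on caseDir here ...
    return 0;
}
```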

Extensions in the future

Parallelization

  • Extending the code to use the OpenMPTarget backend, so that more GPU brands can be targeted for off-loading computations.
  • Extending high-level parallelization by implementing space partitioning and load balancing for multi-GPU computations and for running PhasicFlow on distributed-memory supercomputers.

Basic features