Parallel, highly efficient code (CPU and GPU) for DEM and CFD-DEM simulations.

PhasicFlow

PhasicFlow is a parallel C++ code for performing DEM simulations. It runs on shared-memory, multi-core hardware such as multi-core CPUs and GPUs (currently CUDA-enabled GPUs). The parallelization method relies mainly on loop-level parallelism over a shared-memory computational unit. You can build and run PhasicFlow in serial mode on a regular PC, in parallel mode on a multi-core CPU, or build it for a GPU device to off-load computations to the GPU. In its current state, you can simulate millions of particles (tested with up to 11M particles) on a single desktop computer. Performance tests of PhasicFlow are reported on the wiki page.
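As a rough illustration of this loop-level approach, the minimal sketch below uses the bundled Kokkos library (listed under supporting packages); it is not PhasicFlow's actual API, and the kernel, view, and variable names are made up for the example. A per-particle kernel is written once and executed in parallel on whichever backend the code was built for.

```cpp
// Minimal sketch of loop-level parallelization with Kokkos
// (illustrative only; not PhasicFlow code). The same parallel_for
// runs on the OpenMP backend (multi-core CPU) or the CUDA backend
// (GPU), depending on how Kokkos was configured at build time.
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[])
{
    Kokkos::initialize(argc, argv);
    {
        const int numParticles = 1000000;

        // particle velocities stored in the default memory space
        // (host memory for CPU builds, device memory for GPU builds)
        Kokkos::View<double*> velocity("velocity", numParticles);

        const double dt = 1.0e-5;
        const double gravity = -9.81;

        // one integration step, parallelized over all particles
        Kokkos::parallel_for(
            "integrateVelocity",
            numParticles,
            KOKKOS_LAMBDA(const int i) {
                velocity(i) += gravity * dt;
            });

        Kokkos::fence();
    }
    Kokkos::finalize();
    return 0;
}
```

Building the same source against the serial, OpenMP, or CUDA backend is what gives the serial, CPU-parallel, and GPU execution modes described above.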

How to build?

You can build PhasicFlow for CPU or GPU execution. A complete, step-by-step procedure is given in doc/howToBuild.md.

How to use PhasicFlow?

You can navigate to the tutorials folder in the phasicFlow directory to see some simulation case setups. For a more detailed description, visit the tutorials on our wiki page.

Supporting packages

  • Kokkos from National Technology & Engineering Solutions of Sandia, LLC (NTESS)
  • CLI11 1.8 from the University of Cincinnati.

Future extensions

Parallelization

  • Extending the code to use the OpenMPTarget backend, so that computations can be off-loaded to a wider range of GPUs (see the sketch after this list).
  • Extending the high-level parallelization: implementing space partitioning and load balancing for multi-GPU computations and for running PhasicFlow on distributed-memory supercomputers.
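As a hedged illustration of what the backend extension builds on, the sketch below shows generic Kokkos usage, not PhasicFlow code; the function and view names are hypothetical. Kokkos kernels are written against an execution space, so supporting an additional backend such as OpenMPTarget is largely a matter of building Kokkos with that backend enabled and dispatching kernels to it.

```cpp
// Illustrative sketch (not PhasicFlow code): a kernel templated on the
// Kokkos execution space can be dispatched to whichever backend the
// library was compiled with (Serial, OpenMP, CUDA, or, in the future,
// OpenMPTarget), without changing the kernel body.
#include <Kokkos_Core.hpp>

template<typename ExecSpace>
void scaleOnBackend(double factor)
{
    // data must live in a memory space accessible from ExecSpace
    using MemSpace = typename ExecSpace::memory_space;
    Kokkos::View<double*, MemSpace> data("data", 1000);

    Kokkos::parallel_for(
        "scale",
        Kokkos::RangePolicy<ExecSpace>(0, data.extent(0)),
        KOKKOS_LAMBDA(const int i) { data(i) *= factor; });
}

int main(int argc, char* argv[])
{
    Kokkos::initialize(argc, argv);
    {
        // dispatch to the default device backend selected at build time
        scaleOnBackend<Kokkos::DefaultExecutionSpace>(2.0);
        Kokkos::fence();
    }
    Kokkos::finalize();
    return 0;
}
```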