61 Commits

Author SHA1 Message Date
0d9de2c601 Merge pull request #222 from PhasicFlow/main
from main
2025-05-02 23:04:23 +03:30
b4bc724a68 readme helical 2025-05-02 22:28:56 +03:30
ee33469295 readme helical 2025-05-02 22:26:38 +03:30
3933d65303 yaml update5 2025-05-02 22:03:16 +03:30
cf4d22c963 yaml update4 2025-05-02 21:59:31 +03:30
86367c7e2c yaml update3 2025-05-02 21:51:03 +03:30
a7e51a91aa yaml update2 2025-05-02 21:46:43 +03:30
5e56bf1b8c yaml update1 2025-05-02 21:28:40 +03:30
343ac1fc04 yaml update 2025-05-02 21:27:23 +03:30
6b04d17c7f sync-wiki to process img<> tags 2025-05-02 20:47:21 +03:30
97f46379c7 image resize 2025-05-02 20:25:20 +03:30
32fd6cb12e features update 2025-05-02 20:06:49 +03:30
be16fb0684 tutorials link added 2025-05-02 18:29:08 +03:30
4c96c6fa1e test 2025-04-30 19:01:51 +03:30
196b7a1833 how to build readme.md to wiki 2025-04-30 18:52:15 +03:30
316e71ff7a test readme.md 2025-04-30 18:36:53 +03:30
7a4a33ef37 a new workflow for readme.md files to wiki 2025-04-30 18:34:53 +03:30
edfbdb22e9 readmd.md update8 2025-04-30 08:56:11 +03:30
c6725625b3 readmd.md update7 2025-04-30 08:45:28 +03:30
253d6fbaf7 readmd.md update6 2025-04-30 08:40:46 +03:30
701baf09e6 readmd.md update5 2025-04-30 08:37:17 +03:30
20c94398a9 readmd.md update4 2025-04-30 08:34:51 +03:30
dd36e32da4 readmd.md update3 2025-04-30 08:31:19 +03:30
a048c2f5d7 readmd.md update2 2025-04-30 08:27:07 +03:30
8b324bc2b6 readmd.md update1 2025-04-30 08:18:29 +03:30
c7f790a1fa readmd.md update 2025-04-30 08:14:10 +03:30
166d7e72c2 rrr 2025-04-29 20:23:08 +03:30
c126f9a8a3 rr 2025-04-29 20:19:25 +03:30
7104a33a4b r 2025-04-29 20:14:34 +03:30
16b6084d98 readme update 2025-04-29 20:10:06 +03:30
2afea7b273 workflow update 2025-04-29 20:09:22 +03:30
2c5b4f55d1 readme.test 2025-04-29 20:01:13 +03:30
a7dc69a801 Merge branch 'main' of github.com:PhasicFlow/phasicFlow 2025-04-29 19:59:36 +03:30
32287404fa workflow update 2025-04-29 19:54:20 +03:30
8b3530c289 Merge pull request #221 from wanqing0421/benchmarks
update phasicFlow snapshot
2025-04-29 19:47:25 +03:30
d8c3fc02d5 update phasicFlow snapshot 2025-04-29 20:46:30 +08:00
4dab700a47 update image 2025-04-29 20:30:10 +08:00
a50ceeee2c update readme and figure 2025-04-29 20:25:00 +08:00
468730289b test for wiki 2025-04-28 23:06:29 +03:30
27f0202002 workflow for wiki 2025-04-28 23:04:42 +03:30
c69bfc79e1 endsolid bug fix for space separated names 2025-04-28 19:42:49 +03:30
69909b3c01 bug fix in reading stl file 2025-04-28 13:56:21 +03:30
8986c47b69 readmd.md for benchmark is updated 2025-04-28 12:25:53 +03:30
37282f16ac Merge branch 'PhasicFlow:main' into importStl 2025-04-28 09:35:49 +08:00
cd051a6497 Merge pull request #220 from wanqing0421/benchmarks
update readme
2025-04-27 21:57:40 +03:30
8b5d14afe6 update readme figure 2025-04-28 02:20:42 +08:00
eb37affb94 update readme 2025-04-28 02:17:04 +08:00
c0d12f4243 Merge pull request #219 from PhasicFlow/postprocessPhasicFlow
diameter -> distance for benchmarks
2025-04-27 21:08:04 +03:30
a1b5a9bd5d Merge pull request #218 from wanqing0421/benchmarks
upload readme for benchmarks
2025-04-27 20:59:37 +03:30
dc0edbc845 diameter -> distance for benchmarks 2025-04-26 21:22:59 +03:30
b423b6ceb7 upload readme for benchmarks 2025-04-26 15:17:57 +08:00
1f6a953154 fix bug when endsolid with a suffix name 2025-04-26 14:58:56 +08:00
bbd3afea0e Merge pull request #216 from PhasicFlow/postprocessPhasicFlow
readme.md for geometryPhasicFlow
2025-04-25 21:04:53 +03:30
53f0e959b0 readme.md for geometryPhasicFlow 2025-04-25 21:04:18 +03:30
c12022fb19 Merge pull request #215 from wanqing0421/importStl
add scale and transform function during the stl model importing process
2025-04-25 20:45:53 +03:30
d876bb6246 correction for tab 2025-04-26 01:13:42 +08:00
cb40e01b7e Merge pull request #206 from wanqing0421/main
fixed selectorStride bug
2025-04-25 20:35:11 +03:30
5f6400c032 add scale and transform function during the stl model importing process 2025-04-26 00:43:56 +08:00
8863234c1c update stride selector 2025-04-25 23:11:19 +08:00
1cd64fb2ec Merge branch 'PhasicFlow:main' into main 2025-04-25 23:00:10 +08:00
5f8ea2d841 fixed selectorStride bug 2025-04-22 14:46:12 +08:00
37 changed files with 831 additions and 243 deletions

153
.github/scripts/sync-wiki.py vendored Executable file

@ -0,0 +1,153 @@
#!/usr/bin/env python3
import os
import re
import yaml
import sys

# Constants
REPO_URL = "https://github.com/PhasicFlow/phasicFlow"
REPO_PATH = os.path.join(os.environ.get("GITHUB_WORKSPACE", ""), "repo")
WIKI_PATH = os.path.join(os.environ.get("GITHUB_WORKSPACE", ""), "wiki")
MAPPING_FILE = os.path.join(REPO_PATH, ".github/workflows/markdownList.yml")

def load_mapping():
    """Load the markdown to wiki page mapping file."""
    try:
        with open(MAPPING_FILE, 'r') as f:
            data = yaml.safe_load(f)
            return data.get('mappings', [])
    except Exception as e:
        print(f"Error loading mapping file: {e}")
        return []

def convert_relative_links(content, source_path):
    """Convert relative links in markdown content to absolute URLs."""
    # Find markdown links with regex pattern [text](url)
    md_pattern = r'\[([^\]]+)\]\(([^)]+)\)'
    # Find HTML img tags
    img_pattern = r'<img\s+src=[\'"]([^\'"]+)[\'"]'

    def replace_link(match):
        link_text = match.group(1)
        link_url = match.group(2)
        # Skip if already absolute URL or anchor
        if link_url.startswith(('http://', 'https://', '#', 'mailto:')):
            return match.group(0)
        # Get the directory of the source file
        source_dir = os.path.dirname(source_path)
        # Create absolute path from repository root
        if link_url.startswith('/'):
            # If link starts with /, it's already relative to repo root
            abs_path = link_url
        else:
            # Otherwise, it's relative to the file location
            abs_path = os.path.normpath(os.path.join(source_dir, link_url))
            if not abs_path.startswith('/'):
                abs_path = '/' + abs_path
        # Convert to GitHub URL
        github_url = f"{REPO_URL}/blob/main{abs_path}"
        return f"[{link_text}]({github_url})"

    def replace_img_src(match):
        img_src = match.group(1)
        # Skip if already absolute URL
        if img_src.startswith(('http://', 'https://')):
            return match.group(0)
        # Get the directory of the source file
        source_dir = os.path.dirname(source_path)
        # Create absolute path from repository root
        if img_src.startswith('/'):
            # If link starts with /, it's already relative to repo root
            abs_path = img_src
        else:
            # Otherwise, it's relative to the file location
            abs_path = os.path.normpath(os.path.join(source_dir, img_src))
            if not abs_path.startswith('/'):
                abs_path = '/' + abs_path
        # Convert to GitHub URL (use raw URL for images)
        github_url = f"{REPO_URL}/raw/main{abs_path}"
        return f'<img src="{github_url}"'

    # Replace all markdown links
    content = re.sub(md_pattern, replace_link, content)
    # Replace all img src tags
    content = re.sub(img_pattern, replace_img_src, content)
    return content

def process_file(source_file, target_wiki_page):
    """Process a markdown file and copy its contents to a wiki page."""
    source_path = os.path.join(REPO_PATH, source_file)
    target_path = os.path.join(WIKI_PATH, f"{target_wiki_page}.md")
    print(f"Processing {source_path} -> {target_path}")
    try:
        # Check if source exists
        if not os.path.exists(source_path):
            print(f"Source file not found: {source_path}")
            return False
        # Read source content
        with open(source_path, 'r') as f:
            content = f.read()
        # Convert relative links
        content = convert_relative_links(content, source_file)
        # Write to wiki page
        with open(target_path, 'w') as f:
            f.write(content)
        return True
    except Exception as e:
        print(f"Error processing {source_file}: {e}")
        return False

def main():
    # Check if wiki directory exists
    if not os.path.exists(WIKI_PATH):
        print(f"Wiki path not found: {WIKI_PATH}")
        sys.exit(1)
    # Load mapping
    mappings = load_mapping()
    if not mappings:
        print("No mappings found in the mapping file")
        sys.exit(1)
    print(f"Found {len(mappings)} mappings to process")
    # Process each mapping
    success_count = 0
    for mapping in mappings:
        source = mapping.get('source')
        target = mapping.get('target')
        if not source or not target:
            print(f"Invalid mapping: {mapping}")
            continue
        if process_file(source, target):
            success_count += 1
    print(f"Successfully processed {success_count} of {len(mappings)} files")
    # Exit with error if any file failed
    if success_count < len(mappings):
        sys.exit(1)

if __name__ == "__main__":
    main()
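For illustration, the markdown-link rewriting performed by the script can be sketched in a minimal, self-contained form (this reimplements the `replace_link` logic above for demonstration; it is not part of the repository):

```python
import os
import re

REPO_URL = "https://github.com/PhasicFlow/phasicFlow"

def rewrite_links(content, source_file):
    """Rewrite relative markdown links to absolute GitHub blob URLs,
    mirroring the replace_link() logic in sync-wiki.py."""
    pattern = r'\[([^\]]+)\]\(([^)]+)\)'

    def repl(m):
        text, url = m.group(1), m.group(2)
        # Absolute URLs and anchors are left untouched
        if url.startswith(('http://', 'https://', '#', 'mailto:')):
            return m.group(0)
        source_dir = os.path.dirname(source_file)
        if url.startswith('/'):
            abs_path = url
        else:
            # Resolve the link relative to the source file's folder
            abs_path = os.path.normpath(os.path.join(source_dir, url))
            if not abs_path.startswith('/'):
                abs_path = '/' + abs_path
        return f"[{text}]({REPO_URL}/blob/main{abs_path})"

    return re.sub(pattern, repl, content)

md = "See [rotating drum](./rotatingDrum/readme.md) and [docs](https://example.com)."
print(rewrite_links(md, "benchmarks/readme.md"))
```

Running this prints the first link rewritten against the repository root while the already-absolute second link is preserved.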

18
.github/workflows/markdownList.yml vendored Normal file

@ -0,0 +1,18 @@
# This file maps source markdown files to their target wiki pages
# format:
#   - source: path/to/markdown/file.md
#     target: Wiki-Page-Name
mappings:
  - source: benchmarks/readme.md
    target: Performance-of-phasicFlow
  - source: benchmarks/helicalMixer/readme.md
    target: Helical-Mixer-Benchmark
  - source: benchmarks/rotatingDrum/readme.md
    target: Rotating-Drum-Benchmark
  - source: doc/mdDocs/howToBuild-V1.0.md
    target: How-to-build-PhasicFlowv1.0
  - source: tutorials/README.md
    target: Tutorials
  - source: doc/mdDocs/phasicFlowFeatures.md
    target: Features-of-PhasicFlow
# Add more mappings as needed
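Once parsed with `yaml.safe_load`, this file becomes a plain list of dicts, and the sync script keeps only entries that carry both keys. A small sketch of that consumption step (the third entry here is a hypothetical malformed example, not from the real file):

```python
# What yaml.safe_load(...)['mappings'] returns for the file above
mappings = [
    {"source": "benchmarks/readme.md", "target": "Performance-of-phasicFlow"},
    {"source": "tutorials/README.md", "target": "Tutorials"},
    {"source": "doc/broken.md"},  # hypothetical: missing 'target', skipped
]

def valid_mappings(entries):
    """Keep only entries with both 'source' and 'target',
    matching the validation in sync-wiki.py's main()."""
    return [m for m in entries if m.get("source") and m.get("target")]

print(len(valid_mappings(mappings)))  # 2
```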

60
.github/workflows/sync-wiki.yml vendored Normal file

@ -0,0 +1,60 @@
name: Sync-Wiki

on:
  push:
    branches:
      - main
    paths:
      - "**/*.md"
      - ".github/workflows/sync-wiki.yml"
      - ".github/workflows/markdownList.yml"
      - ".github/scripts/sync-wiki.py"
  workflow_dispatch:

jobs:
  sync-wiki:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3
        with:
          path: repo

      - name: Checkout Wiki
        uses: actions/checkout@v3
        with:
          repository: ${{ github.repository }}.wiki
          path: wiki
        continue-on-error: true

      - name: Create Wiki Directory if Not Exists
        run: |
          if [ ! -d "wiki" ]; then
            mkdir -p wiki
            cd wiki
            git init
            git config user.name "${{ github.actor }}"
            git config user.email "${{ github.actor }}@users.noreply.github.com"
            git remote add origin "https://github.com/${{ github.repository }}.wiki.git"
          fi

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install pyyaml

      - name: Sync markdown files to Wiki
        run: |
          python $GITHUB_WORKSPACE/repo/.github/scripts/sync-wiki.py
        env:
          GITHUB_REPOSITORY: ${{ github.repository }}

      - name: Push changes to wiki
        run: |
          cd wiki
          git config user.name "${{ github.actor }}"
          git config user.email "${{ github.actor }}@users.noreply.github.com"
          git add .
          if git status --porcelain | grep .; then
            git commit -m "Auto sync wiki from main repository"
            git push --set-upstream https://${{ github.actor }}:${{ github.token }}@github.com/${{ github.repository }}.wiki.git master -f
          else
            echo "No changes to commit"
          fi


@ -0,0 +1 @@
# Helical Mixer Benchmark (phasicFlow v-1.0)

7
benchmarks/readme.md Normal file

@ -0,0 +1,7 @@
# Benchmarks
Benchmarks have been performed on two different simulations: one with a simple geometry (rotating drum) and one with a complex geometry (helical mixer).
- [rotating drum](./rotatingDrum/readme.md)
- [helical mixer](./helicalMixer/readme.md)

Binary image file added (not shown), 124 KiB

Binary image file added (not shown), 55 KiB

Binary image file added (not shown), 180 KiB


@ -0,0 +1,96 @@
# Rotating Drum Benchmark (phasicFlow v-1.0)
## Overview
This benchmark compares the performance of phasicFlow with a well-established commercial DEM software for simulating a rotating drum with varying particle counts (250k to 8M particles). The benchmark measures both computational efficiency and memory usage across different hardware configurations.
## Simulation Setup
<div align="center">
<img src="./images/commericalDEMsnapshot.png"/>
<div align="center">
<p>Figure 1. Commercial DEM simulation snapshot</p>
</div>
</div>
<div align="center">
<img src="./images/phasicFlow_snapshot.png"/>
<div align="center">
<p>Figure 2. phasicFlow simulation snapshot, visualized using Paraview</p>
</div>
</div>
### Hardware Specifications
<div align="center">
Table 1. Hardware specifications used for benchmarking.
</div>
| System | CPU | GPU | Operating System |
| :---------: | :----------------------: | :--------------------------: | :--------------: |
| Laptop | Intel i9-13900HX 2.2 GHz | NVIDIA GeForce RTX 4050Ti 6G | Windows 11 24H2 |
| Workstation | Intel Xeon 4210 2.2 GHz | NVIDIA RTX A4000 16G | Ubuntu 22.04 |
### Simulation Parameters
<div align="center">
Table 2. Parameters for rotating drum simulations.
</div>
| Case | Particle Diameter | Particle Count | Drum Length | Drum Radius |
| :-------: | :---------------: | :--------------: | :------------------: | :------------------: |
| 250k | 6 mm | 250,000 | 0.8 m | 0.2 m |
| 500k | 5 mm | 500,000 | 0.8 m | 0.2 m |
| 1M | 4 mm | 1,000,000 | 0.8 m | 0.2 m |
| 2M | 3 mm | 2,000,000 | 1.2 m | 0.2 m |
| 4M | 3 mm | 4,000,000 | 1.6 m | 0.2 m |
| 8M | 2 mm | 8,000,000 | 1.6 m | 0.2 m |
The time step for all simulations was set to 1.0e-5 seconds and the simulation ran for 4 seconds.
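As a quick check of the computational load implied by these settings, the number of DEM time steps per simulation follows directly:

```python
dt = 1.0e-5      # time step, s (from the text above)
t_end = 4.0      # simulated physical time, s

steps = round(t_end / dt)
print(steps)  # 400000 time steps per case
```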
## Performance Comparison
### Execution Time
<div align="center">
Table 3. Total calculation time (minutes) for different configurations.
</div>
| Software | 250k | 500k | 1M | 2M | 4M | 8M |
| :---------------: | :----: | :-----: | :-----: | :-----: | :-----: | :------: |
| phasicFlow-4050Ti | 54 min | 111 min | 216 min | 432 min | - | - |
| Commercial DEM-4050Ti | 68 min | 136 min | 275 min | 570 min | - | - |
| phasicFlow-A4000 | 38 min | 73 min | 146 min | 293 min | 589 min | 1188 min |
The execution time scales linearly with particle count. phasicFlow demonstrates approximately:
- 20% faster calculation than the well-established commercial DEM software on the same hardware
- 30% performance improvement when using the NVIDIA RTX A4000 compared to the RTX 4050Ti
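As a rough sanity check of the quoted percentages (a sketch using only the 1M-particle column of Table 3; other columns give similar ratios):

```python
# Total calculation time in minutes, 1M-particle case (Table 3)
phasicflow_4050ti = 216
commercial_4050ti = 275
phasicflow_a4000 = 146

# phasicFlow vs. commercial DEM on the same GPU (RTX 4050Ti)
vs_commercial = 1 - phasicflow_4050ti / commercial_4050ti
# A4000 vs. 4050Ti, both running phasicFlow
vs_a4000 = 1 - phasicflow_a4000 / phasicflow_4050ti

print(f"{vs_commercial:.0%} less time than commercial DEM")  # ~21%
print(f"{vs_a4000:.0%} less time on the A4000")              # ~32%
```

These values are consistent with the approximate 20% and 30% figures stated above.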
<div align="center">
<img src="./images/performance1.png"/>
<p>Figure 3. Calculation time comparison between phasicFlow and the well-established commercial DEM software.</p>
</div>
### Memory Usage
<div align="center">
Table 4. Memory consumption for different configurations.
</div>
| Software | 250k | 500k | 1M | 2M | 4M | 8M |
| :---------------: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |
| phasicFlow-4050Ti | 252 MB | 412 MB | 710 MB | 1292 MB | - | - |
| Commercial DEM-4050Ti | 485 MB | 897 MB | 1525 MB | 2724 MB | - | - |
| phasicFlow-A4000 | 344 MB | 480 MB | 802 MB | 1386 MB | 2590 MB | 4966 MB |
Memory efficiency comparison:
- phasicFlow uses approximately 0.7 GB of memory per million particles
- Commercial DEM software uses approximately 1.2 GB of memory per million particles
- phasicFlow shows ~42% lower memory consumption compared to the commercial alternative
- Memory usage scales linearly with particle count in both software packages; because of the limited memory available on GPUs, phasicFlow's lower footprint makes it possible to run larger simulations on a given GPU.
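A back-of-the-envelope check of the per-particle figures, taking the A4000 column of Table 4 for phasicFlow and the 4050Ti column for the commercial code:

```python
# Memory use in MB (Table 4), keyed by particle count in millions
phasicflow_a4000 = {0.25: 344, 0.5: 480, 1: 802, 2: 1386, 4: 2590, 8: 4966}
commercial_4050ti = {0.25: 485, 0.5: 897, 1: 1525, 2: 2724}

def mb_per_million(table):
    """Memory per million particles for each case size."""
    return {n: mem / n for n, mem in table.items()}

# Large cases: roughly 620-800 MB per million particles for phasicFlow,
# versus roughly 1360-1940 MB for the commercial code
print(mb_per_million(phasicflow_a4000)[8])   # 620.75
print(mb_per_million(commercial_4050ti)[2])  # 1362.0
```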
## Run Your Own Benchmarks
The simulation case setup files are available in this folder for users interested in performing similar benchmarks on their own hardware. These files can be used to reproduce the tests and compare performance across different systems.


@ -35,7 +35,7 @@ surfaces
radius2 0.2; // radius at p2
resolution 24; // number of divisions
resolution 60; // number of divisions
material wallMat; // material name of this wall


@ -27,7 +27,7 @@ positionParticles
orderedInfo
{
diameter 0.004; // minimum space between centers of particles
distance 0.004; // minimum space between centers of particles
numPoints 1000000; // number of particles in the simulation


@ -35,7 +35,7 @@ surfaces
radius2 0.2; // radius at p2
resolution 24; // number of divisions
resolution 60; // number of divisions
material wallMat; // material name of this wall


@ -27,7 +27,7 @@ positionParticles
orderedInfo
{
diameter 0.006; // minimum space between centers of particles
distance 0.006; // minimum space between centers of particles
numPoints 250000; // number of particles in the simulation


@ -35,7 +35,7 @@ surfaces
radius2 0.2; // radius at p2
resolution 24; // number of divisions
resolution 60; // number of divisions
material wallMat; // material name of this wall


@ -27,7 +27,7 @@ positionParticles
orderedInfo
{
diameter 0.003; // minimum space between centers of particles
distance 0.003; // minimum space between centers of particles
numPoints 2000000; // number of particles in the simulation


@ -35,7 +35,7 @@ surfaces
radius2 0.2; // radius at p2
resolution 24; // number of divisions
resolution 60; // number of divisions
material wallMat; // material name of this wall


@ -27,7 +27,7 @@ positionParticles
orderedInfo
{
diameter 0.003; // minimum space between centers of particles
distance 0.003; // minimum space between centers of particles
numPoints 4000000; // number of particles in the simulation


@ -35,7 +35,7 @@ surfaces
radius2 0.2; // radius at p2
resolution 24; // number of divisions
resolution 60; // number of divisions
material wallMat; // material name of this wall


@ -27,7 +27,7 @@ positionParticles
orderedInfo
{
diameter 0.005; // minimum space between centers of particles
distance 0.005; // minimum space between centers of particles
numPoints 500000; // number of particles in the simulation


@ -35,7 +35,7 @@ surfaces
radius2 0.2; // radius at p2
resolution 24; // number of divisions
resolution 60; // number of divisions
material wallMat; // material name of this wall


@ -27,7 +27,7 @@ positionParticles
orderedInfo
{
diameter 0.003; // minimum space between centers of particles
distance 0.003; // minimum space between centers of particles
numPoints 6000000; // number of particles in the simulation


@ -0,0 +1,136 @@
# How to build PhasicFlow-v-1.0
You can build PhasicFlow for CPU or GPU execution. You can have a single build or even multiple builds on a machine. Here you will learn how to create a single build of PhasicFlow in various modes of execution. You can install PhasicFlow-v-1.0 on **Ubuntu-22.04 LTS** and **Ubuntu-24.04 LTS**. Installing it on older versions of Ubuntu requires some additional steps to meet the requirements, which are not covered here.
If you want to install PhasicFlow on **Windows OS**, see [this page](https://www.cemf.ir/installing-phasicflow-v-1-0-on-ubuntu/) for more information.
# Required packages
You need a list of packages installed on your computer before building PhasicFlow:
* git, for cloning the code and package management
* g++, for compiling the code
* cmake, for generating build system
* Cuda-12.x or above (if GPU is targeted), for compiling the code for CUDA execution.
### Installing packages
Execute the following commands to install the required packages (except Cuda); tbb is installed automatically.
```bash
sudo apt update
sudo apt install -y git g++ cmake cmake-qt-gui
```
### Installing Cuda for GPU execution
If you want to build PhasicFlow for execution on an NVIDIA GPU, you need to install the latest version of the Cuda compiler (version 12.x or above) that is compatible with your hardware and OS.
# How to build?
Here you will learn how to build PhasicFlow for single execution mode. Follow the steps below to install it on your computer.
Tested operating systems are:
* Ubuntu-22.04 LTS
* Ubuntu-24.04 LTS
### Step 1: Package check
Make sure that you have installed all the required packages on your computer. See above for more information.
### Step 2: Cloning PhasicFlow
Create the PhasicFlow folder in your home folder and then clone the source code into that folder:
```bash
cd ~
mkdir PhasicFlow
cd PhasicFlow
git clone https://github.com/PhasicFlow/phasicFlow.git
mv phasicFlow phasicFlow-v-1.0
```
### Step 3: Environmental variables
Open the bashrc file using the following command:
```bash
gedit ~/.bashrc
```
and add the following line to the end of the file, **save** and **close** it.
```bash
source $HOME/PhasicFlow/phasicFlow-v-1.0/cmake/bashrc
```
This will introduce a new source file for setting the environmental variables of PhasicFlow. If you want to load these variables in the currently open terminal, you need to source it; or simply **close the terminal** and **open a new terminal**.
### Step 4: Building PhasicFlow
Follow one of the following procedures to build PhasicFlow for a single mode of execution.
#### Serial build for CPU
In a **new terminal** enter the following commands:
```bash
cd ~/PhasicFlow/phasicFlow-v-1.0
mkdir build
cd build
cmake ../ -DpFlow_Build_Serial=On -DCMAKE_BUILD_TYPE=Release
make install -j4
```
For faster builds, use `make install -j`. This will use all the CPU cores on your computer for building.
#### OpenMP build for CPU
```bash
cd ~/PhasicFlow/phasicFlow-v-1.0
mkdir build
cd build
cmake ../ -DpFlow_Build_OpenMP=On -DCMAKE_BUILD_TYPE=Release
make install -j4
```
#### GPU build for parallel execution on CUDA-enabled GPUs
```bash
cd ~/PhasicFlow/phasicFlow-v-1.0
mkdir build
cd build
cmake ../ -DpFlow_Build_Cuda=On -DCMAKE_BUILD_TYPE=Release
make install -j4
```
After building, `bin`, `include`, and `lib` folders will be created in `~/PhasicFlow/phasicFlow-v-1.0/` folder. Now you are ready to use PhasicFlow.
**note 1**: When compiling the code in parallel, you need enough RAM on your computer. As a rule of thumb, you need 1 GB of free RAM per processor core used for parallel compilation.
You may want to use fewer cores by specifying the number explicitly:
```bash
make install -j3
```
The above command uses only 3 cores for compiling.
**note 2**: By default, PhasicFlow is compiled with **double** as the floating point type. You can compile it with **float** instead: just add the `-DpFlow_Build_Double=Off` flag to the cmake command line. For example, if you are building for Cuda, you can enter the following command:
```bash
cmake ../ -DpFlow_Build_Cuda=On -DpFlow_Build_Double=Off
```
### Step 5: Testing
In the current terminal or a new terminal enter the following command:
```bash
checkPhasicFlow
```
This command shows the host and device environments and the software version. If PhasicFlow was built correctly, you should get output similar to the following:
```
Initializing host/device execution spaces . . .
Host execution space is Serial
Device execution space is Serial
You are using phasicFlow v-1.0 (copyright(C): www.cemf.ir)
In this build, double is used for floating point operations and uint32 for indexing.
This is not a build for MPI execution
Finalizing host/device execution space ....
```


@ -1,151 +0,0 @@
# How to build PhasicFlow {#howToBuildPhasicFlow}
You can build PhasicFlow for CPU or GPU. You can have a single build or even multiple builds on a machine. Here you will learn how to create a single build of PhasicFlow in various modes of execution.
# Required packages
You need a list of packages installed on your computer before building PhasicFlow:
* git, for cloning the code and package management
* g++, for compiling the code
* cmake, for generating build system
* tbb, a parallel library for STL algorithms
* Cuda (if GPU is targeted), for compiling the code for CUDA execution.
* Kokkos, the parallelization backend of PhasicFlow
### git
if git is not installed on your computer, enter the following commands
```
$ sudo apt update
$ sudo apt install git
```
### g++ (C++ compiler)
The code is tested with g++ (gnu C++ compiler). The default version of g++ on Ubuntu 18.04 LTS or upper is sufficient for compiling. If it is not installed on your operating system, enter the following command:
```
$ sudo apt update
$ sudo apt install g++
```
### CMake
You also need to have CMake-3.22 or higher installed on your computer.
```
$ sudo apt update
$ sudo apt install cmake
```
### tbb (2020.1-2 or higher)
For **Ubuntu 20.04 LTS or higher versions**, you can install tbb using apt. For now, some parallel algorithms on the host side rely on the tbb parallel library (C++ parallel backend). Use the following commands to install it:
```
$ sudo apt update
$ sudo apt install libtbb-dev
```
If you are compiling on **Ubuntu-18.04 LTS**, you need to enter the following commands to get the right version (2020.1-2 or higher) of tbb:
```
$ wget "http://archive.ubuntu.com/ubuntu/pool/universe/t/tbb/libtbb2_2020.1-2_amd64.deb"
$ sudo dpkg --install libtbb2_2020.1-2_amd64.deb
$ wget "http://archive.ubuntu.com/ubuntu/pool/universe/t/tbb/libtbb-dev_2020.1-2_amd64.deb"
$ sudo dpkg --install libtbb-dev_2020.1-2_amd64.deb
```
### Cuda
If you want to build PhasicFlow to be executed on an nvidia-GPU, you need to install the latest version of Cuda compiler, which is compatible with your hardware and OS, on your computer.
# How to build?
Here you will learn how to build PhasicFlow for single execution mode. Follow the steps below to install it on your computer.
Tested operating systems are:
* Ubuntu 18.04 LTS
* Ubuntu 20.04 LTS
* Ubuntu 22.04 LTS
### Step 1: Package check
Make sure that you have installed all the required packages on your computer. See above for more information.
### Step 2: Cloning Kokkos
It is assumed that Kokkos source is located in the home folder of your computer. Clone the latest version of Kokkos into your home folder:
```
$ cd ~
$ mkdir Kokkos
$ cd Kokkos
$ git clone https://github.com/kokkos/kokkos.git
```
or simply download and extract the source code of Kokkos in `~/Kokkos` folder. In the end, the top level CMakeLists.txt file should be located in `~/Kokkos/kokkos` folder.
### Step 3: Cloning PhasicFlow
Create the PhasicFlow folder in your home folder and then clone the source code into that folder:
```
$ cd ~
$ mkdir PhasicFlow
$ cd PhasicFlow
$ git clone https://github.com/PhasicFlow/phasicFlow.git
```
### Step 4: Environmental variables
Open the bashrc file using the following command:
`$ gedit ~/.bashrc`
and add the following line to the end of the file, **save** and **close** it.
`source $HOME/PhasicFlow/phasicFlow/cmake/bashrc`
this will introduce a new source file for setting the environmental variables of PhasicFlow. If you want to load these variables in the current open terminal, you need to source it. Or, simply **close the terminal** and **open a new terminal**.
### Step 5: Building PhasicFlow
Follow one of the followings to build PhasicFlow for one mode of execution.
#### Serial build for CPU
In a **new terminal** enter the following commands:
```
$ cd ~/PhasicFlow/phasicFlow
$ mkdir build
$ cd build
$ cmake ../ -DpFlow_Build_Serial=On
$ make install
```
For faster builds, use `make install -j`. This will use all the CPU cores on your computer for building.
#### OpenMP build for CPU
```
$ cd ~/PhasicFlow/phasicFlow
$ mkdir build
$ cd build
$ cmake ../ -DpFlow_Build_OpenMP=On
$ make install
```
#### GPU build for parallel execution on CUDA-enabled GPUs
```
$ cd ~/PhasicFlow/phasicFlow
$ mkdir build
$ cd build
$ cmake ../ -DpFlow_Build_Cuda=On
$ make install
```
After building, `bin`, `include`, and `lib` folders will be created in `~/PhasicFlow/phasicFlow/` folder. Now you are ready to use PhasicFlow.
**note 1**: When compiling the code in parallel, you need enough RAM on your computer. As a rule of thumb, you need 1 GB of free RAM per processor core used for parallel compilation.
You may want to use fewer cores by specifying the number explicitly:
`$ make install -j 3`
The above command uses only 3 cores for compiling.
**note 2**: By default, PhasicFlow is compiled with **double** as the floating point type. You can compile it with **float** instead: just add the `-DpFlow_Build_Double=Off` flag to the cmake command line. For example, if you are building for Cuda, you can enter the following command:
`$ cmake ../ -DpFlow_Build_Cuda=On -DpFlow_Build_Double=Off`
### Step 6: Testing
In the current terminal or a new terminal enter the following command:
`$ checkPhasicFlow`
This command shows the host and device environments and software version. If PhasicFlow was build correctly, you would get the following output:
```
Initializing host/device execution spaces . . .
Host execution space is Serial
Device execution space is Cuda
You are using phasicFlow v-0.1 (copyright(C): www.cemf.ir)
In this build, double is used for floating point operations.
Finalizing host/device execution space ....
```


@ -1,64 +1,116 @@
# PhasicFlow Features (v-1.0)
The features described here are the main features implemented in the code for version 1.0; this is not a complete list of all the features of PhasicFlow. Features are added to the code continuously, so this document may lag behind the latest updates. Of course, a review of the code will give you the complete list.
## Table of Contents
- [1. Building options](#1-building-options)
- [2. Preprocessing tools](#2-preprocessing-tools)
- [3. Solvers for simulations](#3-solvers-for-simulations)
- [4. Postprocessing tools](#4-postprocessing-tools)
- [5. Models and features for simulations](#5-models-and-features-for-simulations)
- [5.1. General representation of walls](#51-general-representation-of-walls)
- [5.2. High precision integeration methods](#52-high-precision-integeration-methods)
- [5.3. Contact force models](#53-contact-force-models-needs-improvement)
- [5.4. Particle insertion](#54-particle-insertion)
- [5.5. Restarting/resuming a simulation](#55-restartingresuming-a-simulation)
- [5.6. Postprocessing data during simulation](#56-postprocessing-data-during-simulation)
## 1. Building options
You can build PhasicFlow to be executed on multi-core CPUs or GPUs. It is also possible to select the type of floating point variables in PhasicFlow: double or float. The float type requires less memory and generally takes less processor time to complete a mathematical operation, so there is a benefit to using floats in DEM simulations, especially when a GPU is targeted for computations.
Build options for PhasicFlow:
- **serial (double or float type)**: execution on one cpu core
- **OpenMp (double or float type)**: execution on multiple cores of a CPU
- **cuda (double or float type)**: execution on cuda-enabled GPUs
For more information on building PhasicFlow, please refer to the [installation guide](./howToBuild-V1.0.md).
## 2. Preprocessing tools
PhasicFlow provides a set of tools for preprocessing the simulation case. These tools are used to define the initial state of particles, walls and other parameters that are required for running a simulation.
- [**particlesPhasicFlow**](./../../utilities/particlesPhasicFlow/) tool can be used to define the initial position of particles (for example at t = 0 s) and to set the initial field values for particles (like velocity, orientation, acceleration, etc.).
- [**geometryPhasicFlow**](./../../utilities/geometryPhasicFlow/) converts user inputs for walls into a data structure that is used by PhasicFlow.
## 3. Solvers for simulations
- [**sphereGranFlow**](./../../solvers/sphereGranFlow/) is a solver for simulating the flow of spherical particles, with a particle insertion mechanism. A full set of tutorials on various possible simulations can be found here: [sphereGranFlow tutorial](./../../tutorials/sphereGranFlow/).
- [**grainGranFlow**](./../../solvers/grainGranFlow/) is a solver for simulating the flow of coarse-grained particles, with a particle insertion mechanism. A full set of tutorials on various possible simulations can be found here: [grainGranFlow tutorial](./../../tutorials/grainGranFlow/).
- [**iterateGeometry**](./../../solvers/iterateGeometry/) is a solver for testing the motion of walls without simulating particles. Since simulations with particles may take a long time, and we may want to verify that the motion of the geometry is correct before the actual simulation, this utility tests the motion of walls on its own. A set of tutorials on various possible simulations can be found here: [iterateGeometry tutorial](./../../tutorials/iterateGeometry/).
## 4. Postprocessing tools
- [**pFlowToVTK**](./../../utilities/pFlowToVTK) is used to convert simulation results into the VTK file format, which can be read by ParaView for visualizing the results.
- [**postprocessPhasicFlow**](./../../utilities/postprocessPhasicFlow/) is a tool for performing various averaging and summation on the fields. Particle probing is also possible.
## 5. Models and features for simulations
### 5.1. General representation of walls
Walls can be defined in three ways in PhasicFlow:
- **Builtin walls** in PhasicFlow that include plane wall, cylinder/cone wall, cuboid, circle.
- **stl wall** that reads the data of the wall from an ASCII stl file.
- **foamPatch wall** that reads the OpenFOAM mesh and converts the boundary patches into PhasicFlow walls (this feature is only available when performing CFD-DEM simulation using OpenFOAM).
Walls can be fixed or in motion during simulations. Various motion models are implemented to cover most of the wall motions in phasicFlow ([see the source code](./../../src/MotionModel/)):
- **stationary** model, in which all walls are fixed. This model is mostly useful for granular flow under gravity or gas-solid flows (CFD-DEM).
- **rotatingAxis** model, in which walls rotate around an axis of rotation at a specified rotation speed. This model covers a wide range of granular flows in which the whole or a part of the geometry is rotating, like mixers.
- **multiRotatingAxis** model, in which a combination of rotations can be specified. One axis of rotation can itself have another axis of rotation, and so on. This makes it possible to define very complex motion patterns for walls, like what we see in Nauta blenders.
- **vibrating** model, in which walls vibrate based on a sinusoidal model with specified frequency and amplitude.
In addition to these models, the user can add other motion models to the code based on their need.
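As a sketch of how a motion model is wired into `geometryDict` in practice, a `vibrating` setup might look as follows. The keyword names inside `vibComponent` (`angularFreq`, `amplitude`, `startTime`, `endTime`) are illustrative assumptions and should be checked against the vibrating motion model's source and the tutorial cases:

```C++
// Hypothetical sketch of a vibrating motion model setup in geometryDict.
// All keyword names below are assumptions; verify them before use.
motionModel vibrating;

vibratingInfo
{
    vibComponent
    {
        angularFreq (0 200 0);   // assumed: angular frequency vector (rad/s)
        amplitude   (0 0.002 0); // assumed: amplitude vector (m)
        startTime   0;           // assumed: vibration start time (s)
        endTime     10;          // assumed: vibration end time (s)
    }
}
```

A wall in the `surfaces` dictionary would then refer to this component with `motion vibComponent;`.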
### 5.2. High precision integration methods
The precision of integration in a DEM simulation is very important, since sudden changes in the interaction forces occur when objects come into contact or rebound. High precision integration methods make it possible to accurately track the position and velocity of objects, especially when they are in contact. With these methods, larger integration time steps can be chosen without losing accuracy or causing instability in the simulation. Although high-precision integration requires more computation per step, the benefit of larger time steps can more than compensate for it.
Various integration methods are implemented in PhasicFlow:
| Integration Method | Order | Type|
| :--- | :---: | :---: |
| AdamsBashforth2 | 2 | one-step |
| AdamsBashforth3 | 3 | one-step |
| AdamsBashforth4 | 4 | one-step |
| AdamsBashforth5 | 5 | one-step |
| AdamsMoulton3 | 3 | predictor-corrector (not active)|
| AdamsMoulton4 | 4 | predictor-corrector (not active)|
| AdamsMoulton5 | 5 | predictor-corrector (not active)|
### 5.3. Contact force models
Linear and non-linear visco-elastic contact force models are available. In addition, limited and non-limited Coulomb friction models can be used to account for the friction between objects. For spherical objects, rolling friction can also be specified between bodies in contact.
Furthermore, for coarse-grained particle simulations, we developed a special set of***
### 5.4. Particle insertion
Particles can be inserted during simulation from specified region at specified rate and time interval. Any number of insertion regions can be defined in a simulation. Various region types are considered here: `box`, `cylinder` and `sphere`. Particles are inserted into the simulation through the specified region.
### 5.5. Restarting/resuming a simulation
It is possible to resume a simulation from any time-folder that is available in the simulation case setup directory. PhasicFlow restarts the simulation from that time folder.
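Resuming is typically just a matter of pointing the start time at an existing time folder. A sketch of the relevant entries in `settings/settingsDict` follows; the keyword names are taken from the tutorial cases and should be verified against your phasicFlow version:

```C++
// Sketch: resume the run from the data saved in the 0.5 time folder.
// Keyword names follow the tutorial cases; verify before use.
startTime     0.5;   // existing time folder to resume from
endTime       10;    // new end time for the resumed run
saveInterval  0.1;   // interval for writing results to time folders
```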
### 5.6. Postprocessing data during simulation
PhasicFlow provides a powerful in-simulation postprocessing module that allows users to analyze particle data in real-time while the simulation is running. This feature enables:
- **Real-time data analysis** without waiting for simulation completion
- **Region-based processing** in spheres, along lines, or at specific points
- **Various statistical operations** including weighted averages and sums of particle properties
- **Individual particle tracking** to monitor specific particles throughout simulation
- **Multiple processing methods** including arithmetic mean, uniform distribution, and Gaussian distribution
- **Particle filtering** based on properties like diameter, mass, etc.
- **Flexible time control** options for when postprocessing should be executed
To activate in-simulation postprocessing, users need to:
1. Create a `postprocessDataDict` file in the `settings` directory with appropriate configurations
2. Add `libs ("libPostprocessData.so")` and `auxFunctions postprocessData` to the `settings/settingsDict` file
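The two additions in step 2 look like this inside `settings/settingsDict`:

```C++
// Activate the in-simulation postprocessing module
libs            ("libPostprocessData.so"); // load the postprocessing library
auxFunctions    postprocessData;           // attach it to the solver loop
```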
Results are written to output files in the case directory with timestamps, allowing users to monitor simulation behavior as it progresses without interrupting the simulation. For more information on how to use this feature, please refer to the [PostprocessData](./../../src/PostprocessData/) module.
The same postprocessing module can also be used after simulation completion through the [`postprocessPhasicFlow`](./../../utilities/postprocessPhasicFlow/) utility.

View File

@ -59,7 +59,7 @@ pFlow::selectorStridedRange::selectorStridedRange(
end_(dict.getValOrSet<uint32>("end", pStruct.size())),
stride_(dict.getValOrSet<uint32>("stride", 1u))
{
begin_ = max(begin_, 1u);
begin_ = max(begin_, 0u);
end_ = min(end_, static_cast<uint32>(pStruct.size()));
stride_ = max(stride_, 1u);

View File

@ -52,13 +52,12 @@ bool pFlow::stlFile::readSolid
(
iIstream& is,
realx3x3Vector & vertecies,
word & name,
real scaleFactor
word & name
)
{
token tok;
is>> tok;
is >> tok;
if(!checkWordToken(is, tok, "solid")) return false;
// check if there is a name associated with solid
@ -72,7 +71,6 @@ bool pFlow::stlFile::readSolid
while (nWords < 20 )
{
if( badInput(is, tok) ) return false;
//if(!tok.isWord()) return false;
nWords++;
if(tok.isWord() && tok.wordToken() != "facet" )
{
@ -105,16 +103,58 @@ bool pFlow::stlFile::readSolid
vertecies.clear();
while(true )
{
is>>tok;
is >> tok;
if( badInput(is,tok) || !tok.isWord() )return false;
word wTok = tok.wordToken();
if( wTok == "endsolid" ) return true; // end of solid
if( wTok == "endsolid" )// end of solid
{
// check if there is a name associated with endsolid
is >> tok;
if( !badInput(is, tok) && !is.eof())
{
word endName = "";
int32 nWords =0;
while (nWords < 20 )
{
if( badInput(is, tok) ) return false;
nWords++;
if(tok.isWord())
{
endName += tok.wordToken();
}
else if( tok.isNumber())
{
auto val = tok.number();
endName += real2Word(val);
}
else if( tok.isPunctuation())
{
endName += tok.pToken();
}
else if (tok.isWord())
{
is.putBack(tok);
break;
}
else
{
return false;
}
is >> tok;
if(is.eof())return true;
}
}
return true;
}
if( wTok != "facet" ) return false;
// read facet
is.putBack(tok);
realx3x3 tri;
if( !readFacet(is, tri, scaleFactor) ) return false;
if( !readFacet(is, tri) ) return false;
vertecies.push_back(tri);
@ -127,8 +167,7 @@ bool pFlow::stlFile::readSolid
bool pFlow::stlFile::readFacet
(
iIstream& is,
realx3x3& tri,
real scaleFactor
realx3x3& tri
)
{
token tok;
@ -164,9 +203,9 @@ bool pFlow::stlFile::readFacet
if(!checkNumberToken(is, tok, v.y()))return false;
is>>tok;
if(!checkNumberToken(is, tok, v.z()))return false;
if( i==0 ) tri.x() = v * scaleFactor;
if( i==1 ) tri.y() = v * scaleFactor;
if( i==2) tri.z() = v * scaleFactor;
if( i==0 ) tri.x() = v;
if( i==1 ) tri.y() = v;
if( i==2) tri.z() = v;
}
is>> tok;
if(!checkWordToken(is, tok, "endloop")) return false;
@ -291,7 +330,7 @@ void pFlow::stlFile::addSolid
bool pFlow::stlFile::read(real scaleFactor)
bool pFlow::stlFile::read()
{
solids_.clear();
solidNames_.clear();
@ -305,7 +344,7 @@ bool pFlow::stlFile::read(real scaleFactor)
realx3x3Vector vertecies;
word name;
if(!readSolid(is, vertecies, name, scaleFactor))
if(!readSolid(is, vertecies, name))
{
ioErrorInFile(is.name(), is.lineNumber());
return false;

View File

@ -52,9 +52,9 @@ protected:
// - protected members
bool readSolid(iIstream& is, realx3x3Vector & vertecies, word & name, real scaleFactor);
bool readSolid(iIstream& is, realx3x3Vector & vertecies, word & name);
bool readFacet(iIstream& is, realx3x3& tri, real scaleFactor);
bool readFacet(iIstream& is, realx3x3& tri);
bool writeSolid(iOstream& os, const realx3x3Vector& vertecies, const word& name)const;
@ -91,7 +91,7 @@ public:
void addSolid(const word& name, realx3x3Vector&& vertecies);
// - clear current content and read from file
bool read(real scaleFactor);
bool read();
// - write the current contnet to file
bool write()const;

View File

@ -26,33 +26,61 @@ Licence:
bool pFlow::stlWall::readSTLWall
(
const dictionary& dict
const dictionary& dict
)
{
auto fileName = dict.getVal<word>("file");
auto fileName = dict.getVal<word>("file");
real scale = dict.getValOrSet("scale", static_cast<real>(1.0));
real scale = dict.getValOrSet("scale", static_cast<real>(1.0));
realx3 transform = dict.getValOrSet<realx3>("transform", realx3(0));
auto scaleFirst = dict.getValOrSet("scaleFirst", Logical("Yes"));
fileSystem file("./stl",fileName);
stlFile stl(file);
if(!stl.read())
{
fatalErrorInFunction <<
" error in reading stl file "<< file <<endl;
return false;
}
// Scale and transform the stl vertex
realx3x3Vector newStlVertx;
for(uint64 i = 0; i < stl.size(); i++)
{
for(uint64 j = 0; j < stl.solid(i).size(); j++)
{
realx3x3 tri;
if(scaleFirst)
{
tri.x() = stl.solid(i)[j].x() * scale + transform.x();
tri.y() = stl.solid(i)[j].y() * scale + transform.y();
tri.z() = stl.solid(i)[j].z() * scale + transform.z();
}
else
{
tri.x() = (stl.solid(i)[j].x() + transform.x()) * scale;
tri.y() = (stl.solid(i)[j].y() + transform.y()) * scale;
tri.z() = (stl.solid(i)[j].z() + transform.z()) * scale;
}
newStlVertx.push_back(tri);
}
}
// Insert the new vertex to the triangles_
for(uint64 i = 0; i < stl.size(); i++)
{
auto it = triangles_.end();
triangles_.insert(it, newStlVertx.begin(), newStlVertx.end());
}
fileSystem file("./stl",fileName);
stlFile stl(file);
if(!stl.read(scale))
{
fatalErrorInFunction <<
" error in reading stl file "<< file <<endl;
return false;
}
for(uint64 i=0; i<stl.size(); i++)
{
auto it = triangles_.end();
triangles_.insert(it, stl.solid(i).begin(), stl.solid(i).end());
}
return true;
return true;
}
@ -61,13 +89,13 @@ pFlow::stlWall::stlWall()
pFlow::stlWall::stlWall
(
const dictionary& dict
const dictionary& dict
)
:
Wall(dict)
Wall(dict)
{
if(!readSTLWall(dict))
{
fatalExit;
}
if(!readSTLWall(dict))
{
fatalExit;
}
}

View File

@ -0,0 +1,149 @@
# geometryPhasicFlow Utility
## Overview
`geometryPhasicFlow` is a preprocessing utility for Discrete Element Method (DEM) simulations in phasicFlow. It converts wall geometry definitions from the `geometryDict` file into the internal geometry data structures used by the phasicFlow simulation engine.
This utility reads geometry definitions including wall types, material properties, and motion models from the `geometryDict` file located in the `settings` folder of your simulation case directory. It then processes these definitions to create the necessary triangulated surfaces and motion models that will be used during the simulation.
## Usage
Run the utility from your case directory containing the `settings` folder:
```bash
geometryPhasicFlow
```
For fluid-particle coupling simulations:
```bash
geometryPhasicFlow -c
```
## Wall Types
phasicFlow supports several built-in wall types that can be defined in the `geometryDict`:
1. **planeWall** - Flat wall defined by four points (p1, p2, p3, p4)
2. **cylinderWall** - Cylindrical wall defined by two axis points and radius
3. **cuboidWall** - Box-shaped wall defined by center point and dimensions
4. **stlWall** - Complex geometry imported from an STL file
## Motion Models
Walls can be associated with different motion models:
1. **stationary** - Fixed walls (no movement)
2. **rotatingAxis** - Rotation around a specified axis
3. **multiRotatingAxis** - Multiple rotations (for complex motions)
4. **vibrating** - Oscillating motion with specified frequency and amplitude
5. **conveyorBelt** - Creates a conveyor belt effect with constant tangential velocity
## geometryDict File Structure
The geometryDict file requires the following structure:
```C++
// Motion model selection
motionModel <motionModelName>;
// Motion model specific information
<motionModelName>Info
{
// Motion model parameters
// ...
}
// Wall surfaces definitions
surfaces
{
<wallName1>
{
type <wallType>; // Wall type (planeWall, cylinderWall, etc.)
// Wall type specific parameters
// ...
material <materialName>; // Material name for this wall
motion <motionName>; // Motion component name
}
<wallName2>
{
// Another wall definition
// ...
}
// Additional walls as needed
}
```
## Example
Here's a simple example of a `geometryDict` file for a rotating drum:
```C++
// Rotation around an axis
motionModel rotatingAxis;
rotatingAxisInfo
{
rotAxis
{
p1 (0.0 0.0 0.0); // First point for axis of rotation
p2 (0.0 0.0 1.0); // Second point for axis of rotation
omega 1.214; // Rotation speed (rad/s)
}
}
surfaces
{
cylinder
{
type cylinderWall; // Type of wall
p1 (0.0 0.0 0.0); // Begin point of cylinder axis
p2 (0.0 0.0 0.1); // End point of cylinder axis
radius1 0.12; // Radius at p1
radius2 0.12; // Radius at p2
resolution 24; // Number of divisions
material prop1; // Material name
motion rotAxis; // Motion component name
}
wall1
{
type planeWall; // Type of wall
p1 (-0.12 -0.12 0.0); // First point
p2 ( 0.12 -0.12 0.0); // Second point
p3 ( 0.12 0.12 0.0); // Third point
p4 (-0.12 0.12 0.0); // Fourth point
material prop1; // Material name
motion rotAxis; // Motion component name
}
}
```
## STL File Support
For complex geometries, you can use STL files:
```C++
wallName
{
type stlWall; // Type is STL wall
file filename.stl; // File name in ./stl folder
    scale 1.0;             // Optional scale for changing the size of the surface
transform (0 0 0); // Optional translation vector
scaleFirst Yes; // Scale first or translate first
material wallMat; // Material name
motion rotAxis; // Motion component name
}
```
STL files should be placed in an `stl` folder in your case directory.
## See Also
- [particlesPhasicFlow](../particlesPhasicFlow) - Utility for creating initial particle configurations
- [pFlowToVTK](../pFlowToVTK) - Utility for converting simulation results to VTK format
- [Tutorials](../../tutorials) - Example cases demonstrating phasicFlow capabilities