
BENCHMARKING PERFORMANCE OF POROUS MEDIA FLOW SIMULATIONS

FORTINO GARCIA[1], MAYANK TYAGI[2], AND KRISHNASWAMY NANDAKUMAR[2]

[1] RICE UNIVERSITY, [2] LOUISIANA STATE UNIVERSITY

ABSTRACT

Fluid flow through porous materials is at the center of many engineering and scientific applications such as oil production and filtration processes. The significant progress in high-performance computing in recent decades has immensely aided research in these fields, especially by providing a rich level of detail about the transport phenomena occurring at the pore scale.

In this work, the goal is to use the versatile open-source multiphysics package OpenFOAM to solve the problem of flow in porous media. This CFD toolbox, capable of parallelization, provides the user with a variety of solvers for different physical problems. Moreover, all the pre-processing steps involved in solving this difficult problem (mesh generation, domain decomposition) can be performed in OpenFOAM. The problem of the laminar flow of an incompressible fluid through a random, polydispersed granular pack is studied. As in any other numerical study, it is necessary to understand the scaling possibilities and the bottlenecks of the codes. The set-up of the problem, along with the results of profiling the solver with the software IPM (Integrated Performance Monitoring), is provided for a variety of cases.

METHODS & HARDWARE

The porous medium used in this work consists of 1,000 spherical particles of varying diameters (≈ 24–114 µm) with a porosity of 37%. The fluid flow in this problem can be modeled using the Navier-Stokes equations of motion. The boundary conditions in this problem are:

1. Imposed pressure gradient along one of the Cartesian coordinates (z-axis).

2. No-slip boundary condition at any solid (grain) surface.

3. The velocity at the inlet surface is computed using the known pressure field at that patch, from the flux in the direction normal to the inlet faces.
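For reference, the governing equations for the laminar, incompressible flow described above (the system solved by OpenFOAM's icoFoam solver) are the incompressible Navier-Stokes equations:

```latex
\nabla \cdot \mathbf{u} = 0, \qquad
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u},
```

where u is the velocity field, p the pressure, ρ the density, and ν the kinematic viscosity.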

Figure 1: (Left) 3D View of Mesh, (Center) Inlet Patch, (Right) Outlet Patch
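The boundary conditions listed above are set in OpenFOAM's 0/p and 0/U field files. A minimal sketch follows; the patch names (inlet, outlet, grains) and the pressure values are assumptions for illustration, not the values from this study:

```
// 0/p (excerpt) -- fixed pressure values impose the gradient along z
boundaryField
{
    inlet   { type fixedValue;   value uniform 1; }
    outlet  { type fixedValue;   value uniform 0; }
    grains  { type zeroGradient; }
}

// 0/U (excerpt) -- no-slip on grains; inlet velocity derived from the
// flux normal to the patch, given the known pressure there
boundaryField
{
    inlet   { type pressureInletVelocity; value uniform (0 0 0); }
    outlet  { type zeroGradient; }
    grains  { type fixedValue;            value uniform (0 0 0); }
}
```

The pressureInletVelocity condition matches boundary condition 3: it reconstructs the patch-normal inlet velocity from the face flux rather than prescribing it directly.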

The mesh used in this study is generated with the snappyHexMesh utility of OpenFOAM. This utility is not designed to create volume meshes of complicated, three-dimensional void spaces of varying sizes, which are common in any porous material. Therefore, certain considerations need to be taken regarding the input values for a few of the parameters of this utility (surface refinement, feature angle, and snap controls). Figure 1 shows the resulting mesh.
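The parameters mentioned above live in system/snappyHexMeshDict. A hedged sketch of the relevant entries is shown below; the values are illustrative, not the ones tuned for this geometry:

```
// system/snappyHexMeshDict (excerpt; values are illustrative)
castellatedMeshControls
{
    refinementSurfaces
    {
        grains { level (2 3); }   // min/max surface refinement level
    }
    resolveFeatureAngle 30;       // feature angle for capturing sharp edges
}

snapControls
{
    nSmoothPatch 3;               // patch-smoothing iterations before snapping
    tolerance    2.0;             // snapping distance tolerance
    nSolveIter   30;              // mesh-displacement relaxation iterations
    nRelaxIter   5;               // snapping relaxation iterations
}
```

For narrow, irregular pore throats, the surface refinement levels and feature angle are typically the settings that decide whether the void space is captured as a single connected volume.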

The problem is solved on LSU's SuperMike-II cluster using the workq nodes, each of which consists of:

• Two 2.6 GHz 8-Core Sandy Bridge Xeon 64-bit Processors

• 32 GB 1666 MHz RAM
• 500 GB HD
• 40 Gigabit/sec InfiniBand network interface
• 1 Gigabit Ethernet network interface
• Red Hat Enterprise Linux 6

Figure 2: Pressure Solution Over Entire Mesh

Figure 3: Velocity Solution Across (Left) YZ, (Center) XZ, (Right) XY View

Figure 4: Velocity Profiles On Inlet and Outlet Patches

REFERENCES

[1] AlOnazi, Amani. "Design and Optimization of OpenFOAM-based CFD Applications for Modern Hybrid and Heterogeneous HPC Platforms." 2013.

FUTURE RESEARCH

Unfortunately, due to outside constraints, the project was unable to proceed with runs on Intel Xeon Phi coprocessors. In future work, OpenFOAM must be compiled on such an architecture and studied further.

GRANT

This material is based upon work supported by the National Science Foundation under award OCI-1263236, with support from the Center for Computation & Technology at Louisiana State University.


PROFILER RESULTS AND CONCLUSIONS

• As more and more processors are added, efficiency is lost to startup (which includes loading the mesh) and to the region of the code which was not profiled (likely communication between processes in between profiled sections).

• There was a large growth in the percentage use of MPI_Probe and MPI_Allreduce as the number of processors increased.

• This bottleneck usage of MPI_Probe is likely a result of inconsistent size distributions of the data being transmitted between processors. A method to keep track of the size of the data (or the number of elements being sent and received) would greatly improve the speed and performance of the icoFoam solver.

• It is known that OpenFOAM is not optimized to avoid cache misses [1]. Improvements in data storage and cache optimization may be able to reduce the MPI communication overhead.
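The efficiency loss in the first bullet is the classic Amdahl behaviour: a fixed serial portion (startup, mesh loading, unprofiled communication) caps the achievable speedup no matter how many processors are added. A minimal sketch, using a hypothetical 5% serial fraction rather than measured SuperMike-II numbers:

```python
def speedup(p, serial_frac):
    """Amdahl's law: speedup on p processors given a fixed serial fraction."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / p)

def efficiency(p, serial_frac):
    """Parallel efficiency: speedup divided by processor count."""
    return speedup(p, serial_frac) / p

# Even a small serial fraction (e.g. startup and mesh loading) erodes
# efficiency quickly as processors are added.
for p in (1, 16, 64, 256):
    print(p, round(efficiency(p, 0.05), 3))
```

This is why the profiled runs show efficiency falling as the processor count grows: the serial fraction's share of the wall time rises while the parallel work per process shrinks.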
