Dominic Röhm
Latest revision as of 11:17, 20 October 2015
PhD student
Office: 1.077
Phone: +49 711 685-67705
Fax: +49 711 685-63658
Email: Dominic.Roehm _at_ icp.uni-stuttgart.de
Address: Dominic Röhm, Institute for Computational Physics, Universität Stuttgart, Allmandring 3, 70569 Stuttgart, Germany
Research interests
Investigation of heterogeneous nucleation, i.e. the early stage of crystal growth, in a colloidal model system near a wall by Molecular Dynamics computer simulations. My focus is on the influence of the hydrodynamic interaction, which is often neglected, since nucleation is considered a quasistatic process.

[Figure: Snapshot of a partly crystallized system without hydrodynamics]
However, recent experiments have shown that the way the sample is thermalized has a drastic influence on the nucleation rate: it decreases by an order of magnitude if the sample is melted by a turbulent flow rather than a laminar one. In our simulations, hydrodynamic interactions are incorporated by coupling the particles to a lattice fluid representing the solvent.

[Figure: Snapshot of a colloidal system with hydrodynamics]
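The particle-fluid coupling mentioned above is typically a frictional point coupling: each particle feels a drag proportional to its velocity relative to the local fluid velocity, plus a random force obeying the fluctuation-dissipation theorem. A minimal sketch, not ESPResSo's actual implementation; `gamma`, `kT`, and `dt` are illustrative values:

```python
import numpy as np

def coupling_force(v_particle, u_fluid, gamma=20.0, kT=1.0, dt=0.01, rng=None):
    """Frictional point coupling of an MD particle to the local fluid
    velocity, plus a random force obeying the fluctuation-dissipation
    theorem (variance 2*gamma*kT/dt for a discrete time step)."""
    if rng is None:
        rng = np.random.default_rng(0)
    drag = -gamma * (v_particle - u_fluid)        # friction term
    noise = np.sqrt(2.0 * gamma * kT / dt) * rng.standard_normal(3)
    return drag + noise

# a particle moving through a fluid at rest feels a drag opposing its motion
f = coupling_force(np.array([0.1, 0.0, 0.0]), np.zeros(3))
```

Momentum transferred to the particle is put back into the fluid with the opposite sign, so the pair conserves momentum on average.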
Since the computation of the hydrodynamic interaction is still orders of magnitude more expensive than that of classical interactions, our MD simulation software ESPResSo employs GPUs for the calculation of the lattice fluid, which was realized during my diploma thesis. This allows us to investigate, on a single GPU, systems that previously required a small computer cluster. Using a cluster equipped with GPUs, we want to exploit this speedup to systematically investigate the influence of hydrodynamics on heterogeneous nucleation.

Another field of interest is heterogeneous multiscale methods to model shockwave propagation in metals. The multiscale simulation is based on a non-oscillatory high-resolution scheme for two-dimensional hyperbolic conservation laws proposed by Jiang and Tadmor. I applied this general framework to a set of differential equations for elastodynamics in order to model materials under extreme conditions. My collaborators and I computed the evolution of a mechanical wave in a perfect copper crystal on the macroscale by evaluating stress and energy fluxes on the microscale. A finite-volume method was used as the macroscale solver, which launches a lightweight MD simulation (called CoMD) for every volume element to incorporate details from the microscale. Since the execution of an MD simulation is rather costly, we reduced the number of actual MD simulations through the use of an adaptive sampling scheme. Our adaptive scheme utilizes a key-value database for ordinary Kriging and a gradient analysis to reduce the number of finer-scale response function evaluations. Additionally, a mapping scheme has been implemented to handle duplicate calls of the MD simulation efficiently.
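The mapping scheme for duplicate MD calls can be sketched as follows: fine-scale inputs are quantized and used as keys into a key-value store, so repeated or nearly identical requests reuse the stored response instead of launching CoMD again. All names here (`fine_scale_response`, `ResponseCache`) are illustrative, not the actual D2KAS code:

```python
# Sketch of the duplicate-call mapping scheme: fine-scale results are cached
# in a key-value store so repeated calls with (numerically) identical inputs
# reuse the stored response instead of launching another MD run.

def fine_scale_response(strain: float) -> float:
    # placeholder for an expensive MD evaluation (e.g. a stress flux)
    return 2.0 * strain + 0.1 * strain**3

class ResponseCache:
    def __init__(self, decimals: int = 6):
        self.store = {}           # key-value database (here: in-memory dict)
        self.decimals = decimals  # inputs are quantized to detect duplicates
        self.evaluations = 0      # number of actual fine-scale calls

    def query(self, strain: float) -> float:
        key = round(strain, self.decimals)
        if key not in self.store:
            self.store[key] = fine_scale_response(key)
            self.evaluations += 1
        return self.store[key]

cache = ResponseCache()
inputs = [0.01, 0.02, 0.0100000001, 0.02]   # two near-duplicates
results = [cache.query(s) for s in inputs]
print(cache.evaluations)  # → 2
```

In the real scheme the keys are full microscale states rather than a single scalar, but the memoization idea is the same.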
Kriging, also known as Best Linear Unbiased Prediction (BLUP), estimates an unknown value at a certain location using weighted averages of the neighboring points. It also provides an error estimate, which we use to decide whether a value is accepted or whether we have to evaluate the finer-scale response function with the help of CoMD. So far, my focus is on how the accuracy of the physical values is affected by the various thresholds in our adaptive scheme, and on their connection to the overall performance. It appears that the enhanced adaptive scheme is sufficiently robust and efficient to allow for the future inclusion of details present in real materials, e.g. interstitials, vacancies, and phase boundaries.
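The ordinary-Kriging estimate and its error bar can be sketched in a few lines of numpy. The Gaussian covariance model and the threshold value below are assumptions for illustration, not the scheme's actual parameters:

```python
import numpy as np

def ok_predict(X, y, x0, corr_len=1.0):
    """Ordinary-Kriging prediction and variance at x0 from samples (X, y)."""
    def cov(a, b):  # Gaussian covariance model (an assumption of this sketch)
        return np.exp(-((a - b) / corr_len) ** 2)
    n = len(X)
    # Kriging system with a Lagrange multiplier enforcing sum(weights) == 1
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(X[:, None], X[None, :])
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(X, x0)
    sol = np.linalg.solve(A, b)
    weights, mu = sol[:n], sol[n]
    pred = weights @ y                         # BLUP estimate
    var = cov(x0, x0) - weights @ b[:n] - mu   # prediction variance
    return pred, var

X = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 4.0])
pred, var = ok_predict(X, y, 0.5)
# accept the kriged value only if its error estimate is below a threshold;
# otherwise fall back to the fine-scale model (CoMD in the text)
use_surrogate = var < 0.05
```

Because the predictor interpolates exactly, the variance vanishes at sampled points and grows with distance from them, which is what makes it usable as an accept/reject criterion.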
Diploma Thesis
Lattice-Boltzmann Simulations on GPUs
In coarse-grained Molecular Dynamics (MD) simulations of large macromolecules, the number of solvent molecules is normally so large that most of the computation time is spent on the solvent. For this reason, one is interested in replacing the solvent by a lattice fluid using the Lattice-Boltzmann (LB) method. The LB method is well known, and on large length and time scales it leads to a hydrodynamic flow field that satisfies the Navier-Stokes equation. If the lattice fluid is to be coupled to a conventional MD simulation of the coarse-grained particles, it is necessary to thermalize the fluid. While the MD particles are easily coupled to the fluid via friction terms, the correct thermalization of the lattice fluid requires switching into mode space, which makes thermalized LB more complex and computationally expensive.

[Figure: Snapshot of a turbulent flow field computed on a GPU]
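The basic (athermal) LB update that the GPU code parallelizes, a BGK collision toward a local equilibrium followed by streaming along the lattice directions, can be sketched in numpy for a D2Q9 lattice. This is a textbook sketch, not the thermalized ESPResSo kernel; the mode-space noise discussed above is omitted:

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order equilibrium distribution, shape (9, nx, ny)."""
    cu = np.einsum('qd,xyd->qxy', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lb_step(f, tau):
    """One BGK collision + streaming step with periodic boundaries."""
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau            # relax to equilibrium
    for q in range(9):                                 # stream along c_q
        f[q] = np.roll(f[q], (int(c[q, 0]), int(c[q, 1])), axis=(0, 1))
    return f

nx = ny = 16
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny, 2)))
for _ in range(10):
    f = lb_step(f, tau=0.8)
# collision and streaming both conserve mass: f.sum() stays at nx*ny
```

Each lattice site collides independently and streaming is a fixed-stencil shift, which is why the method maps so well onto the GPU's parallel architecture.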
However, the LB method is particularly well suited for the highly parallel architecture of graphics processors (GPUs). I am working on a fully thermalized GPU LB implementation that is coupled to an MD simulation running on a conventional CPU, using the simulation package ESPResSo (http://www.espressomd.org). On a single NVIDIA GTX 480 or C2050, this implementation is about 50 times faster than on a recent Intel Xeon E5620 quad-core, therefore replacing a full compute rack by a single desktop PC with a high-end graphics card. Furthermore, due to the communication overhead of the LB CPU code, the performance of a single NVIDIA Tesla C2050 cannot be reached even on a cluster: performance measurements on an AMD Opteron CPU cluster (1.9 GHz) showed that even 96 CPU nodes are up to 12 times slower than a single GPU.
Publications

Röhm, Dominic and Pavel, Robert S. and Barros, Kipton and Rouet-Leduc, Bertrand and McPherson, Allen L. and Germann, Timothy C. and Junghans, Christoph.
"Distributed Database Kriging for adaptive sampling (D2KAS)".
Computer Physics Communications, 2015.

Kratzer, Kai and Röhm, Dominic and Arnold, Axel.
"Homogeneous and Heterogeneous Crystallization of Charged Colloidal Particles".
In High Performance Computing in Science and Engineering 2014, pages 31–45.
Springer, 2015.

Röhm, Dominic and Kesselheim, Stefan and Arnold, Axel.
"Hydrodynamic interactions slow down crystallization of soft colloids".
Soft Matter 10(30):5503–5509, 2014.
Bertrand Rouet-Leduc and Kipton Barros and Emmanuel Cieren and Venmugil Elango and Christoph Junghans and Turab Lookman and Jamaludin Mohd-Yusof and Robert S. Pavel and Axel Y. Rivera and Dominic Röhm and Allen L. McPherson and Timothy C. Germann.
"Spatial adaptive sampling in multiscale simulation".
Computer Physics Communications (7):1857–1864, 2014.
Röhm, Dominic and Kratzer, Kai and Arnold, Axel.
"Heterogeneous and Homogeneous Crystallization of Soft Spheres in Suspension".
In High Performance Computing in Science and Engineering '13, pages 33–52. Editors: Nagel, Wolfgang E. and Kroener, Dietmar H. and Resch, Michael M.,
Springer International Publishing, 2013.
Axel Arnold and Olaf Lenz and Stefan Kesselheim and Rudolf Weeber and Florian Fahrenberger and Dominic Röhm and Peter Košovan and Christian Holm.
"ESPResSo 3.1 – Molecular Dynamics Software for CoarseGrained Models".
In Meshfree Methods for Partial Differential Equations VI, volume 89 of Lecture Notes in Computational Science and Engineering, pages 1–23. Editors: M. Griebel and M. A. Schweitzer,
Springer Berlin Heidelberg, 2013.
D. Röhm and A. Arnold.
"Lattice Boltzmann simulations on GPUs with ESPResSo".
European Physical Journal Special Topics 210:89–100, 2012.
Curriculum vitae
Scientific education
* May 2011 – ... : Doctorate studies at the Institute for Computational Physics (University of Stuttgart)
* Aug. 2011: Won the NVIDIA Best Program Award, a CUDA competition held at the 20th International Conference on Discrete Simulation of Fluid Dynamics 2011, Fargo, USA
* May 2011: Diploma in Physics at the University of Stuttgart (Diplomarbeit)
* Oct. 2005 – May 2011: Studies of Physics at the University of Stuttgart
* June 2013 – Nov. 2013: Internship at the Los Alamos National Laboratory, USA