F. Xavier Trias


The main lines of my ongoing research are listed below. For further details you can check my publications.
Most of the numerical simulations have been carried out with the in-house STG code (named after Soria, Trias and Gorobets, the main developers of the code).

Regularization modeling & Large-Eddy Simulation (LES)

Although formally derived from different principles, regularization and LES equations share many features and objectives. Both approaches aim to reduce the dynamical complexity of the original Navier-Stokes equations, resulting in a new set of PDEs that is more amenable to numerical solution on a coarse mesh. Regularization methods basically alter the convective term to reduce the production of small scales of motion. The first outstanding approach in this direction goes back to Leray. Other regularization models have also been proposed and tested in the last decade. Although the underlying idea remains the same, they differ in the list of properties of the Navier-Stokes equations that are exactly preserved.

Here, we propose to preserve the symmetries and conservation properties of the original convective term. In doing so, the production of smaller and smaller scales of motion is restrained in an unconditionally stable manner. The only additional ingredient is then a self-adjoint linear filter [9] whose local filter length is determined from the requirement that vortex stretching must be stopped at the scale set by the grid [8]. The ongoing research focuses on the development of new regularization models [15] while exploring their connections with LES models.
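For concreteness, two representatives of these ideas can be written down (a sketch from memory of the standard formulations; see [9,15] for the exact definitions used here). Leray regularization smooths only the convecting velocity, whereas a symmetry-preserving approximation such as C4 filters the convective bilinear form C(u,v) = (u·∇)v so that its skew-symmetry, and hence energy conservation, is retained:

```latex
% Leray: only the convecting velocity is filtered (overline = linear filter)
\partial_t u + (\overline{u}\cdot\nabla)u + \nabla p = \nu\Delta u,
\qquad \nabla\cdot u = 0
% C4 symmetry-preserving approximation of C(u,v) = (u\cdot\nabla)v,
% with u' = u - \overline{u} the residual of the filter
\widetilde{\mathcal{C}}_4(u,v) = \mathcal{C}(\overline{u},\overline{v})
  + \overline{\mathcal{C}(\overline{u},v')}
  + \overline{\mathcal{C}(u',\overline{v})}
```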

On the other hand, a simple approach to discretize the viscous term with spatially varying eddy-viscosity has been presented in [18]. The numerical approximation of this term may be quite cumbersome, especially for high-order staggered formulations. To circumvent this problem, an alternative form of the viscous term has been derived. From a numerical point of view, its most remarkable property is that it can be implemented straightforwardly by simply re-using operators that are already available in any code. Moreover, for constant viscosity, it reduces to the original formulation in a natural manner.
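For context, the term in question is the divergence of the stress with a spatially varying (eddy-)viscosity; by the definition of the rate-of-strain tensor it splits into a gradient part and a transposed-gradient part (the specific operator-reusing form derived in [18] is not reproduced here):

```latex
\nabla\cdot\bigl(2\nu_e\,S(u)\bigr)
  = \nabla\cdot\bigl(\nu_e\,\nabla u\bigr)
  + \nabla\cdot\bigl(\nu_e\,(\nabla u)^{T}\bigr),
\qquad
S(u) = \tfrac{1}{2}\bigl(\nabla u + (\nabla u)^{T}\bigr)
```

For constant viscosity and ∇·u = 0 the transposed part vanishes, which is why the constant-viscosity case reduces to the usual ν∆u.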

Presentation at the European Turbulence Conference (ETC14), September 2013, Lyon (pdf).
Presentation at the "Connections Between Regularized and Large-Eddy Simulation Methods for Turbulence" workshop at BIRS, May 2012, Banff, Alberta (Canada) (pdf).
Presentation at the European Turbulence Conference (ETC13), September 2011, Warsaw (pdf).
Presentation at the Parallel CFD conference, May 2011, Barcelona (pdf).

Natural convection flows

Buoyancy-driven flows in enclosed cavities have been the subject of numerous experimental and numerical studies in the last decades. Despite the great effort devoted, many questions remain open. Firstly, significant discrepancies are still observed between numerical and experimental studies. They are strongly connected with the role of the transitional thermal boundary layer: numerical results provide strong evidence that the flow structure cannot be captured well unless the transition point at the vertical boundary is correctly located [8]. At relatively high Rayleigh numbers, LES models have consistently failed to accurately predict the transition of the vertical boundary layer for an air-filled differentially heated cavity (DHC) of aspect ratio 5 and Ra = 4.5e10. Actually, for this configuration, recent DNS results have revealed that the transition of the vertical boundary layer occurs farther downstream than observed in the experiments [17]. The above-mentioned symmetry-preserving regularization models have shown their ability to capture the general pattern of the flow well even on very coarse meshes. On the other hand, the heat transfer scaling at (very) high Rayleigh numbers is also one of the fundamental questions in natural convection that remain open. The most recent developments in regularization modeling may also help to elucidate this issue in the near future.
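As a point of reference, the governing parameter of these configurations is the Rayleigh number, Ra = gβΔT H³/(να). A minimal sketch with illustrative properties of air near room temperature (hypothetical values, not those used in the cited simulations):

```python
# Rayleigh number for a differentially heated cavity: Ra = g*beta*dT*H**3/(nu*alpha).
# Fluid properties below are illustrative values for air near 300 K (assumed).
def rayleigh(g, beta, dT, H, nu, alpha):
    """Return the Rayleigh number based on the cavity height H."""
    return g * beta * dT * H**3 / (nu * alpha)

Ra = rayleigh(g=9.81, beta=1/300.0, dT=10.0, H=3.0, nu=1.6e-5, alpha=2.2e-5)
print(f"Ra = {Ra:.2e}")  # on the order of 1e10 for a metres-tall air cavity
```

Since Ra grows with the cube of the cavity height, experimentally relevant cavities quickly reach the Ra = 4.5e10 regime discussed above.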

Presentation at the 7th International Conference on Computational Heat and Mass Transfer (ICCHMT), July 2011, Istanbul (pdf).

Movie gallery of DNSs of natural convection flows

Forced convection flows

State-of-the-art numerical techniques are used to perform direct simulations of several forced convection flows. These simulations give new insights into the physics of turbulence and provide indispensable data for the further progress of turbulence modeling. Examples thereof are (i) a turbulent flow around a wall-mounted cube in a channel at Re_tau = 590 (Re = 7235, based on the cube height and the bulk velocity) and (ii) a turbulent plane impinging jet at Re = 20000 (based on the bulk inlet velocity and the nozzle width) with aspect ratio 4 [13]. Regarding the latter configuration, significant discrepancies have been observed with respect to the experimental works presented in the literature. They are mainly attributed to the effect of the outflow boundary conditions, usually located at x/B = 10 ∼ 15, whereas the main recirculating region clearly extends to more downstream locations. Time-averaged DNS results have revealed that the main recirculating flow cannot be captured well unless the outflow is placed at least approximately 40B from the jet centreline. This suggests that previous experimental data may not be adequate to study the flow configuration far from the jet.

Presentation at the Parallel CFD conference (PCFD09), May 2009, San Francisco (pdf).

Movie gallery of DNSs of forced convection flows

Numerical methods for CFD

The incompressible Navier-Stokes equations form an excellent mathematical model of turbulent flows. Unfortunately, attempts at performing direct numerical simulations (DNS) with the available computational resources and numerical methods are limited to relatively low Reynolds numbers. Regarding the numerical algorithms, cost reductions can be achieved by one or more of the following: (1) decreasing the number of grid points by using more accurate numerical schemes, (2) reducing the computational cost per iteration, or (3) using larger time steps, all without affecting the quality of the numerical solution. With regard to the time-integration schemes, an efficient self-adaptive strategy for the explicit time integration of the Navier-Stokes equations has recently been proposed [12]. It is based on a one-parameter second-order explicit scheme. First, the eigenvalues of the dynamical system are bounded by means of an almost inexpensive method. Second, the linear stability domain of the time-integration method is adapted in order to maximize the time step; to do so, the control parameter is tuned automatically. The method works independently of the underlying spatial mesh and is therefore suitable for both structured and unstructured codes. Compared with the standard CFL-based approach, CPU cost reductions of up to factors of 2.9 (structured) and 4.3 (unstructured) have been measured.
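The scheme of [12] is not reproduced here, but the underlying idea can be sketched under simplifying assumptions: bound the spectrum of the (linearized) right-hand-side operator with Gershgorin discs, then pick the largest time step for which the scaled bound fits inside the stability region |R(z)| ≤ 1 of a one-parameter explicit scheme. The stability polynomial R(z) = 1 + z + z²/2 + κz³ and the bisection search below are illustrative choices, not the actual algorithm of [12]:

```python
import cmath

def gershgorin_discs(A):
    """Gershgorin bound of the spectrum of A: a list of (center, radius) discs."""
    n = len(A)
    return [(A[i][i], sum(abs(A[i][j]) for j in range(n) if j != i))
            for i in range(n)]

def stable(z, kappa):
    """Is z inside the stability region |R(z)| <= 1, R(z) = 1 + z + z^2/2 + kappa*z^3?"""
    return abs(1 + z + z*z/2 + kappa*z**3) <= 1.0 + 1e-12

def max_timestep(A, kappa, dt_max=1.0, tol=1e-6):
    """Largest dt (by bisection) such that dt*z is stable for sampled points
    on the boundary of every Gershgorin disc of A."""
    samples = [c + r*cmath.exp(2j*cmath.pi*k/64)
               for c, r in gershgorin_discs(A) for k in range(64)]
    lo, hi = 0.0, dt_max
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if all(stable(mid*z, kappa) for z in samples):
            lo = mid   # lo is always a verified-stable time step
        else:
            hi = mid
    return lo

# Illustrative operator: 1D periodic convection-diffusion, diffusion-dominated
# so that the Gershgorin discs stay in the left half-plane.
n, dx, nu, c = 16, 1.0/16, 0.1, 1.0
A = [[0.0]*n for _ in range(n)]
for i in range(n):
    A[i][i] = -2*nu/dx**2
    A[i][(i+1) % n] = nu/dx**2 - c/(2*dx)
    A[i][(i-1) % n] = nu/dx**2 + c/(2*dx)

dt0 = max_timestep(A, kappa=0.0)   # plain second-order scheme
dt1 = max_timestep(A, kappa=1/6)   # tuned kappa enlarges the stability region
print(dt0, dt1)
```

With κ = 1/6 the stability polynomial coincides with the classical third-order Runge-Kutta one, whose region extends further along both axes; the resulting gain dt1/dt0 > 1 is what a self-adaptive strategy of this kind exploits on the fly, step by step.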

Regarding the first issue, the ongoing research focuses on the development of fully conservative schemes for unstructured meshes and an appropriate cure for the well-known checkerboard problem of collocated formulations [23]. Namely, the crucial symmetry properties of the underlying differential operators are exactly preserved, i.e., the convective operator is approximated by a skew-symmetric matrix and the diffusive operator by a symmetric, positive-definite matrix. Moreover, a novel approach to eliminate the spurious checkerboard modes without introducing any non-physical dissipation is proposed. To do so, a fully conservative regularization of the convective term is used. The supraconvergence of the method is shown numerically and the treatment of boundary conditions is discussed. Finally, the new discretization method is successfully tested for a buoyancy-driven turbulent flow in a differentially heated cavity.
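These symmetry properties are easy to illustrate on a 1D uniform periodic mesh with standard second-order central differences (a minimal sketch, not the unstructured scheme of [23]):

```python
def convective_matrix(n, u=1.0, dx=1.0):
    """Central-difference convection on a uniform periodic mesh: skew-symmetric."""
    C = [[0.0]*n for _ in range(n)]
    for i in range(n):
        C[i][(i+1) % n] = u/(2*dx)
        C[i][(i-1) % n] = -u/(2*dx)
    return C

def diffusive_matrix(n, nu=1.0, dx=1.0):
    """Central-difference negative Laplacian: symmetric, positive semi-definite
    on a periodic mesh (positive-definite once boundary conditions remove the
    constant mode)."""
    D = [[0.0]*n for _ in range(n)]
    for i in range(n):
        D[i][i] = 2*nu/dx**2
        D[i][(i+1) % n] = -nu/dx**2
        D[i][(i-1) % n] = -nu/dx**2
    return D

n = 8
C, D = convective_matrix(n), diffusive_matrix(n)

# Skew-symmetry C = -C^T implies u^T C u = 0 for every discrete field u,
# so the convective term adds no kinetic energy at the discrete level.
assert all(C[i][j] == -C[j][i] for i in range(n) for j in range(n))
# Symmetry D = D^T; x^T D x equals a sum of squared jumps, hence >= 0.
assert all(D[i][j] == D[j][i] for i in range(n) for j in range(n))
x = [float(i % 3) for i in range(n)]
quad = sum(x[i]*D[i][j]*x[j] for i in range(n) for j in range(n))
assert quad >= 0.0
```

Exactly these two matrix properties, not the order of accuracy, are what guarantees discrete energy conservation by convection and strict dissipation by diffusion, independently of the mesh.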

Parallel Poisson solvers and HPC

The progress in DNS is closely related with the efficient use of modern high performance computing (HPC) systems that offer a rapidly growing computing power. Since the advent of multi-core architectures, this trend is mainly based on increasing both the number of nodes and the number of cores per node. However, the number of cores tends to grow faster than the memory size and the network bandwidth. These tendencies bring new problems that must be solved in order to exploit the new computing potential efficiently. In this context, the Poisson equation, which arises from the incompressibility constraint and has to be solved at least once per time step, is usually the most time-consuming and difficult-to-parallelize part of the DNS algorithm. The Poisson solver used in our code is restricted to problems with one uniform periodic direction. It combines a block-preconditioned Conjugate Gradient (PCG) method with a Fast Fourier Transform (FFT) [5]. The Fourier diagonalization decomposes the original system into a set of mutually independent 2D systems that are solved by means of the PCG algorithm. The most ill-conditioned systems correspond to the lowest frequencies in the spectral space. For these, to avoid slow convergence, the PCG solver is replaced by a Direct Schur-complement Decomposition (DSD) method.
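The key property behind the FFT step can be checked on the 1D periodic Laplacian alone: the DFT vectors are its eigenvectors, so transforming along the periodic direction replaces it by a diagonal and turns the 3D system into a set of independent 2D systems, one per frequency. A pure-Python check (unrelated to the production FFT):

```python
import cmath, math

def periodic_laplacian(n):
    """Second-order 1D Laplacian on a uniform periodic mesh (unit spacing)."""
    L = [[0.0]*n for _ in range(n)]
    for i in range(n):
        L[i][i] = -2.0
        L[i][(i+1) % n] = 1.0
        L[i][(i-1) % n] = 1.0
    return L

n = 8
L = periodic_laplacian(n)
max_res = 0.0
for k in range(n):
    # k-th DFT vector and its eigenvalue 2*cos(2*pi*k/n) - 2
    v = [cmath.exp(2j * math.pi * k * i / n) for i in range(n)]
    lam = 2*math.cos(2*math.pi*k/n) - 2
    Lv = [sum(L[i][j]*v[j] for j in range(n)) for i in range(n)]
    max_res = max(max_res, max(abs(Lv[i] - lam*v[i]) for i in range(n)))
print(max_res)  # numerically zero: the DFT diagonalizes the periodic direction
```

Each frequency k then yields a 2D system shifted by the corresponding eigenvalue. The lowest frequencies are the ill-conditioned ones because their eigenvalues approach zero, leaving a nearly singular 2D operator, which is why they are handed to the direct Schur-complement method.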

The initial version of the Poisson solver was conceived for single-core processors and, therefore, a distributed-memory model with the message-passing interface (MPI) was used. The advent of multi-core architectures motivated the use of a two-level hybrid MPI + OpenMP parallelization with a shared-memory model on the second level [11]. Numerical experiments show that, within its range of efficient scalability, the previous MPI-only parallelization is slightly outperformed by the MPI + OpenMP approach. More importantly, the hybrid parallelization has made it possible to significantly extend the range of efficient scalability. The solver has been successfully tested on up to 12800 CPU cores for meshes with up to 1e9 grid points. Moreover, estimations based on the presented results show that this range can potentially be stretched to approximately 200,000 cores.

Following the current trends in HPC, the use of computing accelerators (GPUs in particular) is being implemented by means of the hardware-independent OpenCL standard [21]. It has been chosen because it is supported by all of the main hardware vendors, including NVIDIA, Intel, AMD and IBM. For further information you can visit the webpage of my friend and colleague Dr. Andrey Gorobets.

Presentation at the Parallel CFD conference (PCFD11), May 2011, Barcelona (pdf).

                                                                                  xavi@cttc.upc.edu       xavitrias@gmail.com