Research

“Fall in love with some activity, and do it! Nobody ever figures out what life is all about, and it doesn't matter. Explore the world. Nearly everything is really interesting if you go into it deeply enough.”

― Richard P. Feynman

Our approach to fluid dynamics research is broad, leveraging theoretical, computational, and statistical techniques, with an emphasis on developing a hybrid modeling framework that combines physics-based models with the versatility of data-driven approaches.

Sponsored Research

Physics-Guided Multifidelity Learning for Characterization of Blunt-Body Dynamic Stability

NASA Early Stage Innovations (ESI) Award 80NSSC23K0231, 2023-2026.

Space travel to planets and moons with a sensible atmosphere requires an atmospheric entry vehicle to deliver payloads safely from orbit to the surface. The entry vehicle generally has a blunt forebody to withstand heating during the high-speed entry phase. However, blunt-body vehicles become dynamically unstable once they slow to supersonic and transonic speeds. The instabilities cause the angle of attack to oscillate, grow in amplitude over time, and diverge to the point where the vehicle tumbles, resulting in a catastrophic event. The physical mechanisms leading to this dynamic instability and its characteristics remain challenging to pin down after decades of meticulous work, owing to massive flow separation, complex wake flow, and the unsteady pressure field produced by the dramatically changing flight and flow conditions of the descending, decelerating vehicle. This research aims to develop hybrid physics-data modeling approaches for space exploration. We focus on innovating a holistic physics-guided machine learning framework for characterizing the dynamic stability and performance of reentry vehicle systems. Our framework is therefore designed to provide a trustworthy learning platform with enhanced model fusion, feature engineering, and symbolic regression capabilities. We will explore the feasibility of new learning approaches to elucidate new physical insights into vehicle stability and identify how to effectively utilize multimodal resources extracted from experiments and high-fidelity simulations.

Physics-reinforced Machine Learning Algorithms for Multiscale Closure Model Discovery

DOE Early Career Research Program Award DE-SC0019290, 2018-2023.

Advances in artificial intelligence have led to a renaissance in learning and extracting patterns from complex data. Despite successes in other areas, the application of machine learning techniques in fluid mechanics is relatively new, and most efforts have focused primarily on finding parameters for existing turbulence models. This project explores a big data approach that learns from physical constraints without assuming any heuristics for the underlying turbulence physics. Our overall research program will develop novel physics-reinforced data-driven approaches for geophysical turbulence. The research will also involve deep learning approaches that can discover closure models for complex multiscale systems. Research insights will facilitate the building of improved numerical weather prediction models and better parameterization strategies for DOE mission-relevant challenges.

Collaborative Research: Data-Driven Variational Multiscale Reduced Order Models for Biomedical and Engineering Applications

NSF Division of Mathematical Sciences DMS-2012255, 2020-2024.

Mathematical models are a fundamental tool for improving our knowledge of natural and industrial processes. Their use in practice depends on their reliability and efficiency. Reliability requires fine-tuning of the model parameters and an accurate assessment of the sensitivity to noisy inputs. Efficiency is particularly critical in optimization problems, where the computational procedure identifies the best working conditions of a complex system. These requirements lead to solving models with millions or even billions of unknowns many times over, a process that may require days or weeks of computation on high-performance computing facilities. To mitigate these costs, we need new modeling strategies that allow model runs in minutes to hours on local computing facilities (such as a laptop). Reduced order models (ROMs) are extremely low-dimensional approximations that can decrease the computational cost of current computational models by orders of magnitude. With biomedical and wind-engineering applications in mind, this project proposes novel methods of model reduction. Data and numerical results from the expensive (high-fidelity) models are combined with machine learning approaches to obtain ROMs that attain both efficiency and accuracy at an unprecedented level. The new data-driven ROM framework will finally make possible the numerical simulation of aortic dissections, pediatric surgery, or wind farm optimization on a laptop in minutes, and aims to become a critical and trustworthy tool in decision-making processes.

Development of reduced-order system models for next generation model-predictive control of comfort cooling equipment

Oklahoma Center for the Advancement of Science & Technology (OCAST), 2022-2025.

To enable maximum energy utilization and grid flexibility, building energy use needs to be well understood. Comfort and ventilation systems and equipment represent 50% of building energy use (roughly 20% of total US energy use) and are therefore the most critical to target. A main limitation in achieving the set energy conservation goals is the inability of existing equipment models to accurately account for building performance: for many residential and office building spaces, actual building energy consumption is twice the predicted utilization. Additionally, the effect of varying load conditions and transient operation of the system is less well understood, which influences grid flexibility. This project will provide the transient building modeling tools and experimental datasets needed to empower industry in designing equipment for next-generation buildings. We will address these limitations by first generating the needed dynamic datasets of unitary equipment using Hardware-In-the-Loop (HIL) techniques in OSU's state-of-the-art facilities. These datasets will then be used to create high-fidelity dynamic models. In parallel, a reduced order model will be developed from the high-fidelity model, and the experimental data generated in this project will be used to validate the reduced-order model and to quantify the effect of model order reduction.

FME NorthWind (Norwegian Research Centre on Wind Energy)

The Research Council of Norway, 2021-2029.

FME NorthWind brings together about 50 partners from research and industry all around the world. [web page]

The Centres for Environment-friendly Energy Research (FME) carry out long-term research targeted towards renewable energy, energy efficiency, CCS, and the social science aspects of energy research. The centres selected for funding must demonstrate the potential for innovation and value creation. Research activities are carried out in close collaboration between research groups, trade and industry, and the public administration, and key tasks include international cooperation and researcher training. The centres are established for a maximum period of eight years (5 + 3). FME NorthWind will bring forward outstanding research and innovation to reduce the cost of wind energy, facilitate its sustainable development, create jobs, and grow exports.

KPN Hole Cleaning Monitoring in Drilling with Distributed Sensors and Hybrid Methods

The Research Council of Norway, 2021-2025.

Effective hole cleaning during the drilling process is one of the prerequisites for reducing the incurred economic and environmental cost. Current practice is mostly based on sophisticated physics-based calculations performed before the operation starts (in some operations with real-time updates during the operation) and on human assessment of a limited number of measured parameters, for example trends in hook load when picking up and slacking off the drill string while making connections. The introduction of high-bandwidth data transmission from sensors at many positions along the string calls for methods that make full use of the increasing number of measured parameters to determine hole cleaning status more accurately and reliably. Accordingly, this project proposes to develop novel hybrid modelling approaches that combine the interpretability, robust foundation, and understanding of physics-based modelling with the accuracy, efficiency, and automatic pattern-identification capabilities of advanced machine learning and artificial intelligence algorithms, for efficient and improved monitoring of the hole cleaning process during drilling operations.

Develop design criteria for psychrometric air sampler and mixer apparatus for use in ASHRAE test standards

ASHRAE (RP-1733) Technical Committee TC 8.11 - Unitary and Room Air Conditioners and Heat Pumps, 2018-2020.

The project scope covers developing the necessary testing methods for the mixers, developing new mixers and air samplers, evaluating their performance, and, as a final step, evaluating the overall in-situ performance of the newly developed devices with coil tests at 1.5-ton, 3-ton, and 5-ton sizes. The objective of this project is to provide (i) design recommendations for measuring bulk air conditions and (ii) methods for validating the performance of a sampler and mixer combination (where a mixer can be utilized) that would provide the most accurate bulk temperature and humidity measurements at the indoor air inlet and indoor air outlet.

Effect of inlet duct and damper design on ASHRAE 37/116 fan performance and static pressure measurements

ASHRAE (RP-1743) Technical Committee TC 8.11 - Unitary and Room Air Conditioners and Heat Pumps, 2017-2020.

This project investigates the effects of inlet duct design on fan power, the optimum static pressure measurement location, and the airflow profile at the exit of the inlet duct. It offers specific guidelines for the design of the inlet ductwork, narrowing the design space to inlet duct designs that are known to have little effect on equipment performance, as shown by comparable outlet flow profiles obtained from a validated CFD method and experiments.

Blind deconvolution of massively separated turbulent flows

NASA Oklahoma Space Grant Consortium/NASA EPSCoR Research Initiation Grant, 2017-2018.

Turbulence models are a key tool for understanding complex flow physics in many applications. In this project, we explored the feasibility of a heuristics-free turbulence modeling framework, with the goal of developing robust and accurate closure models for coarse-grained simulations in massively separated turbulent flow regimes.

Featured Research

Hybrid Analysis and Modeling

I anticipate that the fusion of physics-based and data-driven modeling techniques in the context of digital twins is a highly viable approach, one that will have a disruptive impact on the development of faster and more accurate computational methods. I refer to this fusion of approaches as hybrid analysis and modeling. The disruptive step with hybrid modeling is that it can be both interactive and evolving. As every sector steadily moves from data-sparse to data-rich regimes, the abundance of data is starting to change how the modeling industry works. Therefore, a key ingredient in my vision is to develop differentiated hybrid modeling approaches for scientific and engineering applications and to make these techniques available to industry for use in design and operation through the development of highly capable digital twins.

Reinforcement Learning

A central challenge in the computational modeling and simulation of a multitude of science applications is to achieve robust and accurate closures for their coarse-grained representations, owing to the underlying highly nonlinear multiscale interactions. These closure models are common in many nonlinear spatiotemporal systems, including many transport phenomena in fluids, to account for losses due to reduced order representations. Previous data-driven closure modeling efforts have mostly focused on supervised learning approaches using high-fidelity simulation data. Reinforcement learning (RL), on the other hand, is a powerful yet relatively uncharted method for spatiotemporally extended systems. We put forth a modular dynamic closure modeling and discovery framework to stabilize Galerkin projection-based reduced order models that arise in many nonlinear spatiotemporal dynamical systems with quadratic nonlinearity.
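
As a simplified illustration of the idea, the sketch below lets a single-state epsilon-greedy agent select an eddy-viscosity amplitude that stabilizes a small quadratic Galerkin ROM; the operators, reference energy, and reward are hypothetical placeholders, not the actual framework or coefficients used in our studies.

```python
# A minimal sketch (not the full framework described above): a single-state
# epsilon-greedy agent picks an eddy-viscosity amplitude that stabilizes a
# quadratic Galerkin ROM. All operators and the reference energy are
# hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
r = 4                                      # number of retained modes
L = -0.1 * np.eye(r)                       # linear Galerkin operator (placeholder)
Q = 0.05 * rng.standard_normal((r, r, r))  # quadratic Galerkin tensor (placeholder)
actions = np.linspace(0.0, 0.5, 11)        # candidate closure (eddy-viscosity) amplitudes
q_values = np.zeros(len(actions))          # action-value estimates
counts = np.zeros(len(actions))
E_ref = 1.0                                # reference modal energy (placeholder)

def rollout(nu_e, steps=200, dt=0.01):
    """Integrate the closed ROM and return a reward penalizing energy drift."""
    a = np.ones(r)
    for _ in range(steps):
        quad = np.einsum("ijk,j,k->i", Q, a, a)
        a = a + dt * (L @ a + quad - nu_e * a)   # linear eddy-viscosity closure
        if not np.all(np.isfinite(a)):
            return -1e3                           # blow-up penalty
    return -abs(np.sum(a**2) - E_ref)

for episode in range(300):
    k = rng.integers(len(actions)) if rng.random() < 0.1 else int(np.argmax(q_values))
    reward = rollout(actions[k])
    counts[k] += 1
    q_values[k] += (reward - q_values[k]) / counts[k]   # incremental mean update

print("selected eddy-viscosity amplitude:", actions[int(np.argmax(q_values))])
```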

Dynamic Data Assimilation

Data assimilation has been extensively used for integrating models and observations to improve weather forecasts and climate projections. Earth system models are derived from fundamental governing equations and require supercomputers to solve them numerically. One of the challenges in the data assimilation cycle is the huge computational cost associated with these models. Therefore, we explore new approaches that can alleviate the cost of model integration within the data assimilation cycle. Recent progress in machine learning offers an opportunity to build computationally efficient surrogate models solely from observations, and these cheap-to-evaluate surrogate models can relieve the limiting bottlenecks of the data assimilation cycle.
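
As a minimal illustration of how a cheap surrogate can stand in for the expensive forecast model, the sketch below performs one stochastic ensemble Kalman filter analysis step; the damped-rotation surrogate, observation operator, and observation values are hypothetical placeholders, not an Earth system configuration.

```python
# A minimal sketch of one stochastic ensemble Kalman filter (EnKF) cycle in
# which the forecast is advanced by a cheap surrogate model. The surrogate is
# a hypothetical damped-rotation map standing in for a learned emulator.
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 3, 2, 50                       # state dim, obs dim, ensemble size

def surrogate_forecast(x):
    """Placeholder for a learned, cheap-to-evaluate emulator of the full model."""
    A = np.array([[0.95, -0.10, 0.0],
                  [0.10,  0.95, 0.0],
                  [0.0,   0.0,  0.90]])
    return A @ x

H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])          # observation operator (observe first two states)
R = 0.05 * np.eye(m)                     # observation-error covariance

ensemble = rng.standard_normal((n, N))   # initial ensemble (columns are members)
y_obs = np.array([0.4, -0.2])            # a single synthetic observation vector

# Forecast step: propagate each member with the surrogate
ensemble = np.column_stack([surrogate_forecast(ensemble[:, j]) for j in range(N)])

# Analysis step: Kalman update with ensemble-estimated covariances
X = ensemble - ensemble.mean(axis=1, keepdims=True)
Pf = X @ X.T / (N - 1)                                   # forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)           # Kalman gain
perturbed_obs = y_obs[:, None] + rng.multivariate_normal(np.zeros(m), R, N).T
ensemble = ensemble + K @ (perturbed_obs - H @ ensemble)

print("analysis mean:", ensemble.mean(axis=1))
```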

Nonlinear Proper Orthogonal Decomposition

Autoencoder techniques are finding increasingly common use in reduced order modeling as a means to create a latent space. This reduced order representation offers a modular data-driven modeling approach for nonlinear dynamical systems when integrated with a time series predictive model. In this work, we put forth a nonlinear proper orthogonal decomposition (POD) framework, an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for the dynamics. By eliminating the projection error due to the truncation of Galerkin models, a key enabler of the proposed nonintrusive approach is the kinematic construction of a nonlinear mapping between the full-rank expansion of the POD coefficients and the latent space where the dynamics evolve. We test our framework for model reduction of a convection-dominated system, which is generally challenging for reduced order models. Our approach not only improves the accuracy but also significantly reduces the computational cost of training and testing.
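
The sketch below outlines the two building blocks described above in PyTorch: an autoencoder acting on POD coefficients and an LSTM advancing the latent state. Layer sizes and names are illustrative assumptions, not the exact architecture reported in our work.

```python
# A minimal PyTorch sketch of nonlinear POD ingredients: an autoencoder mapping
# full-rank POD coefficients to a low-dimensional latent space, and an LSTM
# advancing the latent state in time. Sizes and names are illustrative.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_pod=64, n_latent=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_pod, 32), nn.ReLU(), nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_pod))

    def forward(self, a):
        z = self.encoder(a)          # latent representation
        return self.decoder(z), z    # reconstruction and latent code

class LatentLSTM(nn.Module):
    """Predict the next latent state from a short history of latent states."""
    def __init__(self, n_latent=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_latent, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_latent)

    def forward(self, z_seq):                 # z_seq: (batch, lookback, n_latent)
        out, _ = self.lstm(z_seq)
        return self.head(out[:, -1, :])       # next latent state

# Shape check with random data standing in for time-resolved POD coefficients
ae, dyn = Autoencoder(), LatentLSTM()
a = torch.randn(8, 64)                        # batch of POD coefficient vectors
a_rec, z = ae(a)
z_seq = torch.randn(8, 10, 4)                 # lookback window of latent states
z_next = dyn(z_seq)
print(a_rec.shape, z.shape, z_next.shape)     # (8, 64) (8, 4) (8, 4)
```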

Machine Learning for Fluid Dynamics: Turbulence Modeling

In the past few decades, an exponential increase in computational power, algorithmic advances, and new experimental data collection strategies have driven an explosion in modeling efforts that leverage information obtained from physical data. In our lab, we are developing physics-constrained machine learning tools to identify nonlinear relationships between filtered and unfiltered quantities and thereby obtain reliable closure models for large eddy simulations (LES). Our research has led to various mechanisms that exploit the strengths of different functional and structural closure modeling strategies while preserving trends from direct numerical simulations. In these studies, we developed several data-driven machine learning subgrid-scale models for LES, considering both regression and classification points of view. The ongoing research in my group aims to provide a basis for generating predictive technologies for a broad spectrum of closure modeling problems, which could facilitate improved numerical weather prediction and climate research tools.
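
As a toy illustration of the regression viewpoint, the sketch below fits a small neural network mapping local filtered-flow features to a subgrid eddy viscosity; the features and the Smagorinsky-like synthetic target are placeholders standing in for DNS-derived training data.

```python
# A minimal sketch of the supervised (regression) viewpoint: learn a map from
# filtered-flow features to a subgrid-scale quantity. Features, target, and
# data below are synthetic placeholders, not DNS-derived quantities.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_samples = 5000
# Hypothetical local features: strain-rate magnitude, vorticity magnitude, grid scale
features = rng.uniform(0.0, 1.0, size=(n_samples, 3))
# Stand-in target: a Smagorinsky-like eddy viscosity, nu_t = (Cs * delta)^2 * |S|
Cs = 0.17
nu_t = (Cs * features[:, 2]) ** 2 * features[:, 0] + 0.001 * rng.standard_normal(n_samples)

X_train, X_test, y_train, y_test = train_test_split(features, nu_t, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(40, 40), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```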

Physics Guided Machine Learning

Recent applications of machine learning, in particular deep learning, motivate the need to address the generalizability of statistical inference approaches in the physical sciences. To this end, we introduce a modular physics guided machine learning framework to improve the accuracy of such data-driven predictive engines. The chief idea in our approach is to augment the underlying learning process with the knowledge embedded in simplified theories. To emphasize their physical importance, our architecture injects features derived from these simplified theories at intermediate layers rather than in the input layer. By addressing generalizability concerns, our results suggest that the proposed feature enhancement approach significantly reduces modeling uncertainty and can be used effectively in many scientific machine learning applications, especially for systems where a theoretical, empirical, or simplified model is available to guide the learning module.
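
A minimal PyTorch sketch of this feature-injection idea is given below: the outputs of a simplified or empirical model are concatenated at an intermediate hidden layer rather than at the input. All layer sizes and names are illustrative assumptions.

```python
# A minimal sketch of physics-guided feature injection: physics-based features
# (e.g., from a simplified theory) are concatenated at an intermediate hidden
# layer rather than at the input. Sizes and names are illustrative.
import torch
import torch.nn as nn

class PGMLNet(nn.Module):
    def __init__(self, n_in=8, n_physics=2, n_hidden=32, n_out=1):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        # The second block receives the hidden state concatenated with physics features
        self.block2 = nn.Sequential(nn.Linear(n_hidden + n_physics, n_hidden), nn.ReLU())
        self.head = nn.Linear(n_hidden, n_out)

    def forward(self, x, physics_features):
        h = self.block1(x)
        h = self.block2(torch.cat([h, physics_features], dim=-1))  # injection point
        return self.head(h)

model = PGMLNet()
x = torch.randn(16, 8)                 # raw inputs
phys = torch.randn(16, 2)              # outputs of a simplified/empirical model
print(model(x, phys).shape)            # torch.Size([16, 1])
```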

Feature Engineering and Symbolic Regression

To address a key limitation of black-box learning methods, we have explored the use of symbolic regression as a principled way to identify relations and operators associated with the turbulence system dynamics. This approach combines evolutionary computation with feature engineering to provide a tool for creating new models. So far, our approach has mainly involved gene expression programming (GEP) and sequential threshold ridge regression (STRidge) algorithms. We demonstrate our results in three different applications: (i) equation discovery, (ii) truncation error analysis, and (iii) hidden physics discovery, which includes both predicting unknown source terms from a set of sparse observations and discovering subgrid-scale closure models. We illustrate that both GEP and STRidge are able to distill the Smagorinsky model from an array of tailored features in solving the Kraichnan turbulence problem. Our results demonstrate the considerable potential of these techniques for complex physics problems and reveal the importance of feature selection and feature engineering in model discovery approaches.
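
As a concrete illustration of the STRidge component, the sketch below alternates ridge solves over a candidate feature library with thresholding of small coefficients; the toy library and target are synthetic placeholders rather than turbulence data.

```python
# A minimal NumPy sketch of sequential threshold ridge regression (STRidge):
# repeatedly solve a ridge problem over a feature library and zero out small
# coefficients, keeping only the dominant terms. The library below is a toy
# placeholder; in practice it would hold candidate terms built from flow data.
import numpy as np

def stridge(Theta, y, lam=1e-3, tol=0.1, n_iters=10):
    """Return a sparse coefficient vector xi such that Theta @ xi ≈ y."""
    xi = np.linalg.lstsq(Theta, y, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(xi) < tol
        xi[small] = 0.0
        big = ~small
        if not np.any(big):
            break
        A = Theta[:, big]
        xi[big] = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    return xi

# Toy demonstration: recover a sparse relation y = 2*f0 - 0.5*f3 from 6 candidates
rng = np.random.default_rng(3)
Theta = rng.standard_normal((200, 6))
y = 2.0 * Theta[:, 0] - 0.5 * Theta[:, 3] + 0.01 * rng.standard_normal(200)
print(np.round(stridge(Theta, y), 3))
```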

Reduced Order Modeling

We are developing robust hybrid analytics approaches and algorithms that combine physics-based models with machine learning frameworks for both intrusive and non-intrusive reduced order modeling of nonlinear and nonstationary systems. Analyzing the similarities between LES and ROM, we explore and exploit a series of physics-based, data-driven, and hybrid closure models to stabilize ROM emulators that enable more accurate near real-time predictions, specifically designed for systems involving general circulation models.
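
As a minimal illustration of the common first step in both intrusive and non-intrusive ROMs, the sketch below extracts a POD basis from a snapshot matrix via the SVD and projects the snapshots onto the leading modes; the snapshot data are synthetic placeholders.

```python
# A minimal sketch of ROM construction: extract a POD basis from snapshots via
# the SVD and project the snapshots onto the leading modes. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n_grid, n_snap, r = 400, 120, 8

# Synthetic snapshot matrix (columns are flow-field snapshots)
x = np.linspace(0.0, 1.0, n_grid)[:, None]
t = np.linspace(0.0, 1.0, n_snap)[None, :]
snapshots = np.sin(2 * np.pi * (x - t)) + 0.1 * rng.standard_normal((n_grid, n_snap))

mean_field = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_field

U, S, _ = np.linalg.svd(fluct, full_matrices=False)
Phi = U[:, :r]                         # POD basis (leading r modes)
a = Phi.T @ fluct                      # modal (POD) coefficients, shape (r, n_snap)

energy = np.cumsum(S**2) / np.sum(S**2)
print(f"energy captured by {r} modes: {energy[r - 1]:.4f}")
print("reconstruction error:", np.linalg.norm(fluct - Phi @ a) / np.linalg.norm(fluct))
```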

Digital Twins

A digital twin is defined as a virtual representation of a physical asset, enabled through data and simulators, for real-time prediction, monitoring, control, and optimization of the asset to improve decision making throughout its life cycle and beyond. With the recent wave of digitalization, the latest trend in every industry is to build systems and approaches that help not only during the conceptualization, prototyping, testing, and design optimization phases but also during the operation phase. In our lab, we exploit digital twin technologies to facilitate new computational models in systems relevant to fluid dynamics.

Wind Energy & Aviation Safety

Microscale terrain-induced turbulence significantly impacts wind energy and aviation-related activities. Through collaborations with the Norwegian University of Science and Technology (NTNU) and the Computational Science and Engineering group at SINTEF in Norway, we are exploring the feasibility of a digital twin approach in the context of wind power production and aviation safety. In terms of basic science, we aim to provide an in-depth fundamental understanding of terrain-induced turbulence and wake-vortex dynamics. Our main effort consists of the development of a wind power forecast system for wind farms and a turbulence alert system for airports.

Big Data Cybernetics

In the context of emerging technologies like the digital twin, the role of cybernetics is to steer the system toward an optimal set-point. The difference between the measured output and the reference set-point, called the error signal, is applied as feedback to the controller, which generates a system input to bring the output closer to the reference. With the availability of more and more sensors and communication technologies, an increasingly large volume of data (indeed, big data) is being made available in real time. This concept offers many new perspectives to our rapidly digitizing society and its seamless interactions with many different fields.
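
As a bare-bones illustration of the feedback loop described above, the sketch below drives a first-order system toward a set-point with a proportional controller; the plant, gain, and set-point are illustrative placeholders.

```python
# A minimal sketch of the feedback idea: the error between a reference
# set-point and the measured output drives a controller that steers the system.
# The first-order plant and proportional gain are illustrative placeholders.
setpoint = 1.0
gain = 2.0                      # proportional controller gain (illustrative)
dt, steps = 0.05, 100
y = 0.0                         # measured output

for k in range(steps):
    error = setpoint - y        # error signal fed back to the controller
    u = gain * error            # controller generates the system input
    y = y + dt * (-y + u)       # simple first-order plant response

# Pure proportional control leaves a steady-state offset; an integral term
# would remove it.
print("output after feedback control:", round(y, 3))
```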

Interface Learning

While immense advances in computational mathematics and scientific computing have come to fruition, such simulations still suffer from a curse of dimensionality that limits turnaround time. In our lab, we introduce a machine learning framework to learn physically accurate interface boundary conditions without the need to unnecessarily resolve the whole computational domain. In other words, we aim at minimizing (indeed, preventing) communications from surrounding regions to the region we are most interested in. We advocate the development of such interface learning methodologies to model the information exchange at the interface, an emerging topic that will have far-reaching impact on a large variety of problems in science and engineering.
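
As a toy illustration of this idea, the sketch below closes a 1D diffusion problem at a domain interface with a learned least-squares map from nearby interior values, so the right half of the domain never needs to be resolved; the setup and surrogate are illustrative placeholders rather than our actual methodology.

```python
# A minimal sketch of interface learning: instead of resolving the whole
# domain, learn a map that supplies the boundary condition at a domain
# interface from nearby interior values. The 1D diffusion setup and the
# linear least-squares surrogate are illustrative placeholders.
import numpy as np

nx, nt, alpha, dt, dx = 101, 400, 1.0, 2e-5, 0.01
i_int = 50                                       # interface index (x = 0.5)

# "Truth": explicit diffusion on the full domain with fixed end values
u = np.sin(np.pi * np.linspace(0, 1, nx))
truth_hist = []
for _ in range(nt):
    truth_hist.append(u.copy())
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
truth_hist = np.array(truth_hist)

# Learn the interface value from two interior points on the resolved (left) side
X = truth_hist[:, [i_int - 2, i_int - 1]]
y = truth_hist[:, i_int]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Re-solve only the left subdomain, closing it with the learned interface value
ul = truth_hist[0, : i_int + 1].copy()
for _ in range(nt - 1):
    ul[i_int] = coef @ ul[[i_int - 2, i_int - 1]]          # learned interface BC
    ul[1:i_int] += alpha * dt / dx**2 * (ul[2:i_int + 1] - 2 * ul[1:i_int] + ul[:i_int - 1])

print("interface-closed vs. truth (left subdomain) max error:",
      np.abs(ul - truth_hist[-1, : i_int + 1]).max())
```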

Sponsors

We are grateful for the support from DOE, NSF, NASA, ASHRAE, NVIDIA, and RCN.