Robust design under mixed aleatory/epistemic uncertainties using gradients and surrogates

Abstract

In this paper, mixed aleatory/epistemic uncertainties in a robust design problem are propagated via the use of box-constrained optimizations and surrogate models. The assumption is that the uncertain input parameters can be divided into a set only containing aleatory uncertainties and a set with only epistemic uncertainties. Uncertainties due to the epistemic inputs can then be propagated via a box-constrained optimization approach, while the uncertainties due to aleatory inputs can be propagated via sampling. A statistics-of-intervals approach is used in which the box-constrained optimization results are treated as a random variable and multiple optimizations need to be performed to quantify the aleatory uncertainties via sampling. A Kriging surrogate is employed to model the variation of the optimization results with respect to the aleatory variables enabling exhaustive Monte-Carlo sampling to determine the desired statistics for each robust design iteration. This approach is applied to the robust design of a transonic NACA 0012 airfoil where shape design variables are assumed to have epistemic uncertainties and the angle of attack and Mach number are considered to have aleatory uncertainties. The very good scalability of the framework in the number of epistemic variables is demonstrated as well.

Introduction and motivation

Computational methods have been playing an increasingly important role in science and engineering analysis and design over the last several decades, due to the rapidly advancing capabilities of computer hardware as well as increasingly sophisticated and capable numerical algorithms. However, in spite of the rapid advances and acceptance of numerical simulations, serious deficiencies remain in terms of accuracy, uncertainty, and validation for many applications. Many real-world problems involve input data that is noisy or uncertain, due to measurement or modeling errors, approximate modeling parameters [1], manufacturing tolerances [2], in-service wear-and-tear, or simply the unavailability of the information at the time of the decision [3]. These imprecise or unknown inputs are important in the design process and need to be quantified in some fashion. To this end, uncertainty quantification (UQ) has emerged as an important area in modern computational engineering. Today, it is no longer sufficient to predict specific objectives using a particular physical model with deterministic inputs. Rather, a probability distribution function (PDF) or interval bound of the simulation objectives is required, depending on whether aleatory or epistemic uncertainties are involved [4]. Epistemic uncertainty (or type B, or reducible, uncertainty) represents a lack of knowledge about the appropriate value to use for a quantity, i.e., there is a single correct (but unknown) value [5]. This may be, for example, because a quantity has not been measured sufficiently accurately or because the model neglects certain effects. In contrast, uncertainty characterized by inherent natural randomness is called aleatory uncertainty (or type A, or irreducible, uncertainty). For discrete variables, this randomness is parameterized by the probability of each possible value. For continuous variables, the randomness is parameterized by a PDF. Regulatory agencies and design teams are increasingly being asked to specifically characterize and quantify epistemic uncertainty and separate its effect from that of aleatory uncertainty [6].

Probabilistic assessment of uncertainty in computational models consists of three major phases [7, 8]:

  1. Data assimilation, in which the input parameters are characterized as aleatory or epistemic (via appropriate PDFs or interval bounds) from observations and physical evidence

  2. Uncertainty propagation, in which the input variabilities are propagated through the mathematical model

  3. Characterization of the outputs of the numerical simulation in terms of their statistical properties

Arguably, the most computationally expensive part of UQ is the second phase. A mixed aleatory/epistemic UQ typically relies on a nested sampling strategy. Although the required number of samples grows extremely fast, these strategies are conceptually easy to understand and are capable of separating the effects of each type of uncertainty [8, 9]. For nested strategies, samples are typically first drawn from the epistemic variables, and for each set of epistemic variables, the distribution of the output due to the aleatory variables is determined by sampling the aleatory variables. The simplest approach for sampling is the Monte-Carlo (MC) method [10], which for expensive mathematical models (e.g., high-fidelity physics-based simulations) very quickly becomes prohibitively expensive due to the large number of model evaluations. For example, the number of samples required for the epistemic uncertainty grows exponentially with the number of epistemic variables [8]. To alleviate some of the cost, surrogates can be created as a function of all variables and samples extracted according to a nested strategy. For relatively low dimensions, this strategy can be effective and, when combined with gradient-enhancement, could be applied to problems of moderate dimension [11]. However, once the number of epistemic variables increases sufficiently, surrogate-based approaches again become prohibitively expensive, as the number of training points required for an accurate surrogate model increases exponentially, a phenomenon known as the 'curse of dimensionality'. In order to address this concern, combinations of sampling and optimization approaches have been explored [9, 12]. The idea is that for mixed aleatory/epistemic problems, the goal of the uncertainty quantification is to produce a region in which the function is contained with a specific level of confidence, known as a P-Box [8] or horsetail plot, as shown in Figure 1. Each bound of the confidence interval of the output distribution must itself be an interval in order to account for the epistemic uncertainties. Because only the bounds of this box are required, the sampling with respect to the epistemic variables can be replaced by one maximization and one minimization problem.
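To make the cost driver of the nested strategy concrete, the following minimal Python sketch performs the double loop with a cheap analytic stand-in for the expensive simulation output f; all variable names, distributions, and sample sizes are illustrative assumptions, not values from the paper. Each inner set of sorted samples is one horsetail curve of Figure 1, and the pointwise extremes over all curves bound the P-Box.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(alpha, beta):
    # Cheap analytic stand-in for an expensive CFD output (illustrative only).
    return np.sin(alpha[0]) + 0.1 * alpha[1] ** 2 + 0.5 * beta[0] - 0.2 * beta[1]

n_epistemic, n_aleatory = 50, 1000                                # outer / inner sample counts
beta_lo, beta_hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])   # epistemic intervals
alpha_mean, alpha_std = np.array([0.5, 1.0]), np.array([0.1, 0.2])

horsetail = []
for _ in range(n_epistemic):                            # outer loop: epistemic samples
    beta = rng.uniform(beta_lo, beta_hi)                # one epistemic realization
    alphas = rng.normal(alpha_mean, alpha_std, size=(n_aleatory, 2))
    samples = np.array([f(a, beta) for a in alphas])    # inner loop: aleatory Monte-Carlo
    horsetail.append(np.sort(samples))                  # one empirical quantile curve

# The P-Box is bounded by the pointwise min/max over all horsetail curves; note that
# this brute-force approach already needs 50 x 1000 = 50,000 evaluations of f.
p_box_lower, p_box_upper = np.min(horsetail, axis=0), np.max(horsetail, axis=0)
print(p_box_lower[:3], p_box_upper[:3])
```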

Figure 1. A P-Box where every line corresponds to a different set of values of the epistemic variables.

In principle, these mixed sampling/optimization approaches may be posed in two ways: determining intervals of statistics and determining statistics of intervals:

  • Intervals-of-statistics can be viewed as an optimization under uncertainty problem with the metric of the optimization defined as a relevant statistic of the aleatory distribution, such as the mean and variance, bounds on a confidence interval, or a reliability index [9, 13]. For each step in the optimization, the aleatory uncertainty is quantified, and the relevant statistics of the distribution are calculated and used as the objective function for the optimization.

  • Statistics-of-intervals poses an optimization problem for each set of aleatory variables, and repeated optimization evaluations over the epistemic design space can be used to determine the relevant statistics of the interval [12].

In the statistics-of-interval approach, gradient-based optimization methods can be employed, assuming that the global extrema in the epistemic design space can be found this way, reducing the cost of each optimization and ensuring very good scaling as the number of epistemic variables increases if adjoint capabilities [14, 15] are used. To reduce the number of required optimizations for low statistical errors, a surrogate model of the optimization results can be constructed with respect to the aleatory variables which can then be sampled exhaustively, ensuring that fewer optimizations are required to characterize the statistics of the interval accurately.

A last important observation for the work in this paper is that deterministic optimization tools are widely used in engineering practice; however, engineering designs do not operate exactly at their design point due to physical variability in the environment. These small variations can deteriorate the performance of deterministically optimized designs. It is, therefore, necessary to account for these uncertainties in the optimization process using optimization under uncertainty (OUU) techniques, which implies that UQ is used in the optimization loop instead of a deterministic simulation. Beginning with the seminal works of Beale [16], Dantzig [17], and Tintner [18], OUU has experienced rapid development in both theory and algorithms. Dantzig considered planning under uncertainty one of the most important open problems in optimization [19, 20]. Good overviews of the state of the art in the field of OUU are provided by Beyer and Sendhoff [21], Sahinidis [19], Giunta et al. [22] and Li [23]. An important subfield of OUU is robust optimization (RO) [24, 25], which can be subdivided into robust-design-based methods and reliability-based methods [26]. Robust design improves the quality of a product by minimizing the effect of the causes of variation without eliminating these causes. The objective here is to optimize the mean performance and minimize its variation, while maintaining feasibility with probabilistic constraints; hence robust design concentrates on the probability distribution near the mean values. The ability to identify and catalog overly conservative design margins resulting from applying safety factors on top of other safety factors, for example, is an important application of robust design, which is increasingly being viewed as an enabling technology for the design of aerospace, civil, and automotive structures subject to uncertainty [27–30]. The reliability-based methods, on the other hand, are predominantly used for risk analysis by computing the probability of failure of a system. Thus, reliability approaches concentrate on the rare events at the tails of the probability distribution.

The outline of the remainder of this paper is as follows: section 'Optimization with mixed aleatory/epistemic uncertainty' describes the employed OUU approach for mixed aleatory/epistemic uncertainties in detail. Application results of the presented approach are given in section 'Robust design of a transonic airfoil' and section 'Conclusions' concludes this paper.

Optimization with mixed aleatory/epistemic uncertainty

A conventional constrained optimization problem for an objective function, J, that is a function of input variables, D, state variables, q(D), and simulation outputs, f(D) = F(q(D),D), can be written as

$$\min_{D} \; J = J(f, q, D) \quad \text{s.t.} \quad 0 = R(q, D), \quad 0 \le g(f, q, D).$$
(1)

Here, the state equation residuals, R, are expressed as an equality constraint, and other system constraints, g, are represented as general inequality constraints. Note that R (and g) could represent any class of models; however, if gradient information is to be used, the models must be differentiable, and if surrogate models are to be employed successfully, the models must also be relatively smooth. In the case where the input variables are precisely known, all functions dependent on D are deterministic. However, given uncertain inputs, all functions in Equation (1) can no longer be treated deterministically.

Objective function evaluation

In this work, the design variables are assumed to have only aleatory or only epistemic uncertainty. Let $\alpha$ represent the variables associated with aleatory uncertainties (for example, flow boundary conditions subject to random fluctuations) and $\beta$ represent the variables with epistemic uncertainties (for example, geometric shape variables subject to manufacturing tolerances). The design variables $D = (D_\alpha, D_\beta)$ are considered to be either the mean values of the aleatory uncertainties, which are assumed to be statistically independent and normally distributed with $\alpha \sim \mathcal{N}(D_\alpha, \sigma_D^2)$, or the midpoints of the bounds on the epistemic uncertainties with $\beta \in I(D)$, where $I(D) = [D_\beta - s_D, D_\beta + s_D]$. Note that $\sigma_D$ and $s_D$ are treated as fixed, but this could easily be changed. One could also derive equations for correlated and/or non-normally distributed aleatory variables; however, the analysis and resulting equations become more complex [31] and are beyond the scope of this paper.

In order to account for both types of uncertainty, sampling is performed for the aleatory variables while optimization is performed over the epistemic variables as described in the introduction. Let f(D) = f(α,β) represent the output of interest of a simulation with uncertain inputs then the optimization can be represented mathematically as follows:

$$f_{\max}(\alpha) = \max_{\beta \in I(D)} f(\alpha, \beta)$$
(2)
$$f_{\min}(\alpha) = \min_{\beta \in I(D)} f(\alpha, \beta).$$
(3)

The functional outputs $f_{\max}$ and $f_{\min}$ can now be treated as random variables, since their only inputs are random variables with associated probability distributions. In the remainder of this paper, the subscript ext (for extrema) will be used as a placeholder for either max or min. To characterize the probability distribution of $f_{\text{ext}}$, one must extract repeated samples of $f_{\text{ext}}$ according to the underlying PDF of $\alpha$. Each sample entails solving the appropriate optimization problem, Equation (2) or (3), for the specified sample of $\alpha$. For these optimizations, an L-BFGS [32, 33] algorithm that can utilize function and gradient information is used in this work, thereby reducing the cost of each optimization and ensuring excellent scaling in the number of variables with epistemic uncertainties.
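As a concrete illustration of Equations (2) and (3), the following sketch solves the box-constrained extremum problems with SciPy's L-BFGS-B implementation as a stand-in for the L-BFGS algorithm and adjoint gradients used in the paper; the function f, its gradient, and all numerical values are illustrative assumptions rather than quantities from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def f(beta, alpha):
    # Cheap analytic stand-in for the simulation output f(alpha, beta).
    return np.sin(alpha[0]) + 0.5 * beta[0] ** 2 - 0.3 * beta[1] + 0.1 * alpha[1]

def grad_f(beta, alpha):
    # Analytic gradient with respect to beta; an adjoint solver would supply this in CFD.
    return np.array([beta[0], -0.3])

def f_ext(alpha, D_beta, s_D, mode="max"):
    """Solve Eq. (2) or (3): extremum of f over beta in I(D) = [D_beta - s_D, D_beta + s_D]."""
    sign = -1.0 if mode == "max" else 1.0               # maximize by minimizing -f
    bounds = [(m - s_D, m + s_D) for m in D_beta]       # box constraints on beta
    res = minimize(lambda b: sign * f(b, alpha), x0=np.asarray(D_beta),
                   jac=lambda b: sign * grad_f(b, alpha),
                   method="L-BFGS-B", bounds=bounds)
    return sign * res.fun

alpha_sample = np.array([1.25, 0.755])                  # one sample of the aleatory inputs
print(f_ext(alpha_sample, D_beta=[0.0, 0.0], s_D=0.005, mode="max"),
      f_ext(alpha_sample, D_beta=[0.0, 0.0], s_D=0.005, mode="min"))
```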

Nonetheless, because of the expense of these optimizations, strategies to reduce the number of samples, and thus the computational cost associated with sampling, must be employed. For this work, a surrogate is created for $f_{\text{ext}}$ as a function of the aleatory variables, which enables the extraction of a large number of samples in order to obtain accurate statistics at very low computational cost. Because the number of aleatory variables used here is relatively small, the required number of training points for an accurate surrogate is small, necessitating only a small number of optimizations. Because the optimization results are viewed as general random variables, any surrogate can be used to represent the aleatory dependence of the variables. A Kriging surrogate model is employed in this work. The details of the construction of this particular Kriging model, which can utilize gradient and Hessian information and employs a dynamic training point selection, are described in previously published papers [34–37]. The center of the Kriging domain is prescribed by the mean value of $\alpha$, $D_\alpha$, and the boundary is taken to be two standard deviations $\sigma_D$ away in all aleatory input dimensions. This implies that, for the normally distributed input variables $\alpha$, more than 97% of all MC samples fall within the Kriging domain, and the less accurate extrapolation capabilities of the Kriging surrogate model only need to be used for a small fraction of the samples. Since the purpose of this article is robust design and not the accurate prediction of tail statistics, this approach leads to very good results, as demonstrated in section 'Robust design of a transonic airfoil'.
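A minimal sketch of this step is shown below, using scikit-learn's GaussianProcessRegressor as an ordinary-Kriging stand-in for the gradient/Hessian-enhanced Kriging model with dynamic training point selection described in [34–37]; the training values, which in the actual framework come from the epistemic optimizations of Equation (2) or (3), are replaced here by a cheap analytic function, and all numbers are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

# Assumed aleatory inputs: angle of attack and Mach number, normally distributed.
mean, sigma = np.array([1.25, 0.755]), np.array([0.1, 0.01])

# Thirteen training points inside the +/- 2 sigma Kriging domain; each training
# value would normally come from one epistemic optimization, Eq. (2) or (3).
X_train = mean + sigma * rng.uniform(-2.0, 2.0, size=(13, 2))
y_train = np.array([np.sin(x[0]) + 10.0 * (x[1] - 0.75) ** 2 for x in X_train])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.1, 0.01]),
                              normalize_y=True).fit(X_train, y_train)

# Exhaustive Monte-Carlo sampling of the cheap surrogate yields the output statistics.
alphas = rng.normal(mean, sigma, size=(100_000, 2))
preds = gp.predict(alphas)
print("mean =", preds.mean(), " variance =", preds.var())
```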

The deterministic optimization problem (1) can now be rewritten. The objective function can be written in terms of the mean values of the functional outputs, $\bar{f}_{\text{ext}}$, and typically also becomes a function of the variances, $\operatorname{Var} f_{\text{ext}}$. For robust design optimizations, for example, objective functions are typically of the form

$$J = w_1\,\bar{f}_{\text{ext}} + w_2\,\operatorname{Var} f_{\text{ext}},$$
(4)

where $w_i$ are some user-specified weights. The state equation residual equality constraint, R, is treated deterministically and thus needs to be satisfied for all values of $\alpha$ and $\beta$. The inequality constraints can be cast into a probabilistic statement such that the probability that the constraints are satisfied is greater than or equal to a desired or specified probability, $P_k$. This statement can be transformed [38] into a constraint involving mean values and standard deviations (also called the moment matching formulation [39]) and the entire OUU problem can be expressed as [31, 40]

$$\min_{D_\alpha, D_\beta} \; J = J(\bar{f}_{\text{ext}}, \operatorname{Var} f_{\text{ext}}, q, \alpha, \beta) \quad \text{s.t.} \quad 0 = R(q, \alpha, \beta), \quad 0 \le g(\bar{f}_{\text{ext}}, q, \alpha, \beta) - k\,\sigma_g,$$
(5)

where $k$ is the number of standard deviations, $\sigma_g$, by which the constraint $g$ must be displaced in order to achieve $P_k$. The software package Ipopt (Interior Point Optimizer) [41] for large-scale nonlinear optimization with constraints is used for the solution of the OUU problem given by Equation (5). Ipopt also allows users to impose bound or box constraints on the design variables, which can be very helpful in ensuring the stability of the flow analysis by preventing the exploration of overly extreme regions of the design space.
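For the probabilistic inequality constraint, the displacement by $k$ standard deviations corresponds, under a normality assumption on the constraint output, to a one-sided probability $P_k = \Phi(k)$. The short sketch below illustrates this moment-matching form; the lift statistics are made-up numbers for illustration only, and SciPy is used merely to evaluate the normal CDF (the paper solves the full problem (5) with Ipopt).

```python
from scipy.stats import norm

def moment_matching_constraint(g_mean, g_sigma, k):
    """Shifted constraint of Eq. (5): feasible if g_mean - k * g_sigma >= 0."""
    return g_mean - k * g_sigma

# Under a normality assumption, the shift k maps to a one-sided probability P_k.
for k in (1, 2, 3):
    print(f"k = {k}:  P_k = {norm.cdf(k):.4f}")

# Illustrative lift statistics: mean minimum lift minus the target, and its std. dev.
g_mean, g_sigma = 0.62 - 0.60, 0.015
print("feasible for k = 1:", moment_matching_constraint(g_mean, g_sigma, 1) >= 0)
```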

Gradient evaluation

The gradient of the objective function, J, given by Equation (4) with respect to design variables associated with aleatory uncertainties is given by

$$\frac{dJ}{dD_\alpha} = \frac{\partial J}{\partial \bar{f}_{\text{ext}}} \frac{d\bar{f}_{\text{ext}}}{dD_\alpha} + \frac{\partial J}{\partial \operatorname{Var} f_{\text{ext}}} \frac{d\operatorname{Var} f_{\text{ext}}}{dD_\alpha} = w_1 \frac{d\bar{f}_{\text{ext}}}{dD_\alpha} + w_2 \frac{d\operatorname{Var} f_{\text{ext}}}{dD_\alpha}.$$
(6)

A Kriging surrogate is built to calculate $\bar{f}_{\text{ext}}$ and $\operatorname{Var} f_{\text{ext}}$ using $N$ training points, for each of which one has to calculate $f_{\text{ext}}$ by solving an optimization problem as given by Equation (2) or (3). This Kriging surrogate is then sampled extensively $\tilde{N}$ times for inputs $\alpha^k$, $k = 1,\ldots,\tilde{N}$, chosen based on their underlying probability distribution function. In this case, $\alpha \sim D_\alpha + \sigma_D Z$ with $Z \sim \mathcal{N}(0,1)$, and the Kriging predictions are represented by $\hat{f}_{\text{ext}}(\alpha^k)$. The mean of the simulation output can then be approximated by

$$\bar{f}_{\text{ext}} \approx \frac{1}{\tilde{N}} \sum_{k=1}^{\tilde{N}} \hat{f}_{\text{ext}}(\alpha^k)$$
(7)

and the derivative can be approximated at the same time with little computational overhead via [42]

$$\frac{d\bar{f}_{\text{ext}}}{dD_\alpha} \approx \frac{1}{\tilde{N}} \sum_{k=1}^{\tilde{N}} \frac{d\hat{f}_{\text{ext}}(\alpha^k)}{d\alpha^k} \frac{d\alpha^k}{dD_\alpha} = \frac{1}{\tilde{N}} \sum_{k=1}^{\tilde{N}} \frac{d\hat{f}_{\text{ext}}(\alpha^k)}{d\alpha^k},$$
(8)

where it is relatively straightforward to calculate $d\hat{f}_{\text{ext}}(\alpha^k)/d\alpha^k$ from the Kriging surrogate model [42, 43]. This is especially true if the Kriging construction process is gradient-enhanced, since this derivative must then be readily available anyway.

Similarly, the variance and its derivative can be approximated as

$$\operatorname{Var} f_{\text{ext}} \approx \frac{1}{\tilde{N}} \sum_{k=1}^{\tilde{N}} \hat{f}_{\text{ext}}^2(\alpha^k) - \bar{f}_{\text{ext}}^2$$
(9)
$$\frac{d\operatorname{Var} f_{\text{ext}}}{dD_\alpha} \approx \frac{2}{\tilde{N}} \sum_{k=1}^{\tilde{N}} \hat{f}_{\text{ext}}(\alpha^k) \frac{d\hat{f}_{\text{ext}}(\alpha^k)}{d\alpha^k} - 2\,\bar{f}_{\text{ext}} \frac{d\bar{f}_{\text{ext}}}{dD_\alpha}.$$
(10)
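Assuming the surrogate predictions $\hat{f}_{\text{ext}}(\alpha^k)$ and their derivatives with respect to $\alpha^k$ are available as arrays, Equations (7)-(10) amount to simple sample averages, as in the following sketch (the analytic 'surrogate' at the bottom is only a self-contained check, not part of the actual framework):

```python
import numpy as np

def moments_and_gradients(f_hat, df_dalpha):
    """
    Approximate Eqs. (7)-(10) from surrogate samples.
    f_hat     : (N,)   surrogate predictions  f_hat_ext(alpha^k)
    df_dalpha : (N, m) surrogate derivatives  d f_hat_ext / d alpha^k
    Since alpha^k = D_alpha + sigma_D * Z^k, d alpha^k / d D_alpha = 1, so the
    sample-wise derivatives can simply be averaged.
    """
    f_mean = f_hat.mean()                                         # Eq. (7)
    df_mean_dD = df_dalpha.mean(axis=0)                           # Eq. (8)
    f_var = (f_hat ** 2).mean() - f_mean ** 2                     # Eq. (9)
    dvar_dD = (2.0 * (f_hat[:, None] * df_dalpha).mean(axis=0)
               - 2.0 * f_mean * df_mean_dD)                       # Eq. (10)
    return f_mean, f_var, df_mean_dD, dvar_dD

# Self-contained check with the analytic 'surrogate' f = a0 + 3*a1.
rng = np.random.default_rng(2)
alphas = rng.normal([1.25, 0.755], [0.1, 0.01], size=(100_000, 2))
f_hat = alphas[:, 0] + 3.0 * alphas[:, 1]
df_dalpha = np.tile([1.0, 3.0], (alphas.shape[0], 1))
print(moments_and_gradients(f_hat, df_dalpha))
```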

The gradient of the objective function, J, with respect to design variables associated with epistemic uncertainties is given by

$$\frac{dJ}{dD_\beta} = \frac{\partial J}{\partial \bar{f}_{\text{ext}}} \frac{d\bar{f}_{\text{ext}}}{dD_\beta} + \frac{\partial J}{\partial \operatorname{Var} f_{\text{ext}}} \frac{d\operatorname{Var} f_{\text{ext}}}{dD_\beta} = w_1 \frac{d\bar{f}_{\text{ext}}}{dD_\beta} + w_2 \frac{d\operatorname{Var} f_{\text{ext}}}{dD_\beta}.$$
(11)

However, it is not trivial to calculate $d\bar{f}_{\text{ext}}/dD_\beta$ and $d\operatorname{Var} f_{\text{ext}}/dD_\beta$, where $D_\beta$ represents the midpoints of the epistemic uncertainty intervals, since moving a midpoint will, in general, lead to different extrema for the training points and thus to a different Kriging surrogate which, when sampled, leads to different values of $\bar{f}_{\text{ext}}$ and $\operatorname{Var} f_{\text{ext}}$. In contrast, the aleatory gradient was easy to obtain since one only has to take into account how the sample points change while being able to reuse the same Kriging surrogate. The current workaround is to use the approximations

$$\frac{d\bar{f}_{\text{ext}}}{dD_\beta} \approx \left.\frac{df}{dD_\beta}\right|_{D_\alpha, D_\beta}, \qquad \frac{d\operatorname{Var} f_{\text{ext}}}{dD_\beta} \approx 0,$$
(12)

that is, the derivative of the mean value is approximated by the derivative of just $f$ with respect to $D_\beta$, evaluated at the mean values of the aleatory uncertainty variables $\alpha$ and the midpoints of the intervals for the epistemic variables $\beta$. This derivative is, in general, non-zero since, for the epistemic optimizations, the extreme value is typically attained on the interval bound. The variances for the problems studied in this paper are much smaller than the mean values, which allows $d\operatorname{Var} f_{\text{ext}}/dD_\beta$ to be neglected. The following section demonstrates that the presented approach can lead to successful robust designs.

Robust design of a transonic airfoil

The steady inviscid flow over a transonic NACA 0012 airfoil is considered as a flow example; it is described in more detail in Mani and Mavriplis [44, 45]. The computational mesh has about 20,000 triangular elements. The non-dimensionalized pressure contours for an angle of attack of 1.25° and a free-stream Mach number of 0.755 are shown in Figure 2, leading to lift and drag coefficients of $C_l = 0.268$ and $C_d = 0.00521$, respectively.

Figure 2. Pressure contours and mesh for an angle of attack of 1.25° and a free-stream Mach number of 0.755.

In order to perform a robust lift-constrained drag minimization under mixed aleatory/epistemic uncertainty, one shape design variable on the upper surface and one on the lower surface, which control the magnitudes of Hicks-Henne sine bump functions [46], are allowed to vary. The resulting deformation of the mesh is calculated via a linear tension spring analogy [44, 47]. Both shape design variables are assumed to have epistemic uncertainties due to manufacturing tolerances. A zero value corresponds to the original NACA 0012 airfoil, and $s_{D_{u,l}}$ is taken to be 0.005. Figure 3 shows the original NACA 0012 airfoil and the airfoils resulting from design variable values of ±0.005.

Figure 3. NACA 0012 airfoil (black) and airfoils resulting from two shape design variable values of ±0.005 (gray).

The angle of attack and free-stream Mach number are assumed to have aleatory uncertainties which are both modeled with normal distributions. The mean values are given by the design variable values, $D_{\text{AoA}}$ and $D_M$, and the standard deviations are prescribed as $\sigma_{D_{\text{AoA}}} = 0.1$ and $\sigma_{D_M} = 0.01$, respectively. A robust design problem as given by Equation (5) can be posed by using

$$J := \bar{C}_{d_{\max}} + \sigma_{C_{d_{\max}}}^2$$
(13)

as objective function and

$$g := \bar{C}_{l_{\min}} - C_l^*, \qquad \sigma_g := \sigma_{C_{l_{\min}}}$$
(14)

as inequality constraint to maintain a target lift coefficient of $C_l^* = 0.6$. Box constraints on all four design variables are used to prevent the generation of invalid geometries from the mesh movement algorithm and to avoid solver convergence issues:

$$D_{u,l} \in [-0.025, 0.025], \quad D_{\text{AoA}} \in [0, 1.85], \quad D_M \in [0.6, 0.78].$$
(15)
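For concreteness, the robust design problem of Equations (13)-(15) could be assembled as sketched below. The statistics function is only a cheap placeholder; in the actual framework these values come from sampling the Kriging surrogates of the epistemic optimization results, and the outer problem is solved with Ipopt rather than the SciPy SLSQP stand-in used here. All placeholder coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def robust_statistics(D):
    # Placeholder model: in practice these statistics come from Kriging-sampled
    # epistemic optimization results (Eqs. (2)-(3) and (7)-(9)).
    D_u, D_l, D_aoa, D_mach = D
    cd_mean = 0.005 + 0.02 * (D_mach - 0.70) ** 2 + 0.1 * (D_u ** 2 + D_l ** 2)
    cd_std  = 0.001 + 0.002 * abs(D_mach - 0.70)
    cl_mean = 0.10 + 0.30 * D_aoa + 1.5 * (D_mach - 0.60) + 5.0 * (D_u - D_l)
    cl_std  = 0.01
    return cd_mean, cd_std, cl_mean, cl_std

def objective(D):                              # Eq. (13): J = Cd_max_mean + Cd_max_std^2
    cd_mean, cd_std, _, _ = robust_statistics(D)
    return cd_mean + cd_std ** 2

def lift_constraint(D, k=2.0, cl_target=0.6):  # Eqs. (5), (14): Cl_min_mean - Cl* - k*sigma >= 0
    _, _, cl_mean, cl_std = robust_statistics(D)
    return cl_mean - cl_target - k * cl_std

bounds = [(-0.025, 0.025), (-0.025, 0.025), (0.0, 1.85), (0.6, 0.78)]   # Eq. (15)
res = minimize(objective, x0=[0.0, 0.0, 1.25, 0.755], method="SLSQP",
               bounds=bounds, constraints=[{"type": "ineq", "fun": lift_constraint}])
print(res.x, res.fun)
```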

Even though one flow solve takes only about 10 s on 12 Intel Xeon processors (3.33 GHz each), it is still prohibitively expensive to obtain the mixed aleatory/epistemic optimization under uncertainty results through either nested sampling or exhaustive sampling of optimization results. In order to provide validation for the OUU framework with mixed aleatory/epistemic uncertainty, the combined uncertainty propagation is validated only for the initial and optimized points and only using 3,000 sample points. Before these combined results are shown, the uncertainty propagations of the aleatory and epistemic variables are validated separately.

First, optimization is used to propagate the epistemic uncertainties within the problem. For this test, the aleatory variables are fixed at their mean values, taken to be $D_{\text{AoA}} = 1.25$ and $D_M = 0.755$, and optimization is performed over the epistemic variables (with interval midpoints $D_u = D_l = 0$) to determine the associated intervals for the output functions of interest. The interval produced through optimization is validated by performing Latin hypercube sampling (with 500 samples plus the corners of the domain) over the epistemic variables, again with the aleatory variables fixed at their mean values. The excellent agreement can be seen in Table 1. Note that each optimization only took a few function and adjoint gradient evaluations.

Table 1 Validation of epistemic uncertainty propagation

With the optimization portion of the method validated, the ability of the Kriging surrogate model to capture the aleatory variation of the output functions of interest is tested next. For this test, the original NACA 0012 airfoil is used (i.e. no epistemic uncertainty), and sampling from Kriging surrogates (built from a varying number of training points, $N$) is performed over the aleatory variables with mean values $D_{\text{AoA}} = 1.25$ and $D_M = 0.755$. In order to provide validation data, full nonlinear MC (NLMC) sampling is performed over the aleatory variables, and both distributions are characterized by calculating statistics of interest using the same samples. For a reasonable trade-off between acquiring accurate statistics and computational cost for the NLMC, 3,000 samples are used. Because the epistemic variables for this test are fixed, each training point for the Kriging or sample point for the NLMC requires only a single CFD simulation. A summary of these comparisons can be found in Table 2.

Table 2 Comparison of NLMC and Kriging aleatory uncertainty propagation

The Kriging model constructed from only 13 training points yields reasonable results for a fraction of the cost of a full NLMC simulation. Thus, all the required Kriging response surfaces for the actual robust design runs are constructed from thirteen training points, and the sampling is performed using $\tilde{N} = 10^5$ Latin hypercube samples to keep the statistical error small. Lastly, in Table 3, a comparison of NLMC and Kriging predictions is presented using the same 3,000 samples for the initial airfoil and flow conditions ($D_u = D_l = 0$, $D_{\text{AoA}} = 1.25$ and $D_M = 0.755$), which demonstrates the good quality of the predictions of the proposed approach for statistics of the lift and drag coefficients. Note that this time, each training point for the Kriging or sample point for the NLMC requires a solution of the optimization problem given by Equation (2) or (3).

Table 3 Comparison of NLMC and Kriging predictions for the initial guess with two shape design variables

Using the presented framework for the entire robust design gives the results presented in Table 4. The number of required optimization iterations for convergence (norm of gradient less than $10^{-4}$) varies between 12 and 22 for all the presented cases. One can see that the average drag increases as the desired probability, $P_k$, of maintaining the target lift coefficient of $C_l^* = 0.6$ is increased. The principal mechanism for achieving this higher probability is to increase the mean Mach number. Note that a deterministic lift-constrained drag minimization yields a minimal drag of $C_d = 1.36 \times 10^{-3}$ at a Mach number of 0.734 and a lower angle of attack of 1.58°. In Table 5, a comparison of NLMC and Kriging predictions using the same 3,000 samples for the optimal design with $k = 1$ is presented, which demonstrates the quality of the predictions for statistics of the lift and drag coefficients. Also shown in the same table are the resulting statistics if the constructed Kriging is sampled $\tilde{N} = 10^5$ times. The original NACA 0012 as well as the deterministically and robustly ($k = 2$) optimized airfoils are all shown in Figure 4. One can see that the robustly optimized airfoil looks different from the deterministically optimized one, especially along the lower surface.

Table 4 Robust design results with two shape design variables
Table 5 Comparison of NLMC and Kriging predictions for optimal design with two shape design variables obtained for k =1
Figure 4. NACA 0012 at α = 1.25° (gray), deterministically (black) and robustly (k = 2, red) optimized airfoils (two shape design variables).

The total number of CFD function equivalent evaluations is approximately:

number of outer robust-design iterations × 2 (one optimization for the minimum lift and one for the maximum drag) × 13 (number of training points) × number of iterations per epistemic optimization × 2 (one function and one adjoint gradient call) ≈ 1,600.
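As an illustrative instance of this estimate (the two iteration counts below are assumptions chosen within the ranges reported above, not values stated in the paper): with 15 outer robust-design iterations and roughly 2 L-BFGS iterations per epistemic optimization,

$$15 \times 2 \times 13 \times 2 \times 2 = 1560 \approx 1600.$$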

Scalability of the framework

In order to demonstrate the scalability of the framework, the number of epistemic design variables is increased from two to six and later to fourteen. First, three shape design variables are placed on the upper surface and three on the lower surface (at 40%, 60% and 80% chords) and Figure 5 shows the original NACA 0012 airfoil and the airfoils resulting from perturbations of all six shape design variables of ±0.005. The box constraints to prevent invalid meshes and flow convergence issues are as follows:

$$D_{1,6} \in [-0.01, 0.01], \quad D_{2\text{--}5} \in [-0.02, 0.02], \quad D_{\text{AoA}} \in [0, 1.85], \quad D_M \in [0.6, 0.78],$$
(16)
Figure 5. NACA 0012 airfoil (black) and airfoils resulting from six shape design variable values of ±0.005 (gray).

where $D_{1,6}$ are the shape design variables closest to the trailing edge on the lower and upper surfaces, respectively. The robust design results are presented in Table 6. The number of required optimization iterations for convergence (again, norm of gradient less than $10^{-4}$) varies between 9 and 27 for all the presented cases.

Table 6 Robust design results with six shape design variables

Again, the average drag and mean Mach number increase as the desired probability, $P_k$, of maintaining the target lift coefficient is increased. Compared with Table 4, one also obtains lower drag coefficients since the Mach number required to maintain the same lift was reduced through shape modifications. In Table 7, a comparison of NLMC and Kriging predictions using the same 3,000 samples for the optimal design with $k = 2$ is presented, which once again demonstrates the quality of the predictions for statistics of the lift and drag coefficients. The total number of CFD function equivalent evaluations has increased from the two epistemic variable case and is approximately: number of outer robust-design iterations × 2 × 13 × number of iterations per epistemic optimization × 2 ≈ 1,900. This increase is mostly due to the fact that Ipopt now requires a few more iterations to converge the outer optimization problem. However, this is only an increase of about 20%, even though the number of epistemic variables increased from two to six. The original NACA 0012 as well as the deterministically and robustly ($k = 2$) optimized airfoils are all shown in Figure 6. Once again, one can see that the robustly optimized airfoil looks different from the deterministically optimized one, this time especially along the upper surface.

Table 7 Comparison of NLMC and Kriging predictions for the optimal design with six shape design variables obtained for k =2
Figure 6. NACA 0012 at α = 1.25° (gray), deterministically (black) and robustly (k = 2, red) optimized airfoils (six shape design variables).

As a last demonstration, the number of epistemic design variables is increased from 6 to 14. To this end, seven shape design variables are placed on the upper surface and seven on the lower surface (at 20%, 30%, 40%, 50%, 60%, 80%, and 90% chord), and Figure 7 shows the original NACA 0012 airfoil and the airfoils resulting from perturbations of all fourteen shape design variables of ±0.0025. The box constraints to prevent invalid meshes and flow convergence issues are as follows:

$$D_{1,2,13,14} \in [-0.00125, 0.00125], \quad D_{3\text{--}12} \in [-0.01, 0.01], \quad D_{\text{AoA}} \in [0, 4], \quad D_M \in [0.3, 0.78],$$
(17)
Figure 7. NACA 0012 airfoil (black) and airfoils resulting from fourteen shape design variable values of ±0.0025 (gray).

where $D_{1,14}$ are the shape design variables closest to the trailing edge on the lower and upper surfaces, respectively. The target lift coefficient is increased to $C_l^* = 0.8$ to make this problem more difficult, and the results are presented in Table 8. The number of required optimization iterations for convergence (again, norm of gradient less than $10^{-4}$) varies between 16 and 23 for all the presented cases. The average drag and mean Mach number increase as the desired probability of maintaining the target lift coefficient is increased. The total number of CFD function equivalent evaluations has increased again, from the two and six epistemic variable cases, to approximately 2,000, which is only a modest increase considering how many more epistemic variables are used.

Table 8 Robust design results with fourteen shape design variables

Conclusions

This article describes the use of gradient-based optimizations and Kriging surrogate models for the propagation of mixed aleatory/epistemic uncertainties for a robust lift-constrained drag minimization problem. Uncertainty due to epistemic variables is propagated via a box-constrained optimization approach, while the uncertainty due to aleatory variables is propagated via sampling of a Kriging surrogate model built with the optimization results. This statistics-of-intervals approach makes robust design under mixed aleatory/epistemic uncertainty possible while at the same time keeping the computational cost for these types of problems manageable.

References

  1. Luckring JM, Hemsch MJ, Morrison JH: Uncertainty in computational aerodynamics. In the 41st AIAA Aerospace Sciences Meeting & Exhibit, Reno, 6–9 January 2003

  2. Gumbert CR, Newman PA, Hou GJ: Effect of random geometric uncertainty on the computational design of 3-D wing. In the 20th AIAA Applied Aerodynamics Conference, St. Louis, 24–26 June 2002

  3. Ben-Tal A, Ghaoui LE, Nemirovski A: Foreword: special issue on robust optimization. Math. Prog. 2006, 107(1–2):1–3.

  4. Pilch M, Trucano TG, Helton JC: Ideas underlying quantification of margins and uncertainties (QMU): a white paper. Tech. Rep. SAND2006–5001, Sandia National Laboratories, Albuquerque, NM (2006)

  5. Helton JC, Johnson JD, Oberkampf WL, Storlie CB: A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Tech. Rep. SAND2006–5557, Sandia National Laboratories, Albuquerque, NM (2006)

  6. Diegert K, Klenke S, Novotny G, Paulsen R, Pilch M, Trucano T: Toward a more rigorous application of margins and uncertainties within the nuclear weapons life cycle - a Sandia perspective. Tech. Rep. SAND2007–6219, Sandia National Laboratories, Albuquerque, NM (2007)

  7. Oberkampf WL, Barone MF: Measures of agreement between computation and experiment: validation metrics. J. Comput. Phys. 2006, 217:5–36. 10.1016/j.jcp.2006.03.037

  8. Roy CJ, Oberkampf WL: A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput. Methods Appl. Mech. Eng. 2011, 200(25–28):2131–2144.

  9. Eldred MS, Swiler LP: Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Tech. Rep. SAND2009–5805, Sandia National Laboratories, Albuquerque, NM (2009)

  10. Metropolis N, Ulam S: The Monte Carlo method. J. Am. Stat. Assoc. 1949, 44:335–341. 10.1080/01621459.1949.10483310

  11. Bettis BR, Hosder S: Uncertainty quantification in hypersonic reentry flows due to aleatory and epistemic uncertainties. In the 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Orlando, FL, 4–7 January 2011

  12. Lockwood B, Anitescu M, Mavriplis DJ: Mixed aleatory/epistemic uncertainty quantification for hypersonic flows via gradient-based optimization and surrogate models. In the 50th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Nashville, Tennessee, 9–12 January 2012

  13. Gu X, Renaud J, Batill S, Brach R, Budhiraja A: Worst case propagated uncertainty of multidisciplinary systems in robust design optimization. Struct. Multidisciplinary Optimization 2000, 20(3):190–213. 10.1007/s001580050148

  14. Pironneau O: On optimum design in fluid mechanics. J. Fluid Mech. 1974, 64(1):97–110. 10.1017/S0022112074002023

  15. Errico RM: What is an adjoint model? Bull. Am. Meteorological Soc. 1997, 78(11):2577–2591.

  16. Beale EML: On minimizing a convex function subject to linear inequalities. J. R. Stat. Soc. 1955, 17B:173–184.

  17. Dantzig GB: Linear programming under uncertainty. Manage. Sci. 1955, 1:197–206. 10.1287/mnsc.1.3-4.197

  18. Tintner G: Stochastic linear programming with applications to agricultural economics. In Proceedings of the Second Symposium in Linear Programming. Edited by: Antosiewicz HA. Washington, DC, 27–29 January 1955

  19. Sahinidis NV: Optimization under uncertainty: state-of-the-art and opportunities. Comput. Chem. Eng. 2004, 28:971–983. 10.1016/j.compchemeng.2003.09.017

  20. Dantzig GB, Infanger G: Stochastic Programming: The State of the Art in Honor of George B. Dantzig. New York: Springer; 2010.

  21. Beyer HG, Sendhoff B: Robust optimization - a comprehensive survey. Comput. Methods Appl. Mech. Eng. 2007, 196(33–34):3190–3218.

  22. Giunta AA, Eldred MS, Swiler LP, Trucano TG, Wojtkiewicz SF: Perspectives on optimization under uncertainty: algorithms and applications. In the 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Albany, NY, 30 August – 1 September 2004

  23. Li M: Robust optimization and sensitivity analysis with multi-objective genetic algorithms: single- and multi-disciplinary applications. PhD thesis, University of Maryland (2007)

  24. Kouvelis P, Yu G: Robust Discrete Optimization and Its Applications. Boston: Kluwer; 1997.

  25. Ben-Tal A, Ghaoui LE, Nemirovski A: Robust Optimization. Princeton Series in Applied Mathematics. Princeton: Princeton University Press; 2009.

  26. Zang C, Friswell MI, Mottershead JE: A review of robust optimal design and its application in dynamics. Comput. Struct. 2005, 83:315–326. 10.1016/j.compstruc.2004.10.007

  27. Chen W, Allen J, Tsui K, Mistree F: Procedure for robust design: minimizing variations caused by noise factors and control factors. J. Mech. Design 1996, 118(4):478–485. 10.1115/1.2826915

  28. Chen W, Du X: Towards a better understanding of modeling feasibility robustness in engineering design. J. Mech. Design 1999, 122(4):385–394.

  29. Mourelatos Z, Liang J: A methodology for trading-off performance and robustness under uncertainty. J. Mech. Design 2006, 128:856. 10.1115/1.2202883

  30. Zaman K, McDonald M, Mahadevan S, Green L: Robustness-based design optimization under data uncertainty. Struct. Multidisciplinary Optimization 2011, 44(2):183–197. 10.1007/s00158-011-0622-2

  31. Putko MM, Newman PA, Taylor III AC, Green LL: Approach for uncertainty propagation and robust design in CFD using sensitivity derivatives. In the 15th AIAA Computational Fluid Dynamics Conference, Anaheim, CA, 11–14 June 2001

  32. Byrd RH, Lu P, Nocedal J, Zhu C: A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput. 1995, 16(5):1190–1208. 10.1137/0916069

  33. Zhu C, Byrd RH, Lu P, Nocedal J: L-BFGS-B: a limited memory FORTRAN code for solving bound constrained optimization problems. Tech. Rep. NAM-11, Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, Illinois, USA (1994)

  34. Yamazaki W, Mouton S, Carrier G: Efficient design optimization by physics-based direct manipulation free-form deformation. In the 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Victoria, British Columbia, 10–12 September 2008

  35. Rumpfkeil MP, Yamazaki W, Mavriplis DJ: Uncertainty analysis utilizing gradient and Hessian information. In the Sixth International Conference on Computational Fluid Dynamics (ICCFD6), St. Petersburg, Russia, July 2010

  36. Yamazaki W, Rumpfkeil MP, Mavriplis DJ: Design optimization utilizing gradient/Hessian enhanced surrogate model. In the 28th AIAA Applied Aerodynamics Conference, Chicago, Illinois, 28 June – 1 July 2010

  37. Rumpfkeil MP, Yamazaki W, Mavriplis DJ: A dynamic sampling method for Kriging and Cokriging surrogate models. In the 49th AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Orlando, FL, 4–7 January 2011

  38. Du X, Chen W: Methodology for managing the effect of uncertainty in simulation-based design. AIAA J. 2000, 38(8):1471–1478. 10.2514/2.1125

  39. Parkinson A, Sorensen C, Pourhassan N: A general approach for robust optimal design. Trans. ASME 1993, 115:74–80. 10.1115/1.2919328

  40. Putko MM, Taylor III AC, Newman PA, Green LL: Approach for input uncertainty propagation and robust design in CFD using sensitivity derivatives. J. Fluids Eng. 2002, 124(1):60–69. 10.1115/1.1446068

  41. Waechter A, Biegler LT: On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming. Math. Prog. 2006, 106(1):25–57. 10.1007/s10107-004-0559-y

  42. Rumpfkeil MP: Optimization under uncertainty using gradients, Hessians, and surrogate models. AIAA J. 2013, 51(2):444–451. 10.2514/1.J051847

  43. Han ZH, Zimmermann R, Goertz S: On improving efficiency and accuracy of variable-fidelity surrogate modeling in aero-data for loads context. In the CEAS 2009 European Air and Space Conference, Manchester, 26–29 October 2009

  44. Mani K, Mavriplis DJ: Unsteady discrete adjoint formulation for two-dimensional flow problems with deforming meshes. AIAA J. 2008, 46(6):1351–1364. 10.2514/1.29924

  45. Mani K, Mavriplis DJ: Adjoint-based sensitivity formulation for fully coupled unsteady aeroelasticity problems. AIAA J. 2009, 47(8):1902–1915. 10.2514/1.40582

  46. Hicks R, Henne P: Wing design by numerical optimization. J. Aircraft 1978, 15(7):407–412. 10.2514/3.58379

  47. Batina JT: Unsteady Euler airfoil solutions using unstructured dynamic meshes. AIAA J. 1990, 28(8):1381–1388. 10.2514/3.25229


Acknowledgments

This work was partially supported by the University of Dayton Research Council seed grants. I would also like to thank Karthik Mani for making his flow and adjoint solver available as well as Wataru Yamazaki for his Kriging model.

Author information

Correspondence to Markus P Rumpfkeil.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Rumpfkeil, M.P. Robust design under mixed aleatory/epistemic uncertainties using gradients and surrogates. J. Uncertain. Anal. Appl. 1, 7 (2013). https://doi.org/10.1186/2195-5468-1-7
