Full Description
An expert reference on building surrogate models, using them for optimization, quantifying their prediction uncertainty, and recognizing their potential failures, with practical implementation in MATLAB
Surrogate Modeling and Optimization explains the meaning of different surrogate models and provides an in-depth understanding of them, emphasizing how much uncertainty is associated with each model and when and how a surrogate can fail to approximate a complex function, while helping readers connect theory to practical implementation in MATLAB. The book shows readers how to obtain an accurate approximate function from as few samples as possible, allowing them to replace expensive computer simulations and experiments during design optimization, sensitivity analysis, and uncertainty quantification.
The book is organized into three parts. Part I introduces the basics of surrogate modeling. Part II reviews various theories and algorithms of design optimization. Part III presents advanced topics in surrogate modeling, including the Kriging surrogate, neural network models, multi-fidelity surrogates, and efficient global optimization using Kriging surrogates.
Each chapter contains a multitude of examples and exercise problems. Lecture slides and a solution manual for exercise problems are available for instructors on a companion website.
Topics discussed in Surrogate Modeling and Optimization include:
- Various designs of experiments, such as those developed for linear and quadratic polynomial response surfaces (PRS) in a box-like design space (a PRS fit is sketched below)
- Optimality criteria for constrained and unconstrained optimization and the most important optimization theories
- Various numerical algorithms for gradient-based optimization
- Gradient-free optimization algorithms, often referred to as global search algorithms, which require neither gradient nor Hessian information
- Detailed explanation and implementation of Kriging surrogates, often referred to as Gaussian processes, especially when samples include noise (see the Gaussian process sketch below)
- Combining a small number of high-fidelity samples with many low-fidelity samples to improve prediction accuracy (see the multi-fidelity sketch below)
- Neural network models, focusing on training uncertainty and its effect on prediction uncertainty (see the training-uncertainty sketch below)
- Efficient global optimization using either polynomial response surfaces or Kriging surrogates (the expected-improvement criterion appears in the Gaussian process sketch below)
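To give a flavor of Part I, here is a minimal MATLAB sketch of fitting a quadratic polynomial response surface to noisy samples and computing the coefficient of determination; the test function, sample locations, and noise level are illustrative assumptions, not examples from the book.

    % Quadratic polynomial response surface (PRS) fit by least squares
    rng(1);                                       % reproducible noise
    x = linspace(-1, 1, 9)';                      % samples in the scaled design space [-1, 1]
    y = 1 + 2*x + 3*x.^2 + 0.1*randn(size(x));    % assumed true quadratic plus noise
    X = [ones(size(x)), x, x.^2];                 % design matrix for y = b0 + b1*x + b2*x^2
    b = X \ y;                                    % least-squares coefficient estimates
    yhat = X*b;                                   % predictions at the sample points
    R2 = 1 - sum((y - yhat).^2)/sum((y - mean(y)).^2);  % coefficient of multiple determination
    fprintf('b = [%.3f %.3f %.3f], R^2 = %.4f\n', b, R2);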
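The book develops its own Kriging implementation in Chapter 7; purely as a stand-in, the following sketch fits a Gaussian process to noisy samples with fitrgp (which assumes MATLAB's Statistics and Machine Learning Toolbox) and then evaluates the expected-improvement criterion of Chapter 10. The test function is again an assumption.

    % Gaussian process (Kriging) fit with noise, then expected improvement (EI)
    rng(2);
    xs = linspace(0, 1, 8)';
    ys = sin(6*xs) + 0.05*randn(size(xs));        % noisy observations (assumed)
    gpr = fitrgp(xs, ys, 'KernelFunction', 'squaredexponential');
    xq = linspace(0, 1, 201)';
    [mu, sd] = predict(gpr, xq);                  % posterior mean and standard deviation
    ymin = min(ys);                               % best observation so far
    z  = (ymin - mu) ./ max(sd, eps);             % standardized improvement
    EI = (ymin - mu).*normcdf(z) + sd.*normpdf(z);  % expected improvement at each xq
    [~, k] = max(EI);
    fprintf('EGO would sample next at x = %.3f\n', xq(k));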
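Chapter 9's regression-based multi-fidelity surrogate combines a scaled low-fidelity prediction with a discrepancy function, yMF(x) = rho*yLF(x) + delta(x). The sketch below fits rho and a linear delta by least squares; the high- and low-fidelity functions are assumed for illustration only.

    % Regression-based multi-fidelity surrogate: yMF(x) = rho*yLF(x) + delta(x)
    fHF = @(x) (6*x - 2).^2 .* sin(12*x - 4);     % assumed expensive high-fidelity model
    fLF = @(x) 0.5*fHF(x) + 10*(x - 0.5) - 5;     % assumed cheap low-fidelity model
    xHF = [0; 0.4; 0.6; 1];                       % only four expensive HF samples
    yHF = fHF(xHF);
    A = [fLF(xHF), ones(size(xHF)), xHF];         % unknowns: rho and delta = d0 + d1*x
    c = A \ yHF;                                  % least-squares fit of [rho; d0; d1]
    yMF = @(x) c(1)*fLF(x) + c(2) + c(3)*x;       % multi-fidelity predictor
    fprintf('rho = %.3f, yMF(0.8) = %.3f, HF truth = %.3f\n', c(1), yMF(0.8), fHF(0.8));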
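Finally, Chapter 8's notion of training uncertainty can be seen by retraining the same network from different random initial weights and measuring the spread of its predictions. A minimal sketch, assuming the feedforwardnet and train functions of MATLAB's Deep Learning Toolbox and a made-up test problem:

    % Training uncertainty: prediction spread over repeated trainings
    rng(3);
    x = linspace(-1, 1, 25);                      % training inputs (row vector, as train expects)
    t = x.^3 + 0.05*randn(size(x));               % noisy targets (assumed test problem)
    xq = linspace(-1, 1, 101);
    preds = zeros(10, numel(xq));
    for k = 1:10                                  % ten trainings with random initial weights
        net = feedforwardnet(5);                  % one hidden layer, five neurons
        net.trainParam.showWindow = false;        % suppress the training GUI
        net = train(net, x, t);
        preds(k, :) = net(xq);
    end
    fprintf('max pointwise std. dev. across trainings: %.4f\n', max(std(preds)));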
Surrogate Modeling and Optimization is an essential learning companion for senior-level undergraduate and graduate students in all engineering disciplines, including mechanical, aerospace, civil, biomedical, and electrical engineering. The book is also valuable for industrial practitioners who apply surrogate models to solve their optimization problems.
Contents
Preface xiii
Acknowledgment xvii
About the Companion Website xix
Part I Basics of Surrogate Modeling 1
1 Introduction to Surrogate Models 3
1.1 What Is Surrogate Modeling? 3
1.2 Surrogate Models 5
1.3 Design of Experiments: Sampling 7
1.4 Interpolation Versus Extrapolation 8
1.5 Flowchart of Surrogate Modeling 10
1.6 Overview of Surrogate Modeling 12
1.7 Smoothness and Loss Function 14
2 Polynomial Response Surfaces 17
2.1 Introduction 17
2.2 Curve Fitting 19
2.3 Linear Regression 23
2.3.1 Polynomial Response Surface 23
2.3.2 Polynomial Response Surface in Multiple Dimensions 28
2.3.3 Curse of Dimensionality 30
2.3.4 Assumptions in Linear Regression 32
2.4 Goodness of Fit 33
2.4.1 Estimation of Noise in Samples 34
2.4.2 Coefficient of Multiple Determination 38
2.4.3 Cross-validation 43
2.5 Confidence of Coefficients and Backward Elimination 48
2.6 Prediction Variance 51
2.6.1 Prediction Uncertainty 52
2.6.2 Sample Sensitivity 53
2.6.3 Prediction Variance with Variable Noise 55
2.7 Outliers 57
2.8 Statistical View of Linear Regression 59
Exercise 65
3 Design of Experiments 71
3.1 Introduction 71
3.2 Design of Experiments in Box-like Domains 73
3.2.1 Scaling of Input Variables 73
3.2.2 Interpolation, Extrapolation, and Prediction Variance 75
3.2.3 Designs for Linear Polynomial Response Surfaces 79
3.2.4 Designs for Quadratic Polynomial Response Surfaces 80
3.3 Optimal Design of Experiments 90
3.3.1 D-Optimal Design 91
3.3.2 A-Optimal Design 95
3.3.3 G-Optimal Design 96
3.3.4 Minimum Bias Design 99
3.4 Space-Filling Design of Experiments 104
3.4.1 Monte Carlo Simulation 104
3.4.2 Latin Hypercube Sampling 105
3.4.3 Orthogonal Arrays 109
3.5 Review of Various Designs of Experiments 111
3.5.1 Guideline for Selecting Designs of Experiments 111
3.5.2 Good Practice for Design of Experiments 112
Exercise 113
Part II Design Optimization 117
4 Optimization Definition and Formulation 119
4.1 Introduction 119
4.2 Design Optimization Definition 120
4.2.1 Design Optimization Process 120
4.2.2 Design Variables and Feasible Domain 122
4.2.3 Graphical Optimization 126
4.3 Optimization Problem Formulation 128
4.3.1 Three-step Problem Definition 128
4.3.2 Standard Form 129
4.3.3 Normalization 130
4.3.4 Convex Function and Convex Problem 133
4.4 Optimality Criteria 135
4.4.1 Global Versus Local Optimum 135
4.4.2 Unconstrained Optimization 136
4.4.3 Constrained Optimization 141
4.4.4 Effect of Constraint Limit 149
4.4.5 Sensitivity of Optimum Solution to Parameters 151
Exercise 153
5 Numerical Optimization Algorithms 161
5.1 Introduction 161
5.2 Overview of the Numerical Optimization Process 162
5.3 Determination of Step Size 164
5.3.1 Descent Direction 164
5.3.2 Step-Size Termination Criterion 165
5.3.3 Interval Reduction Method 166
5.3.4 Quadratic Interpolation Method 167
5.4 Unconstrained Optimization Algorithms 168
5.4.1 Steepest Descent Method 169
5.4.2 Conjugate Gradient Method 171
5.4.3 Newton Method 173
5.4.4 Quasi-Newton Method 174
5.4.5 Rate of Convergence 177
5.5 Constrained Optimization Using Unconstrained Algorithms 178
5.5.1 Lagrange Multiplier Method 179
5.5.2 Penalty Function Method 180
5.6 Constrained Optimization Using Direct Methods 182
5.6.1 Sequential Linear Programming (SLP) Method 182
5.6.2 Quadratic Programming (QP) Subproblem 183
5.6.3 Constrained Steepest Descent Method 184
5.6.4 Feasible Direction Method 185
5.6.5 Constrained Quasi-Newton Method 186
5.7 MATLAB Optimization Toolbox 187
5.8 Practical Suggestions for Numerical Optimization 191
Exercise 194
6 Global Search Optimization Algorithms 197
6.1 Introduction 197
6.2 Nelder-Mead Sequential Simplex Algorithm 198
6.3 DIRECT Method 202
6.3.1 Lipschitzian Optimization 202
6.3.2 DIRECT in 1D 204
6.3.3 DIRECT Algorithm 205
6.4 Genetic Algorithms 208
6.4.1 Representation of Design 209
6.4.2 Genetic Operators 211
6.4.3 Procedure of Genetic Algorithms 212
6.4.4 Genetic Algorithm in MATLAB 215
6.4.5 When to Use Genetic Algorithm? 216
6.5 Particle Swarm Optimization 217
6.6 Simulated Annealing Optimization 221
Exercise 225
Part III Advanced Topics in Surrogate Modeling 227
7 Kriging Surrogate-Gaussian Process Model 229
7.1 Introduction 229
7.2 Kriging Philosophy 230
7.2.1 Correlation Between Two Random Variables 231
7.2.2 Kriging Surrogate Approximation 233
7.2.3 Correlation Model 235
7.3 Kriging Surrogate Model 238
7.3.1 Global Function and Distribution of Errors 238
7.3.2 Local Departure 243
7.3.3 Hyperparameters and Likelihood Function 248
7.4 Issues in Determining Hyperparameters 252
7.4.1 Lower and Upper Bounds of Hyperparameter 252
7.4.2 Finding Optimum Value of Hyperparameter 253
7.4.3 Issues Related to the Number of Samples 255
7.4.4 Computational Cost of Kriging Surrogate 256
7.4.5 Hyperparameter and Extrapolation Accuracy 261
7.4.6 Uncertainty in Kriging Predictions 263
7.4.7 Effect of Global Function 269
7.5 Numerical Implementation of Kriging Surrogate 270
7.6 Kriging with Nuggets—Fitting with Noisy Data (Gaussian Process Regression) 282
7.6.1 Kriging Surrogate with Correlated Noise 284
7.6.2 Kriging Surrogate with Homogeneous Noise 285
Exercise 290
8 Neural Network Model 295
8.1 Introduction 295
8.2 Feedforward Neural Network Model 297
8.2.1 Concept of Feedforward Neural Network 297
8.2.2 Feedforward Mechanism 299
8.2.3 Activation Functions 302
8.2.4 Backpropagation Process 306
8.3 MATLAB Functions for Feedforward Neural Network 310
8.4 Uncertainty Quantification in Neural Network Models 316
8.4.1 Training Uncertainty 317
8.4.2 Sampling Uncertainty 321
8.4.3 Confidence Intervals and Prediction Intervals 325
8.5 Issues in Feedforward Neural Network 327
8.5.1 Adaptive Learning Rate 327
8.5.2 Scaling Input Data 328
8.5.3 Overfitting 329
8.6 Neural Networks with Constraints 340
8.6.1 Penalty Method for Constraints 342
8.6.2 Regularization Using Soft-maximum 343
8.6.3 Backpropagation of Penalty Constraints 346
8.6.4 Updating Penalty Parameter 347
8.6.5 Numerical Examples 349
Exercise 350
9 Multi-fidelity Surrogate Models 353
9.1 Introduction 353
9.2 Multi-fidelity Surrogate Models 356
9.3 Regression-based Multi-fidelity Surrogate 362
9.4 Kriging-based Multi-fidelity Surrogate 369
9.4.1 Multi-fidelity Surrogate with Low-fidelity Function 369
9.4.2 Multi-fidelity Surrogate with Low-fidelity Samples 378
9.5 Sampling Strategy for Multi-fidelity Surrogate Modeling 386
9.5.1 Selecting Locations for LF and HF Samples 387
9.5.2 Allocating LF and HF Samples 388
9.6 Challenges and Recommendations 392
9.6.1 Deciding Whether to Use LF Samples 392
9.6.2 Choosing Between Multiple LF Datasets 393
9.6.3 Selecting ρ for Other Surrogates 393
9.6.4 Recommendations on Using MF Surrogates 393
Exercise 394
10 Efficient Global Optimization 397
10.1 Introduction 397
10.2 Efficient Global Optimization 399
10.2.1 Expected Improvement 400
10.2.2 Probability of Improvement 404
10.2.3 Adaptive Target for Probability of Improvement 410
10.2.4 Expected Feasibility 411
10.3 Efficient Global Optimization Using Polynomial Response Surface 413
10.4 Efficient Global Optimization Using Kriging Surrogate 421
Exercise 427
References 429
Index 435