【AP】a practical guide to robust optimization (1)
Navigator
Paper Link
a practical guide to robust optimization
Pub Date
2015.01.03. Omega
Abstract
Robust optimization is very useful in practice, since it is tailored to the information at hand and it leads to computationally tractable formulations.
The aim of this paper is to help practitioners understand robust optimization and apply it successfully in practice. We provide a brief introduction to robust optimization, and also describe important do's and don'ts for using it in practice. We use many small examples to illustrate our discussions.
Introduction
In optimization problems where the data is uncertain, either robust optimization (RO) or stochastic optimization (SO) can be used.
Stochastic optimization (SO): assumes that the probability distribution of the uncertain data is known.
SO has an important assumption, i.e., the true probability distribution of the uncertain data must be known or estimated. If this condition is met and the reformulation of the uncertain optimization problem is computationally tractable, then SO is the methodology to solve the uncertain optimization problem at hand [1].
Robust optimization (RO): the uncertain data is only assumed to lie in a given set (the uncertainty set).
RO does not assume that probability distributions are known; instead it assumes that the uncertain data resides in a so-called uncertainty set. Additionally, basic versions of RO assume 'hard' constraints, i.e., constraint violation cannot be allowed for any realization of the data in the uncertainty set [2].
For applications of robust optimization in finance, see Lobo's work [3].
Introduction to robust optimization
robust optimization paradigm
For the sake of exposition, we use an uncertain linear optimization problem, but we point out that most of the discussions in this paper can be generalized to other classes of uncertain optimization problems.
A general uncertain linear optimization problem can be modeled as follows:
$$\min_{\mathbf{x},\ (\mathbf{c}, \mathbf{A}, \mathbf{d})\in\mathcal{U}} \mathbf{c}^T\mathbf{x}:\ \mathbf{A}\mathbf{x}\leq \mathbf{d}\tag{1}$$
where $\mathbf{c}\in\mathbb{R}^n$, $\mathbf{A}\in\mathbb{R}^{m\times n}$, and $\mathbf{d}\in\mathbb{R}^m$ are the uncertain coefficients, and $\mathcal{U}$ is the user-specified uncertainty set. Basic robust optimization rests on the following three assumptions [4]:
A.1 All decision variables $\mathbf{x}\in\mathbb{R}^n$ represent 'here and now' decisions: they should get specific numerical values as a result of solving the problem before the actual data 'reveals itself'.
A.2 The decision maker is fully responsible for the consequences of the decisions to be made when, and only when, the actual data is within the prespecified uncertainty set $\mathcal{U}$.
A.3 The constraints of the uncertain problem in question are 'hard', i.e., the decision maker cannot tolerate violations of constraints when the data is in the prespecified uncertainty set $\mathcal{U}$.
Beyond the basic assumptions, the following can be assumed without loss of generality:
- The objective function is certain.
- The right-hand sides of the constraints are certain.
- $\mathcal{U}$ is compact and convex.
- The uncertainty is constraint-wise.
None of these assumptions is actually restrictive.
Below, we explain the technical reasons why these four assumptions are not restrictive.
E1 Suppose the coefficient vector $\mathbf{c}$ in the objective is uncertain and lies in an uncertainty set $\mathcal{C}$:
$$\min_{\mathbf{x}}\max_{\mathbf{c}\in\mathcal{C}} \{\mathbf{c}^T\mathbf{x}:\mathbf{A}\mathbf{x}\leq \mathbf{d}\quad \forall \mathbf{A}\in\mathcal{U}\}$$
Introducing an auxiliary variable $t\in\mathbb{R}$, the problem can be reformulated equivalently as
$$\min_{\mathbf{x}, t}\{t: \mathbf{c}^T\mathbf{x}-t\leq 0\quad \forall\mathbf{c}\in\mathcal{C},\quad \mathbf{A}\mathbf{x}\leq \mathbf{d}\quad \forall\mathbf{A}\in\mathcal{U}\}$$
E2 The second assumption is not restrictive because an uncertain right-hand side can always be turned into an uncertain coefficient by introducing an extra variable $x_{n+1}=-1$.
E3 The set $\mathcal{U}$ can be replaced by its convex hull $conv(\mathcal{U})$, because testing the feasibility of a solution with respect to $\mathcal{U}$ is equivalent to taking the supremum of the left-hand side of a constraint over $\mathcal{U}$, which yields the same optimal objective value if the maximization is instead taken over $conv(\mathcal{U})$.
E4 Consider a problem with two constraints and uncertain parameters $d_1$ and $d_2$:
$$\left\{ \begin{aligned} &x_1+d_1\leq 0\\ &x_2+d_2\leq 0\\ &\mathcal{U}=\{\mathbf{d}\in\mathbb{R}^2: d_1\geq 0,\ d_2\geq 0,\ d_1+d_2\leq 1 \} \end{aligned} \right.$$
Here $\mathcal{U}_i=[0, 1]$ is the projection of $\mathcal{U}$ onto $d_i$. Since constraint $i$ involves only its own parameter $d_i$, the worst case of each constraint depends only on this projection, so replacing $\mathcal{U}$ by the box $\mathcal{U}_1\times\mathcal{U}_2$ does not change the robust counterpart; hence assuming constraint-wise uncertainty is without loss of generality.
If we now take $\mathbf{c}\in\mathbb{R}^n$ and $\mathbf{d}\in\mathbb{R}^m$ to be certain, the robust reformulation (robust counterpart, RC) of problem (1) becomes
$$\min_{\mathbf{x}}\{\mathbf{c}^T\mathbf{x}: \mathbf{A}(\pmb{\zeta})\mathbf{x}\leq \mathbf{d}\quad \forall \pmb{\zeta}\in\mathcal{Z}\}\tag{2}$$
where $\mathcal{Z}\subset \mathbb{R}^L$ is the user-specified primitive uncertainty set. A solution $\mathbf{x}\in\mathbb{R}^n$ is called a robust feasible solution if it satisfies the uncertain constraints $\mathbf{A}(\pmb{\zeta})\mathbf{x}\leq \mathbf{d}\ \ \forall \pmb{\zeta}\in\mathcal{Z}$.
Writing out a single constraint of (2) gives
$$(\mathbf{a}+\mathbf{P}\pmb{\zeta})^T\mathbf{x}\leq d\quad \forall \pmb{\zeta}\in\mathcal{Z}\tag{3}$$
In the left-hand side of (3), we use a factor model to formulate a single constraint of (2) as an affine function $\mathbf{a}+\mathbf{P}\pmb{\zeta}$ of the primitive uncertain parameter $\pmb{\zeta}\in\mathcal{Z}$, where $\mathbf{a}\in\mathbb{R}^n$ and $\mathbf{P}\in\mathbb{R}^{n\times L}$. One of the most famous examples of such a factor model is the 3-factor model of Fama and French, which models different types of assets as linear functions of a limited number of uncertain economic factors. Note that the dimension of the general uncertain parameter is often much higher than that of the primitive uncertain parameter ($n \gg L$).
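As a small, purely illustrative sketch (not from the paper), the following Python snippet builds the coefficient vector $\mathbf{a}+\mathbf{P}\pmb{\zeta}$ of constraint (3) for a hypothetical case with $n=4$ coefficients driven by $L=2$ primitive factors; all numbers are made up.

```python
import numpy as np

# Hypothetical factor model: n = 4 constraint coefficients driven by L = 2 factors.
a = np.array([0.02, 0.03, 0.01, 0.04])   # nominal coefficients a (n,)
P = np.array([[1.0, 0.2],                 # factor loadings P (n x L)
              [0.8, 0.5],
              [0.1, 1.1],
              [0.6, 0.9]])

zeta = np.array([0.05, -0.02])            # one realization of the primitive factors
coeff = a + P @ zeta                      # realized coefficient vector a + P*zeta

x = np.ones(4) / 4                        # some candidate decision (equal weights)
lhs = coeff @ x                           # left-hand side of constraint (3) for this zeta
print(coeff, lhs)
```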
solving the robust counterpart
Notice that (3) contains infinitely many constraints due to the for-all ($\forall$) quantifier imposed by the worst-case formulation, i.e., it seems intractable in its current form. There are two ways to deal with this.
The first approach consists of three steps. Assume the uncertainty set is a polyhedral uncertainty set
$$\mathcal{Z}=\{\pmb{\zeta}: \mathbf{D}\pmb{\zeta}+\mathbf{q}\geq \mathbf{0}\}$$
where $\mathbf{D}\in\mathbb{R}^{m\times L}$, $\pmb{\zeta}\in\mathbb{R}^L$, and $\mathbf{q}\in\mathbb{R}^m$.
Step 1 (worst-case reformulation). Rewrite (3) equivalently as
$$\mathbf{a}^T\mathbf{x}+\max_{\pmb{\zeta}:\mathbf{D}\pmb{\zeta}+\mathbf{q}\geq \mathbf{0}}(\mathbf{P}^T\mathbf{x})^T\pmb{\zeta}\leq d\tag{4}$$
Step 2 (duality). Replace the inner maximization in (4) by its dual (the inner maximization problem and its dual yield the same optimal objective value by LP strong duality):
$$\mathbf{a}^T\mathbf{x}+\min_\mathbf{w} \{\mathbf{q}^T\mathbf{w}:\mathbf{D}^T\mathbf{w}=-\mathbf{P}^T\mathbf{x},\ \mathbf{w}\geq \mathbf{0}\}\leq d\tag{5}$$
Step 3 (RC). The minimization in (5) can be dropped: it suffices to find one $\mathbf{w}$ for which the constraint holds, i.e.
$$\exists \mathbf{w}:\ \mathbf{a}^T\mathbf{x}+\mathbf{q}^T\mathbf{w}\leq d, \quad \mathbf{D}^T\mathbf{w}=-\mathbf{P}^T\mathbf{x},\quad \mathbf{w}\geq \mathbf{0}\tag{6}$$
Table 1 presents the tractable robust counterparts of an uncertain linear optimization problem for different classes of uncertainty sets.
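To make steps 1–3 concrete, here is a minimal sketch (not from the paper) that builds the reformulated constraint (6) in CVXPY for a single uncertain constraint and a small made-up box-shaped polyhedral set $\mathcal{Z}=\{\pmb{\zeta}: \mathbf{D}\pmb{\zeta}+\mathbf{q}\geq \mathbf{0}\}$; the data, dimensions, and the extra simplex constraint on $\mathbf{x}$ are illustrative assumptions only.

```python
import numpy as np
import cvxpy as cp

# Illustrative data (all numbers are assumptions made for this sketch).
n, L, k = 3, 2, 4                          # decision dim, factor dim, number of rows of D
c = np.array([1.0, 2.0, 1.5])              # certain objective
a = np.array([0.5, 0.3, 0.2])              # nominal coefficients of the uncertain constraint
P = 0.1 * np.array([[1.0, -0.5],
                    [0.4,  0.8],
                    [-0.3, 0.6]])          # factor loadings (n x L)
d = 1.0                                    # certain right-hand side
# Box |zeta_i| <= 0.5 written in polyhedral form D zeta + q >= 0.
D = np.vstack([np.eye(L), -np.eye(L)])     # (k x L)
q = 0.5 * np.ones(k)

x = cp.Variable(n)
w = cp.Variable(k, nonneg=True)            # dual variables introduced in Step 2

constraints = [
    a @ x + q @ w <= d,                    # robust counterpart (6)
    D.T @ w == -P.T @ x,
    x >= 0, cp.sum(x) == 1,                # toy constraints to keep the LP bounded
]
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print(x.value, prob.value)
```

Each uncertain constraint gets its own vector of dual variables $\mathbf{w}$, so the reformulated problem remains a linear program.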
The second approach (adversarial approach). If the robust counterpart cannot be written as, or approximated by, a tractable reformulation, the authors advocate performing the so-called adversarial approach.
For the uncertain parameters in the $i$-th constraint, start with a finite set of scenarios $S_i\subset \mathcal{Z}_i$, solve the problem restricted to the current scenarios, add scenarios whose constraints are violated, and iterate until a robust optimal solution is obtained [5].
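A minimal sketch of this adversarial loop (not from the paper), assuming a single uncertain constraint $(\mathbf{a}+\mathbf{P}\pmb{\zeta})^T\mathbf{x}\le d$ with a simple box set for $\pmb{\zeta}$ so that the worst-case scenario can be found in closed form; the data and the extra simplex constraint are made up, and the master problems are solved with scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (assumptions for this sketch only).
n, L = 3, 2
c = np.array([1.0, 2.0, 1.5])
a = np.array([0.5, 0.3, 0.2])
P = 0.1 * np.array([[1.0, -0.5], [0.4, 0.8], [-0.3, 0.6]])
d = 0.55
box = 0.5                                     # Z = {zeta : |zeta_i| <= box}

scenarios = [np.zeros(L)]                     # S starts with the nominal scenario only
for it in range(20):
    # Master problem: min c^T x  s.t.  (a + P zeta)^T x <= d for the sampled scenarios,
    # plus x >= 0 and sum(x) = 1 to keep the toy LP bounded.
    A_ub = np.array([a + P @ z for z in scenarios])
    b_ub = np.full(len(scenarios), d)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, n)), b_eq=[1.0], bounds=[(0, None)] * n)
    x = res.x
    # Adversarial step: the worst-case zeta maximizes (P^T x)^T zeta over the box,
    # attained at zeta_i = box * sign((P^T x)_i).
    zeta_worst = box * np.sign(P.T @ x)
    if (a + P @ zeta_worst) @ x - d <= 1e-6:
        break                                 # x is robust feasible: stop
    scenarios.append(zeta_worst)              # otherwise add the violated scenario

print(x, res.fun, len(scenarios))
```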
Bertsimas et al. [6] compare this approach with the reformulation approach and find that the adversarial approach is computationally faster than the first approach. Alternatively, when the distribution of the uncertain parameters is known, Calafiore and Campi propose a randomized sampling approach [7].
The randomized approach substitutes the infinitely many robust constraints with a finite set of constraints that are randomly sampled. It is shown that such a randomized approach is an accurate approximation of the original uncertain problem provided that a sufficient number of samples is drawn [8].
Pareto efficiency
Iancu et al. [9] observe that focusing only on the worst case in RO may produce solutions that are not Pareto optimal. Based on this observation, they propose solving a new problem whose solutions are Pareto optimal.
In this new problem, the objective is optimized for a scenario in the interior of the uncertainty set, e.g., for the nominal scenario, while the worst-case objective is constrained to be no worse than the robust optimal objective value.
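Schematically (our paraphrase, written for the variant where the objective coefficients $\mathbf{c}$ are uncertain and lie in $\mathcal{C}$, with $z^{*}_{RO}$ denoting the robust optimal value and $\bar{\mathbf{c}}$ a nominal scenario in the interior of $\mathcal{C}$), the re-optimization problem has the form

$$\min_{\mathbf{x}}\ \Big\{\bar{\mathbf{c}}^{T}\mathbf{x}\ :\ \max_{\mathbf{c}\in\mathcal{C}}\mathbf{c}^{T}\mathbf{x}\le z^{*}_{RO},\quad \mathbf{A}(\pmb{\zeta})\mathbf{x}\le \mathbf{d}\quad \forall\,\pmb{\zeta}\in\mathcal{Z}\Big\}$$

Any optimal solution of this problem is still worst-case optimal, but among the worst-case optimal solutions it performs best in the nominal scenario.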
Adjustable robust optimization
Some decision variables can be adjusted at a later moment in time according to a decision rule, which is a function of the uncertain data. The adjustable RC (ARC) is given as follows:
$$\min_{\mathbf{x},\ \mathbf{y}(\cdot)}\{\mathbf{c}^T\mathbf{x}:\mathbf{A}(\pmb{\zeta})\mathbf{x}+\mathbf{B}\mathbf{y}(\pmb{\zeta})\leq \mathbf{d}\quad \forall \pmb{\zeta}\in\mathcal{Z}\}\tag{7}$$
where $\mathbf{x}\in\mathbb{R}^n$ are the first-stage ('here and now') decisions, made before $\pmb{\zeta}\in\mathbb{R}^L$ is realized; $\mathbf{y}\in\mathbb{R}^k$ are the second-stage ('wait and see') decisions, which are adjusted according to the actual data; and $\mathbf{B}\in\mathbb{R}^{m\times k}$ is the coefficient matrix.
However, the ARC is in general hard to solve. A common remedy is to approximate $\mathbf{y}(\pmb{\zeta})$ by an affine decision rule, which yields the computationally tractable affinely adjustable RC (AARC):
$$\mathbf{y}(\pmb{\zeta}):=\mathbf{y}^0+\mathbf{Q}\pmb{\zeta}\tag{8}$$
Substituting (8) into (7) gives
$$\min_{\mathbf{x},\ \mathbf{y}^0,\ \mathbf{Q}}\{\mathbf{c}^T\mathbf{x}:\mathbf{A}(\pmb{\zeta})\mathbf{x}+\mathbf{B}\mathbf{y}^0+\mathbf{B}\mathbf{Q}\pmb{\zeta}\leq \mathbf{d}\quad \forall \pmb{\zeta}\in\mathcal{Z}\}$$
This model can again be reformulated and solved using the three steps described above.
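To see why, let $\mathbf{a}_i+\mathbf{P}_i\pmb{\zeta}$ denote the coefficient vector of the $i$-th constraint (as in (3)) and $\mathbf{b}_i^T$ the $i$-th row of $\mathbf{B}$; then the $i$-th AARC constraint is again affine in $\pmb{\zeta}$, so steps 1–3 apply constraint by constraint (our restatement of the reduction):

$$(\mathbf{a}_i+\mathbf{P}_i\pmb{\zeta})^T\mathbf{x}+\mathbf{b}_i^T\mathbf{y}^0+\mathbf{b}_i^T\mathbf{Q}\pmb{\zeta} =\big(\mathbf{a}_i^T\mathbf{x}+\mathbf{b}_i^T\mathbf{y}^0\big) +\big(\mathbf{P}_i^T\mathbf{x}+\mathbf{Q}^T\mathbf{b}_i\big)^T\pmb{\zeta} \le d_i\quad \forall\,\pmb{\zeta}\in\mathcal{Z}$$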
Note that ARO is less conservative than the classic RO approach, since it yields more flexible decisions that can be adjusted according to the portion of the data realized at a given stage. More precisely, ARO yields optimal objective values that are at least as good as those of the standard RO approach. In addition, aside from introducing additional variables and constraints, the AARC does not add computational complexity beyond that of RO with fixed recourse, and it can be straightforwardly adopted into the classic RO framework.
ARC variants with nonlinear decision rules also exist; see Georghiou's paper [10]. For problems involving integer variables, see the paper by Bertsimas and Caramanis [11].
Robust optimization procedure
The procedure for applying an RO model is summarized in Table 2.
References
[3] Robust and Convex Optimization with Application in Finance
[6] Reformulation versus Cutting-Planes for Robust Optimization
[7] Uncertain Convex Programs: Randomized Solutions and Confidence Levels
[8] The Exact Feasibility of Randomized Solutions of Uncertain Convex Programs
[10] Generalized Decision Rule Approximations for Stochastic Programming