AdaBoost Derivation: Study Notes
The basic idea of AdaBoost is as follows (a minimal code sketch follows the list):
- Assign every sample a weight, initialized so that all samples weigh the same
- Train a (simple) model using the current sample weights
- Based on the model's results, decrease the weights of correctly classified samples and increase the weights of misclassified ones
- Retrain the (simple) model with the new sample weights, and repeat for several rounds
- Linearly combine the (simple) models from all rounds into a composite model, which is the final model
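Below is a minimal sketch of this loop in Python, using decision stumps as the simple model. All names here (train_stump, adaboost, and so on) are illustrative, not from any library; the formulas for a_m and the weight update are the ones derived in the rest of these notes.

```python
import numpy as np

def train_stump(X, y, w):
    """Pick the (feature, threshold, polarity) stump minimizing the
    weighted error under sample weights w. Brute-force for clarity."""
    best_err, best_stump = np.inf, None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best_stump = err, (j, thr, sign)
    return best_err, best_stump

def stump_predict(X, stump):
    j, thr, sign = stump
    return np.where(X[:, j] <= thr, sign, -sign)

def adaboost(X, y, rounds=10):
    """y must take values in {-1, +1}."""
    w = np.full(len(y), 1.0 / len(y))        # equal initial weights
    models = []
    for _ in range(rounds):
        e_m, stump = train_stump(X, y, w)    # simple model on current weights
        e_m = np.clip(e_m, 1e-10, 1 - 1e-10) # guard against e_m = 0 or 1
        a_m = 0.5 * np.log((1 - e_m) / e_m)  # model weight, derived below
        pred = stump_predict(X, stump)
        w *= np.exp(-a_m * y * pred)         # upweight mistakes, downweight hits
        w /= w.sum()                         # renormalize to sum to 1
        models.append((a_m, stump))
    return models

def predict(X, models):
    # Composite model: sign of the weighted sum of the simple models.
    F = sum(a * stump_predict(X, s) for a, s in models)
    return np.sign(F)
```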
Given a dataset T of N samples:
\[ T = \{ (x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N) \}, \quad y_i \in \{-1, 1\} \]
Let G_m(x) be the simple model trained in round m, let F_m(x) be the composite model after round m, and let a_m be the weight of the round-m simple model within the composite model. Then
\[ F_m(x) = F_{m-1}(x) + a_m G_m(x), \quad G_m(x) \in \{-1, 1\} \]
Using the exponential loss function,
\[
\begin{array}{l}
L_m = \sum\limits_i \exp(-y_i F_m(x_i)) \\
= \sum\limits_i \exp(-y_i F_{m-1}(x_i) - a_m y_i G_m(x_i)) \\
= \sum\limits_i \exp(-y_i F_{m-1}(x_i)) \exp(-a_m y_i G_m(x_i))
\end{array}
\]
Note that the first exponential factor in each summand is independent of round m; it depends only on the rounds before m. Define
\[ \omega_{mi} = \exp(-y_i F_{m-1}(x_i)) \]
Since y_i and G_m(x_i) both lie in \{-1, 1\}, the product y_i G_m(x_i) equals 1 for correctly classified samples and -1 for misclassified ones, so splitting the sum eliminates G_m(x) from the exponent:
\[
\begin{array}{l}
L_m = \sum\limits_i \omega_{mi} \exp(-a_m y_i G_m(x_i)) \\
= \sum\limits_{y_i = G_m(x_i)} \omega_{mi} e^{-a_m} + \sum\limits_{y_i \ne G_m(x_i)} \omega_{mi} e^{a_m} \\
= e^{-a_m} \sum\limits_{y_i = G_m(x_i)} \omega_{mi} + e^{a_m} \sum\limits_{y_i \ne G_m(x_i)} \omega_{mi} \\
= e^{-a_m} \Big( \sum\limits_i \omega_{mi} - \sum\limits_{y_i \ne G_m(x_i)} \omega_{mi} \Big) + e^{a_m} \sum\limits_{y_i \ne G_m(x_i)} \omega_{mi} \\
= e^{-a_m} \sum\limits_i \omega_{mi} + (e^{a_m} - e^{-a_m}) \sum\limits_{y_i \ne G_m(x_i)} \omega_{mi}
\end{array}
\]
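This last decomposition is easy to sanity-check numerically. A minimal sketch (the ω, y, and G arrays below are made-up test data, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
y = rng.choice([-1, 1], n)       # labels y_i
G = rng.choice([-1, 1], n)       # predictions G_m(x_i)
w = rng.random(n)                # weights omega_{mi} > 0
a = 0.7                          # an arbitrary a_m

direct = np.sum(w * np.exp(-a * y * G))
split = np.exp(-a) * w.sum() + (np.exp(a) - np.exp(-a)) * w[y != G].sum()
assert np.isclose(direct, split)  # both forms of L_m agree
```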
To minimize the loss, differentiate L_m with respect to a_m and set the derivative to zero:
\[ \frac{\partial L_m}{\partial a_m} = -e^{-a_m} \sum\limits_i \omega_{mi} + (e^{a_m} + e^{-a_m}) \sum\limits_{y_i \ne G_m(x_i)} \omega_{mi} = 0 \]
Multiplying both sides by e^{a_m} and dividing by the misclassified-weight sum then gives
\[
\begin{array}{l}
(e^{a_m} + e^{-a_m}) \sum\limits_{y_i \ne G_m(x_i)} \omega_{mi} = e^{-a_m} \sum\limits_i \omega_{mi} \\
1 + e^{2a_m} = \dfrac{\sum\limits_i \omega_{mi}}{\sum\limits_{y_i \ne G_m(x_i)} \omega_{mi}} \\
e^{2a_m} = \dfrac{\sum\limits_i \omega_{mi}}{\sum\limits_{y_i \ne G_m(x_i)} \omega_{mi}} - 1 \\
a_m = \dfrac{1}{2} \ln \Big( \dfrac{\sum\limits_i \omega_{mi}}{\sum\limits_{y_i \ne G_m(x_i)} \omega_{mi}} - 1 \Big)
\end{array}
\]
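As a quick check that this closed form really is the minimizer, one can compare it against a brute-force grid search over a_m (again with made-up data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
y = rng.choice([-1, 1], n)
G = rng.choice([-1, 1], n)
w = rng.random(n)                 # omega_{mi}

def L(a):                         # L_m as a function of a_m
    return np.sum(w * np.exp(-a * y * G))

a_closed = 0.5 * np.log(w.sum() / w[y != G].sum() - 1)
grid = np.linspace(-2, 2, 100001)
a_grid = grid[np.argmin([L(a) for a in grid])]
assert abs(a_closed - a_grid) < 1e-3  # closed form matches the numeric minimum
```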
At this point the most basic AdaBoost formulas have been derived.
Notice, moreover, that ω_mi can be computed recursively (taking F_0(x) = 0):
\[
\begin{array}{l}
\omega_{mi} = \exp(-y_i F_{m-1}(x_i)) \\
= \exp(-y_i F_{m-2}(x_i) - a_{m-1} y_i G_{m-1}(x_i)) \\
= \exp(-y_i F_{m-2}(x_i)) \exp(-a_{m-1} y_i G_{m-1}(x_i)) \\
= \exp(-y_i F_{m-3}(x_i)) \exp(-a_{m-2} y_i G_{m-2}(x_i)) \exp(-a_{m-1} y_i G_{m-1}(x_i)) \\
= \prod\limits_{j=1}^{m-1} \exp(-a_j y_i G_j(x_i))
\end{array}
\]
So ω_mi depends only on the models trained before round m, and only through their predictions for sample i, in the form of a product. Since per-sample weights can likewise be maintained multiplicatively, it is natural to identify ω_mi with the weight of sample i in round m.
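The product form is straightforward to verify numerically; a small sketch with made-up a_j and G_j values:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 50, 5
y = rng.choice([-1, 1], n)
a = rng.random(m - 1)                    # a_1 .. a_{m-1}
G = rng.choice([-1, 1], (m - 1, n))      # G_j(x_i) for j = 1 .. m-1

F_prev = (a[:, None] * G).sum(axis=0)    # F_{m-1}(x_i), with F_0 = 0
omega_direct = np.exp(-y * F_prev)
omega_product = np.exp(-a[:, None] * y * G).prod(axis=0)
assert np.allclose(omega_direct, omega_product)
```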
If ω_mi is used as the weight of sample i in round m, then in the equation for a_m,
\[ \sum\limits_i \omega_{mi} \]
is the total weight of all samples in that round, and
\[ \sum\limits_{y_i \ne G_m(x_i)} \omega_{mi} \]
is the total weight of the misclassified samples in that round.
If the sample weights are normalized to sum to 1 in every round, the value of a_m is unchanged (its equation involves only the ratio of the two sums, and a uniform rescaling does not alter the relative weights of the samples), so the weight of sample i in round m can be defined as
\[ w_{mi} = \frac{\omega_{mi}}{\sum\limits_i \omega_{mi}} \]
from which
\[ \sum\limits_i w_{mi} = 1 \]
\[ w_{mi} = \frac{\omega_{mi}}{\sum\limits_i \omega_{mi}} = \frac{\exp(-a_{m-1} y_i G_{m-1}(x_i)) \, \omega_{(m-1)i}}{\sum\limits_i \exp(-a_{m-1} y_i G_{m-1}(x_i)) \, \omega_{(m-1)i}} = \frac{\exp(-a_{m-1} y_i G_{m-1}(x_i)) \, \frac{\omega_{(m-1)i}}{\sum\limits_i \omega_{(m-1)i}}}{\sum\limits_i \exp(-a_{m-1} y_i G_{m-1}(x_i)) \, \frac{\omega_{(m-1)i}}{\sum\limits_i \omega_{(m-1)i}}} = \frac{\exp(-a_{m-1} y_i G_{m-1}(x_i)) \, w_{(m-1)i}}{\sum\limits_i \exp(-a_{m-1} y_i G_{m-1}(x_i)) \, w_{(m-1)i}} \]
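This is exactly the per-round weight update used in practice: scale each weight by exp(-a_{m-1} y_i G_{m-1}(x_i)), then renormalize. A one-step sketch (names are illustrative):

```python
import numpy as np

def update_weights(w_prev, a_prev, y, pred_prev):
    """w_{(m-1)i} -> w_{mi}: scale by exp(-a_{m-1} y_i G_{m-1}(x_i)),
    then renormalize so the weights sum to 1."""
    w = w_prev * np.exp(-a_prev * y * pred_prev)
    return w / w.sum()
```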
Finally, let e_m denote the weighted error rate of round m:
\[ e_m = \sum\limits_{y_i \ne G_m(x_i)} w_{mi} = \frac{\sum\limits_{y_i \ne G_m(x_i)} \omega_{mi}}{\sum\limits_i \omega_{mi}} \]
which gives
\[ a_m = \frac{1}{2} \ln \Big( \frac{1 - e_m}{e_m} \Big) \]
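A final sanity check that this e_m form of a_m agrees with the earlier ω form (made-up data again):

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.choice([-1, 1], 100)
G = rng.choice([-1, 1], 100)
omega = rng.random(100)

e_m = omega[y != G].sum() / omega.sum()   # weighted error rate
a_from_e = 0.5 * np.log((1 - e_m) / e_m)
a_from_omega = 0.5 * np.log(omega.sum() / omega[y != G].sum() - 1)
assert np.isclose(a_from_e, a_from_omega)
```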
This completes the derivation of all of the AdaBoost formulas.