
Xavier Initialization, Kaiming Initialization (Weight Initialization)



Xavier Initialization ("Understanding the difficulty of training deep feedforward neural networks")

Goal: Xavier initialization sets the weights so that the variance of the activations is the same across every layer. Keeping this variance constant helps prevent the gradients from exploding or vanishing.

Suppose we have an input \(X\) with \(n\) components and a linear neuron with random weights \(W\). Then we can write:

\[\begin{align} Y = W_1X_1+W_2X_2+...+W_nX_n \end{align} \]

Consider one term \(W_iX_i\); its variance is

\[\begin{align} Var(W_iX_i) = \mathbb{E}(X_i)^2Var(W_i)+\mathbb{E}(W_i)^2Var(X_i)+Var(X_i)Var(W_i) \end{align} \]

If we assume that the variables are zero-mean, this simplifies to:

\[\begin{align} Var(W_iX_i) = Var(X_i)Var(W_i) \end{align} \]

Furthermore, if we assume that \(X_i, W_i\) are i.i.d., we get:

\[\begin{align} Var(Y) &= Var(\sum_iW_iX_i)\\ &=n \cdot Var(W_i)Var(X_i) \end{align} \]
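A quick numerical sanity check of this relation, as a minimal NumPy sketch (the component count \(n\) and the variances are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, num_samples = 512, 100_000

# Zero-mean, i.i.d. inputs and weights with known variances.
X = rng.normal(0.0, 1.0, size=(num_samples, n))            # Var(X_i) = 1
W = rng.normal(0.0, np.sqrt(0.01), size=(num_samples, n))  # Var(W_i) = 0.01

Y = (W * X).sum(axis=1)       # Y = sum_i W_i X_i, one draw per row

print(Y.var())                # empirical variance of Y
print(n * 0.01 * 1.0)         # predicted n * Var(W_i) * Var(X_i) = 5.12
```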

From this relation we can see that the output's variance \(Var(Y)\) is the input's variance scaled by \(n\cdot Var(W_i)\). Therefore, if we want to control the output's variance (e.g. keep it equal to the input's variance), we need:

\[Var(W_i) = \frac{1}{n} = \frac{1}{fan\_in} \]

This is the condition for the forward pass; if we also consider backpropagation, we need:

\[Var(W_i) =\frac{1}{fan\_out} \]

However, in real network architectures it is not common for a layer to have the same number of input and output neurons. As a compromise, we take the average of the two conditions:

\[\begin{align} Var(W_i) = \frac{2}{fan\_in+fan\_out} \end{align} \]
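A minimal sketch of sampling weights under this rule (in NumPy; `xavier_normal` and `xavier_uniform` are illustrative helper names, and the layer sizes are arbitrary). For the uniform variant, \(U(-a,a)\) has variance \(a^2/3\), so matching \(2/(fan\_in+fan\_out)\) gives \(a=\sqrt{6/(fan\_in+fan\_out)}\):

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_normal(fan_in, fan_out):
    # Zero-mean Gaussian with Var(W) = 2 / (fan_in + fan_out).
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_out, fan_in))

def xavier_uniform(fan_in, fan_out):
    # Uniform(-a, a) has variance a^2 / 3; choose a so that the
    # variance equals 2 / (fan_in + fan_out).
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=(fan_out, fan_in))

W = xavier_normal(784, 256)
print(W.var(), 2.0 / (784 + 256))  # empirical vs. target variance
```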

In summary, the assumptions needed to derive these results are:
I. \(W, X\) are zero-mean.
II. \(W, X\) are i.i.d.
III. Biases are initialized as zeros.
IV. We use the \(\tanh()\) activation function, which is approximately linear for small inputs: \(Var(a^{[l]})\approx Var(z^{[l]})\).
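Under these assumptions, a small forward-pass simulation (a sketch with an arbitrary depth and width; the inputs are kept small so that \(\tanh\) stays in its roughly linear regime) shows the activation variance staying approximately constant across layers:

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width = 10, 256                           # arbitrary architecture
x = rng.normal(0.0, 0.1, size=(10_000, width))   # small inputs: tanh ~ linear

for _ in range(depth):
    # Xavier: Var(W) = 2/(fan_in + fan_out) = 1/width for equal widths.
    W = rng.normal(0.0, np.sqrt(2.0 / (width + width)), size=(width, width))
    x = np.tanh(x @ W)
    print(round(x.var(), 5))  # stays close to the input variance of 0.01
```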

Kaiming Initialization

Many blog posts online do not explain this well; most only state the result. Here we derive it by following the original paper:
From Xavier's result we know that \(Var[y_l] = n_lVar[w_lx_l]\). Then, if \(w_l\) has zero mean, we can further obtain:

\[\begin{align} Var[y_l] &= n_lVar[w_l](\mathbb{E}(x_l)^2+Var[x_l])\\ &=n_lVar[w_l]\mathbb{E}(x_l^2) \end{align} \]

For the ReLU function, \(x_l = \max\{0,y_{l-1}\}\), so \(x_l\) is not zero-mean and \(\mathbb{E}(x_l^2)\neq Var(x_l)\).
Assume \(w_{l-1}\) is distributed symmetrically around \(0\) and \(b_{l-1}=0\); then \(y_{l-1}\) is also distributed symmetrically around \(0\). Now consider \(\mathbb{E}(x_l^2)=\mathbb{E}(\max(0,y_{l-1})^2)\). Note that:

\[\begin{align} \mathbb{P}(y_{l-1}>0)&=\mathbb{P}(w_{l-1}x_{l-1}>0)\\ &=\mathbb{P}(w_{l-1}>0,\, x_{l-1}>0)+\mathbb{P}(w_{l-1}<0,\, x_{l-1}<0)\\ &=\mathbb{P}(x_{l-1}>0)\mathbb{P}(w_{l-1}>0)+\mathbb{P}(x_{l-1}<0)\mathbb{P}(w_{l-1}<0)\\ &=\frac{1}{2}\mathbb{P}(x_{l-1}>0)+\frac{1}{2}\mathbb{P}(x_{l-1}<0)\\ &=\frac{1}{2} \end{align} \]

Furthermore:

\[\begin{align} \mathbb{E}[x_l^2] &=\mathbb{E}[\max(0,y_{l-1})^2]\\ &=\frac{1}{2}\mathbb{E}[y_{l-1}^2]\\ &=\frac{1}{2}Var[y_{l-1}] \end{align} \]
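This halving is easy to check numerically (a sketch with a standard normal pre-activation, which is symmetric around \(0\) as assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=1_000_000)  # symmetric around 0, Var[y] = 1

x = np.maximum(0.0, y)                    # ReLU
print(np.mean(x ** 2))                    # ~0.5
print(0.5 * y.var())                      # = Var[y] / 2
```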

Therefore:

\[\begin{align} Var[y_l] &= \frac{1}{2}n_lVar[w_l]Var[y_{l-1}] \end{align} \]

Considering all \(L\) layers:

\[\begin{align} Var[y_L] = Var[y_1]\left(\prod_{l=2}^L\frac{1}{2}n_lVar[w_l]\right) \end{align} \]
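To see the exponential behavior concretely, here is a small sketch of a deep ReLU network initialized with \(n_lVar[w_l]=1\) (the Xavier choice for equal widths); each layer then halves the variance (depth and width are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width = 20, 256                          # arbitrary architecture
x = rng.normal(0.0, 1.0, size=(10_000, width))

for _ in range(depth):
    W = rng.normal(0.0, np.sqrt(1.0 / width), size=(width, width))
    x = np.maximum(0.0, x @ W)                  # ReLU layer

print(x.var())   # roughly (1/2)**depth: the signal has all but vanished
```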

A proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially.

From this it is clear that a sufficient condition is:

\[\begin{align} \frac{1}{2}n_lVar[w_l]=1 \end{align} \]

This leads to a zero-mean Gaussian distribution whose standard deviation (std) is \(\sqrt{2/n_l}\). This is our way of initialization. We also initialize \(b = 0\).
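A minimal sketch of sampling from this distribution (in NumPy; `kaiming_normal` is an illustrative helper name, and the layer sizes are arbitrary; PyTorch exposes the same rule as `torch.nn.init.kaiming_normal_`). Rerunning the deep-ReLU simulation above with `std = np.sqrt(2.0 / width)` keeps the variance stable instead of halving per layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def kaiming_normal(fan_in, fan_out):
    # Zero-mean Gaussian with std = sqrt(2 / n_l), where n_l = fan_in.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

W = kaiming_normal(784, 256)
b = np.zeros(256)                      # biases initialized to zero
print(W.std(), np.sqrt(2.0 / 784))     # empirical vs. target std
```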

The difference between this Kaiming initialization and the Xavier one is again the factor of \(1/2\) that comes from the ReLU activation function.

Source: https://www.cnblogs.com/xinyu04/p/16201369.html
