[Probability Theory] One-Dimensional Random Variables

Posted by bit_cabinzkk on 2021-01-03

Reproduction without permission is prohibited.

These are my personal study notes from class; questions and corrections are welcome.


  • Discrete: takes finitely many or countably infinitely many values
  • Continuous: the distribution function is obtained as the variable-upper-limit integral of a non-negative integrable function (and is therefore clearly continuous)
  • Mixed: neither discrete nor continuous

Proving that a distribution function is right-continuous:

Heine's theorem + limits

Proving $P\{X=a\} = F(a) - F(a-0)$

  • Prove: the distribution function of a continuous random variable is continuous

    This reduces to proving that the variable-upper-limit integral $\int_{-\infty}^x f(t)\,\mathrm{d}t$ of an integrable function (the probability density) $f$ is continuous.

$$
\begin{aligned}
& \lim_{\delta\to0} \int_{-\infty}^{x+\delta}f(t)\,\mathrm{d}t
= \lim_{\delta\to0} \left(\int_{-\infty}^{x}f(t)\,\mathrm{d}t + \int_{x}^{x+\delta}f(t)\,\mathrm{d}t\right) \\
& \text{so it suffices to prove } \lim_{\delta\to0}\int_{x}^{x+\delta}f(t)\,\mathrm{d}t = 0
\end{aligned}
$$

$f$ is integrable, hence bounded: there exists $M$ such that $-M < f(t) < M$. Then

$$
\begin{aligned}
& -M\delta \le \int_{x}^{x+\delta}f(t)\,\mathrm{d}t \le M\delta \\
& \Rightarrow \lim_{\delta\to0} (-M\delta) \le \lim_{\delta\to0} \int_{x}^{x+\delta}f(t)\,\mathrm{d}t \le \lim_{\delta\to0} M\delta \\
& \Rightarrow 0 \le \lim_{\delta\to0} \int_{x}^{x+\delta}f(t)\,\mathrm{d}t \le 0
\end{aligned}
$$

so by the squeeze theorem, $\lim_{\delta\to0} \int_{x}^{x+\delta}f(t)\,\mathrm{d}t = 0$.

$$
\lim_{\delta\to0} \int_{-\infty}^{x+\delta}f(t)\,\mathrm{d}t = \int_{-\infty}^{x}f(t)\,\mathrm{d}t,
$$

so $\int_{-\infty}^{x}f(t)\,\mathrm{d}t$ is continuous.
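As a rough numerical illustration of the bound used above (a sketch, not part of the proof): for a bounded density $f$ with bound $M$, the CDF increment $F(x+\delta)-F(x)$ is at most $M\delta$. Here the Exp(1) density is used as an example, where $M = 1$:

```python
import math

# Sketch: for a bounded density f (bound M), the CDF increment
# F(x+delta) - F(x) = integral of f over [x, x+delta] is at most M*delta,
# so it vanishes as delta -> 0 and the CDF is continuous.
# The Exp(1) density serves as an example, with M = 1.

def f(t):
    return math.exp(-t) if t >= 0 else 0.0

def integrate(a, b, n=10_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

x, M = 0.5, 1.0
deltas = (0.1, 0.01, 0.001)
increments = [integrate(x, x + d) for d in deltas]
for d, inc in zip(deltas, increments):
    assert 0.0 <= inc <= M * d  # squeezed between 0 and M*delta
```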

Discrete Distributions

Bernoulli Distribution

(also called the 0-1 distribution or two-point distribution) $X \sim B(1, p)$

It can be viewed as a Bernoulli trial run only once.

  • Random variable
    $$X(e) = \begin{cases} 0, & \text{when } e=e_1, \\ 1, & \text{when } e=e_2. \end{cases}$$

  • Probability mass function
    $$P\{X = k\} = p^k (1-p)^{1-k},\quad k=0,1$$

  • Numerical characteristics

    • Expectation
      $$E(X) = 1\cdot p + 0\cdot (1-p) = p$$

    • Variance
      $$D(X) = E(X^2) - E^2(X) = p - p^2 = p(1-p)$$

Binomial Distribution

$X \sim B(n, p)$

The number of times $X$ that event $A$ occurs in $n$ independent Bernoulli trials follows a binomial distribution.

  • Bernoulli trial: a trial with only two possible outcomes ($A$, $\overline{A}$)

  • Probability mass function

    If the success probability of $A$ is $p$, then
    $$P\{X=k\} = C_n^k p^k (1-p)^{n-k}, \quad k=0,1,2,\dots,n$$

  • Shape

    (figure omitted)
  • Numerical characteristics

    • Expectation
      $$E(X) = np$$
      Derivation: let
      $$X_i = \begin{cases} 1, & \text{trial } i \text{ succeeds} \\ 0, & \text{trial } i \text{ fails} \end{cases}
      \qquad X = \sum_{i=1}^n X_i$$
      Then
      $$E(X) = E\Big(\sum_{i=1}^n X_i\Big) = \sum_{i=1}^n E(X_i) = \sum_{i=1}^n p = np$$

    • Variance
      $$D(X) = np(1-p)$$
      Derivation: the Bernoulli trials are mutually independent, so the covariance terms vanish, and
      $$\begin{aligned} D(X) & = D\Big(\sum_{i=1}^n X_i\Big) = \sum_{i=1}^n D(X_i) \\ & = nD(X_1) = np(1-p) \end{aligned}$$

  • Computational property: additivity

    If $X \sim b(n_1, p)$ and $Y \sim b(n_2, p)$ are independent, then
    $$X+Y \sim b(n_1+n_2, p)$$
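A quick numerical sanity check of the binomial moment formulas above (a sketch; the parameters are arbitrary):

```python
from math import comb

# Sketch: verify E(X) = np and D(X) = np(1-p) directly from the pmf.
def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3
E = sum(k * binom_pmf(n, p, k) for k in range(n + 1))
E2 = sum(k * k * binom_pmf(n, p, k) for k in range(n + 1))
D = E2 - E**2

assert abs(E - n * p) < 1e-9
assert abs(D - n * p * (1 - p)) < 1e-9
```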

Poisson Distribution

$X \sim \pi(\lambda)$

  • Probability mass function
    $$P\{X=k\} = \frac{\lambda^k}{k!}e^{-\lambda},\quad k=0,1,2,\dots$$

  • Numerical characteristics

    • Expectation

      $$E(X) = \lambda$$
      Derivation:
      $$\begin{aligned} E(X) & = \sum_{k=0}^\infty k\cdot \frac{\lambda^k}{k!}e^{-\lambda} = \lambda e^{-\lambda}\sum_{k=1}^\infty \frac{\lambda^{k-1}}{(k-1)!} \\ & = \lambda e^{-\lambda} e^\lambda = \lambda \end{aligned}$$

    • Variance
      $$D(X) = \lambda$$
      Derivation:

      $$\begin{aligned} E(X^2) & = E[X(X-1) + X] = E[X(X-1)] + E(X) \\ & = \sum_{k=2}^\infty k (k-1)\cdot \frac{\lambda^k}{k!}e^{-\lambda} + E(X) \\ & = \lambda^2 e^{-\lambda} \sum_{k=2}^\infty \frac{\lambda^{k-2}}{(k-2)!} + E(X) \\ & = \lambda^2 e^{-\lambda}e^{\lambda} + E(X) = \lambda^2 + \lambda \end{aligned}$$

      Substituting into the variance formula:
      $$D(X) = E(X^2) - E^2(X) = \lambda^2 + \lambda - \lambda^2 = \lambda$$

  • Computational property: additivity

    If $X \sim \pi(\lambda_1)$ and $Y \sim \pi(\lambda_2)$ are independent, then
    $$X+Y \sim \pi(\lambda_1 + \lambda_2)$$

  • Poisson theorem

    Let $\lambda > 0$ be a constant, let $n$ be any positive integer, and suppose $np_n = \lambda$. Then for any fixed non-negative integer $k$,
    $$\lim_{n\to\infty}C_n^k p_n^k (1-p_n)^{n-k} = \frac{\lambda^k e^{-\lambda}}{k!}$$

    When the number of trials $n$ is large enough, $p_n$ is small.

    When $\lambda = np$ is moderate (neither too large nor too small), the Poisson distribution can be used to approximate the binomial distribution:
    $$C_n^k p^k (1-p)^{n-k} \approx \frac{\lambda^k e^{-\lambda}}{k!}$$
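The quality of this approximation can be checked numerically (a sketch with arbitrary parameters, large $n$ and small $p$):

```python
from math import comb, exp, factorial

# Sketch: compare binomial probabilities with the Poisson approximation
# for large n and small p (here lambda = np = 2).
n, p = 1000, 0.002
lam = n * p

errors = []
for k in range(10):
    b = comb(n, k) * p**k * (1 - p)**(n - k)   # exact binomial pmf
    po = lam**k * exp(-lam) / factorial(k)      # Poisson approximation
    errors.append(abs(b - po))

assert max(errors) < 1e-3  # the two pmfs agree closely
```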

Geometric Distribution

$X \sim GE(p)$

The probability that, in repeated Bernoulli trials, the first success occurs on trial $k$; that is, the first $k-1$ trials fail and the $k$-th succeeds.

The geometric distribution is the special case of the Pascal (negative binomial) distribution with $r=1$.

  • Probability mass function
    $$P\{X = k\} = (1-p)^{k-1} p, \quad k=1,2,\dots$$

  • Numerical characteristics

    • Expectation
      $$E(X) = \frac{1}{p}$$
      Intuition: a higher hit rate means a smaller expectation (fewer failures before the first success).

      Derivation (the primes denote derivatives with respect to $p$):
      $$\begin{aligned} E(X) & = \sum_{k=1}^\infty k(1-p)^{k-1}p = p \sum_{k=1}^\infty \left[-(1-p)^{k}\right]' \\ & = -p \cdot \Big[\sum_{k=0}^\infty (1-p)^{k}\Big]' = -p \cdot \Big[\frac{1}{p}\Big]' = \frac{1}{p} \end{aligned}$$
      (One can also sum the arithmetico-geometric series $a_k = k(1-p)^{k-1}$ over the first $n$ terms and take a limit, but that is more tedious.)

    • Variance
      $$D(X) = \frac{1-p}{p^2}$$

      Derivation (the primes denote derivatives with respect to $p$):
      $$\begin{aligned} E(X^2) & = E[X(X-1)] + E(X) \\ & = \sum_{k=2}^\infty k(k-1)(1-p)^{k-1}p + E(X) \\ & = p(1-p)\sum_{k=2}^\infty k(k-1)(1-p)^{k-2} + E(X) \\ & = p(1-p)\Big[\sum_{k=2}^\infty (1-p)^{k}\Big]'' + E(X) \\ & = p(1-p) \cdot \Big[\frac{(1-p)^2}{p}\Big]'' + E(X) \\ & = \frac{2(1-p)}{p^2} + \frac{1}{p} = \frac{2-p}{p^2} \end{aligned}$$

      Hence
      $$D(X) = E(X^2) - E^2(X) = \frac{2-p}{p^2} - \frac{1}{p^2} = \frac{1-p}{p^2}$$

  • Memorylessness

    Derivation (starting from the memorylessness requirement and deriving that a discrete memoryless variable must be geometric):
    $$\begin{aligned} & P\{X>t+s \mid X>t\} = P\{X>s\} \\ & \Leftrightarrow \frac{P\{X>t+s,\; X>t\}}{P\{X>t\}} = P\{X>s\} \\ & \Leftrightarrow P\{X>t+s\} = P\{X>s\}P\{X>t\} \end{aligned}$$
    Let $y(x) = P\{X > x\}$. Then
    $$y(t+s) = y(s)y(t)$$
    Setting $y(1) = q$ gives
    $$\begin{aligned} & y(2) = y(1+1) = y^2(1) = q^2 \\ & y(3) = y(1+2) = y^3(1) = q^3 \\ & \dots \\ & y(k) = q^k \end{aligned}$$
    Therefore
    $$\begin{aligned} P\{X = k\} & = P\{X > k-1\} - P\{X > k\} \\ & = y(k-1) - y(k) = q^{k-1} - q^k = q^{k-1}(1 - q) \end{aligned}$$
    Letting $p = 1 - q$:
    $$P\{X = k\} = (1-p)^{k-1}p, \qquad\text{i.e. } X \sim GE(p)$$
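Both the moment formulas and the memorylessness property can be checked numerically (a sketch; the tail sum is truncated at a point where the remaining mass is negligible):

```python
# Sketch: check E(X) = 1/p, D(X) = (1-p)/p^2, and memorylessness
# P{X > t+s | X > t} = P{X > s}, using the tail P{X > k} = (1-p)^k.
p = 0.4
pmf = lambda k: (1 - p)**(k - 1) * p

# Moments via a truncated sum (the tail beyond k = 2000 is negligible).
E = sum(k * pmf(k) for k in range(1, 2001))
E2 = sum(k * k * pmf(k) for k in range(1, 2001))
assert abs(E - 1 / p) < 1e-9
assert abs(E2 - E**2 - (1 - p) / p**2) < 1e-9

# Memorylessness: conditional tail equals unconditional tail.
tail = lambda k: (1 - p)**k  # P{X > k}
t, s = 4, 3
assert abs(tail(t + s) / tail(t) - tail(s)) < 1e-12
```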

Continuous Distributions

Uniform Distribution

$X \sim U(a, b)$

  • Probability density

    $$f(x) = \begin{cases} \frac{1}{b-a}, & a < x < b \\ 0, & \text{otherwise} \end{cases}$$

  • Distribution function

    $$F(x) = \begin{cases} 0, & x < a \\ \frac{x-a}{b-a}, & a \le x < b \\ 1, & x \ge b \end{cases}$$

  • Plots

    (figure: probability density and distribution function of the uniform distribution)
  • Numerical characteristics

    • Expectation

      The midpoint:
      $$E(X) = \frac{1}{2}(a+b)$$

    • Variance
      $$D(X) = \frac{1}{12}(b-a)^2$$

  • Background

    When $X \sim U(a,b)$, the probability that $X$ falls in any subinterval of $(a,b)$ is proportional to the length of that subinterval.
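A numerical sketch of the two uniform moment formulas via midpoint-rule integration (parameters arbitrary):

```python
# Sketch: verify E(X) = (a+b)/2 and D(X) = (b-a)^2/12 for X ~ U(a, b)
# by midpoint-rule integration of x*f(x) and x^2*f(x) over (a, b).
a, b = 2.0, 5.0
density = 1.0 / (b - a)

n = 100_000
h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]
E = sum(x * density for x in xs) * h
E2 = sum(x * x * density for x in xs) * h
D = E2 - E**2

assert abs(E - (a + b) / 2) < 1e-6
assert abs(D - (b - a)**2 / 12) < 1e-6
```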

Exponential Distribution

$X \sim E(\lambda)$ or $X \sim E(\theta)$

The $\lambda$ parametrization takes the viewpoint of the gamma distribution: with $\lambda$ as the gamma distribution's $\beta$ parameter and $\alpha = 1$, the exponential distribution is $E(\lambda) = \Gamma(1, \lambda)$.

The $\theta$ parametrization takes the viewpoint of the exponential density's expression (or of the mean), with $E(X) = \theta$.

  • Probability density

    $$f(x) = \begin{cases} \frac{1}{\theta}e^{-\frac{x}{\theta}}, & x > 0 \\ 0, & \text{otherwise} \end{cases}$$

  • Distribution function

    $$F(x) = \begin{cases} 1 - e^{-\frac{x}{\theta}}, & x > 0 \\ 0, & \text{otherwise} \end{cases}$$

  • Plots

    As $x \to 0^+$, the density approaches its maximum value $1/\theta$.

    (figures omitted)
  • Numerical characteristics

    • Expectation
      $$E(X) = \theta = \frac{1}{\lambda}$$
      Derivation: integration by parts.

    • Variance
      $$D(X) = \theta^2 = \frac{1}{\lambda^2}$$

  • Memorylessness

    $$P\{X>t+s \mid X>t\} = P\{X>s\}$$

    Derivation (starting from the memorylessness requirement and deriving that a continuous memoryless variable must be exponential):

    $$\begin{aligned} & P\{X>t+s \mid X>t\} = P\{X>s\} \\ & \Leftrightarrow \frac{P\{X>t+s,\; X>t\}}{P\{X>t\}} = P\{X>s\} \\ & \Leftrightarrow P\{X>t+s\} = P\{X>s\}P\{X>t\} \end{aligned}$$

Let $y(x) = P\{X>x\}$. Then, using $y(0) = 1$,

$$\begin{aligned} & y(t+s) = y(s)y(t) \\ & \Rightarrow y(t+s)-y(s) = y(s)y(t)-y(s)y(0) \\ & \Rightarrow \frac{y(t+s)-y(s)}{t} = y(s)\,\frac{y(t)-y(0)}{t} \\ & \Rightarrow \lim_{t\to0}\frac{y(t+s)-y(s)}{t} = y(s)\lim_{t\to0}\frac{y(t)-y(0)}{t} \\ & \Rightarrow y'(s) = y(s) \cdot y'(0) \\ & \Rightarrow \frac{y'(s)}{y(s)} = y'(0) \\ & \Rightarrow \int \frac{y'(s)}{y(s)}\,\mathrm{d}s = \int y'(0)\,\mathrm{d}s \\ & \Rightarrow \ln|y(s)| = y'(0)\cdot s + C \\ & \Rightarrow y(s) = e^{y'(0)\cdot s + C} \end{aligned}$$

Writing $y'(0) = -\frac{1}{\theta}$, renaming $s$ to $x$, and using $y(0) = 1 \Rightarrow C = 0$:
$$y(x) = e^{-\frac{x}{\theta}}$$

Or, writing $y'(0) = -\lambda$:
$$y(x) = e^{-\lambda x}$$

Therefore
$$P\{X>x\} = e^{-\frac{x}{\theta}} \;\Rightarrow\; F(x) = P\{X \le x\} = 1 - e^{-\frac{x}{\theta}}$$
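A numerical sketch of the memorylessness identity using the survival function $P\{X>x\} = e^{-x/\theta}$ (parameters arbitrary):

```python
from math import exp

# Sketch: check P{X > t+s | X > t} = P{X > s} for the exponential
# distribution, whose survival function is P{X > x} = exp(-x/theta).
theta = 2.0
tail = lambda x: exp(-x / theta)

t, s = 1.3, 0.7
lhs = tail(t + s) / tail(t)  # conditional probability P{X > t+s | X > t}
assert abs(lhs - tail(s)) < 1e-12
```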

Gamma Distribution

$X \sim \Gamma(\alpha, \beta)$

(figure omitted; with the scale parametrization consistent with the moments below, the density is $f(x) = \frac{1}{\Gamma(\alpha)\beta^{\alpha}}x^{\alpha-1}e^{-x/\beta}$ for $x > 0$)

Following the exponential distribution $E(\lambda)$ means following the gamma distribution $\Gamma(1, \lambda)$; the exponential distribution is a special case of the gamma distribution.

  • Properties of the gamma function
    $$\Gamma(\alpha+1) = \alpha\Gamma(\alpha), \quad \Gamma(\alpha) = (\alpha-1)\Gamma(\alpha-1), \quad \dots$$

  • Numerical characteristics

    • Expectation

      Substituting $u = x / \beta$ in the integral and using the properties of the gamma function readily gives
      $$E(X) = \alpha\beta$$

    • Variance

      First compute $E(X^2)$ by integration, again using the same substitution and the properties of the gamma function, to get
      $$E(X^2) = \alpha (1+\alpha) \beta^2$$
      Hence
      $$D(X) = E(X^2) - E^2(X) = \alpha\beta^2$$
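A numerical sketch of the two gamma moment formulas by truncated midpoint-rule integration. The density used is the scale-parametrized form $f(x) = \frac{1}{\Gamma(\alpha)\beta^\alpha}x^{\alpha-1}e^{-x/\beta}$, an assumption chosen to be consistent with $E(X)=\alpha\beta$; the parameters are arbitrary:

```python
from math import gamma, exp

# Sketch: verify E(X) = alpha*beta and D(X) = alpha*beta^2 for the gamma
# distribution with scale parameter beta (consistent with E(X) = alpha*beta).
alpha, beta = 3.0, 2.0

def f(x):
    return x**(alpha - 1) * exp(-x / beta) / (gamma(alpha) * beta**alpha)

# Truncated midpoint-rule integration; the tail beyond x = 200 is negligible.
n, hi = 200_000, 200.0
h = hi / n
xs = [(i + 0.5) * h for i in range(n)]
E = sum(x * f(x) for x in xs) * h
E2 = sum(x * x * f(x) for x in xs) * h
D = E2 - E**2

assert abs(E - alpha * beta) < 1e-4
assert abs(D - alpha * beta**2) < 1e-3
```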

Normal Distribution

$X \sim N(\mu, \sigma^2)$

Also known as the Gaussian distribution.

The parameter $\mu$ (the expectation) controls where the distribution is centred; the parameter $\sigma$ (the standard deviation) controls how spread out it is (larger means more spread out).

  • Probability density

    $$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}},\quad -\infty<x<\infty$$

  • Distribution function

    $$F(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{x}e^{-\frac{(t-\mu)^2}{2\sigma^2}}\,\mathrm{d}t,\quad -\infty<x<\infty$$

  • Plots

    (figures omitted)
  • Standard normal distribution $X \sim N(0, 1)$

    Probability density: $\varphi(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}},\quad -\infty<x<\infty$

    Distribution function: $\Phi(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-\frac{t^2}{2}}\,\mathrm{d}t,\quad -\infty<x<\infty$

    • The standard normal distribution function clearly satisfies:

      1. $\Phi(-x) = 1 - \Phi(x)$
      2. $P\{|X|< a\} = 2\Phi(a) - 1$

    • Any normal random variable $X \sim N(\mu, \sigma^2)$ can be transformed by the linear substitution
      $$Z = \frac{X-\mu}{\sigma}$$
      into a standard normal random variable $Z \sim N(0, 1)$.
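The two identities above can be checked numerically; $\Phi$ can be expressed through the error function as $\Phi(x) = \frac{1}{2}\big(1 + \mathrm{erf}(x/\sqrt{2})\big)$ (a standard identity):

```python
from math import erf, exp, pi, sqrt

# Phi via the error function: Phi(x) = (1 + erf(x / sqrt(2))) / 2.
def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi(x):  # standard normal density
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

a = 1.5

# Identity 1: Phi(-a) = 1 - Phi(a).
assert abs(Phi(-a) - (1.0 - Phi(a))) < 1e-12

# Identity 2: P{|X| < a} = 2*Phi(a) - 1; compare against a direct
# midpoint-rule integral of the density over (-a, a).
n = 10_000
h = 2.0 * a / n
mass = sum(phi(-a + (i + 0.5) * h) for i in range(n)) * h
assert abs(mass - (2.0 * Phi(a) - 1.0)) < 1e-6
```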
