- Bayesian theory underlies a classic family of classification algorithms. We usually write $x$ for a sample and $\omega_j$ for a possible class, so that $P(\omega_j|x)$ denotes the probability that $x$ belongs to class $\omega_j$.
- Bayesian decision theory is the basic method for making decisions within a probabilistic framework; in essence, it chooses the optimal class label based on probabilities and misclassification losses.
2.1 Derivation of Bayes' Formula
- From the definition of conditional probability, we have
$$P(AB) = P(A|B)P(B)=P(B|A)P(A) \Rightarrow P(A|B)= \frac{P(B|A)P(A)}{P(B)}$$
This is the simplest basic form of Bayes' formula, where
- $P(A)$ is the prior, the probability of each case occurring among the samples
- $P(B|A)$ is the likelihood, the probability that B occurs given that A has occurred
- $P(A|B)$ is the posterior
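As a quick numeric sanity check of the formula, here is a minimal sketch (all probabilities below are invented for illustration):
```python
# Minimal numeric sketch of Bayes' formula (all probabilities invented for illustration).
p_a = 0.01              # prior P(A)
p_b_given_a = 0.9       # likelihood P(B|A)
p_b_given_not_a = 0.05  # P(B|~A), needed to expand P(B) by total probability
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
p_a_given_b = p_b_given_a * p_a / p_b   # posterior P(A|B)
print(round(p_a_given_b, 3))            # 0.154: observing B raises P(A) from 0.01
```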
Now suppose the sample has several features; for a sample x in the dataset D, with $c$ possible classes, we have:
$$\begin{aligned}
P(\omega_j|x)&=\frac{P(x|\omega_j)P(\omega_j)}{P(x)}\\
P(x)&=\sum_{j=1}^{c}P(x|\omega_j)P(\omega_j)
\end{aligned}$$
- Bayes' theorem can be used to make classification decisions; the essence of deciding with Bayes' theorem is to use the probability distributions to minimize the chance of misclassification. The rule above, which decides according to the posterior probability, is known as the Optimal Bayes Decision Rule.
- Bayesian decisions also involve some special cases: when the priors are equal, it suffices to compare the likelihoods; when the likelihoods are equal, it suffices to compare the priors.
2.2 Loss in Bayesian Decision Making
2.2.1 Conditional Risk
- We can define a loss function to measure the cost of Bayesian decisions. Let $\lambda(\alpha_i|\omega_j)$ denote the loss (also called risk) incurred by classifying a sample whose true class is $\omega_j$ as $\alpha_i$. Then the conditional risk of classifying a sample x from dataset D as $\alpha_i$ is:
$$R(\alpha_i|x)=\sum\limits_{j=1}^{c}\lambda(\alpha_i|\omega_j)P(\omega_j|x)$$
- Integrating the conditional risk of the chosen decision over all possible x gives the overall risk
$$R=\int R(\alpha_i|x)p(x)\,dx$$
- Writing $\lambda_{ij}=\lambda(\alpha_i|\omega_j)$, for a two-class problem we only need to compare the two conditional risks $R(\alpha_i|x)$; expanding them shows it suffices to compare the likelihood ratio $\frac{P(x|\omega_1)}{P(x|\omega_2)}$ against the threshold $\frac{\lambda_{12}-\lambda_{22}}{\lambda_{21}-\lambda_{11}}\times \frac{P(\omega_2)}{P(\omega_1)}$, deciding $\omega_1$ when the ratio is larger.
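Here is a minimal sketch of this two-class rule, assuming made-up 1-D Gaussian class-conditional densities and an invented loss matrix:
```python
import numpy as np

# Hypothetical setup: 1-D Gaussian class-conditional densities and a made-up loss matrix.
lam = np.array([[0.0, 2.0],    # lam[i, j] = loss of deciding alpha_(i+1)
                [1.0, 0.0]])   # when the true class is omega_(j+1)
prior = np.array([0.6, 0.4])   # P(omega_1), P(omega_2)

def gauss_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

def decide(x):
    # Likelihood ratio P(x|omega_1) / P(x|omega_2), with assumed class densities...
    ratio = gauss_pdf(x, 0.0, 1.0) / gauss_pdf(x, 2.0, 1.0)
    # ...compared against the loss-weighted prior ratio derived above.
    threshold = (lam[0, 1] - lam[1, 1]) / (lam[1, 0] - lam[0, 0]) * prior[1] / prior[0]
    return 1 if ratio > threshold else 2

print(decide(0.5), decide(1.8))  # 1 2
```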
2.2.2 0-1 Loss
- A simple loss definition assigns 0 when the classification is correct and 1 when it is wrong, i.e.
$$\lambda_{ij}=\begin{cases}
0 & (i=j) \\
1 & (i\neq j)
\end{cases}$$
Substituting this into the conditional risk expression, we obtain
$$R(\alpha_i|x)=\sum\limits_{j=1}^{c}\lambda(\alpha_i|\omega_j)P(\omega_j|x)=\sum\limits_{j\neq i}P(\omega_j|x)=1-P(\omega_i|x)$$
so making a decision only requires comparing the posteriors $P(\omega_i|x)$. By Bayes' formula, since $P(x)$ is common to all classes, this reduces to comparing $P(x|\omega_i)P(\omega_i)$, so we turn to computing that quantity next.
2.2.3 Code Implementation
- After finishing this part I attempted the assignment from Cai Deng's ML course: implementing a minimal Bayesian binary classifier on the provided skeleton. The parts to write were the likelihood computation, the posterior computation, and the counting of misclassifications and risk. The assignment has three stages: implement the likelihood and classify based on it; implement the posterior and classify based on it, counting and comparing the misclassifications of the two rules; and finally compute the risk. My main takeaways were:
- The samples in this assignment are discrete and bounded, so we can first find the range of sample values, then tally the distribution of the samples before doing the remaining computation.
- When computing the likelihood, what we actually need is the proportion of each feature value within the current class, so dividing the count of each feature value by the class's total sample count gives the likelihood.
- When computing the posterior, $P(\omega_j)$ is the fraction of all samples in each class, while $P(x)$ for a given feature value x is the sum over classes of likelihood times prior. In other words, $P(x)$ is not a single number: each feature value has its own $P(x)$.
- When counting misclassifications, first determine the final class assigned to each feature value according to the decision rule, then traverse the whole test set to find the misclassified samples.
- When computing the risk, each feature value likewise has its own risk.
```python
import numpy as np

# Likelihood computation; classification can be done directly from the likelihood.
def likelihood(x):
    """
    LIKELIHOOD Different Class Feature Likelihood
    INPUT:  x, features of different class, C-By-N numpy array
            C is the number of classes, N is the number of different feature values
    OUTPUT: l, likelihood of each feature value (from smallest to biggest)
            given each class, C-By-N numpy array
    """
    C, N = x.shape
    l = np.zeros((C, N))
    # x holds, for each class, the histogram of feature values, so we first sum
    # each row to get the per-class sample count, then normalize each count by it.
    class_sum = np.sum(x, axis=1)
    for i in range(C):
        for j in range(N):
            l[i, j] = x[i, j] / class_sum[i]
    return l
```
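Incidentally, the double loop above is equivalent to a single broadcast division, which would be the more idiomatic numpy form:
```python
# Same result as the loops: divide each class's row of counts by its row sum.
l = x / np.sum(x, axis=1, keepdims=True)
```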
```python
# Posterior computation via Bayes' formula; the posterior is then used as the
# classification criterion.
def posterior(x):
    """
    POSTERIOR Two Class Posterior Using Bayes Formula
    INPUT:  x, features of different class, C-By-N numpy array
            C is the number of classes, N is the number of different feature values
    OUTPUT: p, posterior of each class given by each feature value, C-By-N numpy array
    """
    C, N = x.shape
    l = likelihood(x)
    total = np.sum(x)
    p = np.zeros((C, N))
    # begin answer
    # Priors: the fraction of all samples that falls in each class.
    class_sum = np.sum(x, axis=1)
    prior = class_sum / total
    # Evidence P(x): one value per feature value, the prior-weighted sum of likelihoods.
    p_x = np.zeros(N)
    for j in range(N):
        for i in range(C):
            p_x[j] += l[i, j] * prior[i]
    for i in range(C):
        for j in range(N):
            p[i, j] = l[i, j] * prior[i] / p_x[j]
    # end answer
    return p
```
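A quick check of the two functions on a made-up 2-class, 3-value histogram (counts invented for illustration):
```python
import numpy as np

# Made-up counts: rows are the two classes, columns are three feature values.
x = np.array([[30, 50, 20],
              [10, 20, 70]])
l = likelihood(x)
p = posterior(x)
print(l.sum(axis=1))  # [1. 1.]    each class's likelihoods sum to 1
print(p.sum(axis=0))  # [1. 1. 1.] posteriors over classes sum to 1 per feature value
```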
2.3 Parameter Estimation
- From the derivation above, Bayesian decisions only require computing $P(x|\omega_i)P(\omega_i)$. The prior $P(\omega_i)$ can be computed directly from the samples, since every sample in supervised learning carries a label, making $P(\omega_i)$ very easy to obtain. The problem is how to compute $P(x|\omega_i)$, i.e. the probability of observing the value x within class $\omega_i$.
- We can estimate $P(x|\omega_i)$ from the samples in dataset D; the two main approaches are Maximum Likelihood Estimation (MLE) and Bayesian estimation.
2.3.1 Normal Distribution
- We first need to recall the normal distribution from probability theory, since it is used in the maximum likelihood estimation below. The normal distribution is also called the Gaussian distribution, having been discovered by Gauss, the Prince of Mathematicians. Its form is as follows:
- For a one-dimensional variable we have:
$$P(x|\mu,\sigma^2)=\mathcal{N}(x|\mu,\sigma^2)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\lbrace -\frac{(x-\mu)^2}{2\sigma^2}\rbrace$$
with $E(x)=\mu$ and $var(x)=\sigma^2$. For a d-dimensional vector $\boldsymbol x$, the multivariate Gaussian is parameterized by a d-dimensional mean vector $\mu$ and a $d\times d$ symmetric positive-definite covariance matrix $\Sigma$:
$$P(\boldsymbol x|\mu,\Sigma)=\mathcal{N}(\boldsymbol x|\mu,\Sigma)=\frac{1}{({2\pi})^{\frac{d}{2}}|\Sigma|^{\frac{1}{2}}}\exp[-\frac{1}{2}(\boldsymbol x-\mu)^{T}\Sigma^{-1}(\boldsymbol x-\mu)]$$
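As a sketch, this density can be evaluated directly with numpy (the 2-D $\mu$ and $\Sigma$ below are made up for illustration):
```python
import numpy as np

def gaussian_density(x, mu, sigma):
    """Evaluate the d-dimensional Gaussian pdf N(x | mu, sigma) at a single point x."""
    d = x.shape[0]
    diff = x - mu
    norm_const = 1.0 / (np.power(2 * np.pi, d / 2) * np.sqrt(np.linalg.det(sigma)))
    return norm_const * np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff)

mu = np.array([0.0, 0.0])
sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
print(gaussian_density(np.array([0.5, -0.5]), mu, sigma))
```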
2.3.2 Parameter Estimation with MLE
- We assume that within each class the samples of the variable to classify follow a Gaussian distribution; then
$$P(\omega_i|x)\propto P(x|\omega_i)P(\omega_i)=P(x|\omega_i,\theta_i)P(\omega_i),\quad \theta_i=(\mu_i,\sigma_i)$$
where $\theta_i$ is the parameter to be estimated. Let $D_i$ denote the subset of the dataset D belonging to class $\omega_i$; its likelihood with respect to $\theta_i$ is
$$P(D_i|\theta_i)=\prod\limits_{x_k\in D_i}P(x_k|\theta_i)$$
- The basic idea of maximum likelihood is to make the likelihood of this dataset as large as possible; the parameter value at the maximum is our estimate, because it maximizes the probability of the samples observed in that class. To avoid numerical underflow from a product of small values, we use the log-likelihood
$$l(\theta) = \ln P(D_i|\theta) = \sum\limits_{x_k\in D_i}\ln P(x_k|\theta_i)$$
- Our goal is thus $\theta^*=\arg\max\limits_{\theta} l(\theta)$. Since we have assumed that x within a class follows a normal distribution,
$$\begin{aligned}
\ln P(x_k|\theta_i) & = \ln (\frac{1}{({2\pi})^{\frac{d}{2}}|\Sigma|^{\frac{1}{2}}}\exp[-\frac{1}{2}(x_k-\mu_i)^{T}\Sigma^{-1}(x_k-\mu_i)])\\
& = -\frac{d}{2}\ln (2\pi)-\frac{1}{2}\ln |\Sigma|-\frac{1}2 (x_k-\mu_i)^{T}\Sigma^{-1}(x_k-\mu_i)
\end{aligned}$$
- Taking the partial derivative with respect to $\mu_i$ gives
$$\frac{\partial \ln P(x_k|\theta_i)}{\partial \mu_i}=\Sigma^{-1}(x_k-\mu_i)$$
To maximize the log-likelihood, we set its derivative to zero, which gives the estimate of $\mu_i$ (with $n=|D_i|$):
$$\sum_{x_k\in D_i}\Sigma^{-1}(x_k-\mu_i)=0\Rightarrow \mu_i=\frac{1}{n}\sum_{x_k\in D_i} x_k$$
- Similarly, differentiating with respect to $\Sigma$ (written here for the one-dimensional case, where $\Sigma$ is a scalar variance) gives
$$\frac{\partial \ln P(x_k|\theta_i)}{\partial \Sigma}=-\frac{1}{2\Sigma}+\frac{1}{2\Sigma^2}(x_k-\mu_i)(x_k-\mu_i)^T$$
Setting the sum of these derivatives over $D_i$ to zero yields the estimate of $\Sigma$:
$$\hat{\Sigma}=\frac{1}{n}\sum\limits_{x_k\in D_i}(x_k-\mu_i)(x_k-\mu_i)^T$$
- Substituting these estimates into the Gaussian density yields the likelihood, from which the posterior can then be computed.
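A minimal sketch of these estimators on synthetic data; note the $1/n$ normalization, which differs from numpy's default unbiased `np.cov`:
```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic class subset D_i: 1000 draws from a known 2-D Gaussian.
samples = rng.multivariate_normal([1.0, -2.0], [[2.0, 0.5], [0.5, 1.0]], size=1000)

n = samples.shape[0]
mu_hat = samples.mean(axis=0)          # MLE mean: the sample average
centered = samples - mu_hat
sigma_hat = centered.T @ centered / n  # MLE covariance: 1/n, not 1/(n-1)
print(mu_hat)     # close to [1, -2]
print(sigma_hat)  # close to [[2, 0.5], [0.5, 1]]
```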
2.3.3 Code Implementation of Parameter Estimation
- Below is the code that uses the Gaussian parameters to compute Bayesian posteriors:
```python
import numpy as np

def gaussian_pos_prob(X, Mu, Sigma, Phi):
    """
    GAUSSIAN_POS_PROB Posterior probability of GDA.
    Compute the posterior probability of given N data points X
    using Gaussian Discriminant Analysis where the K gaussian distributions
    are specified by Mu, Sigma and Phi.
    Inputs:
        'X'     - M-by-N numpy array, N data points of dimension M.
        'Mu'    - M-by-K numpy array, mean of K Gaussian distributions.
        'Sigma' - M-by-M-by-K numpy array (yes, a 3D matrix), variance matrix of
                  K Gaussian distributions.
        'Phi'   - 1-by-K numpy array, prior of K Gaussian distributions.
    Outputs:
        'p'     - N-by-K numpy array, posterior probability of N data points
                  within K Gaussian distributions.
    """
    N = X.shape[1]
    K = Phi.shape[0]
    p = np.zeros((N, K))
    # First compute the likelihood of each point under each Gaussian.
    likelihood = np.zeros((N, K))
    for i in range(N):
        p_x = 0
        for j in range(K):
            x_minus_mu = X[:, i] - Mu[:, j]
            sigma = Sigma[:, :, j]
            det_sigma = np.linalg.det(sigma)
            inv_sigma = np.linalg.inv(sigma)
            # Normalization constant; note (2*pi)^(d/2) = 2*pi here because the
            # assignment data is 2-dimensional (M = 2).
            base = 1.0 / (2 * np.pi * np.sqrt(np.abs(det_sigma)))
            exponent = np.matmul(np.matmul(x_minus_mu.T, inv_sigma), x_minus_mu) * -0.5
            likelihood[i, j] = base * np.exp(exponent)
            p_x += likelihood[i, j] * Phi[j]
        for j in range(K):
            p[i, j] = likelihood[i, j] * Phi[j] / p_x
    return p
```
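A quick sanity check with two made-up 2-D Gaussians (means, covariances, and priors invented for illustration; Phi is passed as a flat length-K array, matching how the function indexes it):
```python
import numpy as np

# Two made-up 2-D Gaussians with equal priors.
Mu = np.array([[0.0, 3.0],
               [0.0, 3.0]])    # column j is the mean of Gaussian j
Sigma = np.stack([np.eye(2), np.eye(2)], axis=2)  # M-by-M-by-K
Phi = np.array([0.5, 0.5])
X = np.array([[0.1, 2.9],
              [-0.2, 3.1]])    # column i is test point i

p = gaussian_pos_prob(X, Mu, Sigma, Phi)
print(p)              # row 0 favors class 0, row 1 favors class 1
print(p.sum(axis=1))  # each row sums to 1
```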
2.3.4 Bayesian Estimation
- MLE is the frequentist school's method, while Bayesian estimation is the Bayesian school's. The difference is that MLE treats the estimated parameter as a fixed value, whereas the Bayesian view treats it as a random variable. Conditioning on the training set D, we have
$$P(\omega_i|x,D)=\frac{P(x|\omega_i,D)P(\omega_i|D)}{\sum_j P(x|\omega_j,D)P(\omega_j|D)}$$
Since each class-conditional density depends only on that class's own subset $D_i$, and the priors do not depend on D, this simplifies to
$$P(\omega_i|x,D)=\frac{P(x|\omega_i,D_i)P(\omega_i)}{\sum_j P(x|\omega_j,D_j)P(\omega_j)}$$
2.4 Naive Bayes
- The basic idea of the naive Bayes classifier: since the difficulty is that $P(x|\omega)$ involves the joint probability over all attributes of x, which is hard to estimate, we reduce the difficulty of computing that joint probability to a minimum by assuming that all attributes (also called features) of x are mutually independent. For a d-dimensional sample, Bayes' formula then becomes
$$P(\omega|x)=\frac{P(\omega)P(x|\omega)}{P(x)}=\frac{P(\omega)}{P(x)}\prod_{i=1}^d P(x_i|\omega)$$
- As before, P(x) is the same for every class, so the naive Bayes classifier's decision rule is
$$h_{nb}(x)=\arg\max_{c\in Y}P(c)\prod_{i=1}^{d}P(x_i|c)$$
In the training set D, let $D_c$ denote the set of samples belonging to class c; the class prior is then estimated as
$$P(c)=\frac{|D_c|}{|D|}$$
- For a discrete attribute, let $D_{c,x_i}$ denote the subset of $D_c$ whose i-th feature takes the value $x_i$; the conditional probability can then be estimated as
$$P(x_i|c)=\frac{|D_{c,x_i}|}{|D_c|}$$
- Laplace smoothing: samples are limited, so not every feature value necessarily appears in the training set, especially since naive Bayes's full independence assumption makes the number of possible feature combinations very large. Assuming the feature values are roughly uniformly distributed, we pretend each of the K distinct values of a feature occurs one extra time, so that every value can appear. The estimate then becomes:
$$P(x_i|c)=\frac{|D_{c,x_i}|+1}{|D_c|+K}$$
where K is the number of distinct values the i-th feature can take.
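As a closing sketch, here is the smoothed estimate computed on a toy discrete dataset (data and helper name invented for illustration):
```python
import numpy as np

# Toy discrete dataset (invented): rows are samples, columns are features.
X = np.array([[0, 1],
              [0, 0],
              [1, 1],
              [2, 0]])
y = np.array([0, 0, 1, 1])

def smoothed_conditional(X, y, c, i, v, K):
    """Laplace-smoothed estimate of P(x_i = v | class c); K = #values of feature i."""
    Dc = X[y == c]
    return (np.sum(Dc[:, i] == v) + 1) / (len(Dc) + K)

# Feature 0 never takes value 2 in class 0, yet its smoothed probability stays nonzero:
print(smoothed_conditional(X, y, c=0, i=0, v=2, K=3))  # (0+1)/(2+3) = 0.2
```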