
The DINA Model Explained

1 Introduction

In recent years, the number of exercise resources in online education has grown explosively, making it hard for students to find suitable exercises in such a massive pool; exercise recommendation methods aimed at students have therefore emerged. Some modern recommender systems apply the matrix factorization techniques of model-based collaborative filtering, but these offer poor interpretability, so they are often combined with cognitive diagnosis models. In cognitive psychology, cognitive diagnosis models can model a student's cognitive state well at the level of individual knowledge points. Existing cognitive diagnosis models are either continuous or discrete. Item Response Theory (IRT) is a typical continuous cognitive diagnosis model: from students' response records, it jointly models items and students to infer item parameters and students' latent ability. The DINA model (Deterministic Inputs, Noisy "And" gate model) is a typical discrete cognitive diagnosis model. It describes each student as a multidimensional knowledge-point mastery vector and performs diagnosis directly from the student's observed responses. The DINA model is attractive because it is simple, its parameters are easy to interpret, and its complexity does not grow with the number of attributes.

2 The DINA Model

The DINA (deterministic input, noisy "and" gate) model discussed here is a latent classification model within the family of cognitive diagnosis models, suited to the cognitive diagnosis of tests with dichotomously scored items. Let $X_{ij}$ denote the response of student $i$ ($i = 1,\dots,I$) to item $j$ ($j = 1,\dots,J$): $X_{ij} = 1$ means the answer is correct, and $X_{ij} = 0$ means it is incorrect. Let $\alpha_i = \{\alpha_{i1}, \alpha_{i2}, \cdots, \alpha_{iK}\}$ be the vector of attributes mastered by student $i$, where $K$ is the total number of relevant attributes; $\alpha_{ik} = 1$ means student $i$ has mastered attribute (knowledge point) $k$, and $\alpha_{ik} = 0$ means they have not. Attributes may be skills, representations, or cognitive processes. Most cognitive diagnosis models require a $J \times K$ matrix $Q$ whose entries are 0 or 1. The element in row $j$ and column $k$ of $Q$, written $q_{jk}$, indicates whether attribute $k$ is required to answer item $j$ correctly: $q_{jk} = 1$ means it is required, $q_{jk} = 0$ means it is not. The $Q$ matrix is therefore central: it can be viewed as a cognitive design matrix that explicitly specifies the cognitive requirements of every item. For convenience of the derivations below, the notation is summarized in the following table.

Notation

| Symbol | Description |
| --- | --- |
| $X$ | student-item score matrix |
| $X_{ij}$ | score of student $i$ on item $j$ |
| $Q$ | knowledge-point (item-skill) matrix |
| $q_{jk}$ | whether item $j$ tests knowledge point $k$ |
| $\alpha_i$ | knowledge-point mastery vector of student $i$ |
| $\alpha_{ik}$ | mastery of knowledge point $k$ by student $i$ |
| $\eta_{ij}$ | latent (ideal) response of student $i$ to item $j$ |
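As a concrete illustration (the numbers here are hypothetical, not from the original post), consider $J = 2$ items and $K = 2$ knowledge points with

$$Q = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$$

Item 1 tests only knowledge point 1, while item 2 tests both. A student with $\alpha_i = (1, 0)$ would ideally answer item 1 correctly but item 2 incorrectly; the slip and guess parameters introduced below make this ideal response noisy.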

The latent response of student $i$ to item $j$ is

$$\eta_{ij} = \prod_{k = 1}^{K} \alpha_{ik}^{q_{jk}} \tag{2.1}$$

$\eta_{ij} = 1$ indicates that student $i$ answers item $j$ correctly; by Eq. (2.1) this means student $i$ has mastered every knowledge point covered by item $j$. $\eta_{ij} = 0$ indicates an incorrect response: student $i$ has failed to master at least one knowledge point of item $j$.
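Eq. (2.1) says $\eta_{ij} = 1$ exactly when student $i$ masters every knowledge point required by item $j$; a minimal NumPy sketch (array names and toy values are my own):

```python
import numpy as np

# alpha: I x K binary mastery matrix; Q: J x K binary item-skill matrix.
alpha = np.array([[1, 0],
                  [1, 1]])          # 2 students, 2 knowledge points
Q = np.array([[1, 0],
              [1, 1]])              # 2 items, 2 knowledge points

# eta[i, j] = prod_k alpha[i, k] ** Q[j, k]  (Eq. 2.1)
# Equivalently: 1 iff student i masters every skill required by item j.
eta = np.all(alpha[:, None, :] >= Q[None, :, :], axis=2).astype(int)
print(eta)  # [[1 0]
            #  [1 1]]
```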
The DINA model combines the item-knowledge-point matrix $Q$ with the student response matrix $X$ to model each student, and introduces two item parameters (slip and guess):

- $s_j$: the probability that a student who has mastered all knowledge points tested by item $j$ nevertheless answers it incorrectly;
- $g_j$: the probability that a student who has not mastered all knowledge points tested by item $j$ answers it correctly by guessing.

Given the knowledge-point mastery vector $\alpha_i$ of student $i$, the probability of answering item $j$ correctly is

$$P_j(\alpha_i) = P(X_{ij} = 1 \mid \alpha_i) = g_j^{1 - \eta_{ij}} (1 - s_j)^{\eta_{ij}} \tag{2.2}$$
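Once $\eta$ is known, Eq. (2.2) reduces to a two-case lookup: $1 - s_j$ when $\eta_{ij} = 1$, $g_j$ otherwise. A sketch continuing the NumPy example above (the slip/guess values are made up for illustration):

```python
import numpy as np

def correct_prob(eta, s, g):
    """P(X_ij = 1 | alpha_i) per Eq. (2.2): (1 - s_j) if eta_ij = 1, else g_j."""
    # eta: I x J ideal responses; s, g: length-J slip/guess vectors.
    return np.where(eta == 1, 1.0 - s, g)

eta = np.array([[1, 0],
                [1, 1]])
s = np.array([0.1, 0.2])   # illustrative slip parameters
g = np.array([0.2, 0.25])  # illustrative guess parameters
print(correct_prob(eta, s, g))  # [[0.9  0.25]
                                #  [0.9  0.8 ]]
```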

3 Marginal Maximum Likelihood Estimation of the DINA Model Parameters

The DINA model specifies the conditional distribution of the response data $X_{ij}$ given the student's knowledge-point mastery vector $\alpha_i$ (also called the attribute vector). Following De La Torre (2009), assume that, conditional on the attribute vector, a student's responses to the items are independent; the conditional distribution of $X_i$ is then
$$L(X_i \mid \alpha_i) = \prod_{j = 1}^{J} P_j(\alpha_i)^{X_{ij}} \left[ 1 - P_j(\alpha_i) \right]^{1 - X_{ij}} \tag{2.3}$$
For all $I$ students, the conditional distribution of the response data $X$ is
$$L(X \mid \alpha) = \prod_{i = 1}^{I} L(X_i \mid \alpha_i) = \prod_{i = 1}^{I} \prod_{j = 1}^{J} P_j(\alpha_i)^{X_{ij}} \left[ 1 - P_j(\alpha_i) \right]^{1 - X_{ij}} \tag{2.4}$$
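Eqs. (2.3)-(2.4) translate directly into code; a hedged NumPy sketch (the function name and the use of log-space to avoid numerical underflow are my own choices):

```python
import numpy as np

def cond_loglik(X, eta, s, g):
    """log L(X_i | alpha_i) for each student, per Eqs. (2.2)-(2.4).
    X, eta: I x J binary arrays; s, g: length-J slip/guess vectors."""
    P = np.where(eta == 1, 1.0 - s, g)                      # Eq. (2.2)
    # log of Eq. (2.3): sum_j [ X_ij log P + (1 - X_ij) log(1 - P) ]
    return (X * np.log(P) + (1 - X) * np.log(1 - P)).sum(axis=1)
```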
Next, to obtain $\hat \beta = (\hat s_1, \hat g_1, \cdots, \hat s_J, \hat g_J)$, we write the marginal likelihood of the response data:
$$L(X) = \prod_{i = 1}^{I} L(X_i) = \prod_{i = 1}^{I} \sum_{l = 1}^{L} L(X_i \mid \alpha_l) P(\alpha_l) \tag{2.5}$$
Here $L(X_i)$ is the marginal likelihood of the response vector $X_i$ of student $i$, $P(\alpha_l)$ is the prior probability of attribute vector $\alpha_l$, and $L = 2^K$ is the number of possible attribute patterns. Parameter estimation is carried out with the EM algorithm on this marginal likelihood; the detailed estimation procedure for the DINA model is given next.
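For small $K$ the sum over all $L = 2^K$ attribute patterns in Eq. (2.5) can be enumerated directly; a sketch under the same assumptions as above (the uniform prior is purely illustrative):

```python
import itertools
import numpy as np

def marginal_loglik(X, Q, s, g, prior=None):
    """log L(X) per Eq. (2.5), summing over all 2^K attribute patterns."""
    K = Q.shape[1]
    patterns = np.array(list(itertools.product([0, 1], repeat=K)))   # L x K
    if prior is None:
        prior = np.full(len(patterns), 1.0 / len(patterns))          # uniform P(alpha_l)
    eta = np.all(patterns[:, None, :] >= Q[None, :, :], axis=2)      # L x J
    P = np.where(eta, 1.0 - s, g)                                    # L x J, Eq. (2.6)
    # lik[i, l] = L(X_i | alpha_l), Eq. (2.3)
    lik = np.prod(P[None, :, :] ** X[:, None, :]
                  * (1 - P[None, :, :]) ** (1 - X[:, None, :]), axis=2)
    return np.log(lik @ prior).sum()                                 # Eq. (2.5), in log form
```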
The correct-response probability $P_j(\alpha_i)$ in Eq. (2.2) can be rewritten as

$$P_j(\alpha_l) = \begin{cases} g_j & \text{if } \alpha_l' q_j \ne q_j' q_j \\ 1 - s_j & \text{if } \alpha_l' q_j = q_j' q_j \end{cases} \tag{2.6}$$

where $q_j$ is the $j$-th row of $Q$. When $\alpha_l' q_j = q_j' q_j$, the attribute pattern $\alpha_l$ covers every knowledge point required by item $j$ and $\eta_{ij} = 1$; when $\alpha_l' q_j \ne q_j' q_j$, $\eta_{ij} = 0$. For convenience, let $\beta_{j0} = g_j$ and $\beta_{j1} = s_j$. To obtain the maximum likelihood estimate of $\beta$, we write the log-likelihood function:

$$l(X) = \log \prod_{i = 1}^{I} L(X_i) = \sum_{i = 1}^{I} \log L(X_i) \tag{2.7}$$
Then

$$\frac{\partial l(X)}{\partial \beta_{j\eta}} = \sum_{i = 1}^{I} \frac{\partial L(X_i)}{\partial \beta_{j\eta}} \Big/ L(X_i) = \sum_{i = 1}^{I} \frac{1}{L(X_i)} \sum_{l = 1}^{L} P(\alpha_l) \frac{\partial L(X_i \mid \alpha_l)}{\partial \beta_{j\eta}} \tag{2.8}$$

Now

$$\frac{\partial L(X_i \mid \alpha_l)}{\partial \beta_{j\eta}} = \prod_{j' \ne j} P_{j'}(\alpha_l)^{X_{ij'}} \left[ 1 - P_{j'}(\alpha_l) \right]^{1 - X_{ij'}} \cdot \frac{\partial\, P_j(\alpha_l)^{X_{ij}} \left[ 1 - P_j(\alpha_l) \right]^{1 - X_{ij}}}{\partial \beta_{j\eta}} \tag{2.9}$$

The fraction on the right-hand side of Eq. (2.9) equals

$$\begin{aligned}
&[1 - P_j(\alpha_l)]^{1 - X_{ij}}\, X_{ij}\, P_j(\alpha_l)^{X_{ij} - 1} \frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} + P_j(\alpha_l)^{X_{ij}} (1 - X_{ij}) [1 - P_j(\alpha_l)]^{1 - X_{ij} - 1} \frac{-\,\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} \\
&\quad = P_j(\alpha_l)^{X_{ij}} [1 - P_j(\alpha_l)]^{1 - X_{ij}} \frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} \left[ \frac{X_{ij}}{P_j(\alpha_l)} - \frac{1 - X_{ij}}{1 - P_j(\alpha_l)} \right] \\
&\quad = P_j(\alpha_l)^{X_{ij}} [1 - P_j(\alpha_l)]^{1 - X_{ij}} \frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} \left[ \frac{X_{ij} - P_j(\alpha_l)}{P_j(\alpha_l)\,(1 - P_j(\alpha_l))} \right]
\end{aligned} \tag{2.10}$$

Substituting (2.10) into (2.9) gives

$$\begin{aligned}
\frac{\partial L(X_i \mid \alpha_l)}{\partial \beta_{j\eta}} &= \left[ \prod_{j = 1}^{J} P_j(\alpha_l)^{X_{ij}} [1 - P_j(\alpha_l)]^{1 - X_{ij}} \right] \frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} \left[ \frac{X_{ij} - P_j(\alpha_l)}{P_j(\alpha_l)(1 - P_j(\alpha_l))} \right] \\
&= L(X_i \mid \alpha_l)\, \frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} \left[ \frac{X_{ij} - P_j(\alpha_l)}{P_j(\alpha_l)(1 - P_j(\alpha_l))} \right]
\end{aligned} \tag{2.11}$$

Substituting (2.11) into (2.8) yields

$$\begin{aligned}
\frac{\partial l(X)}{\partial \beta_{j\eta}} &= \sum_{l = 1}^{L} \frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} \left[ \frac{1}{P_j(\alpha_l)(1 - P_j(\alpha_l))} \right] \sum_{i = 1}^{I} \frac{L(X_i \mid \alpha_l) P(\alpha_l)}{L(X_i)} \left[ X_{ij} - P_j(\alpha_l) \right] \\
&= \sum_{l = 1}^{L} \frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} \left[ \frac{1}{P_j(\alpha_l)(1 - P_j(\alpha_l))} \right] \sum_{i = 1}^{I} P(\alpha_l \mid X_i) \left[ X_{ij} - P_j(\alpha_l) \right] \\
&= \sum_{l = 1}^{L} \frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} \left[ \frac{1}{P_j(\alpha_l)(1 - P_j(\alpha_l))} \right] \left[ \sum_{i = 1}^{I} P(\alpha_l \mid X_i) X_{ij} - P_j(\alpha_l) \sum_{i = 1}^{I} P(\alpha_l \mid X_i) \right] \\
&= \sum_{l = 1}^{L} \frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} \left[ \frac{1}{P_j(\alpha_l)(1 - P_j(\alpha_l))} \right] \left[ R_{jl} - P_j(\alpha_l) I_l \right]
\end{aligned} \tag{2.12}$$

Here $P(\alpha_l \mid X_i)$ is the posterior probability that student $i$ has the $l$-th attribute vector,

$$I_l = \sum_{i = 1}^{I} P(\alpha_l \mid X_i)$$

is the expected number of students with attribute pattern $\alpha_l$, and

$$R_{jl} = \sum_{i = 1}^{I} P(\alpha_l \mid X_i) X_{ij}$$

is the expected number of students with attribute pattern $\alpha_l$ who answer item $j$ correctly.
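These posterior-weighted counts are exactly what the E-step of the EM algorithm computes; a NumPy sketch continuing the conventions above (variable names are my own):

```python
import numpy as np

def e_step(X, P, prior):
    """Posterior P(alpha_l | X_i) and expected counts I_l, R_jl (Eq. 2.12).
    X: I x J responses; P: L x J matrix of P_j(alpha_l); prior: length-L P(alpha_l)."""
    # lik[i, l] = L(X_i | alpha_l), Eq. (2.3)
    lik = np.prod(P[None, :, :] ** X[:, None, :]
                  * (1 - P[None, :, :]) ** (1 - X[:, None, :]), axis=2)
    post = lik * prior                       # unnormalized posterior, I x L
    post /= post.sum(axis=1, keepdims=True)  # P(alpha_l | X_i)
    I_l = post.sum(axis=0)                   # expected count per pattern, length L
    R_jl = X.T @ post                        # J x L expected correct counts
    return post, I_l, R_jl
```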

For item $j$, Eq. (2.12) can be written as

$$\begin{aligned}
\frac{\partial l(X)}{\partial \beta_{j\eta}} &= \sum_{\{\alpha_l : \alpha_l' q_j \ne q_j' q_j\}} \frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} \left[ \frac{1}{P_j(\alpha_l)(1 - P_j(\alpha_l))} \right] \left[ R_{jl} - P_j(\alpha_l) I_l \right] \\
&\quad + \sum_{\{\alpha_l : \alpha_l' q_j = q_j' q_j\}} \frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} \left[ \frac{1}{P_j(\alpha_l)(1 - P_j(\alpha_l))} \right] \left[ R_{jl} - P_j(\alpha_l) I_l \right] \\
&= \frac{\partial g_j}{\partial \beta_{j\eta}} \left[ \frac{1}{g_j (1 - g_j)} \right] \sum_{\{\alpha_l : \alpha_l' q_j \ne q_j' q_j\}} \left[ R_{jl} - g_j I_l \right] + \frac{\partial (1 - s_j)}{\partial \beta_{j\eta}} \left[ \frac{1}{(1 - s_j) s_j} \right] \sum_{\{\alpha_l : \alpha_l' q_j = q_j' q_j\}} \left[ R_{jl} - (1 - s_j) I_l \right] \\
&= \frac{\partial g_j}{\partial \beta_{j\eta}} \left[ \frac{1}{g_j (1 - g_j)} \right] \left[ R_{jl}^{(0)} - g_j I_{jl}^{(0)} \right] + \frac{\partial (1 - s_j)}{\partial \beta_{j\eta}} \left[ \frac{1}{(1 - s_j) s_j} \right] \left[ R_{jl}^{(1)} - (1 - s_j) I_{jl}^{(1)} \right]
\end{aligned} \tag{2.13}$$

When $\eta = 0$ ($\beta_{j0} = g_j$), the first summand on the right-hand side of Eq. (2.13) has $\frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} = 1$, while the second summand has $\frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} = 0$. Setting Eq. (2.13) to 0 then yields the estimate of the guessing parameter of item $j$:

$$\frac{1}{g_j (1 - g_j)} \left[ R_{jl}^{(0)} - g_j I_{jl}^{(0)} \right] = 0 \tag{2.14}$$

Since $\frac{1}{g_j (1 - g_j)} \ne 0$, the above gives

$$\hat g_j = R_{jl}^{(0)} \big/ I_{jl}^{(0)} \tag{2.15}$$

Similarly, when $\eta = 1$ ($\beta_{j1} = s_j$), the first summand on the right-hand side of Eq. (2.13) has $\frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} = 0$, while the second summand has $\frac{\partial P_j(\alpha_l)}{\partial \beta_{j\eta}} = \frac{\partial (1 - s_j)}{\partial s_j} = -1$. Setting Eq. (2.13) to 0 then yields the estimate of the slip parameter of item $j$:

$$-\frac{1}{(1 - s_j) s_j} \left[ R_{jl}^{(1)} - (1 - s_j) I_{jl}^{(1)} \right] = 0 \tag{2.16}$$

Since $\frac{1}{(1 - s_j) s_j} \ne 0$, the above gives

$$\hat s_j = \left[ I_{jl}^{(1)} - R_{jl}^{(1)} \right] \big/ I_{jl}^{(1)} \tag{2.17}$$

The detailed algorithm proceeds as follows.

Step 1: give a set of initial values for $\beta = \{s_1, g_1, \dots, s_J, g_J\}$;

Step 2: using the current $\beta$, compute $I_{jl}^{(0)}, R_{jl}^{(0)}, I_{jl}^{(1)}, R_{jl}^{(1)}$ from the following formulas:

$$P(\alpha_l \mid X_i) = \frac{P(X_i \mid \alpha_l) P(\alpha_l)}{\sum\limits_{l = 1}^{L} P(X_i \mid \alpha_l) P(\alpha_l)}$$

$$R_{jl} = \sum_{i = 1}^{I} P(\alpha_l \mid X_i) X_{ij}, \qquad I_l = \sum_{i = 1}^{I} P(\alpha_l \mid X_i)$$

$$R_{jl}^{(0)} = \sum_{\{\alpha_l : \alpha_l' q_j < q_j' q_j\}} R_{jl}, \qquad I_{jl}^{(0)} = \sum_{\{\alpha_l : \alpha_l' q_j < q_j' q_j\}} I_l$$

$$R_{jl}^{(1)} = \sum_{\{\alpha_l : \alpha_l' q_j = q_j' q_j\}} R_{jl}, \qquad I_{jl}^{(1)} = \sum_{\{\alpha_l : \alpha_l' q_j = q_j' q_j\}} I_l$$

(Since $\alpha_l' q_j \le q_j' q_j$ always holds, the condition $\alpha_l' q_j < q_j' q_j$ here is equivalent to the condition $\alpha_l' q_j \ne q_j' q_j$ used earlier.)

Step 3: using the $I_{jl}^{(0)}, R_{jl}^{(0)}, I_{jl}^{(1)}, R_{jl}^{(1)}$ values obtained in Step 2, compute a new $\beta$ from:

$$\hat s_j = \left[ I_{jl}^{(1)} - R_{jl}^{(1)} \right] \big/ I_{jl}^{(1)}, \qquad \hat g_j = R_{jl}^{(0)} \big/ I_{jl}^{(0)}$$

Step 4: repeat Steps 2 and 3 until every component of $\beta$ has converged.
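Putting the four steps together, here is a compact end-to-end sketch (the post promises a C implementation; this Python/NumPy version is my own reconstruction, with a uniform prior over attribute patterns, illustrative initial values, and an arbitrary convergence tolerance):

```python
import itertools
import numpy as np

def dina_em(X, Q, tol=1e-4, max_iter=500):
    """EM estimation of the DINA slip/guess parameters (Steps 1-4)."""
    J, K = Q.shape
    patterns = np.array(list(itertools.product([0, 1], repeat=K)))        # L x K
    prior = np.full(len(patterns), 1.0 / len(patterns))                   # uniform P(alpha_l)
    eta = np.all(patterns[:, None, :] >= Q[None, :, :], axis=2).astype(float)  # L x J
    s = np.full(J, 0.2)                                                   # Step 1: initial beta
    g = np.full(J, 0.2)
    for _ in range(max_iter):
        # Step 2 (E-step): posterior P(alpha_l | X_i) and expected counts.
        P = np.where(eta == 1, 1.0 - s, g)                                # L x J, Eq. (2.6)
        lik = np.prod(P[None] ** X[:, None] * (1 - P[None]) ** (1 - X[:, None]), axis=2)
        post = lik * prior                                                # I x L
        post /= post.sum(axis=1, keepdims=True)                           # P(alpha_l | X_i)
        I_l = post.sum(axis=0)                                            # length L
        R_jl = X.T @ post                                                 # J x L
        I0 = I_l @ (1 - eta)                                              # I_jl^(0), length J
        R0 = (R_jl * (1 - eta).T).sum(axis=1)                             # R_jl^(0)
        I1 = I_l @ eta                                                    # I_jl^(1)
        R1 = (R_jl * eta.T).sum(axis=1)                                   # R_jl^(1)
        # Step 3 (M-step): closed-form updates, Eqs. (2.15) and (2.17).
        g_new, s_new = R0 / I0, (I1 - R1) / I1
        # Step 4: stop when every component of beta has converged.
        done = max(abs(g_new - g).max(), abs(s_new - s).max()) < tol
        s, g = s_new, g_new
        if done:
            break
    return s, g, post

# Hypothetical usage with synthetic data:
rng = np.random.default_rng(0)
Q = np.array([[1, 0], [0, 1], [1, 1]])
X = rng.integers(0, 2, size=(100, 3))
s_hat, g_hat, posterior = dina_em(X, Q)
```

The returned posterior matrix also gives each student's most likely attribute pattern (`patterns[posterior.argmax(axis=1)]`), which is the diagnostic output of the model.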

A C implementation of this algorithm will be published in a follow-up post.

Reference

De La Torre, Jimmy. "DINA model and parameter estimation: A didactic." *Journal of Educational and Behavioral Statistics* 34.1 (2009): 115-130.