Lazy vs. Eager Learning
- Lazy learning (e.g., instance-based learning): Simply stores the training data (or performs only minor processing) and waits until it is given a test tuple
- Eager learning (the discussed methods so far): Given a set of training tuples, constructs a classification model before receiving new (e.g., test) data to classify
- Lazy: less time in training but more time in predicting
- Accuracy
- Lazy methods effectively use a richer hypothesis space, since they use many local linear functions to form an implicit global approximation to the target function
- Eager: must commit to a single hypothesis that covers the entire instance space
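The lazy-vs-eager trade-off above can be illustrated with a minimal k-nearest-neighbor classifier: "training" merely stores the tuples, and all the distance computation happens at prediction time. This is an illustrative sketch; the class name and parameters are not from the slides.

```python
import math
from collections import Counter

class KNNClassifier:
    """Minimal lazy learner: fit() only stores the data (illustrative sketch)."""

    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        # Lazy: no model is constructed; the training tuples are simply stored.
        self.X, self.y = list(X), list(y)
        return self

    def predict(self, query):
        # All the work happens now: compute Euclidean distances to every
        # stored tuple, take the k nearest, and let them vote.
        dists = sorted(
            (math.dist(query, x), label) for x, label in zip(self.X, self.y)
        )
        votes = Counter(label for _, label in dists[: self.k])
        return votes.most_common(1)[0][0]
```

Note that prediction scans the entire training set, which is exactly why lazy learners spend "more time in predicting".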
Lazy Learner: Instance-Based Methods
- Instance-based learning:
- Store training examples and delay the processing (“lazy evaluation”) until a new instance must be classified
- Typical approaches
- k-nearest neighbor approach (KNN)
- Instances represented as points in a Euclidean space
- Locally weighted regression
- Constructs a local approximation to the target function
- Case-based reasoning
- Uses symbolic representations and knowledge-based inference
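Locally weighted regression, listed above, can be sketched for the one-dimensional case: for each query point, fit a weighted straight line in which nearby training points receive higher weight via a Gaussian kernel. The function name and the bandwidth parameter `tau` are assumptions for illustration.

```python
import math

def locally_weighted_fit(xs, ys, x_query, tau=1.0):
    """Predict at x_query with a weighted linear fit (illustrative sketch).

    tau is the kernel bandwidth: smaller tau -> more local approximation.
    """
    # Gaussian kernel weights: training points near the query dominate.
    w = [math.exp(-(x - x_query) ** 2 / (2 * tau ** 2)) for x in xs]
    sw = sum(w)
    # Weighted means of x and y.
    xm = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    ym = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    # Closed-form weighted least-squares slope for a line y = a + b*x.
    num = sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, xs, ys))
    den = sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, xs))
    slope = num / den if den else 0.0
    return ym + slope * (x_query - xm)  # value of the local line at the query
```

Because a fresh fit is performed per query, the many local lines together form the implicit global approximation described earlier.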
- k-nearest neighbor approach (KNN)