
Ensemble Learning (Part 2) -- 2

Author: 互联网

Bagging: Principles and a Case Study

The idea behind bagging

Bagging does more than aggregate the base models' final predictions: it also uses a sampling strategy during training so that the base models satisfy certain assumptions. By training each base model on a different (bootstrap) sample of the data, bagging increases the diversity among the models.
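The sampling idea can be sketched in a few lines (a minimal illustration of bootstrap resampling, not sklearn's internal implementation; the toy dataset is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(10)  # a toy "dataset" of 10 sample indices

# Each base model trains on a bootstrap sample:
# draw n samples *with replacement* from the original n points.
boot1 = rng.choice(X, size=len(X), replace=True)
boot2 = rng.choice(X, size=len(X), replace=True)

# The two samples generally differ, so the base models see different data,
# which is the source of diversity that bagging averages over.
print(sorted(boot1.tolist()))
print(sorted(boot2.tolist()))
```

Because the draws are with replacement, each bootstrap sample typically contains only about 63% of the unique original points, with the rest duplicated.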

Bagging in sklearn

Sklearn provides two bagging APIs: BaggingRegressor and BaggingClassifier. For both, the default base model is a decision tree, and the split criterion used during node partitioning is the Gini index.
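Since the case study below uses BaggingClassifier, here is a minimal sketch of the regression counterpart for completeness (the synthetic dataset and its parameters are illustrative choices, mirroring the classification example):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import cross_val_score

# synthetic regression dataset (parameters chosen for illustration)
X, y = make_regression(n_samples=1000, n_features=20, n_informative=15,
                       random_state=5)

# default base estimator is a decision tree regressor
model = BaggingRegressor(random_state=1)

# 10-fold cross-validated mean absolute error
# (sklearn scorers are "higher is better", hence the negated MAE)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error',
                         cv=10, n_jobs=-1)
print('MAE: %.3f' % -scores.mean())
```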

Case study

# Create a random classification dataset with 1000 samples and 20 features
# test classification dataset
from sklearn.datasets import make_classification
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, 
                           n_redundant=5, random_state=5)
# summarize the dataset
print(X.shape, y.shape)

# Evaluate the model with repeated stratified k-fold cross-validation: 3 repeats of 10 folds each.
# evaluate bagging algorithm for classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.ensemble import BaggingClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=5)
# define the model
model = BaggingClassifier()
# evaluate the model
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
n_scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise')
# report performance
print('Accuracy: %.3f (%.3f)' % (mean(n_scores), std(n_scores)))
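The example above uses BaggingClassifier with its defaults, which include an ensemble of 10 trees. Ensemble size is usually the first knob to turn: more base models tend to stabilize the averaged prediction, at the cost of training time. A quick sketch comparing a few ensemble sizes (the sizes and the 5-fold CV are arbitrary illustrative choices):

```python
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

# same synthetic dataset as above
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15,
                           n_redundant=5, random_state=5)

# compare a few ensemble sizes (values chosen for illustration)
for n in (10, 50, 100):
    model = BaggingClassifier(n_estimators=n, random_state=1)
    scores = cross_val_score(model, X, y, scoring='accuracy', cv=5, n_jobs=-1)
    print('n_estimators=%d accuracy=%.3f' % (n, mean(scores)))
```

Accuracy typically plateaus once the ensemble is large enough; past that point extra estimators mostly add compute.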

References:

https://github.com/datawhalechina/team-learning-data-mining/tree/master/EnsembleLearning

Source: https://blog.csdn.net/weixin_41577592/article/details/115795466