Ensemble Learning Case Study 1: Happiness Prediction
Background
Happiness is an old and profound topic, something humanity has pursued for generations. The factors related to happiness number in the thousands and differ from person to person: things as large as national policy and livelihood, or as small as a roasted sweet potato by the roadside, can all affect it. Among these tangled factors, can we find the common threads and glimpse what happiness really depends on?
Happiness also occupies an important place in the social sciences. The topic spans philosophy, psychology, sociology, economics and more, and it is both complex and interesting; at the same time it is closely tied to everyday life, and everyone has their own yardstick for it. If we can identify the common factors behind happiness, life may become a little more enjoyable; if we can identify the policy factors that influence it, resources can be allocated to raise national well-being. Current social-science research emphasizes interpretability of variables and the feasibility of future policy, and mainly relies on linear and logistic regression. It has produced a series of findings on socio-economic and demographic factors such as income, health, occupation, social relationships and leisure, as well as macro factors such as government public services, the macroeconomic environment and tax burden.
This case study tackles the classic problem of happiness prediction. The goal is to try algorithmic approaches beyond existing social-science research, combine the strengths of several disciplines, mine potential influencing factors, and uncover more interpretable and understandable relationships.
Concretely, this is a baseline for a data-mining competition on happiness prediction. We use 139 dimensions of information, including individual variables (gender, age, region, occupation, health, marital status, political affiliation, etc.), family variables (parents, spouse, children, family assets, etc.) and social attitudes (fairness, trust, public services, etc.), to predict each respondent's happiness.
The data come from the official Chinese General Social Survey (CGSS), so the source is reliable.
Data Description
The task is to use the 139 features above and roughly 8,000 training samples to predict individual happiness (the label takes the values 1, 2, 3, 4, 5, where 1 is the lowest happiness and 5 the highest).
Because the number of variables is large and some of them interact in complex ways, the data are provided in a full version and an abridged version. You can start with the abridged version to get familiar with the task and then use the full version to mine more information; here I work directly with the full version. The competition also provides an index file that maps each variable to its questionnaire item and explains the meaning of its values, and a survey file containing the original questionnaire as additional background.
Evaluation Metric
The final evaluation metric is the mean squared error (MSE):
$$\mathrm{Score} = \frac{1}{n} \sum_{i=1}^{n} (y_i - y^{*})^{2}$$
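For concreteness, here is a minimal sketch of how this score can be computed with scikit-learn's mean_squared_error (the same function used for the CV scores later in this notebook); the arrays are made-up example values, not competition data:
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([4, 5, 3, 4])      # hypothetical true happiness labels
y_pred = np.array([3.8, 4.6, 3.3, 4.1])  # hypothetical model predictions

score = mean_squared_error(y_true, y_pred)  # mean of the squared residuals
print(score)  # ≈ 0.075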
Import Packages
import os
import time
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn import metrics
from datetime import datetime
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve, mean_squared_error,mean_absolute_error, f1_score
import lightgbm as lgb
import xgboost as xgb
from sklearn.ensemble import RandomForestRegressor as rfr
from sklearn.ensemble import ExtraTreesRegressor as etr
from sklearn.linear_model import BayesianRidge as br
from sklearn.ensemble import GradientBoostingRegressor as gbr
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.linear_model import LinearRegression as lr
from sklearn.linear_model import ElasticNet as en
from sklearn.kernel_ridge import KernelRidge as kr
from sklearn.model_selection import KFold, StratifiedKFold,GroupKFold, RepeatedKFold
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn import preprocessing
import logging
import warnings
warnings.filterwarnings('ignore') # suppress warnings
Load the Dataset
train = pd.read_csv("train.csv", parse_dates=['survey_time'],encoding='latin-1')
test = pd.read_csv("test.csv", parse_dates=['survey_time'],encoding='latin-1') # latin-1 is a superset of ASCII
train = train[train["happiness"]!=-8].reset_index(drop=True) # drop rows whose "happiness" is -8 (invalid answer)
train_data_copy = train.copy()
target_col = "happiness" # target column
target = train_data_copy[target_col]
del train_data_copy[target_col] # remove the target column
data = pd.concat([train_data_copy,test],axis=0,ignore_index=True) # concatenate train and test for joint preprocessing
train.head()
id | happiness | survey_type | province | city | county | survey_time | gender | birth | nationality | ... | neighbor_familiarity | public_service_1 | public_service_2 | public_service_3 | public_service_4 | public_service_5 | public_service_6 | public_service_7 | public_service_8 | public_service_9 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 4 | 1 | 12 | 32 | 59 | 2015-08-04 14:18:00 | 1 | 1959 | 1 | ... | 4 | 50 | 60 | 50 | 50 | 30.0 | 30 | 50 | 50 | 50 |
1 | 2 | 4 | 2 | 18 | 52 | 85 | 2015-07-21 15:04:00 | 1 | 1992 | 1 | ... | 3 | 90 | 70 | 70 | 80 | 85.0 | 70 | 90 | 60 | 60 |
2 | 3 | 4 | 2 | 29 | 83 | 126 | 2015-07-21 13:24:00 | 2 | 1967 | 1 | ... | 4 | 90 | 80 | 75 | 79 | 80.0 | 90 | 90 | 90 | 75 |
3 | 4 | 5 | 2 | 10 | 28 | 51 | 2015-07-25 17:33:00 | 2 | 1943 | 1 | ... | 3 | 100 | 90 | 70 | 80 | 80.0 | 90 | 90 | 80 | 80 |
4 | 5 | 4 | 1 | 7 | 18 | 36 | 2015-08-10 09:50:00 | 2 | 1994 | 1 | ... | 2 | 50 | 50 | 50 | 50 | 50.0 | 50 | 50 | 50 | 50 |
5 rows × 140 columns
test.head()
id | survey_type | province | city | county | survey_time | gender | birth | nationality | religion | ... | neighbor_familiarity | public_service_1 | public_service_2 | public_service_3 | public_service_4 | public_service_5 | public_service_6 | public_service_7 | public_service_8 | public_service_9 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 8001 | 1 | 2 | 2 | 9 | 2015-07-24 10:30:00 | 2 | 1972 | 8 | 0 | ... | 4 | 80 | 80.0 | 60 | 80 | 80 | 80 | 80 | 80 | 80 |
1 | 8002 | 1 | 22 | 66 | 106 | 2015-07-12 15:38:00 | 2 | 1938 | 1 | 1 | ... | 5 | 90 | 80.0 | 80 | 80 | 80 | 80 | 70 | 80 | 80 |
2 | 8003 | 2 | 9 | 22 | 44 | 2015-07-05 09:36:00 | 2 | 1935 | 1 | 1 | ... | 5 | 95 | 95.0 | 80 | 90 | 80 | 95 | 95 | 80 | 90 |
3 | 8004 | 2 | 18 | 52 | 86 | 2015-07-19 10:10:00 | 2 | 1992 | 1 | 1 | ... | 4 | 80 | 80.0 | 70 | 90 | 80 | 80 | 70 | 60 | 50 |
4 | 8005 | 2 | 24 | 70 | 110 | 2015-08-03 11:41:00 | 1 | 1990 | 1 | 1 | ... | -8 | 60 | 50.0 | 0 | 30 | 40 | 50 | 60 | -2 | 60 |
5 rows × 139 columns
train.happiness.describe() # basic statistics of the label
count 7988.000000
mean 3.867927
std 0.818717
min 1.000000
25% 4.000000
50% 4.000000
75% 4.000000
max 5.000000
Name: happiness, dtype: float64
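Besides describe(), it is worth checking the label distribution directly, since the quartiles above (25%, 50% and 75% all equal to 4) suggest the classes are quite imbalanced; a small sketch, output not shown here:
train['happiness'].value_counts().sort_index() # number of respondents choosing each value 1..5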
Data Preprocessing
First we handle the negative values that appear throughout the data. Since the only negative codes are -1, -2, -3 and -8 (different kinds of invalid or missing answers), each of them is counted separately per row. The implementation is as follows:
# add 5 count features
# the csv contains negative codes (-1, -2, -3, -8); treat them as problematic answers but do not drop them
def getres1(row):
return len([x for x in row.values if type(x)==int and x<0])
def getres2(row):
return len([x for x in row.values if type(x)==int and x==-8])
def getres3(row):
return len([x for x in row.values if type(x)==int and x==-1])
def getres4(row):
return len([x for x in row.values if type(x)==int and x==-2])
def getres5(row):
return len([x for x in row.values if type(x)==int and x==-3])
# count negative codes per row
data['neg1'] = data[data.columns].apply(lambda row:getres1(row),axis=1)
data.loc[data['neg1']>20,'neg1'] = 20 # cap at 20 occurrences to smooth the feature
data['neg2'] = data[data.columns].apply(lambda row:getres2(row),axis=1)
data['neg3'] = data[data.columns].apply(lambda row:getres3(row),axis=1)
data['neg4'] = data[data.columns].apply(lambda row:getres4(row),axis=1)
data['neg5'] = data[data.columns].apply(lambda row:getres5(row),axis=1)
Missing-Value Imputation
Next, missing values are filled with fillna(value), where value depends on the specific column: most missing entries are filled with 0, hukou_loc with 1 (its smallest valid code), and family_income with 66365, the mean family income after removing invalid values. Part of the implementation:
# fill missing values: 25 columns contain NaNs; 4 will be dropped later, 21 are filled here
# the following columns have missing values; fill each according to its meaning
data['work_status'] = data['work_status'].fillna(0)
data['work_yr'] = data['work_yr'].fillna(0)
data['work_manage'] = data['work_manage'].fillna(0)
data['work_type'] = data['work_type'].fillna(0)
data['edu_yr'] = data['edu_yr'].fillna(0)
data['edu_status'] = data['edu_status'].fillna(0)
data['s_work_type'] = data['s_work_type'].fillna(0)
data['s_work_status'] = data['s_work_status'].fillna(0)
data['s_political'] = data['s_political'].fillna(0)
data['s_hukou'] = data['s_hukou'].fillna(0)
data['s_income'] = data['s_income'].fillna(0)
data['s_birth'] = data['s_birth'].fillna(0)
data['s_edu'] = data['s_edu'].fillna(0)
data['s_work_exper'] = data['s_work_exper'].fillna(0)
data['minor_child'] = data['minor_child'].fillna(0)
data['marital_now'] = data['marital_now'].fillna(0)
data['marital_1st'] = data['marital_1st'].fillna(0)
data['social_neighbor']=data['social_neighbor'].fillna(0)
data['social_friend']=data['social_friend'].fillna(0)
# these two are not filled with 0
data['hukou_loc']=data['hukou_loc'].fillna(1) # 1 is the smallest valid hukou-location code
data['family_income']=data['family_income'].fillna(66365) # mean family income after removing invalid values
Besides the above, some fields in special formats need extra handling, in particular the time-related ones. Two things are done here: first, each respondent's actual age is computed, since the raw table only records the birth year and the survey time; second, the "continuous" age is discretized into age groups, here six intervals. The implementation:
# 144 + 1 = 145 columns so far
# continue processing the special columns
# variable meanings are documented in happiness_index.xlsx
data['survey_time'] = pd.to_datetime(data['survey_time'], format='%Y-%m-%d',errors='coerce') # errors='coerce' avoids failures on inconsistent time formats
data['survey_time'] = data['survey_time'].dt.year # keep only the year, to compute age
data['age'] = data['survey_time']-data['birth']
# survey year minus birth year
# print(data['age'],data['survey_time'],data['birth'])
# age binning: 145 + 1 = 146 columns
bins = [0,17,26,34,50,63,100]
data['age_bin'] = pd.cut(data['age'], bins, labels=[0,1,2,3,4,5])
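As a quick sanity check of the binning (a standalone sketch with made-up ages, not part of the pipeline), pd.cut assigns each age to one of the six labelled intervals:
import pandas as pd

bins = [0,17,26,34,50,63,100]
example_ages = pd.Series([16, 25, 30, 45, 60, 75])  # hypothetical ages
print(pd.cut(example_ages, bins, labels=[0,1,2,3,4,5]).tolist())
# [0, 1, 2, 3, 4, 5]  -- intervals (0,17], (17,26], (26,34], (34,50], (50,63], (63,100]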
Since family income is continuous, taking the mode is not sensible, so its missing values are filled with the mean instead (see below).
A third approach is to fall back on common sense: for example, a negative value for the "religion" feature is interpreted as "not religious", and the "frequency of religious activity" is set to 1, i.e. never participated. This kind of subjective imputation, essentially answering the questionnaire the way I would myself, is the approach I use most in this step.
# different features get different treatments; this affects all later steps
# religion
data.loc[data['religion']<0,'religion'] = 1 # 1 = not religious
data.loc[data['religion_freq']<0,'religion_freq'] = 1 # 1 = never participated
# education
data.loc[data['edu']<0,'edu'] = 4 # 4 = junior high school
data.loc[data['edu_status']<0,'edu_status'] = 0
data.loc[data['edu_yr']<0,'edu_yr'] = 0
# personal income
data.loc[data['income']<0,'income'] = 0 # treat as no income
# political status
data.loc[data['political']<0,'political'] = 1 # treat as ordinary citizen
# weight: some respondents probably reported kilograms instead of jin, so double those values
data.loc[(data['weight_jin']<=80)&(data['height_cm']>=160),'weight_jin']= data['weight_jin']*2
data.loc[data['weight_jin']<=60,'weight_jin']= data['weight_jin']*2 # no adult weighs 60 jin (30 kg)
# height: clip implausibly small values
data.loc[data['height_cm']<150,'height_cm'] = 150 # realistic adult minimum
# health
data.loc[data['health']<0,'health'] = 4 # 4 = fairly healthy
data.loc[data['health_problem']<0,'health_problem'] = 4
# depression
data.loc[data['depression']<0,'depression'] = 4 # 4 = rarely
# media usage
data.loc[data['media_1']<0,'media_1'] = 1 # 1 = never
data.loc[data['media_2']<0,'media_2'] = 1
data.loc[data['media_3']<0,'media_3'] = 1
data.loc[data['media_4']<0,'media_4'] = 1
data.loc[data['media_5']<0,'media_5'] = 1
data.loc[data['media_6']<0,'media_6'] = 1
# leisure activities (subjective defaults)
data.loc[data['leisure_1']<0,'leisure_1'] = 1
data.loc[data['leisure_2']<0,'leisure_2'] = 5
data.loc[data['leisure_3']<0,'leisure_3'] = 3
For the remaining leisure features, anomalous (negative) values are replaced with the column mode, which is reasonable for these ordinal answers; note that mode() returns a Series, so we take its first element:
data.loc[data['leisure_4']<0,'leisure_4'] = data['leisure_4'].mode()[0] # fill with the mode
data.loc[data['leisure_5']<0,'leisure_5'] = data['leisure_5'].mode()[0]
data.loc[data['leisure_6']<0,'leisure_6'] = data['leisure_6'].mode()[0]
data.loc[data['leisure_7']<0,'leisure_7'] = data['leisure_7'].mode()[0]
data.loc[data['leisure_8']<0,'leisure_8'] = data['leisure_8'].mode()[0]
data.loc[data['leisure_9']<0,'leisure_9'] = data['leisure_9'].mode()[0]
data.loc[data['leisure_10']<0,'leisure_10'] = data['leisure_10'].mode()[0]
data.loc[data['leisure_11']<0,'leisure_11'] = data['leisure_11'].mode()[0]
data.loc[data['leisure_12']<0,'leisure_12'] = data['leisure_12'].mode()[0]
data.loc[data['socialize']<0,'socialize'] = 2 # 2 = rarely
data.loc[data['relax']<0,'relax'] = 4 # 4 = often
data.loc[data['learn']<0,'learn'] = 1 # 1 = never
# social contact
data.loc[data['social_neighbor']<0,'social_neighbor'] = 0
data.loc[data['social_friend']<0,'social_friend'] = 0
data.loc[data['socia_outing']<0,'socia_outing'] = 1
data.loc[data['neighbor_familiarity']<0,'neighbor_familiarity'] = 4
# perceived social fairness
data.loc[data['equity']<0,'equity'] = 4
# perceived social class
data.loc[data['class_10_before']<0,'class_10_before'] = 3
data.loc[data['class']<0,'class'] = 5
data.loc[data['class_10_after']<0,'class_10_after'] = 5
data.loc[data['class_14']<0,'class_14'] = 2
# work situation
data.loc[data['work_status']<0,'work_status'] = 0
data.loc[data['work_yr']<0,'work_yr'] = 0
data.loc[data['work_manage']<0,'work_manage'] = 0
data.loc[data['work_type']<0,'work_type'] = 0
# social insurance
data.loc[data['insur_1']<0,'insur_1'] = 1
data.loc[data['insur_2']<0,'insur_2'] = 1
data.loc[data['insur_3']<0,'insur_3'] = 1
data.loc[data['insur_4']<0,'insur_4'] = 1
data.loc[data['insur_1']==0,'insur_1'] = 0
data.loc[data['insur_2']==0,'insur_2'] = 0
data.loc[data['insur_3']==0,'insur_3'] = 0
data.loc[data['insur_4']==0,'insur_4'] = 0
Family income is a continuous variable, so the mode is not appropriate here; its invalid values are filled with the mean (computed with mean()) instead. The code:
# family situation
family_income_mean = data['family_income'].mean()
data.loc[data['family_income']<0,'family_income'] = family_income_mean
data.loc[data['family_m']<0,'family_m'] = 2
data.loc[data['family_status']<0,'family_status'] = 3
data.loc[data['house']<0,'house'] = 1
data.loc[data['car']<0,'car'] = 0
data.loc[data['car']==2,'car'] = 0
data.loc[data['son']<0,'son'] = 1
data.loc[data['daughter']<0,'daughter'] = 0
data.loc[data['minor_child']<0,'minor_child'] = 0
# marriage
data.loc[data['marital_1st']<0,'marital_1st'] = 0
data.loc[data['marital_now']<0,'marital_now'] = 0
# spouse
data.loc[data['s_birth']<0,'s_birth'] = 0
data.loc[data['s_edu']<0,'s_edu'] = 0
data.loc[data['s_political']<0,'s_political'] = 0
data.loc[data['s_hukou']<0,'s_hukou'] = 0
data.loc[data['s_income']<0,'s_income'] = 0
data.loc[data['s_work_type']<0,'s_work_type'] = 0
data.loc[data['s_work_status']<0,'s_work_status'] = 0
data.loc[data['s_work_exper']<0,'s_work_exper'] = 0
# parents
data.loc[data['f_birth']<0,'f_birth'] = 1945
data.loc[data['f_edu']<0,'f_edu'] = 1
data.loc[data['f_political']<0,'f_political'] = 1
data.loc[data['f_work_14']<0,'f_work_14'] = 2
data.loc[data['m_birth']<0,'m_birth'] = 1940
data.loc[data['m_edu']<0,'m_edu'] = 1
data.loc[data['m_political']<0,'m_political'] = 1
data.loc[data['m_work_14']<0,'m_work_14'] = 2
# socio-economic status compared with peers
data.loc[data['status_peer']<0,'status_peer'] = 2
# socio-economic status compared with three years ago
data.loc[data['status_3_before']<0,'status_3_before'] = 2
# views / opinions
data.loc[data['view']<0,'view'] = 4
# expected annual income and perceived income adequacy
data.loc[data['inc_ability']<=0,'inc_ability']= 2
inc_exp_mean = data['inc_exp'].mean()
data.loc[data['inc_exp']<=0,'inc_exp']= inc_exp_mean # fill with the mean
# # optionally, fill the public_service_* and trust_* features with the mode (commented out)
# for i in range(1,9+1):
# data.loc[data['public_service_'+str(i)]<0,'public_service_'+str(i)] = data['public_service_'+str(i)].dropna().mode().values
# for i in range(1,13+1):
# data.loc[data['trust_'+str(i)]<0,'trust_'+str(i)] = data['trust_'+str(i)].dropna().mode().values
Data Augmentation (Feature Construction)
In this step we look more closely at the relationships between features and construct new ones.
After some thought I added the following features: age at first marriage, age at the most recent marriage, whether remarried, spouse's age, age gap with the spouse, various income ratios (income relative to the spouse, expected income in ten years relative to current income, and so on), income-to-floor-area ratios (again in several variants, including expected income in ten years), social-class features (class in ten years, class at age 14, etc.), and aggregate leisure, public-service satisfaction and trust indices.
In addition, I normalized several variables within the same province, city and county, e.g. the mean income within a province and each individual's value relative to others in the same province, city or county. I also compared individuals against their age peers, e.g. income and health relative to people of the same age. The implementation:
# age at first marriage (feature 147)
data['marital_1stbir'] = data['marital_1st'] - data['birth']
# age at the most recent marriage (feature 148)
data['marital_nowtbir'] = data['marital_now'] - data['birth']
# whether remarried (feature 149)
data['mar'] = data['marital_nowtbir'] - data['marital_1stbir']
# spouse's age (feature 150)
data['marital_sbir'] = data['marital_now']-data['s_birth']
# age gap with the spouse (feature 151)
data['age_'] = data['marital_nowtbir'] - data['marital_sbir']
# income ratios (151 + 7 = 158 features)
data['income/s_income'] = data['income']/(data['s_income']+1)
data['income+s_income'] = data['income']+(data['s_income']+1)
data['income/family_income'] = data['income']/(data['family_income']+1)
data['all_income/family_income'] = (data['income']+data['s_income'])/(data['family_income']+1)
data['income/inc_exp'] = data['income']/(data['inc_exp']+1)
data['family_income/m'] = data['family_income']/(data['family_m']+0.01)
data['income/m'] = data['income']/(data['family_m']+0.01)
# income / floor-area ratios (158 + 4 = 162 features)
data['income/floor_area'] = data['income']/(data['floor_area']+0.01)
data['all_income/floor_area'] = (data['income']+data['s_income'])/(data['floor_area']+0.01)
data['family_income/floor_area'] = data['family_income']/(data['floor_area']+0.01)
data['floor_area/m'] = data['floor_area']/(data['family_m']+0.01)
#class 162+3=165
data['class_10_diff'] = (data['class_10_after'] - data['class'])
data['class_diff'] = data['class'] - data['class_10_before']
data['class_14_diff'] = data['class'] - data['class_14']
# leisure index (feature 166)
leisure_fea_lis = ['leisure_'+str(i) for i in range(1,13)]
data['leisure_sum'] = data[leisure_fea_lis].sum(axis=1) #skew
# public-service satisfaction index (feature 167)
public_service_fea_lis = ['public_service_'+str(i) for i in range(1,10)]
data['public_service_sum'] = data[public_service_fea_lis].sum(axis=1) #skew
# trust index (feature 168)
trust_fea_lis = ['trust_'+str(i) for i in range(1,14)]
data['trust_sum'] = data[trust_fea_lis].sum(axis=1) #skew
# per-province means (168 + 13 = 181 features)
data['province_income_mean'] = data.groupby(['province'])['income'].transform('mean').values
data['province_family_income_mean'] = data.groupby(['province'])['family_income'].transform('mean').values
data['province_equity_mean'] = data.groupby(['province'])['equity'].transform('mean').values
data['province_depression_mean'] = data.groupby(['province'])['depression'].transform('mean').values
data['province_floor_area_mean'] = data.groupby(['province'])['floor_area'].transform('mean').values
data['province_health_mean'] = data.groupby(['province'])['health'].transform('mean').values
data['province_class_10_diff_mean'] = data.groupby(['province'])['class_10_diff'].transform('mean').values
data['province_class_mean'] = data.groupby(['province'])['class'].transform('mean').values
data['province_health_problem_mean'] = data.groupby(['province'])['health_problem'].transform('mean').values
data['province_family_status_mean'] = data.groupby(['province'])['family_status'].transform('mean').values
data['province_leisure_sum_mean'] = data.groupby(['province'])['leisure_sum'].transform('mean').values
data['province_public_service_sum_mean'] = data.groupby(['province'])['public_service_sum'].transform('mean').values
data['province_trust_sum_mean'] = data.groupby(['province'])['trust_sum'].transform('mean').values
# per-city means (181 + 13 = 194 features)
data['city_income_mean'] = data.groupby(['city'])['income'].transform('mean').values
data['city_family_income_mean'] = data.groupby(['city'])['family_income'].transform('mean').values
data['city_equity_mean'] = data.groupby(['city'])['equity'].transform('mean').values
data['city_depression_mean'] = data.groupby(['city'])['depression'].transform('mean').values
data['city_floor_area_mean'] = data.groupby(['city'])['floor_area'].transform('mean').values
data['city_health_mean'] = data.groupby(['city'])['health'].transform('mean').values
data['city_class_10_diff_mean'] = data.groupby(['city'])['class_10_diff'].transform('mean').values
data['city_class_mean'] = data.groupby(['city'])['class'].transform('mean').values
data['city_health_problem_mean'] = data.groupby(['city'])['health_problem'].transform('mean').values
data['city_family_status_mean'] = data.groupby(['city'])['family_status'].transform('mean').values
data['city_leisure_sum_mean'] = data.groupby(['city'])['leisure_sum'].transform('mean').values
data['city_public_service_sum_mean'] = data.groupby(['city'])['public_service_sum'].transform('mean').values
data['city_trust_sum_mean'] = data.groupby(['city'])['trust_sum'].transform('mean').values
# per-county means (194 + 13 = 207 features)
data['county_income_mean'] = data.groupby(['county'])['income'].transform('mean').values
data['county_family_income_mean'] = data.groupby(['county'])['family_income'].transform('mean').values
data['county_equity_mean'] = data.groupby(['county'])['equity'].transform('mean').values
data['county_depression_mean'] = data.groupby(['county'])['depression'].transform('mean').values
data['county_floor_area_mean'] = data.groupby(['county'])['floor_area'].transform('mean').values
data['county_health_mean'] = data.groupby(['county'])['health'].transform('mean').values
data['county_class_10_diff_mean'] = data.groupby(['county'])['class_10_diff'].transform('mean').values
data['county_class_mean'] = data.groupby(['county'])['class'].transform('mean').values
data['county_health_problem_mean'] = data.groupby(['county'])['health_problem'].transform('mean').values
data['county_family_status_mean'] = data.groupby(['county'])['family_status'].transform('mean').values
data['county_leisure_sum_mean'] = data.groupby(['county'])['leisure_sum'].transform('mean').values
data['county_public_service_sum_mean'] = data.groupby(['county'])['public_service_sum'].transform('mean').values
data['county_trust_sum_mean'] = data.groupby(['county'])['trust_sum'].transform('mean').values
# ratios relative to the same province (207 + 13 = 220 features)
data['income/province'] = data['income']/(data['province_income_mean'])
data['family_income/province'] = data['family_income']/(data['province_family_income_mean'])
data['equity/province'] = data['equity']/(data['province_equity_mean'])
data['depression/province'] = data['depression']/(data['province_depression_mean'])
data['floor_area/province'] = data['floor_area']/(data['province_floor_area_mean'])
data['health/province'] = data['health']/(data['province_health_mean'])
data['class_10_diff/province'] = data['class_10_diff']/(data['province_class_10_diff_mean'])
data['class/province'] = data['class']/(data['province_class_mean'])
data['health_problem/province'] = data['health_problem']/(data['province_health_problem_mean'])
data['family_status/province'] = data['family_status']/(data['province_family_status_mean'])
data['leisure_sum/province'] = data['leisure_sum']/(data['province_leisure_sum_mean'])
data['public_service_sum/province'] = data['public_service_sum']/(data['province_public_service_sum_mean'])
data['trust_sum/province'] = data['trust_sum']/(data['province_trust_sum_mean']+1)
# ratios relative to the same city (220 + 13 = 233 features)
data['income/city'] = data['income']/(data['city_income_mean'])
data['family_income/city'] = data['family_income']/(data['city_family_income_mean'])
data['equity/city'] = data['equity']/(data['city_equity_mean'])
data['depression/city'] = data['depression']/(data['city_depression_mean'])
data['floor_area/city'] = data['floor_area']/(data['city_floor_area_mean'])
data['health/city'] = data['health']/(data['city_health_mean'])
data['class_10_diff/city'] = data['class_10_diff']/(data['city_class_10_diff_mean'])
data['class/city'] = data['class']/(data['city_class_mean'])
data['health_problem/city'] = data['health_problem']/(data['city_health_problem_mean'])
data['family_status/city'] = data['family_status']/(data['city_family_status_mean'])
data['leisure_sum/city'] = data['leisure_sum']/(data['city_leisure_sum_mean'])
data['public_service_sum/city'] = data['public_service_sum']/(data['city_public_service_sum_mean'])
data['trust_sum/city'] = data['trust_sum']/(data['city_trust_sum_mean'])
# ratios relative to the same county (233 + 13 = 246 features)
data['income/county'] = data['income']/(data['county_income_mean'])
data['family_income/county'] = data['family_income']/(data['county_family_income_mean'])
data['equity/county'] = data['equity']/(data['county_equity_mean'])
data['depression/county'] = data['depression']/(data['county_depression_mean'])
data['floor_area/county'] = data['floor_area']/(data['county_floor_area_mean'])
data['health/county'] = data['health']/(data['county_health_mean'])
data['class_10_diff/county'] = data['class_10_diff']/(data['county_class_10_diff_mean'])
data['class/county'] = data['class']/(data['county_class_mean'])
data['health_problem/county'] = data['health_problem']/(data['county_health_problem_mean'])
data['family_status/county'] = data['family_status']/(data['county_family_status_mean'])
data['leisure_sum/county'] = data['leisure_sum']/(data['county_leisure_sum_mean'])
data['public_service_sum/county'] = data['public_service_sum']/(data['county_public_service_sum_mean'])
data['trust_sum/county'] = data['trust_sum']/(data['county_trust_sum_mean'])
# per-age means (246 + 13 = 259 features)
data['age_income_mean'] = data.groupby(['age'])['income'].transform('mean').values
data['age_family_income_mean'] = data.groupby(['age'])['family_income'].transform('mean').values
data['age_equity_mean'] = data.groupby(['age'])['equity'].transform('mean').values
data['age_depression_mean'] = data.groupby(['age'])['depression'].transform('mean').values
data['age_floor_area_mean'] = data.groupby(['age'])['floor_area'].transform('mean').values
data['age_health_mean'] = data.groupby(['age'])['health'].transform('mean').values
data['age_class_10_diff_mean'] = data.groupby(['age'])['class_10_diff'].transform('mean').values
data['age_class_mean'] = data.groupby(['age'])['class'].transform('mean').values
data['age_health_problem_mean'] = data.groupby(['age'])['health_problem'].transform('mean').values
data['age_family_status_mean'] = data.groupby(['age'])['family_status'].transform('mean').values
data['age_leisure_sum_mean'] = data.groupby(['age'])['leisure_sum'].transform('mean').values
data['age_public_service_sum_mean'] = data.groupby(['age'])['public_service_sum'].transform('mean').values
data['age_trust_sum_mean'] = data.groupby(['age'])['trust_sum'].transform('mean').values
# ratios relative to age peers (259 + 13 = 272 features)
data['income/age'] = data['income']/(data['age_income_mean'])
data['family_income/age'] = data['family_income']/(data['age_family_income_mean'])
data['equity/age'] = data['equity']/(data['age_equity_mean'])
data['depression/age'] = data['depression']/(data['age_depression_mean'])
data['floor_area/age'] = data['floor_area']/(data['age_floor_area_mean'])
data['health/age'] = data['health']/(data['age_health_mean'])
data['class_10_diff/age'] = data['class_10_diff']/(data['age_class_10_diff_mean'])
data['class/age'] = data['class']/(data['age_class_mean'])
data['health_problem/age'] = data['health_problem']/(data['age_health_problem_mean'])
data['family_status/age'] = data['family_status']/(data['age_family_status_mean'])
data['leisure_sum/age'] = data['leisure_sum']/(data['age_leisure_sum_mean'])
data['public_service_sum/age'] = data['public_service_sum']/(data['age_public_service_sum_mean'])
data['trust_sum/age'] = data['trust_sum']/(data['age_trust_sum_mean'])
The feature-engineering workload in this part is considerable. After the operations above, the feature table has grown from the original 139 columns to 272. Next come feature selection, model training and model fusion.
print('shape',data.shape)
data.head()
shape (10956, 272)
id | survey_type | province | city | county | survey_time | gender | birth | nationality | religion | ... | depression/age | floor_area/age | health/age | class_10_diff/age | class/age | health_problem/age | family_status/age | leisure_sum/age | public_service_sum/age | trust_sum/age | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 1 | 12 | 32 | 59 | 2015 | 1 | 1959 | 1 | 1 | ... | 1.285211 | 0.410351 | 0.848837 | 0.000000 | 0.683307 | 0.521429 | 0.733668 | 0.724620 | 0.690649 | 0.337071 |
1 | 2 | 2 | 18 | 52 | 85 | 2015 | 1 | 1992 | 1 | 1 | ... | 0.733333 | 0.952824 | 1.179337 | 1.012552 | 1.344444 | 0.891344 | 1.359551 | 1.011792 | 1.166536 | 1.466460 |
2 | 3 | 2 | 29 | 83 | 126 | 2015 | 2 | 1967 | 1 | 0 | ... | 1.343537 | 0.972328 | 1.150485 | 1.190955 | 1.195762 | 1.055679 | 1.190955 | 0.966470 | 1.234718 | 1.085312 |
3 | 4 | 2 | 10 | 28 | 51 | 2015 | 2 | 1943 | 1 | 1 | ... | 1.111663 | 0.642329 | 1.276353 | 4.977778 | 1.199143 | 1.188329 | 1.162630 | 0.899346 | 1.254347 | 2.223827 |
4 | 5 | 1 | 7 | 18 | 36 | 2015 | 2 | 1994 | 1 | 1 | ... | 0.750000 | 0.587284 | 1.177106 | 0.000000 | 0.236957 | 1.116803 | 1.093645 | 1.045313 | 0.756717 | 1.298322 |
5 rows × 272 columns
We should also drop columns with too few valid samples, i.e. those dominated by negative codes or missing values, together with identifiers that have already been used. Here nine columns are removed (id, survey_time, edu_other, invest_other, property_other, join_party, province, city, county), leaving the final 263-dimensional feature set.
#272-9=263
# drop sparsely populated columns and identifiers already used above
del_list=['id','survey_time','edu_other','invest_other','property_other','join_party','province','city','county']
use_feature = [clo for clo in data.columns if clo not in del_list]
data.fillna(0,inplace=True) # fill any remaining NaNs with 0
train_shape = train.shape[0] # number of training samples
features = data[use_feature].columns # remaining features after dropping
X_train_263 = data[:train_shape][use_feature].values
y_train = target
X_test_263 = data[train_shape:][use_feature].values
X_train_263.shape # the first feature set: 263 features
(7988, 263)
Next, the 49 most important features are selected as a second feature set, in addition to the 263-dimensional one above:
imp_fea_49 = ['equity','depression','health','class','family_status','health_problem','class_10_after',
'equity/province','equity/city','equity/county',
'depression/province','depression/city','depression/county',
'health/province','health/city','health/county',
'class/province','class/city','class/county',
'family_status/province','family_status/city','family_status/county',
'family_income/province','family_income/city','family_income/county',
'floor_area/province','floor_area/city','floor_area/county',
'leisure_sum/province','leisure_sum/city','leisure_sum/county',
'public_service_sum/province','public_service_sum/city','public_service_sum/county',
'trust_sum/province','trust_sum/city','trust_sum/county',
'income/m','public_service_sum','class_diff','status_3_before','age_income_mean','age_floor_area_mean',
'weight_jin','height_cm',
'health/age','depression/age','equity/age','leisure_sum/age'
]
train_shape = train.shape[0]
X_train_49 = data[:train_shape][imp_fea_49].values
X_test_49 = data[train_shape:][imp_fea_49].values
X_train_49.shape # the 49 most important features
(7988, 49)
Finally, the 21 discrete variables below are one-hot encoded and combined with the remaining numeric columns into a third feature set of 383 dimensions.
cat_fea = ['survey_type','gender','nationality','edu_status','political','hukou','hukou_loc','work_exper','work_status','work_type',
'work_manage','marital','s_political','s_hukou','s_work_exper','s_work_status','s_work_type','f_political','f_work_14',
'm_political','m_work_14']
noc_fea = [clo for clo in use_feature if clo not in cat_fea]
onehot_data = data[cat_fea].values
enc = preprocessing.OneHotEncoder(categories = 'auto')
oh_data=enc.fit_transform(onehot_data).toarray()
oh_data.shape # shape after one-hot encoding
X_train_oh = oh_data[:train_shape,:]
X_test_oh = oh_data[train_shape:,:]
X_train_oh.shape # the training part
X_train_383 = np.column_stack([data[:train_shape][noc_fea].values,X_train_oh]) # non-categorical columns first, then the one-hot columns
X_test_383 = np.column_stack([data[train_shape:][noc_fea].values,X_test_oh])
X_train_383.shape
(7988, 383)
With this we have built three feature sets (training matrices):
- the 49 most important features selected above, including health, social class, income relative to age peers, and so on;
- the expanded 263-dimensional feature set (which can be regarded as the base features);
- the 383-dimensional feature set obtained after one-hot encoding.
Why one-hot encoding: some features are categorical codes. Gender, for instance, is coded 1 for male and 2 for female; one-hot encoding replaces the code with 0/1 indicator columns, which is friendlier to the learning algorithms. Likewise, a nominal code such as nationality is just an arbitrary number, and feeding it to a model as if it were ordinal would hurt robustness, so one-hot encoding expands it into one binary (0/1) indicator per category.
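As a toy illustration of what the encoder does (the column values here are hypothetical, not the survey data), OneHotEncoder turns each category into its own 0/1 indicator column:
import numpy as np
from sklearn import preprocessing

gender = np.array([[1], [2], [2], [1]])  # hypothetical gender column: 1 = male, 2 = female
enc = preprocessing.OneHotEncoder(categories='auto')
print(enc.fit_transform(gender).toarray())
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]
#  [1. 0.]]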
Modeling
We start with the original 263-dimensional feature set and train a LightGBM model using 5-fold cross-validation:
1. LightGBM
##### lgb_263 #
# LightGBM regressor
lgb_263_param = {
'num_leaves': 7,
'min_data_in_leaf': 20, # minimum number of samples per leaf
'objective':'regression',
'max_depth': -1,
'learning_rate': 0.003,
"boosting": "gbdt", # gradient-boosted decision trees
"feature_fraction": 0.18, # randomly sample 18% of the features for each tree
"bagging_freq": 1,
"bagging_fraction": 0.55, # fraction of the data used in each iteration
"bagging_seed": 14,
"metric": 'mse',
"lambda_l1": 0.1005,
"lambda_l2": 0.1996,
"verbosity": -1}
folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=4) # 5-fold cross-validation
oof_lgb_263 = np.zeros(len(X_train_263))
predictions_lgb_263 = np.zeros(len(X_test_263))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_263, y_train)):
print("fold n°{}".format(fold_+1))
trn_data = lgb.Dataset(X_train_263[trn_idx], y_train[trn_idx])
val_data = lgb.Dataset(X_train_263[val_idx], y_train[val_idx]) # train:val = 4:1
num_round = 10000
lgb_263 = lgb.train(lgb_263_param, trn_data, num_round, valid_sets = [trn_data, val_data], verbose_eval=500, early_stopping_rounds = 800)
oof_lgb_263[val_idx] = lgb_263.predict(X_train_263[val_idx], num_iteration=lgb_263.best_iteration)
predictions_lgb_263 += lgb_263.predict(X_test_263, num_iteration=lgb_263.best_iteration) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_lgb_263, target)))
fold n°1
Training until validation scores don't improve for 800 rounds
[500] training's l2: 0.500331 valid_1's l2: 0.533427
[1000] training's l2: 0.4525 valid_1's l2: 0.500777
[1500] training's l2: 0.426508 valid_1's l2: 0.487597
[2000] training's l2: 0.4084 valid_1's l2: 0.481467
[2500] training's l2: 0.394027 valid_1's l2: 0.477529
[3000] training's l2: 0.381774 valid_1's l2: 0.475263
[3500] training's l2: 0.370932 valid_1's l2: 0.474145
[4000] training's l2: 0.361149 valid_1's l2: 0.47294
[4500] training's l2: 0.352063 valid_1's l2: 0.472231
[5000] training's l2: 0.343702 valid_1's l2: 0.471765
[5500] training's l2: 0.335758 valid_1's l2: 0.47144
[6000] training's l2: 0.328102 valid_1's l2: 0.471331
[6500] training's l2: 0.320743 valid_1's l2: 0.471134
[7000] training's l2: 0.313889 valid_1's l2: 0.470992
[7500] training's l2: 0.307232 valid_1's l2: 0.470861
[8000] training's l2: 0.300824 valid_1's l2: 0.470847
[8500] training's l2: 0.294612 valid_1's l2: 0.470747
[9000] training's l2: 0.288702 valid_1's l2: 0.470792
Early stopping, best iteration is:
[8525] training's l2: 0.294291 valid_1's l2: 0.470632
fold n°2
Training until validation scores don't improve for 800 rounds
[500] training's l2: 0.504939 valid_1's l2: 0.514236
[1000] training's l2: 0.455927 valid_1's l2: 0.480216
[1500] training's l2: 0.429853 valid_1's l2: 0.466773
[2000] training's l2: 0.411906 valid_1's l2: 0.459944
[2500] training's l2: 0.397753 valid_1's l2: 0.456118
[3000] training's l2: 0.385615 valid_1's l2: 0.453513
[3500] training's l2: 0.37487 valid_1's l2: 0.451986
[4000] training's l2: 0.365165 valid_1's l2: 0.450867
[4500] training's l2: 0.356014 valid_1's l2: 0.449942
[5000] training's l2: 0.347677 valid_1's l2: 0.449112
[5500] training's l2: 0.339698 valid_1's l2: 0.448458
[6000] training's l2: 0.332008 valid_1's l2: 0.448121
[6500] training's l2: 0.324805 valid_1's l2: 0.44822
Early stopping, best iteration is:
[6067] training's l2: 0.331041 valid_1's l2: 0.447987
fold n°3
Training until validation scores don't improve for 800 rounds
[500] training's l2: 0.503954 valid_1's l2: 0.51823
[1000] training's l2: 0.45601 valid_1's l2: 0.481778
[1500] training's l2: 0.430579 valid_1's l2: 0.465715
[2000] training's l2: 0.413078 valid_1's l2: 0.456785
[2500] training's l2: 0.398959 valid_1's l2: 0.451336
[3000] training's l2: 0.386889 valid_1's l2: 0.448179
[3500] training's l2: 0.375929 valid_1's l2: 0.44658
[4000] training's l2: 0.366107 valid_1's l2: 0.444923
[4500] training's l2: 0.357063 valid_1's l2: 0.444236
[5000] training's l2: 0.348507 valid_1's l2: 0.443648
[5500] training's l2: 0.340358 valid_1's l2: 0.443224
[6000] training's l2: 0.332738 valid_1's l2: 0.442732
[6500] training's l2: 0.325277 valid_1's l2: 0.442314
[7000] training's l2: 0.318207 valid_1's l2: 0.442253
[7500] training's l2: 0.311511 valid_1's l2: 0.442414
Early stopping, best iteration is:
[6952] training's l2: 0.31887 valid_1's l2: 0.442143
fold n°4
Training until validation scores don't improve for 800 rounds
[500] training's l2: 0.505084 valid_1's l2: 0.512556
[1000] training's l2: 0.456559 valid_1's l2: 0.477796
[1500] training's l2: 0.429929 valid_1's l2: 0.465724
[2000] training's l2: 0.411692 valid_1's l2: 0.459847
[2500] training's l2: 0.397526 valid_1's l2: 0.45647
[3000] training's l2: 0.385483 valid_1's l2: 0.454713
[3500] training's l2: 0.37476 valid_1's l2: 0.453502
[4000] training's l2: 0.364943 valid_1's l2: 0.452811
[4500] training's l2: 0.355902 valid_1's l2: 0.451961
[5000] training's l2: 0.347386 valid_1's l2: 0.45147
[5500] training's l2: 0.339364 valid_1's l2: 0.451099
[6000] training's l2: 0.331798 valid_1's l2: 0.450918
[6500] training's l2: 0.324397 valid_1's l2: 0.450487
[7000] training's l2: 0.3175 valid_1's l2: 0.450161
[7500] training's l2: 0.310766 valid_1's l2: 0.450245
Early stopping, best iteration is:
[7169] training's l2: 0.315228 valid_1's l2: 0.45005
fold n°5
Training until validation scores don't improve for 800 rounds
[500] training's l2: 0.503623 valid_1's l2: 0.520436
[1000] training's l2: 0.455596 valid_1's l2: 0.485716
[1500] training's l2: 0.429871 valid_1's l2: 0.472261
[2000] training's l2: 0.411812 valid_1's l2: 0.465697
[2500] training's l2: 0.397307 valid_1's l2: 0.4619
[3000] training's l2: 0.385007 valid_1's l2: 0.459606
[3500] training's l2: 0.373939 valid_1's l2: 0.458301
[4000] training's l2: 0.363967 valid_1's l2: 0.457405
[4500] training's l2: 0.3547 valid_1's l2: 0.457031
[5000] training's l2: 0.345991 valid_1's l2: 0.456911
[5500] training's l2: 0.33777 valid_1's l2: 0.456732
[6000] training's l2: 0.33005 valid_1's l2: 0.456461
[6500] training's l2: 0.322639 valid_1's l2: 0.456643
Early stopping, best iteration is:
[5849] training's l2: 0.332304 valid_1's l2: 0.456373
CV score: 0.45343717
Next, the trained LightGBM model is used to rank and visualize feature importance. The top feature turns out to be health/age, i.e. health relative to one's age peers, which agrees with intuition.
#--------------- feature importance
pd.set_option('display.max_columns', None) # show all columns
# show all rows
pd.set_option('display.max_rows', None)
# set the displayed value width to 100 (default is 50)
pd.set_option('max_colwidth',100)
df = pd.DataFrame(data[use_feature].columns.tolist(), columns=['feature'])
df['importance']=list(lgb_263.feature_importance())
df = df.sort_values(by='importance',ascending=False)
plt.figure(figsize=(14,28))
sns.barplot(x="importance", y="feature", data=df.head(50))
plt.title('Features importance (averaged/folds)')
plt.tight_layout()
(Figure: top-50 feature importances of the trained LightGBM model; original file output_39_0.png.)
Next, we fit several other common machine-learning models on the 263-dimensional features:
2. XGBoost
##### xgb_263
# XGBoost regressor
xgb_263_params = {'eta': 0.02, # learning rate
'max_depth': 6,
'min_child_weight':3, # minimum sum of instance weights in a leaf
'gamma':0, # minimum loss reduction required to make a split
'subsample': 0.7, # row subsampling ratio per tree
'colsample_bytree': 0.3, # column (feature) subsampling ratio per tree
'lambda':2,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'silent': True,
'nthread': -1}
folds = KFold(n_splits=5, shuffle=True, random_state=2019)
oof_xgb_263 = np.zeros(len(X_train_263))
predictions_xgb_263 = np.zeros(len(X_test_263))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_263, y_train)):
print("fold n°{}".format(fold_+1))
trn_data = xgb.DMatrix(X_train_263[trn_idx], y_train[trn_idx])
val_data = xgb.DMatrix(X_train_263[val_idx], y_train[val_idx])
watchlist = [(trn_data, 'train'), (val_data, 'valid_data')]
xgb_263 = xgb.train(dtrain=trn_data, num_boost_round=3000, evals=watchlist, early_stopping_rounds=600, verbose_eval=500, params=xgb_263_params)
oof_xgb_263[val_idx] = xgb_263.predict(xgb.DMatrix(X_train_263[val_idx]), ntree_limit=xgb_263.best_ntree_limit)
predictions_xgb_263 += xgb_263.predict(xgb.DMatrix(X_test_263), ntree_limit=xgb_263.best_ntree_limit) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_xgb_263, target)))
fold n°1
[17:21:44] WARNING: ../src/objective/regression_obj.cu:171: reg:linear is now deprecated in favor of reg:squarederror.
[17:21:45] WARNING: ../src/learner.cc:573:
Parameters: { "silent" } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
[0] train-rmse:3.40430 valid_data-rmse:3.38320
[500] train-rmse:0.40228 valid_data-rmse:0.70834
[1000] train-rmse:0.26513 valid_data-rmse:0.71242
[1099] train-rmse:0.24433 valid_data-rmse:0.71306
fold n°2
[17:22:10] WARNING: ../src/objective/regression_obj.cu:171: reg:linear is now deprecated in favor of reg:squarederror.
[17:22:10] WARNING: ../src/learner.cc:573:
Parameters: { "silent" } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
[0] train-rmse:3.39813 valid_data-rmse:3.40792
[500] train-rmse:0.40907 valid_data-rmse:0.69761
[1000] train-rmse:0.27463 valid_data-rmse:0.69892
[1177] train-rmse:0.23845 valid_data-rmse:0.69960
fold n°3
[17:22:32] WARNING: ../src/objective/regression_obj.cu:171: reg:linear is now deprecated in favor of reg:squarederror.
[17:22:32] WARNING: ../src/learner.cc:573:
Parameters: { "silent" } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
[0] train-rmse:3.40184 valid_data-rmse:3.39305
[500] train-rmse:0.40937 valid_data-rmse:0.66234
[1000] train-rmse:0.27250 valid_data-rmse:0.66432
[1103] train-rmse:0.25088 valid_data-rmse:0.66447
fold n°4
[17:22:55] WARNING: ../src/objective/regression_obj.cu:171: reg:linear is now deprecated in favor of reg:squarederror.
[17:22:55] WARNING: ../src/learner.cc:573:
Parameters: { "silent" } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
[0] train-rmse:3.40239 valid_data-rmse:3.39007
[500] train-rmse:0.41029 valid_data-rmse:0.66686
[1000] train-rmse:0.27298 valid_data-rmse:0.66894
[1171] train-rmse:0.23766 valid_data-rmse:0.66965
fold n°5
[17:23:25] WARNING: ../src/objective/regression_obj.cu:171: reg:linear is now deprecated in favor of reg:squarederror.
[17:23:25] WARNING: ../src/learner.cc:573:
Parameters: { "silent" } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
[0] train-rmse:3.39343 valid_data-rmse:3.42640
[500] train-rmse:0.41504 valid_data-rmse:0.65243
[1000] train-rmse:0.27542 valid_data-rmse:0.65263
[1445] train-rmse:0.19255 valid_data-rmse:0.65457
CV score: 0.45902237
3. RandomForestRegressor (random forest)
# RandomForestRegressor (random forest)
folds = KFold(n_splits=5, shuffle=True, random_state=2019)
oof_rfr_263 = np.zeros(len(X_train_263))
predictions_rfr_263 = np.zeros(len(X_test_263))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_263, y_train)):
print("fold n°{}".format(fold_+1))
tr_x = X_train_263[trn_idx]
tr_y = y_train[trn_idx]
rfr_263 = rfr(n_estimators=1600,max_depth=9, min_samples_leaf=9, min_weight_fraction_leaf=0.0,
max_features=0.25,verbose=1,n_jobs=-1)
# verbose = 0: no logging to stdout
# verbose = 1: progress messages
# verbose = 2: one log line per epoch
rfr_263.fit(tr_x,tr_y)
oof_rfr_263[val_idx] = rfr_263.predict(X_train_263[val_idx])
predictions_rfr_263 += rfr_263.predict(X_test_263) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_rfr_263, target)))
fold n°1
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 34 tasks | elapsed: 1.0s
[Parallel(n_jobs=-1)]: Done 184 tasks | elapsed: 4.2s
[Parallel(n_jobs=-1)]: Done 434 tasks | elapsed: 10.2s
[Parallel(n_jobs=-1)]: Done 784 tasks | elapsed: 18.6s
[Parallel(n_jobs=-1)]: Done 1234 tasks | elapsed: 29.3s
[Parallel(n_jobs=-1)]: Done 1600 out of 1600 | elapsed: 37.7s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.2s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.4s
[Parallel(n_jobs=8)]: Done 1234 tasks | elapsed: 0.5s
[Parallel(n_jobs=8)]: Done 1600 out of 1600 | elapsed: 0.7s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.2s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.3s
[Parallel(n_jobs=8)]: Done 1234 tasks | elapsed: 0.4s
[Parallel(n_jobs=8)]: Done 1600 out of 1600 | elapsed: 0.5s finished
fold n°2
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 34 tasks | elapsed: 1.0s
[Parallel(n_jobs=-1)]: Done 184 tasks | elapsed: 4.8s
[Parallel(n_jobs=-1)]: Done 434 tasks | elapsed: 11.0s
[Parallel(n_jobs=-1)]: Done 784 tasks | elapsed: 19.4s
[Parallel(n_jobs=-1)]: Done 1234 tasks | elapsed: 28.8s
[Parallel(n_jobs=-1)]: Done 1600 out of 1600 | elapsed: 36.9s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.2s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.4s
[Parallel(n_jobs=8)]: Done 1234 tasks | elapsed: 0.6s
[Parallel(n_jobs=8)]: Done 1600 out of 1600 | elapsed: 0.8s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.2s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.4s
[Parallel(n_jobs=8)]: Done 1234 tasks | elapsed: 0.6s
[Parallel(n_jobs=8)]: Done 1600 out of 1600 | elapsed: 0.7s finished
fold n°3
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 34 tasks | elapsed: 0.8s
[Parallel(n_jobs=-1)]: Done 184 tasks | elapsed: 4.2s
[Parallel(n_jobs=-1)]: Done 434 tasks | elapsed: 10.9s
[Parallel(n_jobs=-1)]: Done 784 tasks | elapsed: 20.5s
[Parallel(n_jobs=-1)]: Done 1234 tasks | elapsed: 31.4s
[Parallel(n_jobs=-1)]: Done 1600 out of 1600 | elapsed: 39.9s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.3s
[Parallel(n_jobs=8)]: Done 1234 tasks | elapsed: 0.5s
[Parallel(n_jobs=8)]: Done 1600 out of 1600 | elapsed: 0.6s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.3s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.4s
[Parallel(n_jobs=8)]: Done 1234 tasks | elapsed: 0.6s
[Parallel(n_jobs=8)]: Done 1600 out of 1600 | elapsed: 0.8s finished
fold n°4
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 34 tasks | elapsed: 0.9s
[Parallel(n_jobs=-1)]: Done 184 tasks | elapsed: 3.9s
[Parallel(n_jobs=-1)]: Done 434 tasks | elapsed: 9.2s
[Parallel(n_jobs=-1)]: Done 784 tasks | elapsed: 16.8s
[Parallel(n_jobs=-1)]: Done 1234 tasks | elapsed: 26.0s
[Parallel(n_jobs=-1)]: Done 1600 out of 1600 | elapsed: 33.6s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.2s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.4s
[Parallel(n_jobs=8)]: Done 1234 tasks | elapsed: 0.6s
[Parallel(n_jobs=8)]: Done 1600 out of 1600 | elapsed: 0.8s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.2s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.4s
[Parallel(n_jobs=8)]: Done 1234 tasks | elapsed: 0.6s
[Parallel(n_jobs=8)]: Done 1600 out of 1600 | elapsed: 0.7s finished
fold n°5
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 34 tasks | elapsed: 0.8s
[Parallel(n_jobs=-1)]: Done 184 tasks | elapsed: 4.5s
[Parallel(n_jobs=-1)]: Done 434 tasks | elapsed: 10.3s
[Parallel(n_jobs=-1)]: Done 784 tasks | elapsed: 17.8s
[Parallel(n_jobs=-1)]: Done 1234 tasks | elapsed: 27.4s
[Parallel(n_jobs=-1)]: Done 1600 out of 1600 | elapsed: 34.9s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.3s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.4s
[Parallel(n_jobs=8)]: Done 1234 tasks | elapsed: 0.5s
[Parallel(n_jobs=8)]: Done 1600 out of 1600 | elapsed: 0.7s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.2s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.4s
[Parallel(n_jobs=8)]: Done 1234 tasks | elapsed: 0.6s
[Parallel(n_jobs=8)]: Done 1600 out of 1600 | elapsed: 0.7s finished
CV score: 0.47951720
4. GradientBoostingRegressor (gradient-boosted decision trees)
# GradientBoostingRegressor (gradient-boosted decision trees)
folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=2018)
oof_gbr_263 = np.zeros(train_shape)
predictions_gbr_263 = np.zeros(len(X_test_263))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_263, y_train)):
print("fold n°{}".format(fold_+1))
tr_x = X_train_263[trn_idx]
tr_y = y_train[trn_idx]
gbr_263 = gbr(n_estimators=400, learning_rate=0.01,subsample=0.65,max_depth=7, min_samples_leaf=20,
max_features=0.22,verbose=1)
gbr_263.fit(tr_x,tr_y)
oof_gbr_263[val_idx] = gbr_263.predict(X_train_263[val_idx])
predictions_gbr_263 += gbr_263.predict(X_test_263) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_gbr_263, target)))
fold n°1
Iter Train Loss OOB Improve Remaining Time
1 0.6541 0.0035 57.60s
2 0.6483 0.0034 53.24s
3 0.6595 0.0032 48.10s
4 0.6613 0.0031 45.30s
5 0.6424 0.0032 43.26s
6 0.6394 0.0033 42.15s
7 0.6526 0.0030 40.56s
8 0.6492 0.0026 40.06s
9 0.6245 0.0028 39.87s
10 0.6166 0.0027 39.22s
20 0.5776 0.0022 34.05s
30 0.5499 0.0020 31.87s
40 0.5415 0.0016 30.36s
50 0.5238 0.0013 29.10s
60 0.4839 0.0012 28.03s
70 0.4735 0.0009 26.73s
80 0.4548 0.0008 25.79s
90 0.4443 0.0006 24.99s
100 0.4317 0.0004 24.15s
200 0.3421 0.0002 15.88s
300 0.2954 0.0000 7.92s
400 0.2664 0.0000 0.00s
fold n°2
Iter Train Loss OOB Improve Remaining Time
1 0.6699 0.0035 34.82s
2 0.6548 0.0033 36.13s
3 0.6679 0.0030 34.96s
4 0.6660 0.0029 36.03s
5 0.6589 0.0029 35.56s
6 0.6452 0.0031 34.75s
7 0.6425 0.0027 34.56s
8 0.6447 0.0028 34.43s
9 0.6361 0.0029 33.96s
10 0.6365 0.0029 33.80s
20 0.6093 0.0020 33.03s
30 0.5649 0.0019 31.84s
40 0.5333 0.0015 31.08s
50 0.5054 0.0014 30.23s
60 0.4862 0.0012 29.68s
70 0.4830 0.0009 28.75s
80 0.4624 0.0008 28.37s
90 0.4467 0.0008 28.85s
100 0.4331 0.0005 28.17s
200 0.3458 0.0001 18.22s
300 0.2938 0.0000 9.12s
400 0.2668 -0.0000 0.00s
fold n°3
Iter Train Loss OOB Improve Remaining Time
1 0.6701 0.0035 42.15s
2 0.6746 0.0032 41.99s
3 0.6228 0.0034 42.85s
4 0.6495 0.0029 48.14s
5 0.6503 0.0028 53.01s
6 0.6554 0.0032 55.19s
7 0.6339 0.0034 53.67s
8 0.6380 0.0030 50.92s
9 0.6451 0.0030 48.74s
10 0.6393 0.0029 47.32s
20 0.6056 0.0022 40.89s
30 0.5704 0.0020 37.62s
40 0.5393 0.0018 35.40s
50 0.5202 0.0016 33.20s
60 0.5029 0.0012 32.06s
70 0.4647 0.0011 30.55s
80 0.4555 0.0007 29.46s
90 0.4437 0.0006 28.15s
100 0.4317 0.0004 26.97s
200 0.3382 0.0002 17.95s
300 0.2938 0.0000 8.60s
400 0.2568 -0.0001 0.00s
fold n°4
Iter Train Loss OOB Improve Remaining Time
1 0.6848 0.0031 18.99s
2 0.6668 0.0036 19.09s
3 0.6713 0.0032 19.55s
4 0.6696 0.0029 19.50s
5 0.6403 0.0030 19.37s
6 0.6554 0.0031 19.29s
7 0.6391 0.0028 19.16s
8 0.6243 0.0032 19.23s
9 0.6355 0.0031 19.09s
10 0.6344 0.0029 19.08s
20 0.6054 0.0025 18.50s
30 0.5588 0.0021 18.08s
40 0.5443 0.0017 17.58s
50 0.4945 0.0014 17.07s
60 0.5004 0.0012 16.56s
70 0.4743 0.0011 16.07s
80 0.4504 0.0009 15.58s
90 0.4376 0.0006 15.74s
100 0.4286 0.0006 15.76s
200 0.3405 0.0001 10.05s
300 0.3046 -0.0000 4.99s
400 0.2686 0.0000 0.00s
fold n°5
Iter Train Loss OOB Improve Remaining Time
1 0.6754 0.0037 18.55s
2 0.6571 0.0029 18.67s
3 0.6693 0.0032 19.27s
4 0.6450 0.0031 19.27s
5 0.6663 0.0029 19.02s
6 0.6362 0.0032 19.10s
7 0.6506 0.0028 19.21s
8 0.6373 0.0033 19.46s
9 0.6257 0.0032 19.34s
10 0.6352 0.0028 19.23s
20 0.6010 0.0021 18.73s
30 0.5667 0.0020 18.08s
40 0.5407 0.0016 17.55s
50 0.5052 0.0014 17.05s
60 0.4942 0.0010 16.49s
70 0.4751 0.0009 15.97s
80 0.4607 0.0010 15.50s
90 0.4511 0.0005 14.99s
100 0.4251 0.0003 14.49s
200 0.3349 0.0001 9.89s
300 0.3060 0.0001 4.97s
400 0.2624 -0.0000 0.00s
CV score: 0.45732889
5. ExtraTreesRegressor (extremely randomized trees)
# ExtraTreesRegressor (extremely randomized trees)
folds = KFold(n_splits=5, shuffle=True, random_state=13)
oof_etr_263 = np.zeros(train_shape)
predictions_etr_263 = np.zeros(len(X_test_263))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_263, y_train)):
print("fold n°{}".format(fold_+1))
tr_x = X_train_263[trn_idx]
tr_y = y_train[trn_idx]
etr_263 = etr(n_estimators=1000,max_depth=8, min_samples_leaf=12, min_weight_fraction_leaf=0.0,
max_features=0.4,verbose=1,n_jobs=-1)
etr_263.fit(tr_x,tr_y)
oof_etr_263[val_idx] = etr_263.predict(X_train_263[val_idx])
predictions_etr_263 += etr_263.predict(X_test_263) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_etr_263, target)))
fold n°1
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 34 tasks | elapsed: 0.3s
[Parallel(n_jobs=-1)]: Done 184 tasks | elapsed: 1.3s
[Parallel(n_jobs=-1)]: Done 434 tasks | elapsed: 2.9s
[Parallel(n_jobs=-1)]: Done 784 tasks | elapsed: 5.1s
[Parallel(n_jobs=-1)]: Done 1000 out of 1000 | elapsed: 6.6s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.2s
[Parallel(n_jobs=8)]: Done 1000 out of 1000 | elapsed: 0.2s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 1000 out of 1000 | elapsed: 0.2s finished
fold n°2
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 34 tasks | elapsed: 0.2s
[Parallel(n_jobs=-1)]: Done 184 tasks | elapsed: 1.1s
[Parallel(n_jobs=-1)]: Done 434 tasks | elapsed: 2.5s
[Parallel(n_jobs=-1)]: Done 784 tasks | elapsed: 4.6s
[Parallel(n_jobs=-1)]: Done 1000 out of 1000 | elapsed: 5.8s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.2s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.3s
[Parallel(n_jobs=8)]: Done 1000 out of 1000 | elapsed: 0.3s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.3s
[Parallel(n_jobs=8)]: Done 1000 out of 1000 | elapsed: 0.3s finished
fold n°3
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 34 tasks | elapsed: 0.3s
[Parallel(n_jobs=-1)]: Done 184 tasks | elapsed: 1.4s
[Parallel(n_jobs=-1)]: Done 434 tasks | elapsed: 3.4s
[Parallel(n_jobs=-1)]: Done 784 tasks | elapsed: 5.7s
[Parallel(n_jobs=-1)]: Done 1000 out of 1000 | elapsed: 7.3s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.3s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.4s
[Parallel(n_jobs=8)]: Done 1000 out of 1000 | elapsed: 0.5s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.2s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.3s
[Parallel(n_jobs=8)]: Done 1000 out of 1000 | elapsed: 0.3s finished
fold n°4
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 34 tasks | elapsed: 0.2s
[Parallel(n_jobs=-1)]: Done 184 tasks | elapsed: 1.1s
[Parallel(n_jobs=-1)]: Done 434 tasks | elapsed: 2.6s
[Parallel(n_jobs=-1)]: Done 784 tasks | elapsed: 4.8s
[Parallel(n_jobs=-1)]: Done 1000 out of 1000 | elapsed: 6.3s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 1000 out of 1000 | elapsed: 0.1s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.3s
[Parallel(n_jobs=8)]: Done 1000 out of 1000 | elapsed: 0.3s finished
fold n°5
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 34 tasks | elapsed: 0.3s
[Parallel(n_jobs=-1)]: Done 184 tasks | elapsed: 1.3s
[Parallel(n_jobs=-1)]: Done 434 tasks | elapsed: 2.9s
[Parallel(n_jobs=-1)]: Done 784 tasks | elapsed: 5.3s
[Parallel(n_jobs=-1)]: Done 1000 out of 1000 | elapsed: 6.7s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 1000 out of 1000 | elapsed: 0.2s finished
[Parallel(n_jobs=8)]: Using backend ThreadingBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done 34 tasks | elapsed: 0.0s
[Parallel(n_jobs=8)]: Done 184 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 434 tasks | elapsed: 0.1s
[Parallel(n_jobs=8)]: Done 784 tasks | elapsed: 0.2s
[Parallel(n_jobs=8)]: Done 1000 out of 1000 | elapsed: 0.3s finished
CV score: 0.48638384
At this point we have the predictions of the five models above (LightGBM, XGBoost, GradientBoostingRegressor, RandomForestRegressor and ExtraTreesRegressor on the 263-dimensional features), together with their architectures and parameters. We now stack them: the five out-of-fold prediction columns become the inputs to a second-level Kernel Ridge Regression model, trained with 5-fold cross-validation repeated twice.
train_stack2 = np.vstack([oof_lgb_263, oof_xgb_263, oof_gbr_263, oof_rfr_263, oof_etr_263]).transpose()
# transpose() swaps the axes, so each row is one sample and each column is one base model's prediction
test_stack2 = np.vstack([predictions_lgb_263, predictions_xgb_263, predictions_gbr_263, predictions_rfr_263, predictions_etr_263]).transpose()
# cross-validation: 5 folds, repeated 2 times
folds_stack = RepeatedKFold(n_splits=5, n_repeats=2, random_state=7)
oof_stack2 = np.zeros(train_stack2.shape[0])
predictions_lr2 = np.zeros(test_stack2.shape[0])
for fold_, (trn_idx, val_idx) in enumerate(folds_stack.split(train_stack2, target)):
    print("fold {}".format(fold_))
    trn_data, trn_y = train_stack2[trn_idx], target.iloc[trn_idx].values
    val_data, val_y = train_stack2[val_idx], target.iloc[val_idx].values
    # Kernel Ridge Regression as the second-level model
    lr2 = kr()
    lr2.fit(trn_data, trn_y)
    oof_stack2[val_idx] = lr2.predict(val_data)
    predictions_lr2 += lr2.predict(test_stack2) / 10  # average over the 10 fold models (5 folds x 2 repeats)
mean_squared_error(target.values, oof_stack2)
fold 0
fold 1
fold 2
fold 3
fold 4
fold 5
fold 6
fold 7
fold 8
fold 9
0.45084953965076313
Next, we apply the same procedure to the 49-dimensional data as we did to the 263-dimensional data above.
1. lightGBM
##### lgb_49
lgb_49_param = {
'num_leaves': 9,
'min_data_in_leaf': 23,
'objective':'regression',
'max_depth': -1,
'learning_rate': 0.002,
"boosting": "gbdt",
"feature_fraction": 0.45,
"bagging_freq": 1,
"bagging_fraction": 0.65,
"bagging_seed": 15,
"metric": 'mse',
"lambda_l2": 0.2,
"verbosity": -1}
folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=9)
oof_lgb_49 = np.zeros(len(X_train_49))
predictions_lgb_49 = np.zeros(len(X_test_49))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_49, y_train)):
    print("fold n°{}".format(fold_+1))
    trn_data = lgb.Dataset(X_train_49[trn_idx], y_train[trn_idx])
    val_data = lgb.Dataset(X_train_49[val_idx], y_train[val_idx])
    num_round = 12000
    lgb_49 = lgb.train(lgb_49_param, trn_data, num_round, valid_sets=[trn_data, val_data], verbose_eval=1000, early_stopping_rounds=1000)
    oof_lgb_49[val_idx] = lgb_49.predict(X_train_49[val_idx], num_iteration=lgb_49.best_iteration)
    predictions_lgb_49 += lgb_49.predict(X_test_49, num_iteration=lgb_49.best_iteration) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_lgb_49, target)))
fold n°1
Training until validation scores don't improve for 1000 rounds
[1000] training's l2: 0.472272 valid_1's l2: 0.494642
[2000] training's l2: 0.43281 valid_1's l2: 0.474932
[3000] training's l2: 0.410613 valid_1's l2: 0.470028
[4000] training's l2: 0.392716 valid_1's l2: 0.46802
[5000] training's l2: 0.37718 valid_1's l2: 0.467416
[6000] training's l2: 0.363186 valid_1's l2: 0.467551
Early stopping, best iteration is:
[5347] training's l2: 0.372177 valid_1's l2: 0.46732
fold n°2
Training until validation scores don't improve for 1000 rounds
[1000] training's l2: 0.472137 valid_1's l2: 0.494339
[2000] training's l2: 0.43195 valid_1's l2: 0.475431
[3000] training's l2: 0.409428 valid_1's l2: 0.470698
[4000] training's l2: 0.391521 valid_1's l2: 0.469527
[5000] training's l2: 0.37619 valid_1's l2: 0.469591
Early stopping, best iteration is:
[4276] training's l2: 0.387095 valid_1's l2: 0.469235
fold n°3
Training until validation scores don't improve for 1000 rounds
[1000] training's l2: 0.47511 valid_1's l2: 0.488817
[2000] training's l2: 0.434833 valid_1's l2: 0.46631
[3000] training's l2: 0.41224 valid_1's l2: 0.461901
[4000] training's l2: 0.394582 valid_1's l2: 0.461318
[5000] training's l2: 0.379178 valid_1's l2: 0.461212
Early stopping, best iteration is:
[4775] training's l2: 0.382441 valid_1's l2: 0.460968
fold n°4
Training until validation scores don't improve for 1000 rounds
[1000] training's l2: 0.46809 valid_1's l2: 0.509447
[2000] training's l2: 0.42873 valid_1's l2: 0.492862
[3000] training's l2: 0.407085 valid_1's l2: 0.488668
[4000] training's l2: 0.389908 valid_1's l2: 0.485701
[5000] training's l2: 0.375145 valid_1's l2: 0.483995
[6000] training's l2: 0.361776 valid_1's l2: 0.482521
[7000] training's l2: 0.349565 valid_1's l2: 0.481566
[8000] training's l2: 0.338061 valid_1's l2: 0.480673
[9000] training's l2: 0.327257 valid_1's l2: 0.479762
[10000] training's l2: 0.317191 valid_1's l2: 0.47937
[11000] training's l2: 0.307637 valid_1's l2: 0.47894
[12000] training's l2: 0.298468 valid_1's l2: 0.478395
Did not meet early stopping. Best iteration is:
[12000] training's l2: 0.298468 valid_1's l2: 0.478395
fold n°5
Training until validation scores don't improve for 1000 rounds
[1000] training's l2: 0.469331 valid_1's l2: 0.50493
[2000] training's l2: 0.429579 valid_1's l2: 0.488534
[3000] training's l2: 0.4074 valid_1's l2: 0.485076
[4000] training's l2: 0.38985 valid_1's l2: 0.483391
[5000] training's l2: 0.374492 valid_1's l2: 0.482639
[6000] training's l2: 0.360724 valid_1's l2: 0.482522
[7000] training's l2: 0.348306 valid_1's l2: 0.482615
Early stopping, best iteration is:
[6270] training's l2: 0.357267 valid_1's l2: 0.482478
CV score: 0.47167691
2. xgboost
##### xgb_49
xgb_49_params = {'eta': 0.02,
'max_depth': 5,
'min_child_weight':3,
'gamma':0,
'subsample': 0.7,
'colsample_bytree': 0.35,
'lambda':2,
'objective': 'reg:linear',
'eval_metric': 'rmse',
'silent': True,
'nthread': -1}
folds = KFold(n_splits=5, shuffle=True, random_state=2019)
oof_xgb_49 = np.zeros(len(X_train_49))
predictions_xgb_49 = np.zeros(len(X_test_49))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_49, y_train)):
    print("fold n°{}".format(fold_+1))
    trn_data = xgb.DMatrix(X_train_49[trn_idx], y_train[trn_idx])
    val_data = xgb.DMatrix(X_train_49[val_idx], y_train[val_idx])
    watchlist = [(trn_data, 'train'), (val_data, 'valid_data')]
    xgb_49 = xgb.train(dtrain=trn_data, num_boost_round=3000, evals=watchlist, early_stopping_rounds=600, verbose_eval=500, params=xgb_49_params)
    oof_xgb_49[val_idx] = xgb_49.predict(xgb.DMatrix(X_train_49[val_idx]), ntree_limit=xgb_49.best_ntree_limit)
    predictions_xgb_49 += xgb_49.predict(xgb.DMatrix(X_test_49), ntree_limit=xgb_49.best_ntree_limit) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_xgb_49, target)))
fold n°1
[17:31:00] WARNING: ../src/objective/regression_obj.cu:171: reg:linear is now deprecated in favor of reg:squarederror.
[17:31:00] WARNING: ../src/learner.cc:573:
Parameters: { "silent" } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
[0] train-rmse:3.40429 valid_data-rmse:3.38337
[500] train-rmse:0.52729 valid_data-rmse:0.72046
[841] train-rmse:0.46124 valid_data-rmse:0.72235
fold n°2
[17:31:04] WARNING: ../src/objective/regression_obj.cu:171: reg:linear is now deprecated in favor of reg:squarederror.
[17:31:04] WARNING: ../src/learner.cc:573:
Parameters: { "silent" } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
[0] train-rmse:3.39825 valid_data-rmse:3.40786
[500] train-rmse:0.53017 valid_data-rmse:0.70630
[1000] train-rmse:0.43944 valid_data-rmse:0.70865
[1250] train-rmse:0.40180 valid_data-rmse:0.71093
fold n°3
[17:31:09] WARNING: ../src/objective/regression_obj.cu:171: reg:linear is now deprecated in favor of reg:squarederror.
[17:31:09] WARNING: ../src/learner.cc:573:
Parameters: { "silent" } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
[0] train-rmse:3.40182 valid_data-rmse:3.39293
[500] train-rmse:0.53437 valid_data-rmse:0.66975
[1000] train-rmse:0.44188 valid_data-rmse:0.67244
[1163] train-rmse:0.41677 valid_data-rmse:0.67363
fold n°4
[17:31:14] WARNING: ../src/objective/regression_obj.cu:171: reg:linear is now deprecated in favor of reg:squarederror.
[17:31:14] WARNING: ../src/learner.cc:573:
Parameters: { "silent" } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
[0] train-rmse:3.40243 valid_data-rmse:3.39028
[500] train-rmse:0.53228 valid_data-rmse:0.68292
[872] train-rmse:0.46150 valid_data-rmse:0.68649
fold n°5
[17:31:17] WARNING: ../src/objective/regression_obj.cu:171: reg:linear is now deprecated in favor of reg:squarederror.
[17:31:17] WARNING: ../src/learner.cc:573:
Parameters: { "silent" } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
[0] train-rmse:3.39350 valid_data-rmse:3.42630
[500] train-rmse:0.53629 valid_data-rmse:0.66386
[1000] train-rmse:0.44346 valid_data-rmse:0.66732
[1109] train-rmse:0.42646 valid_data-rmse:0.66874
CV score: 0.47291073
3. GradientBoostingRegressor (gradient boosted decision trees)
folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=2018)
oof_gbr_49 = np.zeros(train_shape)
predictions_gbr_49 = np.zeros(len(X_test_49))
# GradientBoostingRegressor (gradient boosted decision trees)
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_49, y_train)):
    print("fold n°{}".format(fold_+1))
    tr_x = X_train_49[trn_idx]
    tr_y = y_train[trn_idx]
    gbr_49 = gbr(n_estimators=600, learning_rate=0.01, subsample=0.65, max_depth=6, min_samples_leaf=20,
                 max_features=0.35, verbose=1)
    gbr_49.fit(tr_x, tr_y)
    oof_gbr_49[val_idx] = gbr_49.predict(X_train_49[val_idx])
    predictions_gbr_49 += gbr_49.predict(X_test_49) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_gbr_49, target)))
fold n°1
Iter Train Loss OOB Improve Remaining Time
1 0.6530 0.0035 23.08s
2 0.6715 0.0032 17.80s
3 0.6448 0.0034 15.80s
4 0.6676 0.0029 14.91s
5 0.6561 0.0030 14.55s
6 0.6389 0.0030 14.94s
7 0.6616 0.0027 14.92s
8 0.6557 0.0029 14.58s
9 0.6409 0.0026 14.49s
10 0.6545 0.0027 14.27s
20 0.6098 0.0022 13.01s
30 0.5791 0.0019 12.37s
40 0.5604 0.0018 12.11s
50 0.5267 0.0013 11.80s
60 0.5234 0.0010 11.59s
70 0.5023 0.0010 11.31s
80 0.4865 0.0007 11.06s
90 0.4780 0.0005 10.83s
100 0.4707 0.0005 10.62s
200 0.3974 0.0001 8.36s
300 0.3754 -0.0000 6.55s
400 0.3491 -0.0000 4.31s
500 0.3391 -0.0000 2.15s
600 0.3178 -0.0001 0.00s
fold n°2
Iter Train Loss OOB Improve Remaining Time
1 0.6717 0.0033 12.55s
2 0.6581 0.0031 12.38s
3 0.6657 0.0031 12.49s
4 0.6572 0.0027 12.32s
5 0.6360 0.0033 12.25s
6 0.6537 0.0035 12.34s
7 0.6518 0.0029 12.38s
8 0.6298 0.0027 12.55s
9 0.6367 0.0031 12.49s
10 0.6295 0.0029 12.43s
20 0.6054 0.0022 12.22s
30 0.5766 0.0016 11.90s
40 0.5341 0.0015 11.81s
50 0.5228 0.0013 11.59s
60 0.5054 0.0012 11.31s
70 0.4966 0.0007 11.10s
80 0.4801 0.0008 10.88s
90 0.4630 0.0006 10.70s
100 0.4521 0.0004 10.47s
200 0.3966 0.0000 8.29s
300 0.3720 -0.0000 6.20s
400 0.3388 -0.0001 4.13s
500 0.3251 -0.0000 2.07s
600 0.3041 -0.0001 0.00s
fold n°3
Iter Train Loss OOB Improve Remaining Time
1 0.6604 0.0036 12.04s
2 0.6546 0.0032 12.71s
3 0.6698 0.0033 12.52s
4 0.6615 0.0030 12.36s
5 0.6583 0.0029 12.29s
6 0.6521 0.0030 12.22s
7 0.6489 0.0030 12.35s
8 0.6452 0.0027 12.40s
9 0.6435 0.0030 12.39s
10 0.6254 0.0030 12.38s
20 0.6116 0.0022 12.05s
30 0.5752 0.0018 11.83s
40 0.5527 0.0017 11.65s
50 0.5295 0.0016 11.44s
60 0.5266 0.0010 11.27s
70 0.4788 0.0011 11.00s
80 0.4896 0.0007 10.80s
90 0.4631 0.0005 10.63s
100 0.4640 0.0004 10.42s
200 0.3865 -0.0000 8.79s
300 0.3597 0.0000 6.46s
400 0.3357 -0.0001 4.26s
500 0.3226 -0.0001 2.17s
600 0.2893 -0.0001 0.00s
fold n°4
Iter Train Loss OOB Improve Remaining Time
1 0.6679 0.0033 12.20s
2 0.6768 0.0034 12.24s
3 0.6345 0.0033 12.31s
4 0.6621 0.0031 12.26s
5 0.6631 0.0030 12.29s
6 0.6560 0.0032 12.23s
7 0.6403 0.0030 12.44s
8 0.6435 0.0031 12.62s
9 0.6302 0.0030 12.61s
10 0.6201 0.0025 12.55s
20 0.5991 0.0022 12.29s
30 0.5801 0.0018 11.91s
40 0.5560 0.0017 11.72s
50 0.5346 0.0015 11.52s
60 0.5182 0.0011 11.24s
70 0.5007 0.0009 11.06s
80 0.4950 0.0006 10.84s
90 0.4707 0.0007 10.61s
100 0.4505 0.0004 10.75s
200 0.4032 0.0001 8.84s
300 0.3604 0.0000 6.47s
400 0.3488 -0.0000 4.26s
500 0.3289 -0.0001 2.11s
600 0.3119 -0.0000 0.00s
fold n°5
Iter Train Loss OOB Improve Remaining Time
1 0.6645 0.0031 12.33s
2 0.6721 0.0032 12.16s
3 0.6444 0.0034 12.14s
4 0.6447 0.0031 12.02s
5 0.6526 0.0034 12.11s
6 0.6620 0.0029 12.05s
7 0.6540 0.0028 12.08s
8 0.6545 0.0026 12.44s
9 0.6501 0.0029 12.62s
10 0.6015 0.0029 13.03s
20 0.6065 0.0020 12.34s
30 0.5806 0.0020 11.92s
40 0.5539 0.0015 11.67s
50 0.5274 0.0013 11.43s
60 0.5029 0.0012 11.22s
70 0.5004 0.0009 11.03s
80 0.4893 0.0008 10.79s
90 0.4634 0.0007 10.58s
100 0.4669 0.0005 10.40s
200 0.3987 0.0000 8.27s
300 0.3781 0.0000 6.20s
400 0.3502 -0.0001 4.27s
500 0.3286 -0.0001 2.12s
600 0.3171 0.0000 0.00s
CV score: 0.47454711
At this point we have the predictions of the three models above (LightGBM, XGBoost and GradientBoostingRegressor on the 49 features), together with their architectures and parameters. As before, their out-of-fold predictions are stacked with a second-level Kernel Ridge Regression model, trained with 5-fold cross-validation repeated twice.
train_stack3 = np.vstack([oof_lgb_49,oof_xgb_49,oof_gbr_49]).transpose()
test_stack3 = np.vstack([predictions_lgb_49, predictions_xgb_49,predictions_gbr_49]).transpose()
# cross-validation: 5 folds, repeated 2 times
folds_stack = RepeatedKFold(n_splits=5, n_repeats=2, random_state=7)
oof_stack3 = np.zeros(train_stack3.shape[0])
predictions_lr3 = np.zeros(test_stack3.shape[0])
for fold_, (trn_idx, val_idx) in enumerate(folds_stack.split(train_stack3, target)):
    print("fold {}".format(fold_))
    trn_data, trn_y = train_stack3[trn_idx], target.iloc[trn_idx].values
    val_data, val_y = train_stack3[val_idx], target.iloc[val_idx].values
    # Kernel Ridge Regression as the second-level model
    lr3 = kr()
    lr3.fit(trn_data, trn_y)
    oof_stack3[val_idx] = lr3.predict(val_data)
    predictions_lr3 += lr3.predict(test_stack3) / 10  # average over the 10 fold models (5 folds x 2 repeats)
mean_squared_error(target.values, oof_stack3)
fold 0
fold 1
fold 2
fold 3
fold 4
fold 5
fold 6
fold 7
fold 8
fold 9
0.4701684286707195
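As a quick check on what the stacking buys us, one can compare the out-of-fold MSE of each 49-feature base model with that of the stacked result; a minimal sketch using the variables already defined above:
# out-of-fold MSE of each base model vs. the stacked second-level model (all arrays defined above)
for name, oof in [("lgb_49", oof_lgb_49),
                  ("xgb_49", oof_xgb_49),
                  ("gbr_49", oof_gbr_49),
                  ("stack3 (KernelRidge)", oof_stack3)]:
    print("{:<22} MSE: {:.6f}".format(name, mean_squared_error(target, oof)))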
Next, we apply the same procedure to the 383-dimensional data as we did to the 263- and 49-dimensional data above.
1. Kernel Ridge Regression (kernel-based ridge regression)
folds = KFold(n_splits=5, shuffle=True, random_state=13)
oof_kr_383 = np.zeros(train_shape)
predictions_kr_383 = np.zeros(len(X_test_383))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_383, y_train)):
    print("fold n°{}".format(fold_+1))
    tr_x = X_train_383[trn_idx]
    tr_y = y_train[trn_idx]
    # Kernel Ridge Regression
    kr_383 = kr()
    kr_383.fit(tr_x, tr_y)
    oof_kr_383[val_idx] = kr_383.predict(X_train_383[val_idx])
    predictions_kr_383 += kr_383.predict(X_test_383) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_kr_383, target)))
fold n°1
fold n°2
fold n°3
fold n°4
fold n°5
CV score: 0.52067134
2. Ridge (ordinary ridge regression)
folds = KFold(n_splits=5, shuffle=True, random_state=13)
oof_ridge_383 = np.zeros(train_shape)
predictions_ridge_383 = np.zeros(len(X_test_383))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_383, y_train)):
    print("fold n°{}".format(fold_+1))
    tr_x = X_train_383[trn_idx]
    tr_y = y_train[trn_idx]
    # Ridge regression
    ridge_383 = Ridge(alpha=1200)
    ridge_383.fit(tr_x, tr_y)
    oof_ridge_383[val_idx] = ridge_383.predict(X_train_383[val_idx])
    predictions_ridge_383 += ridge_383.predict(X_test_383) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_ridge_383, target)))
fold n°1
fold n°2
fold n°3
fold n°4
fold n°5
CV score: 0.48880155
3. ElasticNet
folds = KFold(n_splits=5, shuffle=True, random_state=13)
oof_en_383 = np.zeros(train_shape)
predictions_en_383 = np.zeros(len(X_test_383))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_383, y_train)):
    print("fold n°{}".format(fold_+1))
    tr_x = X_train_383[trn_idx]
    tr_y = y_train[trn_idx]
    # ElasticNet
    en_383 = en(alpha=1.0, l1_ratio=0.06)
    en_383.fit(tr_x, tr_y)
    oof_en_383[val_idx] = en_383.predict(X_train_383[val_idx])
    predictions_en_383 += en_383.predict(X_test_383) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_en_383, target)))
fold n°1
fold n°2
fold n°3
fold n°4
fold n°5
CV score: 0.53841117
4. BayesianRidge (Bayesian ridge regression)
folds = KFold(n_splits=5, shuffle=True, random_state=13)
oof_br_383 = np.zeros(train_shape)
predictions_br_383 = np.zeros(len(X_test_383))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_383, y_train)):
    print("fold n°{}".format(fold_+1))
    tr_x = X_train_383[trn_idx]
    tr_y = y_train[trn_idx]
    # BayesianRidge
    br_383 = br()
    br_383.fit(tr_x, tr_y)
    oof_br_383[val_idx] = br_383.predict(X_train_383[val_idx])
    predictions_br_383 += br_383.predict(X_test_383) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_br_383, target)))
fold n°1
fold n°2
fold n°3
fold n°4
fold n°5
CV score: 0.48915263
At this point we have the predictions of the four models above built on the 383 features, together with their architectures and parameters. Their out-of-fold predictions are stacked with a simple LinearRegression model as the second level, trained with 5-fold cross-validation repeated twice.
train_stack1 = np.vstack([oof_br_383,oof_kr_383,oof_en_383,oof_ridge_383]).transpose()
test_stack1 = np.vstack([predictions_br_383, predictions_kr_383,predictions_en_383,predictions_ridge_383]).transpose()
folds_stack = RepeatedKFold(n_splits=5, n_repeats=2, random_state=7)
oof_stack1 = np.zeros(train_stack1.shape[0])
predictions_lr1 = np.zeros(test_stack1.shape[0])
for fold_, (trn_idx, val_idx) in enumerate(folds_stack.split(train_stack1, target)):
    print("fold {}".format(fold_))
    trn_data, trn_y = train_stack1[trn_idx], target.iloc[trn_idx].values
    val_data, val_y = train_stack1[val_idx], target.iloc[val_idx].values
    # LinearRegression as the second-level model
    lr1 = lr()
    lr1.fit(trn_data, trn_y)
    oof_stack1[val_idx] = lr1.predict(val_data)
    predictions_lr1 += lr1.predict(test_stack1) / 10  # average over the 10 fold models (5 folds x 2 repeats)
mean_squared_error(target.values, oof_stack1)
fold 0
fold 1
fold 2
fold 3
fold 4
fold 5
fold 6
fold 7
fold 8
fold 9
0.4899770398704323
Since the 49-dimensional feature set carries the most important features, we add several more models trained on the 49-dimensional data.
1. KernelRidge (kernel ridge regression)
folds = KFold(n_splits=5, shuffle=True, random_state=13)
oof_kr_49 = np.zeros(train_shape)
predictions_kr_49 = np.zeros(len(X_test_49))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_49, y_train)):
    print("fold n°{}".format(fold_+1))
    tr_x = X_train_49[trn_idx]
    tr_y = y_train[trn_idx]
    kr_49 = kr()
    kr_49.fit(tr_x, tr_y)
    oof_kr_49[val_idx] = kr_49.predict(X_train_49[val_idx])
    predictions_kr_49 += kr_49.predict(X_test_49) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_kr_49, target)))
fold n°1
fold n°2
fold n°3
fold n°4
fold n°5
CV score: 0.50601017
2. Ridge (ridge regression)
folds = KFold(n_splits=5, shuffle=True, random_state=13)
oof_ridge_49 = np.zeros(train_shape)
predictions_ridge_49 = np.zeros(len(X_test_49))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_49, y_train)):
    print("fold n°{}".format(fold_+1))
    tr_x = X_train_49[trn_idx]
    tr_y = y_train[trn_idx]
    ridge_49 = Ridge(alpha=6)
    ridge_49.fit(tr_x, tr_y)
    oof_ridge_49[val_idx] = ridge_49.predict(X_train_49[val_idx])
    predictions_ridge_49 += ridge_49.predict(X_test_49) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_ridge_49, target)))
fold n°1
fold n°2
fold n°3
fold n°4
fold n°5
CV score: 0.49675167
3. BayesianRidge (Bayesian ridge regression)
folds = KFold(n_splits=5, shuffle=True, random_state=13)
oof_br_49 = np.zeros(train_shape)
predictions_br_49 = np.zeros(len(X_test_49))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_49, y_train)):
    print("fold n°{}".format(fold_+1))
    tr_x = X_train_49[trn_idx]
    tr_y = y_train[trn_idx]
    br_49 = br()
    br_49.fit(tr_x, tr_y)
    oof_br_49[val_idx] = br_49.predict(X_train_49[val_idx])
    predictions_br_49 += br_49.predict(X_test_49) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_br_49, target)))
fold n°1
fold n°2
fold n°3
fold n°4
fold n°5
CV score: 0.49784072
4. ElasticNet
folds = KFold(n_splits=5, shuffle=True, random_state=13)
oof_en_49 = np.zeros(train_shape)
predictions_en_49 = np.zeros(len(X_test_49))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(X_train_49, y_train)):
    print("fold n°{}".format(fold_+1))
    tr_x = X_train_49[trn_idx]
    tr_y = y_train[trn_idx]
    en_49 = en(alpha=1.0, l1_ratio=0.05)
    en_49.fit(tr_x, tr_y)
    oof_en_49[val_idx] = en_49.predict(X_train_49[val_idx])
    predictions_en_49 += en_49.predict(X_test_49) / folds.n_splits
print("CV score: {:<8.8f}".format(mean_squared_error(oof_en_49, target)))
fold n°1
fold n°2
fold n°3
fold n°4
fold n°5
CV score: 0.53985429
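The four blocks above (and the analogous 383-dimensional ones) all repeat the same pattern: fit on the training folds, fill the out-of-fold slots, and average the test predictions across folds. As a side note, this pattern could be factored into a small helper; the sketch below is hypothetical (it is not part of the original notebook) and assumes the X_train_49, y_train and X_test_49 arrays plus the imports at the top of the notebook:
def kfold_oof(make_model, X_tr, y, X_te, n_splits=5, seed=13):
    # Generic K-fold training loop: returns out-of-fold predictions and fold-averaged test predictions.
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    oof = np.zeros(len(X_tr))
    pred = np.zeros(len(X_te))
    for trn_idx, val_idx in folds.split(X_tr, y):
        model = make_model()                      # fresh estimator for every fold
        model.fit(X_tr[trn_idx], y[trn_idx])
        oof[val_idx] = model.predict(X_tr[val_idx])
        pred += model.predict(X_te) / folds.n_splits
    print("CV score: {:<8.8f}".format(mean_squared_error(oof, y)))
    return oof, pred

# e.g. roughly equivalent to the Ridge block above:
# oof_ridge_49, predictions_ridge_49 = kfold_oof(lambda: Ridge(alpha=6), X_train_49, y_train, X_test_49)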
We now have the predictions of these four additional models built on the 49 features, together with their architectures and parameters. As before, their out-of-fold predictions are stacked with a simple LinearRegression second-level model, trained with 5-fold cross-validation repeated twice.
train_stack4 = np.vstack([oof_br_49,oof_kr_49,oof_en_49,oof_ridge_49]).transpose()
test_stack4 = np.vstack([predictions_br_49, predictions_kr_49,predictions_en_49,predictions_ridge_49]).transpose()
folds_stack = RepeatedKFold(n_splits=5, n_repeats=2, random_state=7)
oof_stack4 = np.zeros(train_stack4.shape[0])
predictions_lr4 = np.zeros(test_stack4.shape[0])
for fold_, (trn_idx, val_idx) in enumerate(folds_stack.split(train_stack4, target)):
    print("fold {}".format(fold_))
    trn_data, trn_y = train_stack4[trn_idx], target.iloc[trn_idx].values
    val_data, val_y = train_stack4[val_idx], target.iloc[val_idx].values
    # LinearRegression as the second-level model
    lr4 = lr()
    lr4.fit(trn_data, trn_y)
    oof_stack4[val_idx] = lr4.predict(val_data)
    predictions_lr4 += lr4.predict(test_stack4) / 10  # predict on test_stack4 and average over the 10 fold models
mean_squared_error(target.values, oof_stack4)
fold 0
fold 1
fold 2
fold 3
fold 4
fold 5
fold 6
fold 7
fold 8
fold 9
0.49708948101136063
Model fusion
Here we compute a weighted sum of the predictions of the four stacked models above to obtain a final result. Of course, hand-picked weights like these are fairly crude.
# weighted blend of the four stacked out-of-fold predictions, for comparison with the stacking result below
mean_squared_error(target.values, 0.7*(0.6*oof_stack2 + 0.4*oof_stack3)+0.3*(0.55*oof_stack1+0.45*oof_stack4))
0.4554367742951356
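For completeness, the same weights could be applied to the test-set predictions of the four stacks to produce a blended submission vector; a minimal sketch using the predictions_lr1 ... predictions_lr4 arrays defined above (this blend is not used in what follows):
# weighted blend of the four stacked test predictions, mirroring the OOF comparison above
predictions_blend = 0.7 * (0.6 * predictions_lr2 + 0.4 * predictions_lr3) \
                    + 0.3 * (0.55 * predictions_lr1 + 0.45 * predictions_lr4)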
A better approach is to feed the four stacked models above into one more round of stacking; here we simply use LinearRegression as the final model.
train_stack5 = np.vstack([oof_stack1,oof_stack2,oof_stack3,oof_stack4]).transpose()
test_stack5 = np.vstack([predictions_lr1, predictions_lr2,predictions_lr3,predictions_lr4]).transpose()
folds_stack = RepeatedKFold(n_splits=5, n_repeats=2, random_state=7)
oof_stack5 = np.zeros(train_stack5.shape[0])
predictions_lr5= np.zeros(test_stack5.shape[0])
for fold_, (trn_idx, val_idx) in enumerate(folds_stack.split(train_stack5, target)):
    print("fold {}".format(fold_))
    trn_data, trn_y = train_stack5[trn_idx], target.iloc[trn_idx].values
    val_data, val_y = train_stack5[val_idx], target.iloc[val_idx].values
    # LinearRegression
    lr5 = lr()
    lr5.fit(trn_data, trn_y)
    oof_stack5[val_idx] = lr5.predict(val_data)
    predictions_lr5 += lr5.predict(test_stack5) / 10
mean_squared_error(target.values, oof_stack5)
fold 0
fold 1
fold 2
fold 3
fold 4
fold 5
fold 6
fold 7
fold 8
fold 9
0.45068312762137575
Saving the results
Read the submission template (submit_example.csv).
submit_example = pd.read_csv('submit_example.csv',sep=',',encoding='latin-1')
submit_example['happiness'] = predictions_lr5
submit_example.happiness.describe()
count 2968.000000
mean 3.880144
std 0.459996
min 1.611800
25% 3.671302
50% 3.948390
75% 4.186912
max 5.057284
Name: happiness, dtype: float64
Finally we save the results. The predicted values are continuous in the range 1-5, while the ground truth is an integer score, so as a small refinement we snap predictions that fall very close to an integer boundary to that integer before writing the csv file.
submit_example.loc[submit_example['happiness']>4.96,'happiness']= 5
submit_example.loc[submit_example['happiness']<=1.04,'happiness']= 1
submit_example.loc[(submit_example['happiness']>1.96)&(submit_example['happiness']<2.04),'happiness']= 2
submit_example.to_csv("submision.csv",index=False)
submit_example.happiness.describe()
count 2968.000000
mean 3.880132
std 0.460019
min 1.611800
25% 3.671302
50% 3.948390
75% 4.186912
max 5.000000
Name: happiness, dtype: float64
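An even simpler alternative to the hand-picked thresholds above is to clip the continuous predictions to the valid range [1, 5] before saving; a minimal sketch (not what the submission above used, and the output filename is hypothetical):
# clip predictions to the legal happiness range instead of threshold-based snapping
submit_example['happiness'] = np.clip(predictions_lr5, 1, 5)
submit_example.to_csv("submission_clipped.csv", index=False)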
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
iris = load_iris()
X_train,X_test,y_train,y_test = train_test_split(iris.data,iris.target,random_state=0)
print("Size of training set:{} size of testing set:{}".format(X_train.shape[0],X_test.shape[0]))
#### 1
best_score = 0
for gamma in [0.001, 0.01, 0.1, 1, 10, 100]:
    for C in [0.001, 0.01, 0.1, 1, 10, 100]:
        svm = SVC(gamma=gamma, C=C)  # train one model for each possible parameter combination
        svm.fit(X_train, y_train)
        score = svm.score(X_test, y_test)
        if score > best_score:  # keep the best-performing parameters
            best_score = score
            best_parameters = {'gamma': gamma, 'C': C}
print("Best score:{:.2f}".format(best_score))
#### 2
from sklearn.model_selection import GridSearchCV
# list the parameters to tune together with their candidate values
param_grid = {"gamma": [0.001, 0.01, 0.1, 1, 10, 100],
              "C": [0.001, 0.01, 0.1, 1, 10, 100]}
print("Parameters:{}".format(param_grid))
grid_search = GridSearchCV(SVC(), param_grid, cv=5)  # instantiate GridSearchCV; cv sets the number of cross-validation folds
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=10)
grid_search.fit(X_train, y_train)  # search for the best parameters, then refit a new SVC estimator with them
print("Test set score:{:.2f}".format(grid_search.score(X_test, y_test)))
print("Best parameters:{}".format(grid_search.best_params_))
# The SVM model has two very important parameters, C and gamma.
# C is the penalty coefficient, i.e. the tolerance for misclassification (the trade-off between margin width and training accuracy).
# A larger C tolerates fewer errors and tends to overfit; a smaller C tends to underfit; either extreme hurts generalization.
# gamma is a parameter of the RBF kernel. It implicitly determines how the data are distributed after mapping to the new feature space:
# the larger gamma is, the fewer support vectors; the smaller gamma is, the more support vectors. The number of support vectors affects training and prediction speed.
# The two parameters are independent of each other.
Size of training set:112 size of testing set:38
Best score:0.97
Parameters:{'gamma': [0.001, 0.01, 0.1, 1, 10, 100], 'C': [0.001, 0.01, 0.1, 1, 10, 100]}
Test set score:0.97
Best parameters:{'C': 10, 'gamma': 0.1}
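Beyond best_params_, the fitted grid_search object also exposes the refit best estimator and the full cross-validation table; a short sketch for inspecting them (assuming the grid_search object above):
print("Best CV score:{:.2f}".format(grid_search.best_score_))  # mean cross-validated score of the best parameter combination
best_model = grid_search.best_estimator_                       # SVC refit on the full training set with the best parameters
cv_table = pd.DataFrame(grid_search.cv_results_)               # one row per parameter combination
print(cv_table[['param_C', 'param_gamma', 'mean_test_score']].head())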
Models used in the ensemble:
LightGBM
XGBoost
RandomForestRegressor
GradientBoostingRegressor
ExtraTreesRegressor
KernelRidge (kernel ridge regression)
Ridge (ridge regression)
ElasticNet
BayesianRidge
Source: https://blog.csdn.net/lancecrazy/article/details/117002257