[Decision Tree] Titanic Survivor Prediction Project
Project Goal
The sinking of the Titanic is one of the most famous maritime disasters in history: of the 2,224 passengers and crew on board, 1,502 died. The goal of this project is to use machine learning tools to predict which passengers survived the disaster.
Project Steps
- Import and explore the data
- Handle missing values and drop features irrelevant to the prediction
- Convert categorical variables to numeric variables
- Instantiate the model and perform cross-validation
- Make predictions with the model
- Tune the model to find the best hyperparameters
Project Code (Jupyter)
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
import matplotlib.pyplot as plt
data = pd.read_csv("Taitanic data.csv")
data.head()
data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
 #   Column       Non-Null Count  Dtype
---  ------       --------------  -----
 0   PassengerId  891 non-null    int64
 1   Survived     891 non-null    int64
 2   Pclass       891 non-null    int64
 3   Name         891 non-null    object
 4   Sex          891 non-null    object
 5   Age          714 non-null    float64
 6   SibSp        891 non-null    int64
 7   Parch        891 non-null    int64
 8   Ticket       891 non-null    object
 9   Fare         891 non-null    float64
 10  Cabin        204 non-null    object
 11  Embarked     889 non-null    object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB
# First, let's look at what each column means
# PassengerId => passenger ID
# Pclass      => passenger class (1st/2nd/3rd class cabin)
# Name        => passenger name
# Sex         => sex
# Age         => age
# SibSp       => number of siblings/spouses aboard
# Parch       => number of parents/children aboard
# Ticket      => ticket number
# Fare        => fare
# Cabin       => cabin
# Embarked    => port of embarkation
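Before dropping anything, a quick look at how survival relates to the most obvious candidate features can help justify the feature selection. The following is a minimal exploratory sketch, not part of the original notebook, using the `data` DataFrame loaded above (column names follow the Kaggle Titanic schema):

# Hypothetical exploratory step: overall survival rate and survival rate by Sex / Pclass
print(data["Survived"].mean())                    # overall proportion of survivors
print(data.groupby("Sex")["Survived"].mean())     # survival rate by sex
print(data.groupby("Pclass")["Survived"].mean())  # survival rate by passenger class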
# Drop columns with too many missing values and columns unrelated to the target y
data.drop(["Cabin", "Name", "Ticket"], inplace=True, axis=1)
# Handle missing values: fill columns with many missing values,
# and simply drop the few rows that still contain missing values
data["Age"] = data["Age"].fillna(data["Age"].mean())
data = data.dropna()
# Reset the index after dropping rows
data.index = range(len(data))
data.tail()
data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 889 entries, 0 to 888
Data columns (total 9 columns):
 #   Column       Non-Null Count  Dtype
---  ------       --------------  -----
 0   PassengerId  889 non-null    int64
 1   Survived     889 non-null    int64
 2   Pclass       889 non-null    int64
 3   Sex          889 non-null    object
 4   Age          889 non-null    float64
 5   SibSp        889 non-null    int64
 6   Parch        889 non-null    int64
 7   Fare         889 non-null    float64
 8   Embarked     889 non-null    object
dtypes: float64(2), int64(5), object(2)
memory usage: 62.6+ KB
# Converting categorical variables to numeric variables
# Before converting, it helps to be clear about the types of data involved.
# Data can be quantitative or qualitative; qualitative data can be ordinal or nominal,
# and quantitative data can be discrete or continuous.
# In this project the columns to convert are Sex and Embarked. Sex is nominal (unordered)
# qualitative data; Embarked (port of embarkation) also has no inherent order, but it is
# encoded here with an ordinal encoder for simplicity.
# sklearn provides three classes for this kind of conversion: OneHotEncoder, OrdinalEncoder and LabelEncoder.
# The differences are:
# 1. OneHotEncoder encodes nominal (unordered) data (for features)
# 2. OrdinalEncoder encodes ordinal data (for features) and preserves the ordering
# 3. LabelEncoder encodes the target labels and does not preserve any ordering
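To make the distinction concrete, here is a small illustrative sketch that is not part of the original notebook; the toy column values are made up for demonstration only:

from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder, LabelEncoder
import numpy as np

# Toy feature column (hypothetical data for illustration)
sizes = np.array([["S"], ["M"], ["L"], ["M"]])

# OrdinalEncoder: one integer per category; "categories" fixes the intended order S < M < L
print(OrdinalEncoder(categories=[["S", "M", "L"]]).fit_transform(sizes).ravel())  # [0. 1. 2. 1.]

# OneHotEncoder: one binary column per category, no order implied
# (newer sklearn versions use sparse_output=False instead of sparse=False)
print(OneHotEncoder(sparse=False).fit_transform(sizes))

# LabelEncoder: for the target y only; categories are numbered alphabetically, the order carries no meaning
print(LabelEncoder().fit_transform(["yes", "no", "no", "yes"]))  # [1 0 0 1]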
# Convert categorical variables to numeric variables
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

# One-hot encode Sex
ohe = OneHotEncoder(sparse=False)  # newer sklearn versions use sparse_output=False
data_Sex = ohe.fit_transform(data["Sex"].values.reshape(-1, 1))
# Check the feature names produced by the encoder, then wrap the result in a DataFrame
ohe.get_feature_names()  # newer sklearn versions use get_feature_names_out()
data_Sex_df = pd.DataFrame(data_Sex, columns=["female", "male"])

# Encode Embarked and wrap it in a DataFrame
oe = OrdinalEncoder()
data_Embarked = oe.fit_transform(data["Embarked"].values.reshape(-1, 1))
data_Embarked_df = pd.DataFrame(data_Embarked, columns=["Embarked"])
print(data_Sex.shape)
print(data_Embarked.shape)
print(data.shape)
(889, 2)
(889, 1)
(889, 9)
# Drop the original Sex and Embarked columns
data.drop(["Sex", "Embarked"], inplace=True, axis=1)
# Concatenate the encoded columns back onto the data
newdata = pd.concat([data, data_Sex_df, data_Embarked_df], axis=1)
newdata
# Split features and target
X = newdata.iloc[:, newdata.columns != "Survived"]
y = newdata.iloc[:, newdata.columns == "Survived"]
# Split into training and test sets
Xtrain, Xtest, Ytrain, Ytest = train_test_split(X, y)
clf = DecisionTreeClassifier(random_state=666)
clf.fit(Xtrain, Ytrain)
score_ = clf.score(Xtest, Ytest)
score_
0.7713004484304933
# Cross-validation: try different numbers of folds and keep the best one
cv_score = []
for i in range(2, 10):
    score = cross_val_score(clf, X, y, cv=i).mean()
    cv_score.append(score)
best_cv = cv_score.index(max(cv_score)) + 2
best_cv
5
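matplotlib was imported at the top but not used above; a minimal sketch of how the fold-count search could be visualized, assuming the `cv_score` list computed in the previous cell, might look like this:

# Hypothetical visualization of the cross-validation scores computed above
plt.plot(range(2, 10), cv_score, marker="o")
plt.xlabel("number of folds (cv)")
plt.ylabel("mean cross-validation accuracy")
plt.title("Choosing the number of folds")
plt.show()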
# Grid search over hyperparameters
parameters = {"splitter": ("best", "random"),
              "max_depth": [*range(1, 5)],
              "min_samples_leaf": [*range(1, 10)]}
clf = DecisionTreeClassifier(random_state=666)
GS = GridSearchCV(clf, parameters, cv=best_cv)
GS.fit(Xtrain, Ytrain)
GS.best_score_
0.8138143867130513
GS.best_params_
{'max_depth': 3, 'min_samples_leaf': 1, 'splitter': 'best'}
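The best score reported by the grid search is a cross-validated score on the training data; a natural final step is to evaluate the refitted best model on the held-out test set. A minimal sketch, not in the original post, assuming the `GS`, `Xtest` and `Ytest` objects defined above:

# GridSearchCV refits the best parameter combination on the full training set by default,
# so the tuned tree can be scored directly on the held-out test data
best_tree = GS.best_estimator_
print(best_tree.score(Xtest, Ytest))  # test-set accuracy of the tuned model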
Source: https://www.cnblogs.com/waterr/p/14425697.html