Python Spark DataFrame: replacing null with SparseVector

In Spark, I have the following DataFrame named "df", which contains some null entries:

+-------+--------------------+--------------------+                     
|     id|           features1|           features2|
+-------+--------------------+--------------------+
|    185|(5,[0,1,4],[0.1,0...|                null|
|    220|(5,[0,2,3],[0.1,0...|(10,[1,2,6],[0.1,...|
|    225|                null|(10,[1,3,5],[0.1,...|
+-------+--------------------+--------------------+

df.features1 and df.features2 are columns of vector type (nullable). I then tried to fill the null entries with SparseVectors using the following code:

from pyspark.ml.linalg import SparseVector

df1 = df.na.fill({"features1": SparseVector(5, {}), "features2": SparseVector(10, {})})

This code raises the following error:

AttributeError: 'SparseVector' object has no attribute '_get_object_id'

I then found the following passage in the Spark documentation:

fillna(value, subset=None)
Replace null values, alias for na.fill(). DataFrame.fillna() and DataFrameNaFunctions.fill() are aliases of each other.

Parameters: 
value – int, long, float, string, or dict. Value to replace null values with. If the value is a dict, then subset is ignored and value must be a mapping from column name (string) to replacement value. The replacement value must be an int, long, float, or string.
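For contrast, here is a minimal sketch of fillna with the supported primitive types; the DataFrame and column names are made up for illustration:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical DataFrame with primitive-typed columns
people = spark.createDataFrame(
    [(1, None, None), (2, "bob", 3.5)],
    ["id", "name", "score"]
)

# A dict maps each column name to a primitive replacement value
people.na.fill({"name": "unknown", "score": 0.0}).show()
# +---+-------+-----+
# | id|   name|score|
# +---+-------+-----+
# |  1|unknown|  0.0|
# |  2|    bob|  3.5|
# +---+-------+-----+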

Does this explain why I cannot replace the null entries with SparseVectors in a DataFrame? Or does it mean there is no way to do this with a DataFrame at all?

I can achieve my goal by converting the DataFrame to an RDD and replacing the None values with SparseVectors there, but doing it directly on the DataFrame would be more convenient.
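For reference, a minimal sketch of that RDD round-trip, assuming the three-column schema shown above:

from pyspark.ml.linalg import SparseVector

# Map over the rows, substituting an empty SparseVector for None
df_filled = df.rdd.map(lambda row: (
    row.id,
    row.features1 if row.features1 is not None else SparseVector(5, {}),
    row.features2 if row.features2 is not None else SparseVector(10, {})
)).toDF(["id", "features1", "features2"])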

Is there a way to do this directly on the DataFrame?
Thanks!

Solution:

As the documentation quoted above says, na.fill() only accepts primitive replacement values, so for vector columns you can use a udf instead:

from pyspark.sql.functions import udf, lit
from pyspark.ml.linalg import SparseVector, VectorUDT

# Return the column value as-is when present, otherwise an empty
# SparseVector of the requested dimension i
fill_with_vector = udf(
    lambda x, i: x if x is not None else SparseVector(i, {}),
    VectorUDT()
)

# Example data: one row with vectors, one row with nulls
df = sc.parallelize([
    (SparseVector(5, {1: 1.0}), SparseVector(10, {1: -1.0})), (None, None)
]).toDF(["features1", "features2"])

(df
    .withColumn("features1", fill_with_vector("features1", lit(5)))
    .withColumn("features2", fill_with_vector("features2", lit(10)))
    .show())

# +-------------+---------------+
# |    features1|      features2|
# +-------------+---------------+
# |(5,[1],[1.0])|(10,[1],[-1.0])|
# |    (5,[],[])|     (10,[],[])|
# +-------------+---------------+
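A variant of the same idea (not from the original answer) keeps the null handling in the built-in coalesce and lets the udf only construct the default vector:

from pyspark.sql.functions import coalesce, udf, lit
from pyspark.ml.linalg import SparseVector, VectorUDT

# udf that builds an all-zero SparseVector of the given dimension
empty_vector = udf(lambda i: SparseVector(i, {}), VectorUDT())

df_filled = (df
    .withColumn("features1", coalesce("features1", empty_vector(lit(5))))
    .withColumn("features2", coalesce("features2", empty_vector(lit(10)))))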

Tags: pyspark-sql, apache-spark, pyspark, spark-dataframe, python
Source: https://codeday.me/bug/20191111/2022875.html