
What type should a dense vector be when used in a PySpark UDF?


See also the English answer: How to convert ArrayType to DenseVector in PySpark DataFrame? (1 answer)
I want to convert a list into a Vector in PySpark and then use that column to train a machine learning model. But my Spark version is 1.6.0, which does not have VectorUDT(). So what type should my udf return?

from pyspark.sql import SQLContext
from pyspark import SparkContext, SparkConf
from pyspark.sql.functions import udf, col
from pyspark.mllib.linalg import DenseVector
from pyspark.mllib.linalg import Vectors
from pyspark.sql.types import *


conf = SparkConf().setAppName('rank_test')
sc = SparkContext(conf=conf)
spark = SQLContext(sc)


df = spark.createDataFrame([[[0.1, 0.2, 0.3, 0.4, 0.5]]], ['a'])
df.show()

def list2vec(column):
    return Vectors.dense(column)

# This is the line that fails: DenseVector() is not a valid return type.
getVector = udf(lambda y: list2vec(y), DenseVector())
df.withColumn('b', getVector(col('a'))).show()

I have tried many types; passing DenseVector() gives me this error:

Traceback (most recent call last):
  File "t.py", line 21, in <module>
    getVector = udf(lambda y: list2vec(y),DenseVector() )
TypeError: __init__() takes exactly 2 arguments (1 given)

Please help me.
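The traceback already hints at the root cause: the second argument to udf must be an instance of a pyspark.sql.types.DataType that describes the return value, but DenseVector is a value class rather than a type, and its constructor requires the vector's elements, hence "takes exactly 2 arguments (1 given)". For contrast, here is a sketch of a return type that at least constructs, though it produces an array column rather than a vector column (shown only as an illustration):

from pyspark.sql.types import ArrayType, DoubleType

# ArrayType(DoubleType()) is a valid DataType instance, so the udf is
# accepted, but the resulting column is array<double>, not a vector.
getArray = udf(lambda y: [float(v) for v in y], ArrayType(DoubleType()))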

Solution:

You can use Vectors together with VectorUDT in a UDF:

from pyspark.ml.linalg import Vectors, VectorUDT
from pyspark.sql import functions as F

# VectorUDT() is the DataType that declares a vector return value.
ud_f = F.udf(lambda r: Vectors.dense(r), VectorUDT())
df = df.withColumn('b', ud_f('a'))
df.show()
+-------------------------+---------------------+
|a                        |b                    |
+-------------------------+---------------------+
|[0.1, 0.2, 0.3, 0.4, 0.5]|[0.1,0.2,0.3,0.4,0.5]|
+-------------------------+---------------------+

df.printSchema()
root
  |-- a: array (nullable = true)
  |    |-- element: double (containsNull = true)
  |-- b: vector (nullable = true)
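With b typed as a vector, the column can feed pyspark.ml stages directly, which was the original goal. A minimal sketch of such a downstream step (MinMaxScaler is chosen purely for illustration; any estimator expecting a features column is used the same way):

from pyspark.ml.feature import MinMaxScaler

# The vector column 'b' now works as a features column for pyspark.ml.
scaler = MinMaxScaler(inputCol='b', outputCol='features')
scaler.fit(df).transform(df).select('features').show(truncate=False)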

For more on VectorUDT, see http://spark.apache.org/docs/2.2.0/api/python/_modules/pyspark/ml/linalg.html
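Note that pyspark.ml.linalg only appeared in Spark 2.0. On the asker's Spark 1.6.0 the same pattern should work with the mllib classes instead; a sketch assuming pyspark.mllib.linalg.VectorUDT is importable on that version:

from pyspark.mllib.linalg import Vectors, VectorUDT  # mllib variants for Spark 1.x
from pyspark.sql import functions as F

# Same idea as above: VectorUDT() declares the return type,
# Vectors.dense builds the actual value.
ud_f = F.udf(lambda r: Vectors.dense(r), VectorUDT())
df = df.withColumn('b', ud_f('a'))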

Tags: python, machine-learning, apache-spark, pyspark, apache-spark-mllib
Source: https://codeday.me/bug/20190622/1262696.html