python - Error in PySpark when trying to run the Word2Vec example
I am trying to run the very simple Word2Vec example given in the documentation:
https://spark.apache.org/docs/1.4.1/api/python/_modules/pyspark/ml/feature.html#Word2Vec
from pyspark import SparkContext, SQLContext
from pyspark.mllib.feature import Word2Vec
sqlContext = SQLContext(sc)
sent = ("a b " * 100 + "a c " * 10).split(" ")
doc = sqlContext.createDataFrame([(sent,), (sent,)], ["sentence"])
model = Word2Vec(vectorSize=5, seed=42, inputCol="sentence", outputCol="model").fit(doc)
model.getVectors().show()
model.findSynonyms("a", 2).show()
TypeError Traceback (most recent call last)
<ipython-input-4-e57e9f694961> in <module>()
5 sent = ("a b " * 100 + "a c " * 10).split(" ")
6 doc = sqlContext.createDataFrame([(sent,), (sent,)], ["sentence"])
----> 7 model = Word2Vec(vectorSize=5, seed=42, inputCol="sentence", outputCol="model").fit(doc)
8 model.getVectors().show()
9 model.findSynonyms("a", 2).show()
TypeError: __init__() got an unexpected keyword argument 'vectorSize'
Any idea why this fails?
Solution:
You are referring to the documentation for ml, but you are importing from the mllib package. In mllib, Word2Vec does not accept any keyword arguments in __init__; it is configured through setter methods instead (see the sketch below).
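For contrast, a minimal sketch of how the mllib variant is driven, assuming an existing SparkContext sc and an illustrative token RDD (the variable names here are my own):
# mllib's Word2Vec is configured through setters, and fit() takes an RDD of token lists
from pyspark.mllib.feature import Word2Vec as MLlibWord2Vec
corpus = sc.parallelize([("a b " * 100 + "a c " * 10).split(" ")])  # illustrative corpus
mllib_model = MLlibWord2Vec().setVectorSize(5).setSeed(42).fit(corpus)
mllib_model.findSynonyms("a", 2)  # pairs of (word, cosine similarity)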
What you intended is:
from pyspark.ml.feature import Word2Vec
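Putting it together, the complete example with the corrected import (a sketch assuming a running SparkContext sc, e.g. in the pyspark shell):
from pyspark import SparkContext, SQLContext
from pyspark.ml.feature import Word2Vec  # ml, not mllib
sqlContext = SQLContext(sc)  # sc is the existing SparkContext
sent = ("a b " * 100 + "a c " * 10).split(" ")
doc = sqlContext.createDataFrame([(sent,), (sent,)], ["sentence"])
model = Word2Vec(vectorSize=5, seed=42, inputCol="sentence", outputCol="model").fit(doc)
model.getVectors().show()
model.findSynonyms("a", 2).show()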
Output:
+----+--------------------+
|word| vector|
+----+--------------------+
| a|[-0.3511952459812...|
| b|[0.29077222943305...|
| c|[0.02315592765808...|
+----+--------------------+
+----+-------------------+
|word| similarity|
+----+-------------------+
| b|0.29255685145799626|
| c|-0.5414068302988307|
+----+-------------------+
Tags: word2vec, apache-spark, pyspark, machine-learning, python. Source: https://codeday.me/bug/20191119/2036599.html