Python – PySpark installation error
I have installed pyspark on my laptop following the instructions in several blog posts, including this, this, this and this. However, whenever I try to use pyspark from the terminal or from a Jupyter notebook, I keep getting the error below.
I have installed all the prerequisite software, as listed at the bottom of this question.
I have added the following to my .bashrc:
function sjupyter_init()
{
#Set anaconda3 as python
export PATH=~/anaconda3/bin:$PATH
#Spark path (based on your computer)
SPARK_HOME=/opt/spark
export PATH=$SPARK_HOME:$PATH
export PYTHONPATH=$SPARK_HOME/python:/home/khurram/anaconda3/bin/python3
export PYSPARK_DRIVER_PYTHON="jupyter"
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
export PYSPARK_PYTHON=python3
}
I run sjupyter_init from the terminal and then jupyter notebook to launch a Jupyter notebook with pyspark support.
Inside the notebook, the following executes without error:
import findspark
findspark.init('/opt/spark')
from pyspark.sql import SparkSession
But when I execute
spark = SparkSession.builder.appName("test").getOrCreate()
it fails with this error message:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/01/20 17:10:06 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/spark/python/pyspark/sql/session.py", line 173, in getOrCreate
sc = SparkContext.getOrCreate(sparkConf)
File "/opt/spark/python/pyspark/context.py", line 334, in getOrCreate
SparkContext(conf=conf or SparkConf())
File "/opt/spark/python/pyspark/context.py", line 118, in __init__
conf, jsc, profiler_cls)
File "/opt/spark/python/pyspark/context.py", line 180, in _do_init
self._jsc = jsc or self._initialize_context(self._conf._jconf)
File "/opt/spark/python/pyspark/context.py", line 273, in _initialize_context
return self._jvm.JavaSparkContext(jconf)
File "/home/khurram/anaconda3/lib/python3.6/site-packages/py4j/java_gateway.py", line 1428, in __call__
answer, self._gateway_client, None, self._fqn)
File "/home/khurram/anaconda3/lib/python3.6/site-packages/py4j/protocol.py", line 320, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.ExceptionInInitializerError
at org.apache.spark.SparkConf.validateSettings(SparkConf.scala:546)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:373)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:236)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: linux-0he7: linux-0he7: Name or service not known
at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
at org.apache.spark.util.Utils$.findLocalInetAddress(Utils.scala:891)
at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress$lzycompute(Utils.scala:884)
at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress(Utils.scala:884)
at org.apache.spark.util.Utils$$anonfun$localHostName$1.apply(Utils.scala:941)
at org.apache.spark.util.Utils$$anonfun$localHostName$1.apply(Utils.scala:941)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.util.Utils$.localHostName(Utils.scala:941)
at org.apache.spark.internal.config.package$.<init>(package.scala:204)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
... 14 more
Caused by: java.net.UnknownHostException: linux-0he7: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getLocalHost(InetAddress.java:1500)
... 23 more
My system details are:
OS:
openSUSE Leap 42.2 64-bit
Java:
khurram@linux-0he7:~> java -version
openjdk version "1.8.0_151"
Scala:
khurram@linux-0he7:~> scala -version
Scala code runner version 2.12.4 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.
Hadoop 3.0
khurram@linux-0he7:~> echo $HADOOP_HOME
/opt/hadoop
Py4J
khurram@linux-0he7:~> pip show py4j
Name: py4j
Version: 0.10.6
Summary: Enables Python programs to dynamically access arbitrary Java objects
Home-page: https://www.py4j.org/
Author: Barthelemy Dagenais
Author-email: barthelemy@infobart.com
License: BSD License
Location: /home/khurram/anaconda3/lib/python3.6/site-packages
Requires:
khurram@linux-0he7:~>
I have run chmod 777 on the Hadoop and Spark directories.
khurram@linux-0he7:~> ls -al /opt/
total 8
drwxr-xr-x 1 root root 96 Jan 19 20:22 .
drwxr-xr-x 1 root root 222 Jan 20 14:54 ..
lrwxrwxrwx 1 root root 18 Jan 19 20:22 hadoop -> /opt/hadoop-3.0.0/
drwxrwxrwx 1 khurram users 126 Dec 8 19:42 hadoop-3.0.0
lrwxrwxrwx 1 root root 30 Jan 19 19:40 spark -> /opt/spark-2.2.1-bin-hadoop2.7
drwxrwxrwx 1 khurram users 150 Jan 19 19:33 spark-2.2.1-bin-hadoop2.7
khurram@linux-0he7:~>
Contents of the hosts file:
khurram@linux-0he7:~> cat /etc/hosts
127.0.0.1 localhost
# special IPv6 addresses
::1 localhost ipv6-localhost ipv6-loopback
fe00::0 ipv6-localnet
ff00::0 ipv6-mcastprefix
ff02::1 ipv6-allnodes
ff02::2 ipv6-allrouters
ff02::3 ipv6-allhosts
Solution:
java.net.UnknownHostException is, per its Javadoc:
Thrown to indicate that the IP address of a host could not be determined.
and it is thrown at the bottom of your stack trace:
Caused by: java.net.UnknownHostException: linux-0he7: Name or service not known
Judging from your shell prompt (khurram@linux-0he7), your hostname is linux-0he7, and I assume you are running Spark in local mode. That means your /etc/hosts has no entry for linux-0he7.
Adding
127.0.0.1 linux-0he7
to /etc/hosts should do the trick.
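As a quick sanity check after editing /etc/hosts (a minimal sketch, not part of the original answer), you can confirm from Python that the hostname resolves; this is roughly the same lookup the Spark driver performs via InetAddress.getLocalHost:
import socket

# The driver fails because the OS cannot resolve the machine's own hostname.
# After adding "127.0.0.1 linux-0he7" to /etc/hosts, this should succeed:
hostname = socket.gethostname()            # e.g. "linux-0he7"
print(hostname, "->", socket.gethostbyname(hostname))
# Expected: linux-0he7 -> 127.0.0.1
# If this still raises socket.gaierror, Spark will hit the same
# UnknownHostException when it tries to determine the local address.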
You can also point the driver at a specific host IP with the spark.driver.bindAddress and spark.driver.host properties.
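For example (a sketch, not from the original answer), both properties can be set when building the session; binding to the loopback address is sufficient for local mode:
from pyspark.sql import SparkSession

# Bind the driver explicitly so Spark does not need to resolve the
# machine's hostname. 127.0.0.1 works for local mode; use the machine's
# real IP if executors run on other hosts.
spark = (SparkSession.builder
         .appName("test")
         .config("spark.driver.bindAddress", "127.0.0.1")  # address the driver listens on
         .config("spark.driver.host", "127.0.0.1")         # address advertised to executors
         .getOrCreate())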
Independently of this exception: Hadoop 3.0.0 is not yet supported by Spark. I'd suggest staying on a 2.x release for now.
Tags: python, apache-spark, pyspark, hadoop, jupyter-notebook
Source: https://codeday.me/bug/20191013/1909287.html