
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.


https://stackoverflow.com/questions/35652665/java-io-ioexception-could-not-locate-executable-null-bin-winutils-exe-in-the-ha


I'm not able to run a simple Spark job in Scala IDE (a Maven Spark project) installed on Windows 7.

The Spark core dependency has been added.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("DemoDF").setMaster("local")
val sc = new SparkContext(conf)
val logData = sc.textFile("File.txt")
logData.count()

Error:

16/02/26 18:29:33 INFO SparkContext: Created broadcast 0 from textFile at FrameDemo.scala:13
16/02/26 18:29:34 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1143)
    at com.org.SparkDF.FrameDemo$.main(FrameDemo.scala:14)
    at com.org.SparkDF.FrameDemo.main(FrameDemo.scala)
Asked Feb 26 '16 by Elvish_Blade (93 votes).

12 Answers

Answer by Taky (142 votes):

The cause: on Windows, Hadoop's Shell utilities look for %HADOOP_HOME%\bin\winutils.exe, and when HADOOP_HOME is not set the resolved path becomes the literal null\bin\winutils.exe from the error message. The fix:

  1. Download winutils.exe from http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe.
  2. Set your HADOOP_HOME environment variable at the OS level, or set it programmatically; the path must point to the folder whose bin subdirectory contains winutils.exe:

    System.setProperty("hadoop.home.dir", "full path to the folder with winutils");

  3. Enjoy. (A minimal end-to-end sketch follows.)
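
For concreteness, here is a minimal sketch of the programmatic variant in Scala, assuming winutils.exe was extracted to C:\winutils\bin (the location is an assumption; use whatever folder you chose):

    import org.apache.spark.{SparkConf, SparkContext}

    object WinutilsDemo {
      def main(args: Array[String]): Unit = {
        // Must be set before the first Hadoop class loads: as the stack trace
        // above shows, Shell's static initializer (Shell.<clinit>) resolves the
        // winutils path once, at class-load time.
        System.setProperty("hadoop.home.dir", "C:\\winutils") // assumed location

        val conf = new SparkConf().setAppName("DemoDF").setMaster("local")
        val sc = new SparkContext(conf)
        println(sc.textFile("File.txt").count())
        sc.stop()
      }
    }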

Answer by Deokant Gupta (67 votes):
  1. Download winutils.exe.
  2. Create a folder, say C:\winutils\bin.
  3. Copy winutils.exe into C:\winutils\bin.
  4. Set the environment variable HADOOP_HOME to C:\winutils (a quick check for this is sketched below).
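
If the variable doesn't seem to take effect (for example, the IDE was started before it was set), a small Scala check run from the same JVM shows what Hadoop will actually resolve. It mirrors Hadoop's lookup order: the hadoop.home.dir system property first, then the HADOOP_HOME environment variable:

    import java.io.File

    object HadoopHomeCheck {
      def main(args: Array[String]): Unit = {
        // Hadoop checks the hadoop.home.dir system property first and
        // falls back to the HADOOP_HOME environment variable.
        val home = sys.props.getOrElse("hadoop.home.dir",
          sys.env.getOrElse("HADOOP_HOME", "<not set>"))
        println(s"Hadoop home resolves to: $home")
        val exe = new File(home, "bin\\winutils.exe")
        println(s"winutils.exe present: ${exe.isFile}")
      }
    }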
Answer by Ani Menon (26 votes):

Follow these steps:

  1. Create a bin folder in any directory (to be used in step 3).

  2. Download winutils.exe and place it in that bin directory.

  3. Now add System.setProperty("hadoop.home.dir", "PATH/TO/THE/DIR"); to your code, where the path is the directory that contains bin, not the bin folder itself.

Answer by Prem S (4 votes):

If you see the issue below:

ERROR Shell: Failed to locate the winutils binary in the hadoop binary path

java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

then take the following steps:

  1. Download winutils.exe from http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe.
  2. Keep it under the bin folder of a folder you create for it, e.g. C:\Hadoop\bin.
  3. In the program, add the following line before creating the SparkContext or SparkConf: System.setProperty("hadoop.home.dir", "C:\\Hadoop"); (note the doubled backslash; a single \H is not a valid escape in a Java/Scala string literal).
Answer by user1023627 (4 votes):

On Windows 10 you should add two different entries:

(1) Add a new variable HADOOP_HOME with your Hadoop path (e.g. C:\Hadoop) under System Variables.

(2) Append a new entry, C:\Hadoop\bin, to the Path variable.

The above worked for me.

Answer by Sampat Kumar (4 votes):
1) Download winutils.exe from https://github.com/steveloughran/winutils.
2) Create a directory in Windows, e.g. C:\winutils\bin.
3) Copy winutils.exe into the above bin folder.
4) Set the property in the code, using a plain filesystem path rather than a file:// URI:
  System.setProperty("hadoop.home.dir", "C:\\winutils");
5) Create a folder C:\temp and give it 777 permissions (for example with winutils.exe chmod 777 C:\temp).
6) Add the config property to the Spark session: .config("spark.sql.warehouse.dir", "file:///C:/temp") (a full sketch follows below).
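
A minimal SparkSession sketch combining these settings, assuming the same C:\winutils and C:\temp locations used above (both are assumptions; adjust to your layout):

    import org.apache.spark.sql.SparkSession

    object WarehouseDemo {
      def main(args: Array[String]): Unit = {
        // Set before any Spark/Hadoop classes initialize.
        System.setProperty("hadoop.home.dir", "C:\\winutils") // assumed location

        val spark = SparkSession.builder()
          .appName("WarehouseDemo")
          .master("local")
          .config("spark.sql.warehouse.dir", "file:///C:/temp") // assumed folder
          .getOrCreate()

        spark.range(5).show() // quick smoke test
        spark.stop()
      }
    }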
Answer by Joabe Lucena (2 votes):

I got the same problem while running unit tests, and found the following workaround, which gets rid of the message:

    // Point hadoop.home.dir at the current working directory ...
    File workaround = new File(".");
    System.getProperties().put("hadoop.home.dir", workaround.getAbsolutePath());
    // ... and create an empty ./bin/winutils.exe so Hadoop's existence check passes.
    // (createNewFile() throws IOException, so declare or handle it in the test.)
    new File("./bin").mkdirs();
    new File("./bin/winutils.exe").createNewFile();

from: https://issues.cloudera.org/browse/DISTRO-544

Answer (2 votes):

You can alternatively download winutils.exe from GitHub:

https://github.com/steveloughran/winutils/tree/master/hadoop-2.7.1/bin

Replace hadoop-2.7.1 with the version you want and place the file in D:\hadoop\bin.

If you do not have access rights to the environment variable settings on your machine, simply add the line below to your code:

System.setProperty("hadoop.home.dir", "D:\\hadoop");
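
Since a hard-coded property would also apply on non-Windows machines, one option is to guard it by operating system. A short Scala sketch (the D:\hadoop path is this answer's example, not a requirement):

    // Apply the winutils workaround only when actually running on Windows,
    // so the same code runs unchanged on Linux/macOS.
    if (sys.props("os.name").toLowerCase.startsWith("windows"))
      System.setProperty("hadoop.home.dir", "D:\\hadoop")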
