
Hadoop [03-03] Visit-Count Testing on DFS with ZKFC (Hadoop 2.0)




Prepare the environment

Prepare several virtual machines and start DFS and ZooKeeper.
See: Starting DFS and ZooKeeper on Hadoop 2.0

Key details of the virtual machines are as follows:

No.  Hostname  Domain name  IP address
①    Toozky    Toozky       192.168.64.220
②    Toozky2   Toozky2      192.168.64.221
③    Toozky3   Toozky3      192.168.64.222

Set up passwordless SSH connections between the machines.
See: Passwordless SSH for Linux virtual machines

Software used

Software   Version
VMware     VMware® Workstation 16 Pro
Xshell     6
filezilla  3.7.3

Start ZooKeeper and DFS

On VMs ①, ②, and ③:

zkServer.sh start

On VM ①:

(taking VM ① as the namenode)

start-all.sh
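As a quick sanity check (not part of the original steps), run jps on each VM; the namenode side should show processes such as NameNode, DFSZKFailoverController, JournalNode, and QuorumPeerMain, while the datanodes show DataNode (plus the YARN daemons once start-all.sh has run):

jps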

Test visit counting on an uploaded file

Create a plain Maven project in IDEA

pom.xml

Inside the <project> tag, add the dependencies in a <dependencies> block:

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>

        <!-- logging dependencies -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.6.1</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
        </dependency>
    </dependencies>
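Note that the dependencies above use a ${hadoop.version} property that this pom never defines, so add one inside the <project> tag. The 2.6.0 value below is an assumption based on the /home/hadoop2.6 install directory referenced later; set it to your cluster's actual Hadoop version:

    <properties>
        <hadoop.version>2.6.0</hadoop.version>
    </properties>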

Add the build section:

    <build>
        <!-- use visitcount as the base name of the exported jar -->
        <finalName>visitcount</finalName>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.0.0</version>
                <configuration>
                    <archive>
                        <manifest>
                            <!-- main class of the program -->
                            <mainClass>mapreduce.VisitCountJobRun</mainClass>
                        </manifest>
                    </archive>
                    <descriptorRefs>
                        <!-- append the descriptor name to the jar name -->
                        <!-- the exported jar will be visitcount-jar-with-dependencies.jar -->
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

mapreduce

Create a mapreduce package under the project's main/java directory.

VisitCountMapper.java

Create VisitCountMapper.java in the mapreduce package:

package mapreduce;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class VisitCountMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String s = value.toString();

        // Fields are space-separated; the URL is the 7th field of each
        // 9-field record, i.e. indices 6, 15, 24, ...
        String[] split = s.split(" ");
        for (int i = 6; i < split.length; i += 9) {
            // emit (url, 1) for each visit
            context.write(new Text(split[i]), new LongWritable(1));
        }
    }
}

Because the data file separates fields with spaces (and newlines do not delimit fields), and the URL column appears at regular intervals, the for loop starts at index 6 with a step of 9.
In the sample data, the 7th, 16th, 25th, ... (9n−2)th fields are the URL fields, so the 0-based indices used are 6, 15, 24, ... (9n−3).

VisitCountReducer.java

Create VisitCountReducer.java in the mapreduce package and write the counting logic:

package mapreduce;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class VisitCountReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        // sum the 1s emitted by the mapper for this URL
        long sum = 0L;
        for (LongWritable value : values) {
            sum += value.get();
        }
        context.write(key, new LongWritable(sum));
    }
}
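Since this reduce is a pure sum with identical input and output types, the same class could also be registered as a combiner in the job setup to cut shuffle traffic; the original does not do this, so treat it as an optional tweak:

job.setCombinerClass(VisitCountReducer.class);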

VisitCountJobRun.java

Create VisitCountJobRun.java in the mapreduce package:

package mapreduce;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class VisitCountJobRun {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        System.setProperty("HADOOP_USER_NAME", "root");
        Configuration conf = new Configuration();
        // Job.getInstance replaces the deprecated new Job(conf)
        Job job = Job.getInstance(conf);
        //String Hadoop_Url = "hdfs://Toozky:8020";
        job.setJarByClass(VisitCountJobRun.class);
        job.setMapperClass(VisitCountMapper.class);
        job.setReducerClass(VisitCountReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        //job.setNumReduceTasks(1); // set the number of reduce tasks
        // input data file
        FileInputFormat.addInputPath(job, new Path("/input/logs.txt"));
        // output directory
        FileOutputFormat.setOutputPath(job, new Path("/output/logs_deal"));

        // submit the job and wait for it to finish
        boolean result = job.waitForCompletion(true);
        // follow-up actions on success
        if (result) {
            System.out.println("Visit count job completed!");
        }
    }
}

With the configuration files below on the classpath, the program no longer needs a hard-coded namenode host: the HDFS client resolves the nameservice and connects to whichever namenode is active on its own.
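As a minimal sketch of what this looks like from client code (HAConnectTest is a hypothetical standalone class, not part of the original project, and assumes the three XML files below are on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HAConnectTest {
    public static void main(String[] args) throws Exception {
        // core-site.xml and hdfs-site.xml on the classpath supply
        // fs.defaultFS=hdfs://dfbz and the failover proxy provider,
        // so no namenode host is hard-coded here
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // the client transparently talks to whichever namenode is active
        System.out.println("Connected to: " + fs.getUri());
        System.out.println("/input exists: " + fs.exists(new Path("/input")));
    }
}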

resources

In the project's /src/main/resources directory (create resources if it does not exist), place copies of core-site.xml, hdfs-site.xml, and mapred-site.xml from the Hadoop install directory.
(On the VM they live in /home/hadoop2.6/etc/hadoop; download them to the local machine with filezilla.)

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://dfbz</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>Toozky:2181,Toozky2:2181,Toozky3:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop2.6</value>
  </property>
</configuration>

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>dfbz</value>
  </property>

  <property>
    <name>dfs.ha.namenodes.dfbz</name>
    <value>nn1,nn2</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.dfbz.nn1</name>
    <value>Toozky:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.dfbz.nn2</name>
    <value>Toozky2:8020</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.dfbz.nn1</name>
    <value>Toozky:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.dfbz.nn2</name>
    <value>Toozky2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://Toozky:8485;Toozky2:8485;Toozky3:8485/dfbz</value>
  </property>

  <property>
    <name>dfs.client.failover.proxy.provider.dfbz</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>

  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_dsa</value>
  </property>

  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/journal/node/local/data</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
  </property>
</configuration>

Export the visit-count jar and upload the test data file

visitcount-jar-with-dependencies.jar

In IDEA, open the Maven panel on the right, expand the project, and click package to build (export) the jar.
Packaging creates a target directory in the project; select the jar you need and copy it with Ctrl+C.
Paste visitcount-jar-with-dependencies.jar somewhere convenient and send it to the /root directory of VM ① with filezilla.

logs.txt

The test data in logs.txt is a Tomcat access log: copy localhost_access_log.xxxx_xx_xx.txt from the logs directory of the Tomcat install and rename the copy to logs.txt.
Upload logs.txt to the /root directory of VM ① with filezilla.

Upload the test data to HDFS

On VM ①, check that the file arrived:

cd
ls

Create an /input directory in HDFS to hold the files to be processed:

hadoop dfs -mkdir /input

Note: to delete a directory or file in HDFS, use hadoop dfs -rmr /path (deprecated in Hadoop 2.x in favor of hdfs dfs -rm -r /path).
Send logs.txt to HDFS:

cd
hadoop dfs -put logs.txt /input/
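Listing the directory confirms the upload (a sanity check, not in the original steps):

hadoop dfs -ls /input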

Run the TestVisitCount project

Run the jar file:

cd
hadoop jar visitcount-jar-with-dependencies.jar
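No class name argument is needed here because the assembly manifest names mapreduce.VisitCountJobRun as the main class; the standard hadoop jar form with an explicit class works as well:

hadoop jar visitcount-jar-with-dependencies.jar mapreduce.VisitCountJobRun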

Verify the visit counts

Verify in the browser

In the browser address bar, enter Toozky:50070 (the active namenode's host name, port 50070) and press Enter.
Click Browse the file system to browse HDFS.
Open the output directory, then logs_deal.
A _SUCCESS marker means the job succeeded.
Click part-r-00000, then Download, to fetch the result file and verify the counts.
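Alternatively, the result can be inspected straight from the shell instead of the browser:

hadoop dfs -cat /output/logs_deal/part-r-00000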


That's all for this installment. May we all learn from each other and improve together!

Source: https://blog.csdn.net/u012175183/article/details/117355519