1 Creating the Hadoop User
1.1 Create a new user
Create a user named hadoop that uses /bin/bash as its login shell:
$ sudo useradd -m hadoop -s /bin/bash
1.2 Set a password
$ sudo passwd hadoop
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
1.3 Grant the hadoop user administrator privileges
$ sudo adduser hadoop sudo
Adding user `hadoop' to group `sudo' ...
Adding user hadoop to group sudo
Done.
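As an optional check (not part of the original steps), list the groups the new user belongs to and confirm that sudo appears among them:
$ groups hadoop
hadoop : hadoop sudo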
2 Installing the Java Environment
2.1 Install
$ sudo apt-get install default-jre default-jdk
2.2 Configure environment variables
$ vim ~/.bashrc
Append the following line at the end of the file: export JAVA_HOME=/usr/lib/jvm/default-java
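A minimal sketch of the appended snippet; the PATH line is an optional extra (an assumption, not required by this guide) that makes the JDK tools callable by name:
export JAVA_HOME=/usr/lib/jvm/default-java
export PATH=$PATH:$JAVA_HOME/bin    # optional convenience addition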
Then reload the file so the variable takes effect:
$ source ~/.bashrc
2.3 Verify that Java installed successfully
$ echo $JAVA_HOME
/usr/lib/jvm/default-java
$ java -version
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-8u191-b12-0ubuntu0.16.04.1-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
3 Setting Up SSH
SSH is short for Secure Shell. SSH consists of a client and a server: the server is a daemon that runs in the background and responds to requests from clients, while the client provides programs such as scp (remote copy), sftp (secure file transfer), and slogin (remote login).
Ubuntu ships with the SSH client by default; the SSH server still needs to be installed.
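To confirm the client really is present (an optional check), print its version:
$ ssh -V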
【Note】: Hadoop offers no way to supply an SSH password at login, so all machines must be configured for passwordless login.
3.1 Install the SSH server
$ sudo apt-get install openssh-server
3.2 Log in to localhost
$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:MCT7ubGt3sPlkvS9v//KhAoa7vBO+EVPJN/JXenC8XM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
hadoop@localhost's password:
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-42-generic x86_64)
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
243 packages can be updated.
11 updates are security updates.
Afterwards you will find a .ssh directory under ~/.
3.3 Configure passwordless login
$ cd ~/.ssh/
$ ssh-keygen -t rsa    # press Enter at each prompt
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:FaavA0T6j8XH0clbVu0pq5hkad7kADUBibL/76I2U00 hadoop@ubuntu
The key's randomart image is:
+---[RSA 2048]----+
| o.o.+ o|
| . + . = + . ..|
| + . o + + o..|
| . o o E . = ..|
| . o S = . o |
| . * X . . |
| + O B . |
| + o = + |
| ..+ +o |
+----[SHA256]-----+
$ cat ./id_rsa.pub >> ./authorized_keys    # add the key to the authorized list
From now on, $ ssh localhost logs in without a password.
4 Installing Hadoop
Hadoop can be installed in three modes:
(1) Standalone mode: Hadoop runs on a single machine and stores data in the local file system; the distributed file system HDFS is not used.
(2) Pseudo-distributed mode: storage uses the distributed file system HDFS, but the name node and the data node run on the same machine.
(3) Distributed mode: storage uses the distributed file system HDFS, and the name node and the data nodes sit on different machines.
Hadoop downloads: http://mirrors.cnnic.cn/apache/hadoop/common
4.1 Standalone mode configuration
After downloading the package, extract it and it is ready to use:
$ sudo tar -zxvf hadoop-2.7.1.tar.gz -C /usr/local
$ cd /usr/local/
$ sudo mv ./hadoop-2.7.1/ ./hadoop    # rename the directory to hadoop
$ sudo chown -R hadoop ./hadoop       # change the owner to the hadoop user
Check the Hadoop version:
$ cd /usr/local/hadoop/bin
$ ./hadoop version
Hadoop 2.7.1
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a
Compiled by jenkins on 2015-06-29T06:04Z
Compiled with protoc 2.5.0
From source with checksum fc0a1a23fc1868e4d5ee7fa2b28a58a
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.1.jar
Hadoop ships with many example programs; run the following command to list them:
$ ./hadoop jar ../share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar
An example program must be given as the first argument.
Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  teragen: Generate data for the terasort
  terasort: Run the terasort
  teravalidate: Checking results of terasort
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.
Next, run the grep example:
$ cd /usr/local/hadoop
$ mkdir input
$ cp ./etc/hadoop/*.xml ./input    # copy the configuration files into input
$ ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep ./input ./output 'dfs[a-z.]+'
$ cat ./output/*                   # view the result
1       dfsadmin
When the job succeeds, you can see that the grep program took the input folder as its input, filtered out all words matching the regular expression dfs[a-z.]+, and wrote the occurrence counts to the /usr/local/hadoop/output folder.
【Note】: running the above command a second time will fail, because Hadoop by default does not overwrite the output folder; the output folder must be deleted before the job can run again, as in the sketch below.
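A minimal sketch of such a rerun, reusing the exact commands from above and just clearing the previous result first:
$ rm -r ./output
$ ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep ./input ./output 'dfs[a-z.]+'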
4.2 Pseudo-distributed mode configuration
Run Hadoop in pseudo-distributed mode on a single node (one machine).
4.2.1 Edit the configuration files
The files core-site.xml and hdfs-site.xml under /usr/local/hadoop/etc/hadoop/ need to be modified.
core-site.xml:
Change
<configuration>
</configuration>
to:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
- hadoop.tmp.dir holds temporary files; if this parameter is not configured, the default temporary directory /tmp/hadoop-hadoop is used, which the system may wipe after a reboot.
- fs.defaultFS specifies the HDFS access address.
hdfs-site.xml is modified as follows:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>
- dfs.replication: the number of replicas. In a distributed file system, data is normally stored redundantly in several copies for reliability and safety; since pseudo-distributed mode has only one node, there is only one replica.
- dfs.namenode.name.dir: the directory where the name node keeps its metadata.
- dfs.datanode.data.dir: the directory where the data node keeps its data.
Here, the name node and data node directories must both be set.
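As an optional sanity check (assuming Hadoop is unpacked at /usr/local/hadoop as above), hdfs getconf prints the value a configuration key resolves to, which confirms the edited files are actually being read:
$ cd /usr/local/hadoop
$ ./bin/hdfs getconf -confKey fs.defaultFS
hdfs://localhost:9000
$ ./bin/hdfs getconf -confKey dfs.replication
1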
【Note】: Hadoop's run mode is determined by its configuration files. To switch from pseudo-distributed back to standalone mode, simply remove the configuration entries from core-site.xml.
4.2.2 Format the name node
Run the following commands:
$ cd /usr/local/hadoop
$ ./bin/hdfs namenode -format
【Error】: Exiting with status 1 indicates failure:
19/01/11 18:38:02 ERROR namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: URI has an authority component
        at java.io.File.<init>(File.java:423)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:329)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:276)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:247)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:985)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
19/01/11 18:38:02 INFO util.ExitUtil: Exiting with status 1
19/01/11 18:38:02 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
【Fix】: check the configuration in hdfs-site.xml; this exception usually stems from a malformed file: URI in the directory values.
If the output contains /usr/local/hadoop/tmp/dfs/name has been successfully formatted. and Exiting with status 0, the format succeeded:
19/01/11 18:46:35 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
19/01/11 18:46:36 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19/01/11 18:46:36 INFO util.ExitUtil: Exiting with status 0
4.2.3 Start Hadoop
$ cd /usr/local/hadoop
$ ./sbin/start-dfs.sh
【Error】:
Starting namenodes on [localhost]
localhost: Error: JAVA_HOME is not set and could not be found.
localhost: Error: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [0.0.0.0]
【Fix】:
$ echo $JAVA_HOME
/usr/lib/jvm/default-java
This shows that JAVA_HOME is already set, so the only remaining option is to hard-code an absolute path in /usr/local/hadoop/etc/hadoop/hadoop-env.sh. Change export JAVA_HOME=$JAVA_HOME to:
export JAVA_HOME=/usr/lib/jvm/default-java
Use the jps command to check whether Hadoop started; if the DataNode, NameNode, and SecondaryNameNode processes all appear, the startup succeeded.
$ jps
4821 Jps
4459 DataNode
4348 NameNode
4622 SecondaryNameNode
If there are still problems, repeat the following commands:
$ ./sbin/stop-dfs.sh             # stop Hadoop
$ rm -r ./tmp                    # delete the tmp directory; note this erases all existing data in HDFS
$ ./bin/hdfs namenode -format    # reformat the name node
$ ./sbin/start-dfs.sh            # restart
4.2.4 View HDFS information in a browser
Open the following link in a browser: http://localhost:50070/dfshealth.html#tab-overview
to view the HDFS overview page.
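If no browser is at hand, the same page can also be fetched from the terminal (an optional alternative, not part of the original steps):
$ curl http://localhost:50070/dfshealth.html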

4.2.5 Run a pseudo-distributed example
$ cd /usr/local/hadoop
$ ./bin/hdfs dfs -mkdir -p /user/hadoop          # create the user directory in HDFS
$ ./bin/hdfs dfs -mkdir input                    # create the hadoop user's input directory in HDFS
$ ./bin/hdfs dfs -put ./etc/hadoop/*.xml input   # copy local files into HDFS
$ ./bin/hdfs dfs -ls input                       # list the files
Found 8 items
-rw-r--r--   1 hadoop supergroup       4436 2019-01-11 19:35 input/capacity-scheduler.xml
-rw-r--r--   1 hadoop supergroup       1075 2019-01-11 19:35 input/core-site.xml
-rw-r--r--   1 hadoop supergroup       9683 2019-01-11 19:35 input/hadoop-policy.xml
-rw-r--r--   1 hadoop supergroup       1130 2019-01-11 19:35 input/hdfs-site.xml
-rw-r--r--   1 hadoop supergroup        620 2019-01-11 19:35 input/httpfs-site.xml
-rw-r--r--   1 hadoop supergroup       3518 2019-01-11 19:35 input/kms-acls.xml
-rw-r--r--   1 hadoop supergroup       5511 2019-01-11 19:35 input/kms-site.xml
-rw-r--r--   1 hadoop supergroup        690 2019-01-11 19:35 input/yarn-site.xml
$ ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'
....
$ ./bin/hdfs dfs -cat output/*                   # view the result
1       dfsadmin
1       dfs.replication
1       dfs.namenode.name.dir
1       dfs.datanode.data.dir
Running the job again requires deleting the output folder first:
$ ./bin/hdfs dfs -rm -r output    # delete the output folder
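If the HDFS trash feature has been enabled (it is off by default), adding -skipTrash deletes the folder immediately instead of moving it to the trash:
$ ./bin/hdfs dfs -rm -r -skipTrash output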
4.2.6 Stop Hadoop
Use the command:
$ ./sbin/stop-dfs.sh
On subsequent startups there is no need to format the name node again (doing so causes errors); just run start-dfs.sh directly.
5 Summary
Hadoop installation steps:
1 Create the Hadoop user
2 Install the Java environment
3 Set up SSH
4 Edit the configuration files core-site.xml and hdfs-site.xml under /usr/local/hadoop/etc/hadoop/
5 Common commands
$ cd /usr/local/hadoop
$ ./bin/hdfs namenode -format    # format the name node (needed only once)
$ ./sbin/start-dfs.sh            # start Hadoop
$ jps                            # check whether Hadoop started successfully
$ ./sbin/stop-dfs.sh             # stop Hadoop
$ rm -r ./tmp                    # delete the tmp directory; note this erases all existing data in HDFS
$ ./sbin/start-dfs.sh            # restart