
Hadoop 2.3.0 Configuration on Ubuntu



Environment
System: Ubuntu 12.04

Hadoop version: 2.3.0

I. Download hadoop-2.3.0.tar.gz and extract it

II. Edit the configuration files; they are all under ${hadoop-2.3.0}/etc/hadoop

1. core-site.xml



<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.3.0/tmp/hadoop-${user.name}</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
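As a quick sanity check (assuming HADOOP_CONF_DIR is later pointed at this directory, as in section III), you can ask Hadoop which file system URI it will use:

hdfs getconf -confKey fs.defaultFS    # should print hdfs://localhost:8020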


2. hdfs-site.xml



<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop-2.3.0/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop-2.3.0/tmp/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
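These directories are normally created automatically when the NameNode is formatted and the DataNode first starts, but you can also create them up front and make sure the user who will run Hadoop owns them (paths taken from the values above):

sudo mkdir -p /usr/local/hadoop-2.3.0/tmp/dfs/name /usr/local/hadoop-2.3.0/tmp/dfs/data
sudo chown -R $USER:$USER /usr/local/hadoop-2.3.0/tmp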


3. mapred-site.xml
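Note that the 2.3.0 distribution usually ships this file only as mapred-site.xml.template; if mapred-site.xml does not exist yet, create it from the template first:

cd /usr/local/hadoop-2.3.0/etc/hadoop
cp mapred-site.xml.template mapred-site.xml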



<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>


4. yarn-site.xml



<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>localhost</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

III. Running the commands

The Hadoop scripts live in ${hadoop-2.3.0}/bin and ${hadoop-2.3.0}/sbin, so you can run them by their full paths.

Alternatively, you can configure environment variables so the commands can be typed without their full paths.

Add the following lines to /etc/profile (editing it requires root privileges), then load them with the command: source /etc/profile


export HADOOP_HOME=/usr/local/hadoop-2.3.0

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export HADOOP_MAPRED_HOME=$HADOOP_HOME

export HADOOP_COMMON_HOME=$HADOOP_HOME

export HADOOP_HDFS_HOME=$HADOOP_HOME

export YARN_HOME=$HADOOP_HOME

export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
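After reloading the profile, a quick check that the PATH changes took effect is to run a Hadoop command from any directory:

hadoop version    # should report Hadoop 2.3.0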

Initialize (format) the Hadoop file system:

hdfs namenode -format
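Formatting only needs to be done once; running it again later will wipe the NameNode metadata under dfs.namenode.name.dir. A simple check that it succeeded is that the metadata directory configured above now exists:

ls /usr/local/hadoop-2.3.0/tmp/dfs/name/current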

IV. Starting and stopping Hadoop

1. Start method one (one daemon at a time):

sujx@ubuntu:~$ hadoop-daemon.sh start namenode

starting namenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-namenode-ubuntu.out
sujx@ubuntu:~$ hadoop-daemon.sh start datanode
starting datanode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-datanode-ubuntu.out
sujx@ubuntu:~$ hadoop-daemon.sh start secondarynamenode
starting secondarynamenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-secondarynamenode-ubuntu.out
sujx@ubuntu:~$ jps
9310 SecondaryNameNode
9345 Jps
9140 NameNode
9221 DataNode
sujx@ubuntu:~$ yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-resourcemanager-ubuntu.out
sujx@ubuntu:~$ yarn-daemon.sh start nodemanager
starting nodemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-nodemanager-ubuntu.out
sujx@ubuntu:~$ jps
9310 SecondaryNameNode
9651 NodeManager
9413 ResourceManager
9140 NameNode
9709 Jps
9221 DataNode
sujx@ubuntu:~$

2. Start method two (start-dfs.sh and start-yarn.sh):

sujx@ubuntu:~$ start-dfs.sh
Starting namenodes on [hd2-single]
hd2-single: starting namenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-namenode-ubuntu.out
hd2-single: starting datanode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-datanode-ubuntu.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-secondarynamenode-ubuntu.out
sujx@ubuntu:~$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-resourcemanager-ubuntu.out
hd2-single: starting nodemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-nodemanager-ubuntu.out
sujx@ubuntu:~$ jps
11414 SecondaryNameNode
10923 NameNode
11141 DataNode
12038 Jps
11586 ResourceManager
11811 NodeManager
sujx@ubuntu:~$

3. Start method three (start-all.sh):


sujx@ubuntu:~$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hd2-single]
hd2-single: starting namenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-namenode-ubuntu.out
hd2-single: starting datanode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-datanode-ubuntu.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-secondarynamenode-ubuntu.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-resourcemanager-ubuntu.out
hd2-single: starting nodemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-nodemanager-ubuntu.out
sujx@ubuntu:~$ jps
14156 NodeManager
14445 Jps
13267 NameNode
13759 SecondaryNameNode
13485 DataNode
13927 ResourceManager
sujx@ubuntu:~$
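Besides jps, you can also ask HDFS and YARN directly whether the DataNode and NodeManager have registered:

hdfs dfsadmin -report    # lists live datanodes
yarn node -list          # lists registered nodemanagers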

All three approaches have the same end result; internally the higher-level scripts simply call the lower-level ones. The corresponding stop commands are just as simple:
1. Stop method one:
sujx@ubuntu:~$ yarn-daemon.sh stop nodemanager
sujx@ubuntu:~$ yarn-daemon.sh stop resourcemanager
sujx@ubuntu:~$ hadoop-daemon.sh stop secondarynamenode
sujx@ubuntu:~$ hadoop-daemon.sh stop datanode
sujx@ubuntu:~$ hadoop-daemon.sh stop namenode
2. Stop method two:
sujx@ubuntu:~$ stop-yarn.sh
sujx@ubuntu:~$ stop-dfs.sh
3. Stop method three:
sujx@ubuntu:~$ stop-all.sh

View the status of the Hadoop NameNode / HDFS: http://localhost:50070/

View job status: http://localhost:8088/

HDFS address/port for client access: 8020


YARN address/port for client access: 8032
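As a final smoke test that touches both HDFS and YARN, you can run one of the bundled example jobs (the jar path below assumes the standard 2.3.0 binary distribution layout):

hdfs dfs -mkdir -p /user/$USER
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar pi 2 10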

At this point, the single-node pseudo-distributed deployment is complete.
