Installed versions:
hadoop-2.0.0-cdh4.2.0
hbase-0.94.2-cdh4.2.0
hive-0.10.0-cdh4.2.0
jdk1.6.0_38
Pre-installation notes: the install directory is /opt; on every node, check the hosts file, disable the firewall, and set up clock synchronization.
Usage: after Hadoop, HBase, and Hive are installed successfully, start DFS and MapReduce by running the start scripts on desktop1 (the exact startup commands are shown at the end of this guide).
Cluster layout (IP address and roles):
192.168.0.1 NameNode, Hive, ResourceManager
192.168.0.2 SecondaryNameNode
192.168.0.3 DataNode, HBase, NodeManager
192.168.0.4 DataNode, HBase, NodeManager
192.168.0.6 DataNode, HBase, NodeManager
192.168.0.7 DataNode, HBase, NodeManager
192.168.0.8 DataNode, HBase, NodeManager
Set the hostname of each machine:
[root@desktop1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=desktop1
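Editing /etc/sysconfig/network only takes effect after a reboot; if you want the name applied immediately, you can also set it with the hostname command (shown here for desktop1; repeat on each node with its own name):
[root@desktop1 ~]# hostname desktop1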
On each node, edit /etc/hosts and add the following entries:
[root@desktop1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.1 desktop1
192.168.0.2 desktop2
192.168.0.3 desktop3
192.168.0.4 desktop4
192.168.0.6 desktop6
192.168.0.7 desktop7
192.168.0.8 desktop8
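To avoid editing /etc/hosts by hand on every node, a minimal sketch that pushes the file from desktop1 to the other hosts with scp (hostnames taken from the cluster layout above; you will be prompted for passwords until the SSH keys in the next step are in place):
for h in desktop2 desktop3 desktop4 desktop6 desktop7 desktop8; do
    scp /etc/hosts root@$h:/etc/hosts
done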
Configure passwordless SSH login. The following allows desktop1 to log in to the other machines without a password.
[root@desktop1 ~]# ssh-keygen
[root@desktop1 ~]# ssh-copy-id -i .ssh/id_rsa.pub desktop2
[root@desktop1 ~]# ssh-copy-id -i .ssh/id_rsa.pub desktop3
[root@desktop1 ~]# ssh-copy-id -i .ssh/id_rsa.pub desktop4
[root@desktop1 ~]# ssh-copy-id -i .ssh/id_rsa.pub desktop6
[root@desktop1 ~]# ssh-copy-id -i .ssh/id_rsa.pub desktop7
[root@desktop1 ~]# ssh-copy-id -i .ssh/id_rsa.pub desktop8
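A quick way to confirm the keys were copied correctly is to run a remote command against every node; each line should print the remote hostname without asking for a password (a minimal sketch, using the hostnames above):
for h in desktop2 desktop3 desktop4 desktop6 desktop7 desktop8; do
    ssh $h hostname
done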
Disable the firewall on every node:
[root@desktop1 ~]# service iptables stop
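The pre-installation notes also call for clock synchronization, but no command is shown in the original; a minimal sketch using ntpdate (the NTP server address here is an assumption, substitute your own), run on every node:
[root@desktop1 ~]# ntpdate pool.ntp.org
# optionally write the synced time back to the hardware clock
[root@desktop1 ~]# hwclock -w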
Upload jdk1.6.0_38.zip to /opt and unpack it. Upload hadoop-2.0.0-cdh4.2.0.zip to /opt and unpack it.
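For reference, the unpacking step might look like this (a sketch; it assumes the two zip archives named above have already been copied into /opt):
[root@desktop1 ~]# cd /opt
[root@desktop1 opt]# unzip jdk1.6.0_38.zip
[root@desktop1 opt]# unzip hadoop-2.0.0-cdh4.2.0.zip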
Configure the following files on the NameNode:
core-site.xml: fs.defaultFS specifies the NameNode file system; fs.trash.interval enables the trash feature (the value is in minutes, so 10080 keeps deleted files for 7 days).
hdfs-site.xml:
dfs.namenode.name.dir specifies where the NameNode stores metadata and edit logs,
dfs.datanode.data.dir specifies where DataNodes store blocks,
dfs.namenode.secondary.http-address specifies the Secondary NameNode address.
Enable WebHDFS.
slaves: list the DataNode hostnames.
[root@desktop1 hadoop]# pwd
/opt/hadoop-2.0.0-cdh4.2.0/etc/hadoop
[root@desktop1 hadoop]# cat core-site.xml
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://desktop1</value></property>
  <property><name>fs.trash.interval</name><value>10080</value></property>
  <property><name>fs.trash.checkpoint.interval</name><value>10080</value></property>
</configuration>
[root@desktop1 hadoop]# cat hdfs-site.xml
<configuration>
  <property><name>dfs.replication</name><value>1</value></property>
  <property><name>hadoop.tmp.dir</name><value>/opt/data/hadoop-${user.name}</value></property>
  <property><name>dfs.namenode.http-address</name><value>desktop1:50070</value></property>
  <property><name>dfs.namenode.secondary.http-address</name><value>desktop2:50090</value></property>
  <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
</configuration>
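Because dfs.webhdfs.enabled is true, you can later verify that HDFS and WebHDFS are up by listing the root directory over HTTP (run this only after the cluster has been started; the port matches dfs.namenode.http-address above):
[root@desktop1 ~]# curl "http://desktop1:50070/webhdfs/v1/?op=LISTSTATUS"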
[root@desktop1 hadoop]# cat masters
desktop1
desktop2
[root@desktop1 hadoop]# cat slaves
desktop3
desktop4
desktop6
desktop7
desktop8
[root@desktop1 hadoop]# cat mapred-site.xml
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>mapreduce.jobhistory.address</name><value>desktop1:10020</value></property>
  <property><name>mapreduce.jobhistory.webapp.address</name><value>desktop1:19888</value></property>
</configuration>
yarn.application.classpath must also be set correctly (this path is important; otherwise Hive integration fails with class-not-found errors). The value is the classpath list shown in yarn-site.xml below.
[root@desktop1 hadoop]# cat yarn-site.xml
<configuration>
  <property><name>yarn.resourcemanager.resource-tracker.address</name><value>desktop1:8031</value></property>
  <property><name>yarn.resourcemanager.address</name><value>desktop1:8032</value></property>
  <property><name>yarn.resourcemanager.scheduler.address</name><value>desktop1:8030</value></property>
  <property><name>yarn.resourcemanager.admin.address</name><value>desktop1:8033</value></property>
  <property><name>yarn.resourcemanager.webapp.address</name><value>desktop1:8088</value></property>
  <!-- Classpath for typical applications. -->
  <property>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,
      $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
      $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
      $YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,
      $YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*</value>
  </property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce.shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <property><name>yarn.nodemanager.local-dirs</name><value>/opt/data/yarn/local</value></property>
  <property><name>yarn.nodemanager.log-dirs</name><value>/opt/data/yarn/logs</value></property>
  <!-- Where to aggregate logs -->
  <property><name>yarn.nodemanager.remote-app-log-dir</name><value>/opt/data/yarn/logs</value></property>
  <property><name>yarn.app.mapreduce.am.staging-dir</name><value>/user</value></property>
</configuration>
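yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs point at /opt/data/yarn, so make sure those directories exist and are writable on every NodeManager host. A minimal sketch run from desktop1, using the passwordless SSH set up earlier:
for h in desktop3 desktop4 desktop6 desktop7 desktop8; do
    ssh $h "mkdir -p /opt/data/yarn/local /opt/data/yarn/logs"
done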
Edit the environment variables in .bashrc, sync the file to the other machines, and run source .bashrc:
[root@desktop1 ~]# cat .bashrc
# .bashrc
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# User specific environment and startup programs
export LANG=zh_CN.utf8
export JAVA_HOME=/opt/jdk1.6.0_38
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=./:$JAVA_HOME/lib:$JRE_HOME/lib:$JRE_HOME/lib/tools.jar
export HADOOP_HOME=/opt/hadoop-2.0.0-cdh4.2.0
export HIVE_HOME=/opt/hive-0.10.0-cdh4.2.0
export HBASE_HOME=/opt/hbase-0.94.2-cdh4.2.0
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$HIVE_HOME/bin
After editing the file, make the changes take effect:
[root@desktop1 ~]# source .bashrc
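To sync .bashrc to the other machines as described above, one option is a small scp loop from desktop1 (a sketch, assuming the same root account on every node):
for h in desktop2 desktop3 desktop4 desktop6 desktop7 desktop8; do
    scp ~/.bashrc root@$h:~/.bashrc
done
Then run source .bashrc in a shell on each node (it is also picked up automatically on the next login).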
Copy /opt/hadoop-2.0.0-cdh4.2.0 from desktop1 to the other machines.
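A sketch of the copy using scp -r (it assumes root can write to /opt on every node; rsync would also work and is faster on re-runs):
for h in desktop2 desktop3 desktop4 desktop6 desktop7 desktop8; do
    scp -r /opt/hadoop-2.0.0-cdh4.2.0 root@$h:/opt/
done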
Before starting Hadoop for the first time, format the NameNode. This is a one-time operation: reformatting wipes the existing HDFS metadata, so only do it again if you intend to rebuild the file system after changing the storage configuration.
[root@desktop1 hadoop]# hadoop namenode -format
Start HDFS on desktop1:
[root@desktop1 hadoop]# start-dfs.sh
Start MapReduce (YARN) on desktop1:
[root@desktop1 hadoop]# start-yarn.sh
Start the JobHistoryServer on desktop1:
[root@desktop1 hadoop]# mr-jobhistory-daemon.sh start historyserver
View MapReduce jobs (ResourceManager web UI):
http://desktop1:8088/cluster
View a node (NodeManager web UI):
http://desktop2:8042/
http://desktop2:8042/node
Check the running processes on each machine with jps:
[root@desktop1 ~]# jps
5389 NameNode
5980 Jps
5710 ResourceManager
7032 JobHistoryServer
[root@desktop2 ~]# jps
3187 Jps
3124 SecondaryNameNode
[root@desktop3 ~]# jps
3187 Jps
3124 DataNode
5711 NodeManager
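Once all the expected processes show up in jps, a simple end-to-end check is to list HDFS and run the bundled example job (the examples jar path below assumes the standard CDH4 layout under $HADOOP_HOME/share/hadoop/mapreduce; adjust if your build differs):
[root@desktop1 ~]# hadoop fs -ls /
[root@desktop1 ~]# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10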