Test environment: CentOS 5.8 64-bit, Hadoop 1.1.2, 1 master and 2 slaves
First, a quick aside: seeing this environment, you probably have a small question — why start with such old versions? Briefly: I played with CentOS 5.8 two years ago at university, and it's natural to start from what you already know. I chose the Hadoop 1.x series because the libhadoop.so.1.0.0 shipped with 2.6 is an AMD64 build while my laptop has an Intel processor, the installation reported errors, and recompiling 2.6 ran into Maven errors. So I'm getting things running on 1.x first (the pinned thread has a link about recompiling; I tried it without success and will have another go later).
Now to the point:
First, everything starts up normally, as shown below:
./start-all.sh
starting namenode, logging to /usr/hadoop-1.1.2/libexec/../logs/hadoop-grid-namenode-backup01.out
backup03: starting datanode, logging to /usr/hadoop-1.1.2/libexec/../logs/hadoop-grid-datanode-backup03.out
backup02: starting datanode, logging to /usr/hadoop-1.1.2/libexec/../logs/hadoop-grid-datanode-backup02.out
backup01: starting secondarynamenode, logging to /usr/hadoop-1.1.2/libexec/../logs/hadoop-grid-secondarynamenode-backup01.out
starting jobtracker, logging to /usr/hadoop-1.1.2/libexec/../logs/hadoop-grid-jobtracker-backup01.out
backup03: starting tasktracker, logging to /usr/hadoop-1.1.2/libexec/../logs/hadoop-grid-tasktracker-backup03.out
backup02: starting tasktracker, logging to /usr/hadoop-1.1.2/libexec/../logs/hadoop-grid-tasktracker-backup02.out
Next, check each process with jps:
4116 Jps
3653 JobTracker
3433 NameNode
3581 SecondaryNameNode
This shows the master started normally.
4112 Jps
3634 DataNode
3711 TaskTracker
This shows the first slave started normally.
3840 Jps
3362 DataNode
3439 TaskTracker
This shows the other slave started normally.
Summary: so far, the processes on the master and on each of the nodes all appear to have started normally.
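One caveat (my own note, not from the output above): jps only proves each JVM process is alive; it does not prove the datanode actually registered with the namenode. A minimal sketch for scanning a datanode log for registration errors — the log path is assumed from the start-up messages and may differ on your machines:

```shell
# jps shows the JVM is running, but the datanode log says whether it actually
# joined the cluster. Log path assumed from the start-up output above.
LOG=/usr/hadoop-1.1.2/logs/hadoop-grid-datanode-backup02.log
if [ -f "$LOG" ]; then
  # "Incompatible namespaceIDs" is a classic reason a datanode starts but never registers
  tail -n 100 "$LOG" | grep -iE "error|exception|incompatible" || echo "no recent errors in log"
else
  echo "log not found: $LOG (run this on the datanode itself)"
fi
```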
However, the browser (web UI) shows a different picture, and in actual operation, importing data into the HDFS filesystem fails with an error; the cause is that the nodes are not up.
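To confirm what the namenode itself can see, Hadoop 1.x provides `hadoop dfsadmin -report`, which lists the live datanodes; if it reports 0 live nodes while jps shows the daemons running, the datanodes are failing to register. A hedged sketch (assumes `hadoop` is on the PATH on the master):

```shell
# Ask the namenode which datanodes it can actually see (Hadoop 1.x syntax).
# Zero live datanodes here matches the "nodes not started" symptom even
# though jps shows the DataNode processes running.
if command -v hadoop >/dev/null 2>&1; then
  hadoop dfsadmin -report | grep -E "Datanodes available|^Name:"
else
  echo "hadoop not on PATH; run this on the master node"
fi
```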
More configuration details below:
The master can log into each of the slaves without a password, whether connecting by hostname or by IP.
The master configuration is as follows:
[grid@backup01 conf]$ cat masters
backup01
[grid@backup01 conf]$ cat slaves
backup02
backup03
[grid@backup01 conf]$ cat hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
[grid@backup01 conf]$ cat core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://backup01:9000</value>
  </property>
  <property>
    <name>fs.tmp.dir</name>
    <value>/home/grid/hadoop/tmp</value>
  </property>
</configuration>
[grid@backup01 conf]$ cat mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>backup01:9001</value>
  </property>
</configuration>
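One thing worth double-checking in core-site.xml above (an observation on my part, not a confirmed root cause): the temp-directory key that stock Hadoop 1.x actually reads is hadoop.tmp.dir; a key named fs.tmp.dir is silently ignored, so the HDFS data directories would fall back to the default under /tmp, which the OS may clear. For reference, the conventional form is:

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://backup01:9000</value>
  </property>
  <property>
    <!-- the key Hadoop 1.x reads is hadoop.tmp.dir -->
    <name>hadoop.tmp.dir</name>
    <value>/home/grid/hadoop/tmp</value>
  </property>
</configuration>
```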
The configuration on each of the nodes is identical, since it was distributed via scp.
Conclusion: I don't understand why the nodes won't come up — requesting support.
If you need any more information, please let me know. Thanks!