
Handling a Hadoop HDFS standby NameNode startup failure


The standby NameNode fails to start

Symptom: the production cluster runs Hadoop 2.5.0-cdh5.3.2 with NameNode HA enabled, using the QJM (Quorum Journal Manager) approach. The standby NameNode went down, and restarting it failed.

1. Symptoms when the standby NameNode failed

Symptom 1:
The standby NameNode in the Hadoop cluster went down and could not be restarted. Inspecting hadoop-hdfs-namenode-hostname.log showed the following exception: "org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception on operation ReassignLeaseOp"

2016-12-02 04:00:50,567 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.10.110.110:50010 is added to blk_1189739479_116038994{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-5830f95b-cb53-4de4-977c-96b3eef06603:NORMAL|FINALIZED], ReplicaUnderConstruction[[DISK]DS-cf11c8ea-95b7-43f7-979f-a4a78c612b61:NORMAL|FINALIZED], ReplicaUnderConstruction[[DISK]DS-9e9b856f-6364-44b7-bc5a-b277423a6eda:NORMAL|FINALIZED], ReplicaUnderConstruction[[DISK]DS-dafd3788-bf7b-4512-8839-154a718a4297:NORMAL|FINALIZED]]} size 0
2016-12-02 04:00:50,568 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception on operation ReassignLeaseOp [leaseHolder=DFSClient_NONMAPREDUCE_-1432667135_1, path=/home/baoxin/hive/AccessToken/Token/20161202/2016120202, newHolder=HDFS_NameNode, opCode=OP_REASSIGN_LEASE, txid=746502271]
java.lang.IllegalStateException
    at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:648)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:228)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:820)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:801)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:232)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:331)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:411)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
2016-12-02 04:00:50,607 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:lei.chen (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2016-12-02 04:00:50,607 INFO org.apache.hadoop.ipc.Server: IPC Server handler 23 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getEZForPath from 10.10.110.152:40258 Call#0 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2016-12-02 04:00:50,722 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:lei.chen (auth:SIMPLE) cause:org.apache.hadoop.ipc.StandbyException: Operation category WRITE is not supported in state standby
2016-12-02 04:00:50,723 INFO org.apache.hadoop.ipc.Server: IPC Server handler 55 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs from 10.10.110.152:40258 Call#1 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category WRITE is not supported in state standby
2016-12-02 04:00:51,050 FATAL org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unknown error encountered while tailing edits. Shutting down standby NN.
java.io.IOException: java.lang.IllegalStateException
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:238)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:820)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:801)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:232)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:331)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:411)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
Caused by: java.lang.IllegalStateException
    at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:648)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:228)
    ... 9 more
2016-12-02 04:00:51,054 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-12-02 04:00:51,057 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at db-nn1.inc.mtime.com/1xx.1xx.11x.10x
************************************************************/

The exception log above shows that the standby NameNode failed to restart because it hit an exception while loading the edit for /home/baoxin/hive/AccessToken/Token/20161202/2016120202.
We downloaded that file from HDFS and found that its last line was an incomplete hive record, meaning the HDFS file itself was corrupt.
So we deleted the file with a hadoop command and restarted the standby NameNode, but the problem persisted: the exception was still raised while loading the log entry for /home/baoxin/hive/AccessToken/Token/20161202/2016120202. In addition, checking HDFS with hadoop fsck -blocks revealed an abnormal block named "null".
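For reference, the block check mentioned above can be run like this (stock Hadoop 2.x fsck; adding -files or -locations gives more detail):

hadoop fsck / -blocks    ## block report; the abnormal block named "null" showed up here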

Symptom 2:
Hive SQL jobs reported an "org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): java.lang.ArrayIndexOutOfBoundsException" exception.

Both hive jobs and HBase read/write jobs threw "ArrayIndexOutOfBoundsException",
and hive-related MR jobs could not be submitted to hadoop at all.
Pure MapReduce programs such as wordcount, however, could still be submitted to the cluster successfully (via hadoop jar).

hive> select count(*) from ECommerce.GoodsSubOrder; 
Total jobs = 1 
Launching Job 1 out of 1 
Number of reduce tasks determined at compile time: 1 
In order to change the average load for a reducer (in bytes): 
set hive.exec.reducers.bytes.per.reducer=<number> 
In order to limit the maximum number of reducers: 
set hive.exec.reducers.max=<number> 
In order to set a constant number of reducers: 
set mapreduce.job.reduces=<number> 
org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): java.lang.ArrayIndexOutOfBoundsException 

at org.apache.hadoop.ipc.Client.call(Client.java:1411) 
at org.apache.hadoop.ipc.Client.call(Client.java:1364) 
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
at com.sun.proxy.$Proxy15.delete(Unknown Source) 
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:513) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:483) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
at com.sun.proxy.$Proxy16.delete(Unknown Source) 
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1853) 
at org.apache.hadoop.hdfs.DistributedFileSystem$11.doCall(DistributedFileSystem.java:599) 
at org.apache.hadoop.hdfs.DistributedFileSystem$11.doCall(DistributedFileSystem.java:595) 
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:595) 
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:539) 
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295) 
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292) 
at java.security.AccessController.doPrivileged(Native Method) 

2. Troubleshooting process:

1. Delete the offending file from HDFS (back it up first). The standby NN still would not come up, with the same error.

hadoop fs -rm /home/baoxin/hive/AccessToken/Token/20161202/2016120202
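The "back it up first" step can be a simple copy out of HDFS before the rm above (the local destination /tmp is arbitrary):

hadoop fs -get /home/baoxin/hive/AccessToken/Token/20161202/2016120202 /tmp/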

2. Suspect a problem in the standby NameNode's editlog

Suspecting the problem was limited to the fsimage or edits files on the standby, we ran the following on the standby:


hdfs namenode -bootstrapStandby  ## fetches the latest fsimage from the active NameNode
                                 ## and downloads and replays the new edits from the JournalNodes

Replaying the edits hit exactly the same error as above.

Because the NameNode must replay the editlog on every startup, we tried to skip that step: put the active NameNode into safe mode, run saveNamespace, and copy the dumped fsimage, txid and related files to the standby NN, so that a restart would no longer need to replay the editlog.

hadoop dfsadmin -safemode enter
hadoop dfsadmin -saveNamespace  ## dump the namenode's current namespace to a new fsimage
hadoop dfsadmin -safemode leave

But the namenode could not enter safe mode; the error said it could not reach the standby service:

safemode: Call From db-nn1.inc.mtime.com/10.10.110.100 to db-nn1.inc.mtime.com:8020
failed on connection exception: java.net.ConnectException: Connection refused;
For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

So we checked the dfs.namenode.name.dir entry in the etc/hadoop/hdfs-site.xml configuration file for the NN's metadata directory:

 <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/apache-hadoop/hadoop/var/dfs/nn</value>
 </property>

We copied the fsimage and editlog files under "${dfs.namenode.name.dir}/current" on the active NN directly to the standby and started it; it still failed with the same error.
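A sketch of that copy, assuming both NameNodes use the dfs.namenode.name.dir shown above (active-nn is a placeholder hostname; stop the standby NameNode before copying):

## run on the standby NN host
scp -r hadoop@active-nn:/home/hadoop/apache-hadoop/hadoop/var/dfs/nn/current /home/hadoop/apache-hadoop/hadoop/var/dfs/nn/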

3. Repair the edit log file, following the blog post http://heylinux.com/archives/3398.html.

Before repairing the edit log, first put the active namenode into safe mode, or stop it outright, so that no new operations on the namenode update the editlog files.
Enter safe mode:

hadoop dfsadmin -safemode enter  ## put the active namenode into safe mode, or simply stop it
#hadoop dfsadmin -safemode leave

Convert the editlog file to XML:

## convert the binary editlog file to XML
hdfs oev -i edits_0000000000075549590-0000000000075551044 -o edits_0000000000075549590-0000000000075551044.xml

Open edits_0000000000075549590-0000000000075551044.xml with vi
and search the XML for the edit log record matching this operation:

[leaseHolder=DFSClient_NONMAPREDUCE_-1432667135_1, path=/home/baoxin/hive/AccessToken/Token/20161202/2016120202, newHolder=HDFS_NameNode, opCode=OP_REASSIGN_LEASE, txid=746502271]

Delete that record outright.
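In the oev XML the record in question looks roughly like this (field values taken from the log above; the exact tag layout can vary between Hadoop versions):

<RECORD>
  <OPCODE>OP_REASSIGN_LEASE</OPCODE>
  <DATA>
    <TXID>746502271</TXID>
    <LEASEHOLDER>DFSClient_NONMAPREDUCE_-1432667135_1</LEASEHOLDER>
    <PATH>/home/baoxin/hive/AccessToken/Token/20161202/2016120202</PATH>
    <NEWHOLDER>HDFS_NameNode</NEWHOLDER>
  </DATA>
</RECORD>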

## convert the edited XML back into the original binary editlog format
hdfs oev -i edits_0000000000075549590-0000000000075551044.xml -o edits_0000000000075549590-0000000000075551044_new -p binary
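Pushing the rebuilt segment to the JournalNodes might look like this (hostnames and the edits directory are placeholders; the real directory comes from dfs.journalnode.edits.dir in hdfs-site.xml):

for jn in jn1 jn2 jn3; do
  scp edits_0000000000075549590-0000000000075551044_new hadoop@$jn:/path/to/journal/ns/current/edits_0000000000075549590-0000000000075551044
done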

After syncing the rebuilt binary file to the journalnode servers and restarting the standby namenode, the following error appeared:

2016-12-02 15:39:20,933 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: There appears to be a gap in the edit log.  We expected txid 746515168, but got txid 746515169.
    at org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:209)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:820)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:678)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1006)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:736)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:531)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:587)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
2016-12-02 15:39:20,937 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@db-nn1.inc.mtime.com:50070
2016-12-02 15:39:21,037 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...

The edit log's txids are meant to be consecutive. By deleting the bad operation from the edit log by hand, we had made the txids non-consecutive, so when the namenode loaded the editlog at startup it reported a mismatch with the expected txid.

So the bad operation cannot simply be deleted. Instead, replace it with something harmless, for example an innocuous permission change on an existing file ("opCode=OP_SET_PERMISSIONS"), keeping the original TXID (746502271 here) so the edits file stays intact.
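The replacement record uses the same template as the tool shown later; only the original TXID is carried over:

<RECORD>
  <OPCODE>OP_SET_PERMISSIONS</OPCODE>
  <DATA>
    <TXID>746502271</TXID>
    <SRC>/home/baoxin/hive/AccessToken/Token/20161202/2016120202</SRC>
    <MODE>504</MODE>
  </DATA>
</RECORD>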
After the edit, repack the XML into a binary edit log file, sync it to the journalnode servers, and restart the standby namenode. The previously failing record no longer raised an error, but the next editlog record was also broken, with an exception of the same kind:

2016-12-02 15:21:35,184 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception on operation ReassignLeaseOp [leaseHolder=HDFS_NameNode, path=/home/baoxin/hive/AccessToken/Token/20161202/2016120202, newHolder=HDFS_NameNode, opCode=OP_REASSIGN_LEASE, txid=746544405]

We repeated the steps above and found that many more operation records, for many HDFS files, were broken in the same way, even though the HDFS files themselves were fine. So we wrote a small tool that rewrites every operation record after the point of failure into an "opCode=OP_SET_PERMISSIONS" operation.
The tool for rewriting the edit log file:

package temp;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.util.HashMap;
import java.util.Map;

/**
 * Rewrites every <RECORD> in an edits XML file (produced by "hdfs oev")
 * into a harmless OP_SET_PERMISSIONS operation, keeping each record's
 * original TXID so that the txid sequence stays consecutive.
 *
 * args[0] : input edits XML file
 * args[1] : output edits XML file
 */
public class ReplaceXML {

    public static void main(String[] args) throws Exception {
        String recordTemplate =
                "<RECORD>\n" +
                "    <OPCODE>OP_SET_PERMISSIONS</OPCODE>\n" +
                "    <DATA>\n" +
                "      #TXID#\n" +
                "      <SRC>/home/baoxin/hive/AccessToken/Token/20161202/2016120202</SRC>\n" +
                "      <MODE>504</MODE>\n" +
                "    </DATA>\n" +
                "</RECORD>";

        String header = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" +
                "<EDITS>\n" +
                "  <EDITS_VERSION>-59</EDITS_VERSION>\n";
        String tail = "</EDITS>\n";

        try (BufferedReader reader = new BufferedReader(new FileReader(new File(args[0])));
             PrintWriter printWriter = new PrintWriter(new BufferedWriter(
                     new OutputStreamWriter(new FileOutputStream(args[1]), "UTF-8")))) {
            printWriter.write(header);
            int count = 0;
            Map<String, String> info;
            // read one <RECORD>...</RECORD> at a time; readRecord returns null at EOF
            while ((info = readRecord(reader)) != null) {
                count++;
                // keep only the original <TXID> line; replace the rest of the record
                String record = recordTemplate.replace("#TXID#", info.get("TXID").trim());
                printWriter.write(record + "\n");
            }
            printWriter.write(tail);
            System.out.println("rewrote " + count + " records");
        }
    }

    /** Reads up to the next </RECORD> and returns the record text plus its <TXID> line. */
    public static Map<String, String> readRecord(BufferedReader reader) throws IOException {
        Map<String, String> map = new HashMap<String, String>();
        StringBuilder record = new StringBuilder();
        String tempStr;
        while ((tempStr = reader.readLine()) != null) {
            record.append(tempStr).append("\n");
            if (tempStr.trim().equals("</RECORD>")) {        // end of this record
                map.put("RECORD", record.toString());
                return map;
            } else if (tempStr.trim().contains("<TXID>")) {  // remember the txid line
                map.put("TXID", tempStr);
            }
        }
        return null; // end of file
    }
}
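A possible invocation, matching the argument order above (file names from this incident; the output XML then goes back through hdfs oev -p binary as shown earlier):

javac -d . ReplaceXML.java
java temp.ReplaceXML edits_0000000000075549590-0000000000075551044.xml edits_0000000000075549590-0000000000075551044_fixed.xml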

With the edits fixed, the namenode restarted successfully.

Then:

Manually run hadoop dfsadmin -safemode leave.

Manually delete the corrupt blocks with hadoop fsck -delete.

After the namenode came up, the datanodes had not been restarted, and submitting a hive job failed; restarting the datanodes fixed it. The error was:

java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2109)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1380)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1302)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536)
Job Submission failed with exception 'java.io.EOFException(Premature EOF: no length prefix available)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

At this point the standby namenode startup failure was resolved and the cluster was back to normal. Our analysis of the root cause: the cluster layout was fairly simple and the cluster was under heavy load, which led to frequent restarts across the whole HDFS cluster; those in turn corrupted HDFS files and damaged the namenode's editlog.

3. Solutions for the startup problem:

1. Under normal circumstances the saveNamespace approach works: dump the namespace and replace the standby namenode's files with the dumped ones.

hadoop dfsadmin -saveNamespace  ## dump the namenode's fsimage

2. Use HDFS recovery mode

HDFS recovery mode is essentially just a NameNode startup mode: we choose it through the arguments passed when starting the NameNode. Run hdfs namenode -help to list them:

$ hdfs namenode -help
Usage: java NameNode [-backup] | 
    [-checkpoint] | 
    ...
    [-bootstrapStandby] | 
    [-recover [ -force] ] | // recovery-mode startup; -force makes every prompt take its first (default) answer
    [-metadataVersion ]  ]

Unlike the usual script-based startup, this mode runs in the foreground, so you can watch the NameNode's startup process directly.
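A minimal recovery-mode run, assuming the NameNode process on this host has been stopped first:

hdfs namenode -recover          ## prompts interactively at each problematic record
hdfs namenode -recover -force   ## takes the first (default) answer at every prompt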

References:

Official HDFS QJM documentation (HDFS High Availability Using the Quorum Journal Manager)

[HADOOP] Standby NN fails to start

Hadoop Ops Notes: Namenode fails to start after an abnormal stop (http://heylinux.com/archives/3398.html)

HDFS recovery mode
