
Hadoop hdfs namenode start command fails. Is it not formatted?

Running sudo service hadoop-hdfs-namenode start fails with the messages below.

2015-02-01 16:51:22,032 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 
2015-02-01 16:51:22,379 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 
2015-02-01 16:51:22,512 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 
2015-02-01 16:51:22,512 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started 
2015-02-01 16:51:23,043 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories! 
2015-02-01 16:51:23,043 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories! 
2015-02-01 16:51:23,096 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true 
2015-02-01 16:51:23,214 INFO org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval 
2015-02-01 16:51:23,223 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000 
2015-02-01 16:51:23,227 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap 
2015-02-01 16:51:23,227 INFO org.apache.hadoop.util.GSet: VM type  = 64-bit 
2015-02-01 16:51:23,232 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB 
2015-02-01 16:51:23,233 INFO org.apache.hadoop.util.GSet: capacity  = 2^21 = 2097152 entries 
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false 
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication   = 1 
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication    = 512 
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication    = 1 
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams  = 2 
2015-02-01 16:51:23,242 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false 
2015-02-01 16:51:23,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000 
2015-02-01 16:51:23,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer  = false 
2015-02-01 16:51:23,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog   = 1000 
2015-02-01 16:51:23,253 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner    = hdfs (auth:SIMPLE) 
2015-02-01 16:51:23,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup   = supergroup 
2015-02-01 16:51:23,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false 
2015-02-01 16:51:23,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false 
2015-02-01 16:51:23,259 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true 
2015-02-01 16:51:23,555 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2015-02-01 16:51:23,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033 
2015-02-01 16:51:23,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0 
2015-02-01 16:51:23,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension  = 0 
2015-02-01 16:51:23,563 WARN org.apache.hadoop.hdfs.server.common.Storage: Storage directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name does not exist 
2015-02-01 16:51:23,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system... 
2015-02-01 16:51:23,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped. 
2015-02-01 16:51:23,565 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 
2015-02-01 16:51:23,566 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join 
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible. 
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:302) 
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:207) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:741) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:531) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:445) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241) 
2015-02-01 16:51:23,571 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 
2015-02-01 16:51:23,573 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.0.1 
************************************************************/ 

The error itself is fairly self-explanatory: the directory /var/lib/hadoop-hdfs/cache/hdfs/dfs/name is missing. The cache directory was empty, so I created cache/hdfs/dfs/name under it. I also changed the owner and group to match those of the directories above it, hdfs:hadoop.
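In shell terms, the steps described above amount to something like the following sketch (the path is taken from the log; the PREFIX variable is my own addition so the commands can be rehearsed without root; set PREFIX to empty and run as root on the real system):

```shell
# Sketch of the fix described above. PREFIX exists only so this can be
# rehearsed without root; on the real system use PREFIX="" and run as root.
PREFIX="${PREFIX:-$(mktemp -d)}"
NAME_DIR="$PREFIX/var/lib/hadoop-hdfs/cache/hdfs/dfs/name"
mkdir -p "$NAME_DIR"
# On the real system, match ownership to the parent directories:
# chown -R hdfs:hadoop "$PREFIX/var/lib/hadoop-hdfs/cache/hdfs/dfs/name"
ls -ld "$NAME_DIR"
```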

Re-running the format command, sudo -u hdfs hdfs namenode –format, ends the same way it did before I created the directory:

STARTUP_MSG: build = file:///data/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.7.1/src/hadoop-common-project/hadoop-common -r Unknown; compiled by 'jenkins' on Tue Nov 18 08:10:25 PST 2014 
STARTUP_MSG: java = 1.7.0_75 
************************************************************/ 
15/02/01 17:09:04 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 
Usage: java NameNode [-backup] | [-checkpoint] | [-format [-clusterid cid ] [-force] [-nonInteractive] ] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint] | [-initializeSharedEdits] | [-bootstrapStandby] | [-recover [ -force ] ] 

15/02/01 17:09:04 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.0.1 

Running the namenode start command again now produces the following error:

STARTUP_MSG: build = file:///data/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.7.1/src/hadoop-common-project/hadoop-common -r Unknown; compiled by 'jenkins' on Tue Nov 18 08:10:25 PST 2014 
STARTUP_MSG: java = 1.7.0_75 
************************************************************/ 
2015-02-01 17:09:26,774 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 
2015-02-01 17:09:27,097 WARN org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 
2015-02-01 17:09:27,215 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 
2015-02-01 17:09:27,216 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started 
2015-02-01 17:09:27,721 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories! 
2015-02-01 17:09:27,721 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories! 
2015-02-01 17:09:27,779 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true 
2015-02-01 17:09:27,883 INFO org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval 
2015-02-01 17:09:27,890 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000 
2015-02-01 17:09:27,895 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap 
2015-02-01 17:09:27,895 INFO org.apache.hadoop.util.GSet: VM type  = 64-bit 
2015-02-01 17:09:27,899 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB 
2015-02-01 17:09:27,899 INFO org.apache.hadoop.util.GSet: capacity  = 2^21 = 2097152 entries 
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false 
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication   = 1 
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication    = 512 
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication    = 1 
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams  = 2 
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false 
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000 
2015-02-01 17:09:27,909 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer  = false 
2015-02-01 17:09:27,910 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog   = 1000 
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner    = hdfs (auth:SIMPLE) 
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup   = supergroup 
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false 
2015-02-01 17:09:27,918 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false 
2015-02-01 17:09:27,924 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true 
2015-02-01 17:09:28,178 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2015-02-01 17:09:28,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033 
2015-02-01 17:09:28,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0 
2015-02-01 17:09:28,180 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension  = 0 
2015-02-01 17:09:28,193 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /var/lib/hadoop-hdfs/cache/hdfs/dfs/name/in_use.lock acquired by nodename [email protected] 
2015-02-01 17:09:28,196 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system... 
2015-02-01 17:09:28,196 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped. 
2015-02-01 17:09:28,196 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 
2015-02-01 17:09:28,197 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join 
java.io.IOException: NameNode is not formatted. 
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:217) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:741) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:531) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:445) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241) 
2015-02-01 17:09:28,202 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 
2015-02-01 17:09:28,205 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at hadoop/127.0.0.1 
************************************************************/ 

My system is a CentOS 6.6 guest running under VirtualBox with Oracle JDK 1.7, and I am trying to run Cloudera CDH4. Any advice on what to do next to resolve this would be appreciated.

Answer


If you copied and pasted the format command from a slide or somewhere, can you try actually typing it and see whether it works?

I don't know whether you can see the difference between –format and -format, but the dashes look different to me.
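One quick way to check which dash a pasted command actually contains is to dump its bytes (a small illustration, not part of the original answer):

```shell
# An ASCII hyphen-minus is the single byte 0x2d; a Unicode en dash is the
# three-byte UTF-8 sequence 0xe2 0x80 0x93. od makes the difference visible.
printf '%s' '-format' | od -An -tx1
printf '%s' '–format' | od -An -tx1
```

If the first byte shown is e2 rather than 2d, the flag was pasted with an en dash and the NameNode will not recognize it, which is why the Usage message is printed instead of a format being performed.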


I can try that tonight. It should be noted that running the deprecated "hadoop namenode format" command does actually perform the format. Pardon my grammar... this is off the top of my head right now. – Rig