2017-04-23

When I try to start the journal node, it fails with the following error: Hadoop - Could not find or load main class org.apache.hadoop.hdfs.qjournal.server.JournalNode

./hadoop-daemon.sh start journalnode 

Error: Could not find or load main class org.apache.hadoop.hdfs.qjournal.server.JournalNode 

What could the problem be? Here is my core-site.xml:

<?xml version="1.0" encoding="UTF-8"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 

<configuration> 
    <property> 
     <name>fs.defaultFS</name> 
     <value>hdfs://hdfscluster</value> 
    </property> 
    <property> 
     <name>io.native.lib.available</name> 
     <value>True</value> 
    </property> 
    <property> 
     <name>io.file.buffer.size</name> 
     <value>65536</value> 
    </property> 
    <property> 
     <name>fs.trash.interval</name> 
     <value>60</value> 
    </property> 
</configuration> 

And here is my hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 

<configuration> 
    <property> 
     <name>dfs.namenode.name.dir</name> 
     <value>file:///srv/node/d1/hdfs/nn,file:///srv/node/d2/hdfs/nn,file:///srv/node/d3/hdfs/nn</value> 
     <final>true</final> 
    </property> 

    <property> 
     <name>dfs.datanode.data.dir</name> 
     <value>file:///srv/node/d1/hdfs/dn,file:///srv/node/d2/hdfs/dn,file:///srv/node/d3/hdfs/dn</value> 
     <final>true</final> 
    </property> 

    <property> 
     <name>dfs.namenode.checkpoint.dir</name> 
     <value>file:///srv/node/d1/hdfs/snn,file:///srv/node/d2/hdfs/snn,file:///srv/node/d3/hdfs/snn</value> 
     <final>true</final> 
    </property> 

    <property> 
     <name>dfs.nameservices</name> 
     <value>hdfscluster</value> 
    </property> 

    <property> 
     <name>dfs.ha.namenodes.hdfscluster</name> 
     <value>nn1,nn2</value> 
    </property> 

    <property> 
     <name>dfs.namenode.rpc-address.hdfscluster.nn1</name> 
     <value>192.168.57.101:8020</value> 
    </property> 

    <property> 
     <name>dfs.namenode.http-address.hdfscluster.nn1</name> 
     <value>192.168.57.101:50070</value> 
    </property> 
    <property> 
     <name>dfs.namenode.rpc-address.hdfscluster.nn2</name> 
     <value>192.168.57.102:8020</value> 
    </property> 

    <property> 
     <name>dfs.namenode.http-address.hdfscluster.nn2</name> 
     <value>192.168.57.102:50070</value> 
    </property> 

    <property> 
     <name>dfs.journalnode.edits.dir</name> 
     <value>/srv/node/d1/hdfs/journal</value> 
     <final>true</final> 
    </property> 

    <property> 
     <name>dfs.namenode.shared.edits.dir</name> 
     <value>qjournal://192.168.57.101:8485;192.168.57.102:8485;192.168.57.103:8485/hdfscluster</value> 
    </property> 

    <property> 
     <name>dfs.client.failover.proxy.provider.hdfscluster</name> 
     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value> 
    </property> 

    <property> 
     <name>dfs.ha.automatic-failover.enabled</name> 
     <value>true</value> 
    </property> 

    <property> 
     <name>ha.zookeeper.quorum</name> 
     <value>192.168.57.101:2181,192.168.57.102:2181,192.168.57.103:2181</value> 
    </property> 

    <property> 
     <name>dfs.ha.fencing.methods</name> 
     <value>sshfence</value> 
    </property> 

    <property> 
     <name>dfs.ha.fencing.ssh.private-key-files</name> 
     <value>/home/hdfs/.ssh/id_dsa</value> 
    </property> 

    <property> 
     <name>dfs.hosts</name> 
     <value>/etc/hadoop/conf/dfs.hosts</value> 
    </property> 

    <property> 
     <name>dfs.hosts.exclude</name> 
     <value>/etc/hadoop/conf/dfs.hosts.exclude</value> 
    </property> 

    <property> 
     <name>dfs.replication</name> 
     <value>3</value> 
    </property> 
    <property> 
     <name>dfs.permission</name> 
     <value>False</value> 
    </property> 
    <property> 
     <name>dfs.durable.sync</name> 
     <value>True</value> 
    </property> 
    <property> 
     <name>dfs.datanode.synconclose</name> 
     <value>True</value> 
    </property> 
</configuration> 

The node with IP 192.168.57.103 is the one running the JournalNode and DataNode.

I am using Hadoop 2.8.0. Is there a problem with my configuration, or have I just missed something?


Is `$HADOOP_CLASSPATH` set correctly? – franklinsijo
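The suggestion above is quick to check. A minimal sketch (exact output depends on your install):

```shell
# Print any manually set additions, then the full classpath the Hadoop
# launch scripts will actually use.
echo "HADOOP_CLASSPATH=$HADOOP_CLASSPATH"
hadoop classpath

# The classpath is ':'-separated; split it into lines and look for the
# HDFS entries - the JournalNode class must come from one of these.
hadoop classpath | tr ':' '\n' | grep -i hdfs
```

If the last command prints nothing, the HDFS jars are not on the classpath, which matches the "Could not find or load main class" error.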

Answer

I don't know why, but the /usr/lib/hadoop/share/hadoop/ directory was missing. I reinstalled Hadoop from scratch, and now it works.
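For anyone hitting the same error, the missing directory described above can be verified before resorting to a reinstall. A sketch, assuming a default tarball layout with `HADOOP_HOME=/usr/lib/hadoop` (adjust to your install):

```shell
# Assumption: adjust this path to wherever Hadoop is installed.
HADOOP_HOME=/usr/lib/hadoop

# The JournalNode class ships in the hadoop-hdfs jar under share/hadoop/hdfs.
# If this directory or jar is missing, the daemon cannot start.
ls "$HADOOP_HOME"/share/hadoop/hdfs/hadoop-hdfs-*.jar

# Confirm the class is actually inside the jar.
unzip -l "$HADOOP_HOME"/share/hadoop/hdfs/hadoop-hdfs-*.jar \
  | grep 'qjournal/server/JournalNode.class'
```

If the `ls` fails, the install is incomplete and reinstalling (or re-extracting the distribution tarball) is the right fix.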