2017-10-29

JanusGraph 0.2.0: Spark failed to connect to master

The example in Chapter 35 of the JanusGraph 0.2.0 documentation works fine as shipped. But when I change spark.master in conf/hadoop-graph/hadoop-load.properties from local[*] to spark://192.168.63.105:7077, Spark fails to connect to the master.
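Specifically, the only edit to conf/hadoop-graph/hadoop-load.properties is this one line (both values shown for reference):

# works:
spark.master=local[*]
# fails to connect:
spark.master=spark://192.168.63.105:7077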

Chapter 35 of the JanusGraph 0.2.0 documentation carries a note saying that the examples in the chapter are based on running Spark in local mode, and that additional configuration is required when using Spark in standalone mode, or with Spark on YARN or Mesos. What is that additional configuration? The warnings I get are the following:

> WARN org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer - class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat does not implement PersistResultGraphAware and thus, persistence options are unknown -- assuming all options are possible

> WARN org.apache.spark.deploy.client.AppClient$ClientEndpoint - Failed to connect to master 192.168.63.105:7077 

> java.lang.RuntimeException: java.io.EOFException 
> at java.io.DataInputStream.readFully(DataInputStream.java:197) 
> at java.io.DataInputStream.readUTF(DataInputStream.java:609) 
> at java.io.DataInputStream.readUTF(DataInputStream.java:564)
> at org.apache.spark.rpc.netty.RequestMessage$.readRpcAddress(NettyRpcEnv.scala:582) 
> at org.apache.spark.rpc.netty.RequestMessage$.apply(NettyRpcEnv.scala:592) 
> at org.apache.spark.rpc.netty.NettyRpcHandler.internalReceive(NettyRpcEnv.scala:651) 
> at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:636) 
> at org.apache.spark.network.server.TransportRequestHandler.processRpcRequest(TransportRequestHandler.java:157) 
> at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:105) 
> at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) 
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) 
> at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) 
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) 
> at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) 
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) 
> at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) 
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336) 
> at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343) 
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911) 
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) 
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442) 
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131) 
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144) 
> at java.lang.Thread.run(Thread.java:745) 

> at org.apache.spark.network.client.TransportResponseHandler.handle(TransportResponseHandler.java:186) 
> at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:106) 
> at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51) 
> at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342) 
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335) 
> at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342) 
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335) 
> at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342) 
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335) 
> at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342) 
> at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335) 
> at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1302) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356) 
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342) 
> at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) 
> at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) 
> at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:646)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:581)
> at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:460) 
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131) 
> at java.lang.Thread.run(Thread.java:745) 

Here is the content of the configuration file conf/janusgraph-hbase-es-test-spark.properties that I use:

storage.backend=hbase 
storage.hostname=192.168.63.105,192.168.63.107,192.168.63.109 
storage.hbase.ext.hbase.zookeeper.property.clientPort=2181 
storage.hbase.table=janus_test_spark_7077 
gremlin.graph=org.janusgraph.core.JanusGraphFactory 
cache.db-cache=true 
cache.db-cache-clean-wait=20 
cache.db-cache-time=18000 
cache.db-cache-size=0.5 
index.search.backend=elasticsearch 
index.search.hostname=192.168.63.105,192.168.63.107,192.168.63.109 
index.search.elasticsearch.client-only=true 
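For context, this file is opened with the standard JanusGraphFactory API; a minimal sketch of the usage from the Gremlin Console (the variable name is just an example):

graph = JanusGraphFactory.open('conf/janusgraph-hbase-es-test-spark.properties')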

The Spark version that produced the warnings above is spark-2.2.0-bin-hadoop2.7.

I also tried spark-1.6.0-bin-hadoop2.6; with it the warning is the following:

WARN org.apache.spark.scheduler.TaskSetManager - Lost task 0.0 in stage 0.0 (TID 0, SparkWorker109): java.io.InvalidClassException: org.apache.spark.rdd.MapPartitionsRDD; local class incompatible: stream classdesc serialVersionUID=6732270565076291202, local class serialVersionUID=-1059539896677275380

at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616) 
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630) 
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521) 
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781) 
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353) 
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018) 
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942) 
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808) 
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353) 
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373) 
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:64) 
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
at org.apache.spark.scheduler.Task.run(Task.scala:89) 
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745) 
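From what I can tell, this InvalidClassException points at a Spark version mismatch: the MapPartitionsRDD in the spark-core jar on JanusGraph's classpath has a different serialVersionUID than the one running on the workers. A sketch of a quick check from the Gremlin Console, which reads Spark's public org.apache.spark.SPARK_VERSION constant via reflection (reflection because "package" is a reserved word in Groovy):

// prints the Spark version bundled on the JanusGraph classpath;
// compare it with the version shown on the standalone master's web UI
Class.forName('org.apache.spark.package$').getField('MODULE$').get(null).SPARK_VERSION()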

Thank you for your attention and help.


The problem is a mismatch between the version of Spark that JanusGraph uses and the one used to start the standalone cluster. Which version of Spark did you use? I noticed that [JanusGraph 0.2.0](https://github.com/JanusGraph/janusgraph/releases) has been released. Can you check whether it fixes the problem? –


Thank you for the answer. I tried spark-1.6.0-bin-hadoop2.6, spark-2.1.1-bin-hadoop2.7, and spark-2.2.0-bin-hadoop2.7 separately; the error information is the same. But with JanusGraph 0.2.0 I ran into another problem: when I run gremlin> hdfs.ls(), I get the error: org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetFileInfoRequestProto cannot be cast to com.google.protobuf.Message. – fromnowon


I found out why gremlin> hdfs.ls() errors in JanusGraph 0.2.0: hadoop-hdfs-2.7.2.jar is missing from JanusGraph-0.2.0-hadoop2/lib/. But I still cannot connect to spark.master spark://192.168.63.105:7077. – fromnowon

Answers

Here is my hadoop-load.properties:

# 
# SparkGraphComputer Configuration 
# 
spark.master=yarn-client 
spark.executor.memory=512m 
spark.serializer=org.apache.spark.serializer.KryoSerializer 
spark.app.name=janusgraph-data-load 
spark.app.id=janusgraph-data-load 
spark.executor.extraClassPath=/opt/janusgraph-lib/*:/etc/hadoop/conf:/etc/hbase/conf:/etc/spark/conf 
#hdp version 
spark.yarn.am.extraJavaOptions=-Dhdp.version=2.6.1.0-129 
spark.executor.extraJavaOptions=-Dhdp.version=2.6.1.0-129 
spark.driver.extraJavaOptions=-Dhdp.version=2.6.1.0-129
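Note that spark.master=yarn-client runs Spark on YARN rather than in standalone mode. For a standalone master like the one in the question, a minimal sketch of the same section might look like the following (untested; the lib path is an example and must exist on every worker, and the Spark release behind that classpath has to match the standalone cluster exactly, which is what the EOFException and serialVersionUID errors above point at):

#
# SparkGraphComputer against a Spark standalone master (sketch)
#
spark.master=spark://192.168.63.105:7077
spark.executor.memory=512m
spark.serializer=org.apache.spark.serializer.KryoSerializer
# every worker needs the JanusGraph jars on its classpath; the path is an example
spark.executor.extraClassPath=/opt/janusgraph-0.2.0-hadoop2/lib/*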