2017-03-22

I have set up Flink v1.2 with 3 JobManagers and 2 TaskManagers. How do I configure Flink to use HDFS for the filesystem state backend, checkpoints, and the ZooKeeper storageDir?

To use HDFS for backend state, checkpoints, and the ZooKeeper storageDir, I set:

state.backend: filesystem
state.backend.fs.checkpointdir: hdfs:///[ip:port]/flink-checkpoints
state.checkpoints.dir: hdfs:///[ip:port]/external-checkpoints
high-availability: zookeeper
high-availability.zookeeper.storageDir: hdfs:///[ip:port]/recovery

On JobManager 0 I get the following log:

2017-03-22 17:41:43,559 INFO org.apache.flink.configuration.GlobalConfiguration   - Loading configuration property: high-availability.zookeeper.client.acl, open 
2017-03-22 17:41:43,680 ERROR org.apache.flink.runtime.jobmanager.JobManager    - Error while starting up JobManager 
java.io.IOException: The given HDFS file URI (hdfs:///ip:port/recovery/blob) did not describe the HDFS NameNode. The attempt to use a default HDFS configuration, as specified in the 'fs.hdfs.hdfsdefault' or 'fs.hdfs.hdfssite' config parameter failed due to the following problem: Either no default file system was registered, or the provided configuration contains no valid authority component (fs.default.name or fs.defaultFS) describing the (hdfs namenode) host and port. 
     at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:298) 
     at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:288) 
     at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:310) 
     at org.apache.flink.runtime.blob.FileSystemBlobStore.<init>(FileSystemBlobStore.java:67) 
     at org.apache.flink.runtime.blob.BlobServer.<init>(BlobServer.java:114) 
     at org.apache.flink.runtime.jobmanager.JobManager$.createJobManagerComponents(JobManager.scala:2488) 
     at org.apache.flink.runtime.jobmanager.JobManager$.startJobManagerActors(JobManager.scala:2643) 
     at org.apache.flink.runtime.jobmanager.JobManager$.startJobManagerActors(JobManager.scala:2595) 
     at org.apache.flink.runtime.jobmanager.JobManager$.startActorSystemAndJobManagerActors(JobManager.scala:2242) 
     at org.apache.flink.runtime.jobmanager.JobManager$.liftedTree3$1(JobManager.scala:2020) 
     at org.apache.flink.runtime.jobmanager.JobManager$.runJobManager(JobManager.scala:2019) 
     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply$mcV$sp(JobManager.scala:2098) 
     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply(JobManager.scala:2076) 
     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply(JobManager.scala:2076) 
     at scala.util.Try$.apply(Try.scala:192) 
     at org.apache.flink.runtime.jobmanager.JobManager$.retryOnBindException(JobManager.scala:2131) 
     at org.apache.flink.runtime.jobmanager.JobManager$.runJobManager(JobManager.scala:2076) 
     at org.apache.flink.runtime.jobmanager.JobManager$$anon$9.call(JobManager.scala:1971) 
     at org.apache.flink.runtime.jobmanager.JobManager$$anon$9.call(JobManager.scala:1969) 
     at org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43) 
     at java.security.AccessController.doPrivileged(Native Method) 
     at javax.security.auth.Subject.doAs(Subject.java:422) 
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) 
     at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40) 
     at org.apache.flink.runtime.jobmanager.JobManager$.main(JobManager.scala:1969) 
     at org.apache.flink.runtime.jobmanager.JobManager.main(JobManager.scala) 
2017-03-22 17:41:43,694 WARN org.apache.hadoop.security.UserGroupInformation    - PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: The given HDFS file URI (hdfs:///ip:port/recovery/blob) did not describe the HDFS NameNode. The attempt to use a default HDFS configuration, as specified in the 'fs.hdfs.hdfsdefault' or 'fs.hdfs.hdfssite' config parameter failed due to the following problem: Either no default file system was registered, or the provided configuration contains no valid authority component (fs.default.name or fs.defaultFS) describing the (hdfs namenode) host and port. 
2017-03-22 17:41:43,694 ERROR org.apache.flink.runtime.jobmanager.JobManager    - Failed to run JobManager. 
java.io.IOException: The given HDFS file URI (hdfs:///ip:port/recovery/blob) did not describe the HDFS NameNode. The attempt to use a default HDFS configuration, as specified in the 'fs.hdfs.hdfsdefault' or 'fs.hdfs.hdfssite' config parameter failed due to the following problem: Either no default file system was registered, or the provided configuration contains no valid authority component (fs.default.name or fs.defaultFS) describing the (hdfs namenode) host and port. 
     at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:298) 
     at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:288) 
     at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:310) 
     at org.apache.flink.runtime.blob.FileSystemBlobStore.<init>(FileSystemBlobStore.java:67) 
     at org.apache.flink.runtime.blob.BlobServer.<init>(BlobServer.java:114) 
     at org.apache.flink.runtime.jobmanager.JobManager$.createJobManagerComponents(JobManager.scala:2488) 
     at org.apache.flink.runtime.jobmanager.JobManager$.startJobManagerActors(JobManager.scala:2643) 
     at org.apache.flink.runtime.jobmanager.JobManager$.startJobManagerActors(JobManager.scala:2595) 
     at org.apache.flink.runtime.jobmanager.JobManager$.startActorSystemAndJobManagerActors(JobManager.scala:2242) 
     at org.apache.flink.runtime.jobmanager.JobManager$.liftedTree3$1(JobManager.scala:2020) 
     at org.apache.flink.runtime.jobmanager.JobManager$.runJobManager(JobManager.scala:2019) 
     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply$mcV$sp(JobManager.scala:2098) 
     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply(JobManager.scala:2076) 
     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply(JobManager.scala:2076) 
     at scala.util.Try$.apply(Try.scala:192) 
     at org.apache.flink.runtime.jobmanager.JobManager$.retryOnBindException(JobManager.scala:2131) 
     at org.apache.flink.runtime.jobmanager.JobManager$.runJobManager(JobManager.scala:2076) 
     at org.apache.flink.runtime.jobmanager.JobManager$$anon$9.call(JobManager.scala:1971) 
     at org.apache.flink.runtime.jobmanager.JobManager$$anon$9.call(JobManager.scala:1969) 
     at org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43) 
     at java.security.AccessController.doPrivileged(Native Method) 
     at javax.security.auth.Subject.doAs(Subject.java:422) 
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) 
     at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40) 
     at org.apache.flink.runtime.jobmanager.JobManager$.main(JobManager.scala:1969) 
     at org.apache.flink.runtime.jobmanager.JobManager.main(JobManager.scala) 
2017-03-22 17:41:43,697 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator   - Shutting down remote daemon. 
2017-03-22 17:41:43,704 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator   - Remote daemon shut down; proceeding with flushing remote transports. 

Hadoop is installed as a single-node cluster on a VM, configured with the settings shown above. Why does Flink need these extra parameters to be configured? (They are not in the official documentation.)

Answer

I believe that to access HDFS with a hostname:port specification you have to use the URL pattern hdfs://[ip:port]/flink-checkpoints.
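The error message hints at why: with three slashes (hdfs:///ip:port/...) the host and port are parsed as part of the path, so the URI carries no authority component naming the NameNode. A quick sketch with Python's standard URL parser illustrates the difference (the IP and port here are placeholders, not values from the question):

```python
from urllib.parse import urlparse

# Three slashes: "10.0.0.1:8020" is swallowed into the *path*,
# so the URI has no authority (netloc) -- this is what triggers
# Flink's "did not describe the HDFS NameNode" IOException.
broken = urlparse("hdfs:///10.0.0.1:8020/recovery")
print(broken.netloc)  # '' -> no NameNode host:port
print(broken.path)    # '/10.0.0.1:8020/recovery'

# Two slashes: host and port land in the authority component.
fixed = urlparse("hdfs://10.0.0.1:8020/recovery")
print(fixed.netloc)   # '10.0.0.1:8020'
print(fixed.path)     # '/recovery'
```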

If you set fs.defaultFS in your Hadoop configuration, you do not need to put the NameNode details into the Flink URIs.
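For example, with a default filesystem declared in Hadoop's core-site.xml (the host and port below are placeholders for your NameNode), Flink's hdfs:/// URIs can then resolve the authority from it:

```xml
<!-- core-site.xml: declare the default filesystem (hypothetical NameNode address) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.0.0.1:8020</value>
  </property>
</configuration>
```

With that in place, URIs like hdfs:///flink-checkpoints in flink-conf.yaml inherit the NameNode host and port, provided Flink can find the Hadoop configuration (e.g. via the HADOOP_CONF_DIR environment variable).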

It works, thanks. – razvan