HBase version: 1.1.3, Phoenix version: 4.7.0. Why am I getting hbase.DoNotRetryIOException while reading through Phoenix?

After updating the data, I was able to read it back from HBase using Phoenix.

After restarting the cluster, I tried to scan the table manually through sqlline:

0: jdbc:phoenix:localhost> select count(*) from PRICEDATA; 
    16/06/01 12:39:39 WARN ipc.CoprocessorRpcChannel: Call failed on IOException 
    org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: PRICEDATA: at index 10 
     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:484) 
     at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11705) 
     at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7606) 
     at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1890) 
     at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1872) 
     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389) 
     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117) 
     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104) 
     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) 
     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) 
     at java.lang.Thread.run(Thread.java:745) 
    Caused by: java.lang.NullPointerException: at index 10 
     at com.google.common.collect.ImmutableList.checkElementNotNull(ImmutableList.java:311) 
     at com.google.common.collect.ImmutableList.construct(ImmutableList.java:302) 
     at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:278) 
     at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:424) 
     at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315) 
     at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2481) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2426) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:451) 
     ... 10 more 

     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
     at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
     at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) 
     at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) 
     at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:284) 
     at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1611) 
     at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:93) 
     at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:90) 
     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:117) 
     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:93) 
     at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:96) 
     at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:57) 
     at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.getTable(MetaDataProtos.java:7891) 
     at org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:1271) 
     at org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:1258) 
     at org.apache.hadoop.hbase.client.HTable$17.call(HTable.java:1608) 
     at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
     at java.lang.Thread.run(Thread.java:745) 
    Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: PRICEDATA: at index 10 
     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:484) 
     at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11705) 
     at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7606) 
     at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1890) 
     at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1872) 
     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389) 
     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117) 
     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104) 
     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) 
     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) 
     at java.lang.Thread.run(Thread.java:745) 
    Caused by: java.lang.NullPointerException: at index 10 
     at com.google.common.collect.ImmutableList.checkElementNotNull(ImmutableList.java:311) 
     at com.google.common.collect.ImmutableList.construct(ImmutableList.java:302) 
     at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:278) 
     at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:424) 
     at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315) 
     at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2481) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2426) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:451) 
     ... 10 more 

     at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457) 
     at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661) 
     at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719) 
     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:30411) 
     at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1607) 
     ... 14 more 
    16/06/01 12:39:39 WARN client.HTable: Error calling coprocessor service org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for row \x00\x00PRICEDATA 
    java.util.concurrent.ExecutionException: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: PRICEDATA: at index 10 
     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:484) 
     at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11705) 
     at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7606) 
     at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1890) 
     at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1872) 
     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389) 
     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117) 
     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104) 
     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) 
     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) 
     at java.lang.Thread.run(Thread.java:745) 
    Caused by: java.lang.NullPointerException: at index 10 
     at com.google.common.collect.ImmutableList.checkElementNotNull(ImmutableList.java:311) 
     at com.google.common.collect.ImmutableList.construct(ImmutableList.java:302) 
     at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:278) 
     at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:424) 
     at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315) 
     at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2481) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2426) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:451) 
     ... 10 more 

     at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
     at java.util.concurrent.FutureTask.get(FutureTask.java:188) 
     at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1620) 
     at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1577) 
     at org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1006) 
     at org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:1257) 
     at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:350) 
     at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:311) 
     at org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:307) 
     at org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:333) 
     at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:237) 
     at org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:160) 
     at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:340) 
     at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:330) 
     at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:240) 
     at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:235) 
     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) 
     at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:234) 
     at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1100) 
     at sqlline.Commands.execute(Commands.java:822) 
     at sqlline.Commands.sql(Commands.java:732) 
     at sqlline.SqlLine.dispatch(SqlLine.java:808) 
     at sqlline.SqlLine.begin(SqlLine.java:681) 
     at sqlline.SqlLine.start(SqlLine.java:398) 
     at sqlline.SqlLine.main(SqlLine.java:292) 
    Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: PRICEDATA: at index 10 
     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:484) 
     at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11705) 
     at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7606) 
     at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1890) 
     at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1872) 
     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389) 
     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117) 
     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104) 
     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) 
     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) 
     at java.lang.Thread.run(Thread.java:745) 
    Caused by: java.lang.NullPointerException: at index 10 
     at com.google.common.collect.ImmutableList.checkElementNotNull(ImmutableList.java:311) 
     at com.google.common.collect.ImmutableList.construct(ImmutableList.java:302) 
     at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:278) 
     at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:424) 
     at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315) 
     at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2481) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2426) 
     at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:451) 
     ... 10 more 

     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
     at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
     at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) 
     at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) 
     at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:284) 
     at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1611) 
     at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:93) 
     at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:90) 
     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:117) 
     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:93) 
     at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:96) 
     at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:57) 
     at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService$Stub.getTable(MetaDataProtos.java:7891) 
     at org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:1271) 
     at org.apache.phoenix.query.ConnectionQueryServicesImpl$7.call(ConnectionQueryServicesImpl.java:1258) 
     at org.apache.hadoop.hbase.client.HTable$17.call(HTable.java:1608) 
     at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
     at java.lang.Thread.run(Thread.java:745) 

HBase region server log:

2016-06-01 13:07:47,467 INFO [B.defaultRpcServer.handler=6,queue=0,port=16201] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x574b597 connecting to ZooKeeper ensemble=HMECL001076:2181 
2016-06-01 13:07:47,467 INFO [B.defaultRpcServer.handler=6,queue=0,port=16201] zookeeper.ZooKeeper: Initiating client connection, connectString=HMECL001076:2181 sessionTimeout=90000 watcher=hconnection-0x574b5970x0, quorum=HMECL001076:2181, baseZNode=/hbase 
2016-06-01 13:07:47,468 INFO [B.defaultRpcServer.handler=6,queue=0,port=16201-SendThread(HMECL001076:2181)] zookeeper.ClientCnxn: Opening socket connection to server HMECL001076/127.0.1.1:2181. Will not attempt to authenticate using SASL (unknown error) 
2016-06-01 13:07:47,470 INFO [B.defaultRpcServer.handler=6,queue=0,port=16201-SendThread(HMECL001076:2181)] zookeeper.ClientCnxn: Socket connection established to HMECL001076/127.0.1.1:2181, initiating session 
2016-06-01 13:07:47,475 INFO [B.defaultRpcServer.handler=6,queue=0,port=16201-SendThread(HMECL001076:2181)] zookeeper.ClientCnxn: Session establishment complete on server HMECL001076/127.0.1.1:2181, sessionid = 0x1550abc32310033, negotiated timeout = 40000 
2016-06-01 13:07:47,481 INFO [B.defaultRpcServer.handler=6,queue=0,port=16201] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1550abc32310033 
2016-06-01 13:07:47,483 INFO [B.defaultRpcServer.handler=6,queue=0,port=16201] zookeeper.ZooKeeper: Session: 0x1550abc32310033 closed 
2016-06-01 13:07:47,483 INFO [B.defaultRpcServer.handler=6,queue=0,port=16201-EventThread] zookeeper.ClientCnxn: EventThread shut down 
2016-06-01 13:07:47,485 ERROR [B.defaultRpcServer.handler=6,queue=0,port=16201] coprocessor.MetaDataEndpointImpl: getTable failed 
java.lang.NullPointerException: at index 10 
    at com.google.common.collect.ImmutableList.checkElementNotNull(ImmutableList.java:311) 
    at com.google.common.collect.ImmutableList.construct(ImmutableList.java:302) 
    at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:278) 
    at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:424) 
    at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315) 
    at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303) 
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883) 
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501) 
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2481) 
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2426) 
    at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:451) 
    at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11705) 
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7606) 
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1890) 
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1872) 
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389) 
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117) 
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104) 
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) 
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) 
    at java.lang.Thread.run(Thread.java:745) 

Sqlline logs the error shown at the top. I am able to retrieve the data using the hbase shell, but I can neither write nor read data through the Phoenix API. I ran into this error once before and fixed it by restarting the nodes, but that no longer works. The problem occurs only with certain tables, and all of the tables have nearly the same schema.
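
For reference, the kind of raw check that still works in the hbase shell (a sketch; the LIMIT option just keeps the output short):

    hbase shell
    hbase(main):001:0> scan 'PRICEDATA', {LIMIT => 1}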

Table creation query:

CREATE TABLE IF NOT EXISTS Pricedata (
    NUM_11 DOUBLE, 
    D81 VARCHAR, 
    D83 DOUBLE, 
    D82 VARCHAR, 
    D77 VARCHAR NOT NULL PRIMARY KEY, 
    NUM_9 DOUBLE, 
    D80 VARCHAR, 
    D79 BIGINT, 
    D78 BIGINT, 
    NUM_10 DOUBLE); 

Answers

Most likely something got corrupted, or the upgrade was not done properly. Use the HBase shell to disable and drop Phoenix's SYSTEM.CATALOG table; Phoenix will recreate this table on initialization. (Back up your cluster first.)
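
A minimal sketch of that procedure in the hbase shell, assuming a snapshot serves as the backup (the snapshot name is arbitrary):

    hbase(main):001:0> snapshot 'SYSTEM.CATALOG', 'syscat_backup'
    hbase(main):002:0> disable 'SYSTEM.CATALOG'
    hbase(main):003:0> drop 'SYSTEM.CATALOG'

Phoenix recreates SYSTEM.CATALOG the next time a client connects, so reconnecting with sqlline should trigger the rebuild.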

Then re-run your CREATE TABLE statements to restore the tables.
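
If the DDL is kept in a script, one way to replay it (a sketch; the file path is hypothetical) is to pass the file to sqlline.py:

    bin/sqlline.py localhost /path/to/create_tables.sql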

I disabled and dropped the catalog table, but even after restarting the nodes Phoenix failed to recreate it. – Vishnu667

Phoenix should do this once it manages to start up successfully. I'm guessing it still isn't? – kliew

Try this: https://phdata.io/operational-notes-on-apache-phoenix/ – kliew

After deleting the data from the SYSTEM.CATALOG table, you need to re-run your Phoenix CREATE statements. This repopulates the metadata in the CATALOG table.
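
A quick way to confirm the metadata came back, sketched against this question's table:

    0: jdbc:phoenix:localhost> !tables
    0: jdbc:phoenix:localhost> SELECT COUNT(*) FROM PRICEDATA;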