
TDCH fails while importing into a Hive table. The command:

hadoop jar /usr/lib/tdch/1.4/lib/teradata-connector-1.4.4.jar com.teradata.connector.common.tool.ConnectorImportTool \
  -url jdbc:teradata://192.168.2.128/DATABASE=db_1 \
  -username DBC \
  -password DBC \
  -jobtype hive \
  -fileformat TEXTFILE \
  -nummappers 1 \
  -sourcetable employee1 \
  -targettable td_employee \
  -targettableschema "emp_id int, first_name string, last_name string"

The log is below. I have already added the hive-serde jar to HADOOP_CLASSPATH.
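
For context, this is roughly how that was done (a minimal sketch; the paths assume the Hortonworks sandbox layout and are not copied from the actual environment):

  # Assumption: Hive client jars live under /usr/hdp/current/hive-client/lib
  export HIVE_HOME=/usr/hdp/current/hive-client
  export HADOOP_CLASSPATH=$HIVE_HOME/lib/hive-serde.jar:$HADOOP_CLASSPATH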

17/04/20 04:26:56 INFO tool.ConnectorImportTool: ConnectorImportTool starts at 1492687616920 
17/04/20 04:26:58 INFO common.ConnectorPlugin: load plugins in jar:file:/usr/lib/tdch/1.4/lib/teradata-connector-1.4.4.jar!/teradata.connector.plugins.xml 
17/04/20 04:26:59 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 
17/04/20 04:26:59 INFO metastore.ObjectStore: ObjectStore, initialize called 
17/04/20 04:26:59 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored 
17/04/20 04:26:59 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored 
17/04/20 04:27:03 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 
17/04/20 04:27:03 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5. Encountered: "@" (64), after : "". 
17/04/20 04:27:05 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 
17/04/20 04:27:05 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 
17/04/20 04:27:05 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 
17/04/20 04:27:05 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 
17/04/20 04:27:05 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing 
17/04/20 04:27:05 INFO metastore.ObjectStore: Initialized ObjectStore 
17/04/20 04:27:06 INFO metastore.HiveMetaStore: Added admin role in metastore 
17/04/20 04:27:06 INFO metastore.HiveMetaStore: Added public role in metastore 
17/04/20 04:27:06 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty 
17/04/20 04:27:06 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=td_employee 
17/04/20 04:27:06 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=td_employee 
17/04/20 04:27:06 INFO processor.TeradataInputProcessor: input preprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor starts at: 1492687626978 
17/04/20 04:27:08 INFO utils.TeradataUtils: the input database product is Teradata 
17/04/20 04:27:08 INFO utils.TeradataUtils: the input database version is 16.0 
17/04/20 04:27:08 INFO utils.TeradataUtils: the jdbc driver version is 15.0 
17/04/20 04:27:08 INFO processor.TeradataInputProcessor: the teradata connector for hadoop version is: 1.4.4 
17/04/20 04:27:08 INFO processor.TeradataInputProcessor: input jdbc properties are jdbc:teradata://192.168.2.128/DATABASE=db_1 
17/04/20 04:27:09 INFO processor.TeradataInputProcessor: the number of mappers are 1 
17/04/20 04:27:09 INFO processor.TeradataInputProcessor: input preprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor ends at: 1492687629069 
17/04/20 04:27:09 INFO processor.TeradataInputProcessor: the total elapsed time of input preprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor is: 2s 
17/04/20 04:27:10 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir 
17/04/20 04:27:10 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty 
17/04/20 04:27:10 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=td_employee 
17/04/20 04:27:10 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=default tbl=td_employee 
17/04/20 04:27:10 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 
17/04/20 04:27:10 INFO metastore.ObjectStore: ObjectStore, initialize called 
17/04/20 04:27:10 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5. Encountered: "@" (64), after : "". 
17/04/20 04:27:10 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing 
17/04/20 04:27:10 INFO metastore.ObjectStore: Initialized ObjectStore 
17/04/20 04:27:10 INFO processor.HiveOutputProcessor: hive table default.td_employee does not exist 
17/04/20 04:27:10 INFO metastore.HiveMetaStore: 0: Shutting down the object store... 
17/04/20 04:27:10 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=Shutting down the object store... 
17/04/20 04:27:10 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete. 
17/04/20 04:27:10 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=Metastore shutdown complete.  
17/04/20 04:27:11 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050 
17/04/20 04:27:13 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050 
17/04/20 04:27:13 WARN mapred.ResourceMgrDelegate: getBlacklistedTrackers - Not implemented yet 
17/04/20 04:27:13 INFO mapreduce.JobSubmitter: number of splits:1 
17/04/20 04:27:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1492661647325_0001 
17/04/20 04:27:14 INFO impl.YarnClientImpl: Submitted application application_1492661647325_0001 
17/04/20 04:27:14 INFO mapreduce.Job: The url to track the job: http://sandbox.hortonworks.com:8088/proxy/application_1492661647325_0001/ 
17/04/20 04:27:14 INFO mapreduce.Job: Running job: job_1492661647325_0001 
17/04/20 04:27:34 INFO mapreduce.Job: Job job_1492661647325_0001 running in uber mode : false 
17/04/20 04:27:34 INFO mapreduce.Job: map 0% reduce 0% 
17/04/20 04:27:49 INFO mapreduce.Job: Task Id : attempt_1492661647325_0001_m_000000_0, Status : FAILED 
Error: java.lang.ClassNotFoundException: org.apache.hadoop.hive.serde2.SerDeException 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366) 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425) 
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358) 
    at java.lang.Class.forName0(Native Method) 
    at java.lang.Class.forName(Class.java:190) 
    at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.<init>(ConnectorOutputFormat.java:91) 
    at com.teradata.connector.common.ConnectorOutputFormat.getRecordWriter(ConnectorOutputFormat.java:38) 
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:624) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:744) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) 
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557) 
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) 

17/04/20 04:28:00 INFO mapreduce.Job: Task Id : attempt_1492661647325_0001_m_000000_1, Status : FAILED 
Error: org.apache.hadoop.fs.FileAlreadyExistsException: /user/root/temp_042710/part-m-00000 for client 10.0.2.15 already exists 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2309) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2237) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2190) 
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:520) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354) 
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) 
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557) 
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) 

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) 
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) 
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1604) 
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1465) 
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1390) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:394) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:390) 
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:390) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:334) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784) 
    at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:132) 
    at com.teradata.connector.hive.HiveTextFileOutputFormat.getRecordWriter(HiveTextFileOutputFormat.java:22) 
    at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.<init>(ConnectorOutputFormat.java:89) 
    at com.teradata.connector.common.ConnectorOutputFormat.getRecordWriter(ConnectorOutputFormat.java:38) 
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:624) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:744) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) 
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557) 
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) 
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException): /user/root/temp_042710/part-m-00000 for client 10.0.2.15 already exists 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2309) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2237) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2190) 
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:520) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354) 
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) 
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557) 
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) 

    at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1363) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
    at com.sun.proxy.$Proxy15.create(Unknown Source) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) 
    at com.sun.proxy.$Proxy15.create(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:258) 
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1600) 
    ... 22 more 

17/04/20 04:28:05 INFO mapreduce.Job: Task Id : attempt_1492661647325_0001_m_000000_2, Status : FAILED 
Error: org.apache.hadoop.fs.FileAlreadyExistsException: /user/root/temp_042710/part-m-00000 for client 10.0.2.15 already exists 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2309) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2237) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2190) 
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:520) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354) 
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) 
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557) 
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) 

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) 
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) 
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1604) 
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1465) 
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1390) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:394) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:390) 
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:390) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:334) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784) 
    at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:132) 
    at com.teradata.connector.hive.HiveTextFileOutputFormat.getRecordWriter(HiveTextFileOutputFormat.java:22) 
    at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.<init>(ConnectorOutputFormat.java:89) 
    at com.teradata.connector.common.ConnectorOutputFormat.getRecordWriter(ConnectorOutputFormat.java:38) 
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:624) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:744) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) 
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557) 
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) 
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.FileAlreadyExistsException): /user/root/temp_042710/part-m-00000 for client 10.0.2.15 already exists 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2309) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2237) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2190) 
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:520) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:354) 
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585) 
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557) 
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007) 

    at org.apache.hadoop.ipc.Client.call(Client.java:1410) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1363) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
    at com.sun.proxy.$Proxy15.create(Unknown Source) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) 
    at com.sun.proxy.$Proxy15.create(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:258) 
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1600) 
    ... 22 more 

17/04/20 04:28:13 INFO mapreduce.Job: map 100% reduce 0% 
17/04/20 04:28:14 INFO mapreduce.Job: Job job_1492661647325_0001 failed with state FAILED due to: Task failed task_1492661647325_0001_m_000000 
Job failed as tasks failed. failedMaps:1 failedReduces:0 

17/04/20 04:28:14 INFO mapreduce.Job: Counters: 12 
    Job Counters 
     Failed map tasks=4 
     Launched map tasks=4 
     Other local map tasks=3 
     Data-local map tasks=1 
     Total time spent by all maps in occupied slots (ms)=30868 
     Total time spent by all reduces in occupied slots (ms)=0 
     Total time spent by all map tasks (ms)=30868 
     Total vcore-seconds taken by all map tasks=30868 
     Total megabyte-seconds taken by all map tasks=7717000 
    Map-Reduce Framework 
     CPU time spent (ms)=0 
     Physical memory (bytes) snapshot=0 
     Virtual memory (bytes) snapshot=0 
17/04/20 04:28:14 WARN tool.ConnectorJobRunner: com.teradata.connector.common.exception.ConnectorException: The output post processor returns 1 
17/04/20 04:28:14 INFO processor.TeradataInputProcessor: input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor starts at: 1492687694783 
17/04/20 04:28:15 INFO processor.TeradataInputProcessor: input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor ends at: 1492687694783 
17/04/20 04:28:15 INFO processor.TeradataInputProcessor: the total elapsed time of input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor is: 0s 
17/04/20 04:28:15 INFO tool.ConnectorImportTool: ConnectorImportTool ends at 1492687695150 
17/04/20 04:28:15 INFO tool.ConnectorImportTool: ConnectorImportTool time is 78s 
17/04/20 04:28:15 INFO tool.ConnectorImportTool: job completed with exit code 1 

Hello. You need to provide more information to work with, at least something beyond the error message. – Andrew


I have added the error log. –

Answer


TDCH imports fail if you are using a lower version of Sqoop. Check the version compatibility.
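
Beyond Sqoop, the log itself points at two concrete problems. The first attempt dies with ClassNotFoundException: org.apache.hadoop.hive.serde2.SerDeException, which means the Hive jars were visible to the client but never shipped to the map tasks; with TDCH that is normally done by also passing them through -libjars, not only HADOOP_CLASSPATH. The later attempts then fail with FileAlreadyExistsException because the first attempt already created /user/root/temp_042710/part-m-00000, so the stale temp directory has to be removed before re-running. A minimal sketch of the usual setup, assuming an HDP sandbox layout (the exact jar names and paths are assumptions to adapt to your install):

  # Clean up the temp output left behind by the failed attempt
  hadoop fs -rm -r /user/root/temp_042710

  # Ship the Hive jars to the mappers, not just the client (paths are assumptions)
  export LIB_JARS=/usr/hdp/current/hive-client/lib/hive-serde.jar,/usr/hdp/current/hive-client/lib/hive-exec.jar,/usr/hdp/current/hive-client/lib/hive-metastore.jar
  export HADOOP_CLASSPATH=$(echo $LIB_JARS | tr ',' ':'):$HADOOP_CLASSPATH

  hadoop jar /usr/lib/tdch/1.4/lib/teradata-connector-1.4.4.jar \
    com.teradata.connector.common.tool.ConnectorImportTool \
    -libjars $LIB_JARS \
    -url jdbc:teradata://192.168.2.128/DATABASE=db_1 \
    -username DBC -password DBC \
    -jobtype hive -fileformat TEXTFILE -nummappers 1 \
    -sourcetable employee1 -targettable td_employee \
    -targettableschema "emp_id int, first_name string, last_name string"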