3

I have an AWS Lambda application that reads a CSV file. I am getting a SocketTimeoutException while reading the file through a BufferedReader in AWS Lambda. The file is retrieved from S3 like this:

S3Object s3Object = S3Client.getObject(new GetObjectRequest(SrcBucketName,SrcKey)); 

Reading is done line by line using:

final InputStreamReader isr = new InputStreamReader(s3Object.getObjectContent()); 
final BufferedReader br = new BufferedReader(isr); 
try { 
    aList = br.lines().skip(1) 
        .map(processLines) 
        .collect(toList()); 
} catch (Exception e) { 

} finally { 
    br.close(); 
    isr.close(); 
} 

AWS Lambda intermittently throws this socket timeout while reading the same file. The full stack trace is:

Caused by: java.net.SocketTimeoutException: Read timed out 
     at java.net.SocketInputStream.socketRead0(Native Method) 
     at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) 
     at java.net.SocketInputStream.read(SocketInputStream.java:171) 
     at java.net.SocketInputStream.read(SocketInputStream.java:141) 
     at sun.security.ssl.InputRecord.readFully(InputRecord.java:465) 
     at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593) 
     at sun.security.ssl.InputRecord.read(InputRecord.java:532) 
     at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983) 
     at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:940) 
     at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) 
     at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139) 
     at org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:200) 
     at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178) 
     at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137) 
     at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:72) 
     at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) 
     at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:72) 
     at com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:115) 
     at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:72) 
     at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:72) 
     at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:72) 
     at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) 
     at java.security.DigestInputStream.read(DigestInputStream.java:161) 
     at com.amazonaws.services.s3.internal.DigestValidationInputStream.read(DigestValidationInputStream.java:59) 
     at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:72) 
     at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) 
     at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) 
     at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) 
     at java.io.InputStreamReader.read(InputStreamReader.java:184) 
     at java.io.BufferedReader.fill(BufferedReader.java:161) 
     at java.io.BufferedReader.readLine(BufferedReader.java:324) 
     at java.io.BufferedReader.readLine(BufferedReader.java:389) 

The file is only 61 KB. Is this a problem with the SDK? Can someone tell me what is going wrong here?

Thanks in advance.

Answers

0

I tried the following, but I am still facing the same issue:

S3Object s3Object = amazonS3Client.getObject(new GetObjectRequest(bucketName, filePath)); 
ArrayList<String> fileContent = new ArrayList<String>(); 

InputStream is = s3Object.getObjectContent(); 
BufferedReader reader = new BufferedReader(new InputStreamReader(is)); 
try { 
    String line; 
    while ((line = reader.readLine()) != null) { 
        fileContent.add(line); 
    } 
} catch (IOException e) { 
    e.printStackTrace(); 
} finally { 
    reader.close(); 
    is.close(); 
} 
+0

You could also implement a retry mechanism. –
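
As a rough illustration of that suggestion, here is a minimal sketch of a retry loop around the S3 read, assuming the AWS SDK for Java v1 (AmazonS3, GetObjectRequest); the class name S3RetryReader, the method readLines, and the MAX_ATTEMPTS constant are hypothetical names chosen for the example, not part of the original code:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.SocketTimeoutException;
import java.util.ArrayList;
import java.util.List;

public class S3RetryReader {

    // Hypothetical retry limit; tune it to the Lambda timeout budget.
    private static final int MAX_ATTEMPTS = 3;

    private final AmazonS3 s3;

    public S3RetryReader(AmazonS3 s3) {
        this.s3 = s3;
    }

    // Reads the object line by line, restarting the whole request if the read times out.
    public List<String> readLines(String bucketName, String key) throws IOException {
        SocketTimeoutException last = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try (S3Object s3Object = s3.getObject(new GetObjectRequest(bucketName, key));
                 BufferedReader reader = new BufferedReader(
                         new InputStreamReader(s3Object.getObjectContent()))) {
                List<String> lines = new ArrayList<>();
                String line;
                while ((line = reader.readLine()) != null) {
                    lines.add(line);
                }
                return lines;
            } catch (SocketTimeoutException e) {
                // The stream timed out mid-read; try again with a fresh request.
                last = e;
            }
        }
        // All attempts timed out; surface the last timeout to the caller.
        throw last;
    }
}

The point of retrying the whole getObject call, rather than resuming the old stream, is that a timed-out connection cannot be reused; each attempt opens a fresh request and reads it to completion inside try-with-resources so the stream is always closed.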