
Error uploading a file to HDFS via Java

1. Development environment: Windows, developing with IntelliJ IDEA;
2. A Hadoop HA cluster is deployed on cloud hosts;
3. HDFS has been reformatted (wiped clean first, then reformatted). Uploading files works fine when done directly on the server;
4. The error log is as follows:
2018-07-30 20:54:32.600 INFO 2548 --- [main] com.dooogo.project.MyTest : Started MyTest in 6.447 seconds (JVM running for 7.115)
2018-07-30 20:54:34.035 INFO 2548 --- [main] com.alibaba.druid.pool.DruidDataSource : {dataSource-1} inited
2018-07-30 20:54:34.061 INFO 2548 --- [main] o.s.t.c.transaction.TransactionContext : Began transaction (1) for test context [DefaultTestContext@ca263c2 testClass = MyTest, testInstance = com.dooogo.project.MyTest@705d914f, testMethod = upload@MyTest, testException = [null], mergedContextConfiguration = [WebMergedContextConfiguration@589b3632 testClass = MyTest, locations = '{}', classes = '{class com.dooogo.project.Application, class com.dooogo.project.Application}', contextInitializerClasses = '[]', activeProfiles = '{}', propertySourceLocations = '{}', propertySourceProperties = '{org.springframework.boot.test.context.SpringBootTestContextBootstrapper=true}', contextCustomizers = set[org.springframework.boot.test.context.SpringBootTestContextCustomizer@ed9d034, org.springframework.boot.test.context.filter.ExcludeFilterContextCustomizer@4eb7f003, org.springframework.boot.test.json.DuplicateJsonObjectContextCustomizerFactory$DuplicateJsonObjectContextCustomizer@1060b431, org.springframework.boot.test.mock.mockito.MockitoContextCustomizer@0, org.springframework.boot.test.autoconfigure.properties.PropertyMappingContextCustomizer@0, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverContextCustomizerFactory$Customizer@2a70a3d8], resourceBasePath = 'src/main/webapp', contextLoader = 'org.springframework.boot.test.context.SpringBootContextLoader', parent = [null]]]; transaction manager [org.springframework.jdbc.datasource.DataSourceTransactionManager@4d75c604]; rollback [true]
2018-07-30 20:54:55.636 INFO 2548 --- [Thread-7] org.apache.hadoop.hdfs.DFSClient : Exception in createBlockOutputStream

java.net.ConnectException: Connection timed out: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_91]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_91]
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) ~[hadoop-common-2.7.6.jar:na]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) ~[hadoop-common-2.7.6.jar:na]
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1717) ~[hadoop-hdfs-2.7.6.jar:na]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1447) [hadoop-hdfs-2.7.6.jar:na]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1400) [hadoop-hdfs-2.7.6.jar:na]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554) [hadoop-hdfs-2.7.6.jar:na]

2018-07-30 20:54:55.637 INFO 2548 --- [Thread-7] org.apache.hadoop.hdfs.DFSClient : Abandoning BP-1733756934-10.10.10.4-1532954856058:blk_1073741830_1006
2018-07-30 20:54:55.744 INFO 2548 --- [Thread-7] org.apache.hadoop.hdfs.DFSClient : Excluding datanode DatanodeInfoWithStorage[10.10.10.6:50010,DS-a56459dd-1f7f-4573-8d4c-015efb7851fd,DISK]
2018-07-30 20:55:16.817 INFO 2548 --- [Thread-7] org.apache.hadoop.hdfs.DFSClient : Exception in createBlockOutputStream

java.net.ConnectException: Connection timed out: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_91]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_91]
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) ~[hadoop-common-2.7.6.jar:na]
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) ~[hadoop-common-2.7.6.jar:na]
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1717) ~[hadoop-hdfs-2.7.6.jar:na]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1447) [hadoop-hdfs-2.7.6.jar:na]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1400) [hadoop-hdfs-2.7.6.jar:na]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554) [hadoop-hdfs-2.7.6.jar:na]

2018-07-30 20:55:16.817 INFO 2548 --- [Thread-7] org.apache.hadoop.hdfs.DFSClient : Abandoning BP-1733756934-10.10.10.4-1532954856058:blk_1073741831_1007
2018-07-30 20:55:16.923 INFO 2548 --- [Thread-7] org.apache.hadoop.hdfs.DFSClient : Excluding datanode DatanodeInfoWithStorage[10.10.10.7:50010,DS-321b4a95-bb76-48f8-8454-53a5f42d80ec,DISK]
2018-07-30 20:55:16.951 WARN 2548 --- [Thread-7] org.apache.hadoop.hdfs.DFSClient : DataStreamer Exception

org.apache.hadoop.ipc.RemoteException: File /test/ai_goods.sql could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1625)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3132)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3056)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:493)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)

at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.6.jar:na]
at org.apache.hadoop.ipc.Client.call(Client.java:1413) ~[hadoop-common-2.7.6.jar:na]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.6.jar:na]
at com.sun.proxy.$Proxy101.addBlock(Unknown Source) ~[na:na]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) ~[hadoop-hdfs-2.7.6.jar:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_91]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_91]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_91]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_91]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.6.jar:na]
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.6.jar:na]
at com.sun.proxy.$Proxy102.addBlock(Unknown Source) ~[na:na]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1603) ~[hadoop-hdfs-2.7.6.jar:na]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1388) ~[hadoop-hdfs-2.7.6.jar:na]
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554) ~[hadoop-hdfs-2.7.6.jar:na]

2018-07-30 20:55:17.096 INFO 2548 --- [main] o.s.t.c.transaction.TransactionContext : Rolled back transaction for test context [DefaultTestContext@ca263c2 testClass = MyTest, testInstance = com.dooogo.project.MyTest@705d914f, testMethod = upload@MyTest, testException = org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /test/ai_goods.sql could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1625)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3132)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3056)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:493)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1758)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
, mergedContextConfiguration = [WebMergedContextConfiguration@589b3632 testClass = MyTest, locations = '{}', classes = '{class com.dooogo.project.Application, class com.dooogo.project.Application}', contextInitializerClasses = '[]', activeProfiles = '{}', propertySourceLocations = '{}', propertySourceProperties = '{org.springframework.boot.test.context.SpringBootTestContextBootstrapper=true}', contextCustomizers = set[org.springframework.boot.test.context.SpringBootTestContextCustomizer@ed9d034, org.springframework.boot.test.context.filter.ExcludeFilterContextCustomizer@4eb7f003, org.springframework.boot.test.json.DuplicateJsonObjectContextCustomizerFactory$DuplicateJsonObjectContextCustomizer@1060b431, org.springframework.boot.test.mock.mockito.MockitoContextCustomizer@0, org.springframework.boot.test.autoconfigure.properties.PropertyMappingContextCustomizer@0, org.springframework.boot.test.autoconfigure.web.servlet.WebDriverContextCustomizerFactory$Customizer@2a70a3d8], resourceBasePath = 'src/main/webapp', contextLoader = 'org.springframework.boot.test.context.SpringBootContextLoader', parent = [null]]].
2018-07-30 20:55:17.102 INFO 2548 --- [Thread-4] o.s.w.c.s.GenericWebApplicationContext : Closing org.springframework.web.context.support.GenericWebApplicationContext@4738a206: startup date [Mon Jul 30 20:54:26 CST 2018]; root of context hierarchy
2018-07-30 20:55:17.112 INFO 2548 --- [Thread-4] com.alibaba.druid.pool.DruidDataSource : {dataSource-1} closed
2018-07-30 20:55:17.113 INFO 2548 --- [Thread-4] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'getAsyncExecutor'
2018-07-30 21:15:09.269 WARN 8248 --- [main] ory$DuplicateJsonObjectContextCustomizer :
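
For reference, the failing upload test presumably looks something like the sketch below, using the standard Hadoop FileSystem API. This is a hypothetical reconstruction: the HA nameservice URI hdfs://mycluster, the local source path, and the user name hadoop are assumptions, since the post does not include the test code; only the HDFS target path /test/ai_goods.sql appears in the log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.net.URI;

public class HdfsUploadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical HA nameservice URI; replace with the cluster's actual fs.defaultFS.
        // The third argument runs the client as a given HDFS user instead of the local Windows user.
        FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf, "hadoop");
        // This copy is what drives the DataStreamer / createBlockOutputStream path in the log above.
        fs.copyFromLocalFile(new Path("D:/data/ai_goods.sql"), new Path("/test/ai_goods.sql"));
        fs.close();
    }
}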

hbase小能手 2018-11-07 16:11:53
2 Answers
  • java · data analysis · data visualization · big data

    The relevant error: 2018-07-30 20:55:16.817 INFO 2548 --- [Thread-7] org.apache.hadoop.hdfs.DFSClient : Exception in createBlockOutputStream

    java.net.ConnectException: Connection timed out: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

    What this shows is that the client could not connect: the connection timed out, so creating the block output stream failed.

    2019-07-17 23:12:42
  • Community Admin

    From the log above, it looks like your local environment cannot connect to the cloud Hadoop cluster. I suggest you first test from your local machine whether you can reach Hadoop's port 8020 (note that ping cannot test a specific port, so use something like telnet instead).
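
    As a concrete version of that check, a plain socket probe like the sketch below can be run from the development machine. The DataNode addresses 10.10.10.6:50010 and 10.10.10.7:50010 are taken from the log; 10.10.10.4:8020 for the NameNode is inferred from the block pool ID and the default Hadoop 2.x RPC port, so treat it as an assumption.

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class PortProbe {
        public static void main(String[] args) {
            String[][] targets = {
                {"10.10.10.4", "8020"},   // NameNode RPC (address inferred from BP-...-10.10.10.4-... in the log)
                {"10.10.10.6", "50010"},  // DataNode transfer port, excluded in the log
                {"10.10.10.7", "50010"}   // DataNode transfer port, excluded in the log
            };
            for (String[] t : targets) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(t[0], Integer.parseInt(t[1])), 5000); // 5s timeout
                    System.out.println(t[0] + ":" + t[1] + " reachable");
                } catch (Exception e) {
                    System.out.println(t[0] + ":" + t[1] + " NOT reachable: " + e.getMessage());
                }
            }
        }
    }

    If the NameNode port is reachable but the 50010 connections time out, that matches the log above: block allocation on the NameNode succeeds, but the client cannot open a stream to either DataNode, which is typical when the cluster's internal 10.10.10.x addresses are not routable from outside the cloud network.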

    2019-07-17 23:12:42