
An error when uploading files to Hadoop

Date: 2019-11-08 03:01  Source: Computer

[root@Hadoop1 www.linuxidc.com]# hadoop fs -put /home/hadoop/word.txt /tmp/wordcount/word5.txt produced the following error:

12/04/05 20:32:45 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/wordcount/word5.txt could only be replicated to 0 nodes, instead of 1

12/04/05 20:32:45 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
12/04/05 20:32:45 WARN hdfs.DFSClient: Could not get block locations. Source file "/tmp/wordcount/word5.txt" - Aborting...
put: java.io.IOException: File /tmp/wordcount/word5.txt could only be replicated to 0 nodes, instead of 1
12/04/05 20:32:45 ERROR hdfs.DFSClient: Exception closing file /tmp/wordcount/word5.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/wordcount/word5.txt could only be replicated to 0 nodes, instead of 1
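Before restarting anything, it can help to confirm that the namenode really sees zero live datanodes. A minimal diagnostic sketch, assuming the same Hadoop 0.21-era CLI used in the session above is on the PATH:

```shell
# Ask the namenode for a cluster report; with this error,
# "Datanodes available" will typically show 0.
hadoop dfsadmin -report

# List the Hadoop daemons running on this host (JDK tool);
# a missing DataNode entry confirms the diagnosis.
jps
```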

Solution:

This error occurs because no datanodes have joined the cluster. The daemons must be started in order: first the namenode, then the datanodes, and finally the jobtracker and tasktracker; starting them in that order avoids the problem. The workaround here is to start the daemons individually with hadoop-daemon.sh: # hadoop-daemon.sh start namenode, then # hadoop-daemon.sh start datanode.

1. Restart the namenode

# hadoop-daemon.sh start namenode

starting namenode, logging to /usr/hadoop-0.21.0/bin/../logs/hadoop-root-namenode-www.keli.com.out

2. Restart the datanode

# hadoop-daemon.sh start datanode

starting datanode, logging to /usr/hadoop-0.21.0/bin/../logs/hadoop-root-datanode-www.keli.com.out
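After both daemons are up, the fix can be verified by checking the cluster report again and retrying the upload. A sketch, reusing the same file and destination path as the failing command above:

```shell
# The report should now show at least one live datanode
# ("Datanodes available: 1" or more).
hadoop dfsadmin -report

# Retry the upload that originally failed, then list the
# destination directory to confirm the file arrived.
hadoop fs -put /home/hadoop/word.txt /tmp/wordcount/word5.txt
hadoop fs -ls /tmp/wordcount
```

If the report still shows zero datanodes, the datanode log referenced above (hadoop-root-datanode-*.out and the corresponding .log file) is the next place to look.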


