Monday, 15 August 2011

hadoop - Trying to write to HDFS on a remote machine using Faunus -

I am trying to pull data out of a Titan-Cassandra graph database and write it to a single Hadoop node using Faunus. The Hadoop node is running on a remote machine, so the machine on which Faunus runs acts as the source: data is streamed from it and written to the remote single-node Hadoop setup.

Inside titan-cassandra-input.properties, I specify that the output should be written to the remote HDFS by setting the output location:

faunus.output.location=hdfs://10.143.57.157:9000/tmp/foutput
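Independently of the Hadoop daemons, it is worth confirming from the Faunus machine that the remote NameNode RPC port is reachable at all before debugging the job itself. A minimal sketch, assuming Python is available on the source machine (the helper name is mine, and the host/port must match whatever `faunus.output.location` points at):

```python
import socket


def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Hypothetical check against the NameNode address from the question:
# port_reachable("10.143.57.157", 9000)
```

If this returns False, the problem is network-level (firewall, wrong IP, daemon not listening) rather than anything in the Faunus or Hadoop configuration files.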

I also changed the Hadoop configs:

core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://10.143.57.244:9000/</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>10.143.57.244:9001</value>
  </property>
</configuration>

I have also added the source IP to /etc/hosts:

10.143.57.244 hadoop2

But when I try to start Hadoop with ./start-all.sh, the NameNode does not start. The NameNode logs show the following error:

ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to master/10.143.57.244:9000 : Cannot assign requested address
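For context, this message is the OS-level symptom: a process can only bind a listening socket to an IP address configured on one of the machine's own interfaces, and asking it to bind an address the host does not own fails with exactly "Cannot assign requested address". A minimal sketch of that behavior (the helper function and the TEST-NET address 203.0.113.1 are illustrative, not from the question):

```python
import errno
import socket


def try_bind(addr: str, port: int = 0):
    """Bind a TCP socket to addr:port; return None on success, the errno on failure."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((addr, port))
        return None
    except OSError as e:
        return e.errno
    finally:
        s.close()


# Binding a local address succeeds, while binding an address this host does
# not own fails with EADDRNOTAVAIL ("Cannot assign requested address") -- the
# same condition the NameNode reports as java.net.BindException.
```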

I am not able to figure out why it is trying to bind to the source IP. Is it treating the source IP as a node in the Hadoop cluster?

I do not want to set up a cluster. I just want the Hadoop node to listen for connections from the source IP. How do I configure this? Please help.

hadoop faunus
