
I can't connect to Hadoop port 9000

By Daniel Rodriguez

So telnet itself is working: telnet localhost 25 connects. But telnet localhost and telnet localhost 9000 both give this result:

Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused

nmap results:

$ nmap localhost
Starting Nmap 6.00 ( ) at 2013-10-03 00:54 MSK
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00030s latency).
rDNS record for 127.0.0.1: localhost.localdomain
Not shown: 992 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
80/tcp open http
587/tcp open submission
631/tcp open ipp
3306/tcp open mysql
5432/tcp open postgresql
6566/tcp open sane-port

nmap on 9000 port:

$ nmap -p 9000 localhost
Starting Nmap 6.00 ( ) at 2013-10-03 00:55 MSK
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000040s latency).
rDNS record for 127.0.0.1: localhost.localdomain
PORT STATE SERVICE
9000/tcp closed cslistener

So the question is: how do I open the necessary port? I'm using Ubuntu 13.04; I tried disabling ufw and played with iptables, but nothing helped. I don't know what else to try.

I need port 9000 for Hadoop; I can't access the filesystem without port 9000 being open.


3 Answers

The reason for the connection refused error is simple: port 9000 is not open, because nothing is listening on it.

Use the command lsof -i :9000 to see which application is using the port. If the output is empty (exit status 1), the port is not open.
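For instance, the exit status alone is enough to drive a scripted check (a minimal sketch; assumes lsof is installed and on the PATH):

```shell
# lsof exits 0 if something is listening on the port, non-zero otherwise,
# so the if branches directly on whether the port is in use.
if lsof -i :9000 >/dev/null 2>&1; then
    echo "port 9000 is in use"
else
    echo "port 9000 is free"
fi
```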

You can even test further with netcat.

Listen on port 9000 in terminal session 1:

nc -l -p 9000

In another session, connect to it

$ lsof -i :9000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nc 12679 terry 3u IPv4 7518676 0t0 TCP *:9000 (LISTEN)
$ nc -vz localhost 9000
Connection to localhost 9000 port [tcp/*] succeeded!
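If nc isn't available, bash's built-in /dev/tcp pseudo-device can do the same probe (a sketch; this is bash-specific, not POSIX sh):

```shell
# bash resolves /dev/tcp/HOST/PORT internally; the redirection fails with a
# non-zero status when the TCP connection is refused, so the if branches on it.
if (exec 3<>/dev/tcp/localhost/9000) 2>/dev/null; then
    echo "port 9000 accepts connections"
else
    echo "port 9000 refused the connection"
fi
```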

So you need to fix your Hadoop settings and make sure all the necessary daemons/services are started properly before you can connect to HDFS.
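A hypothetical check sequence for that (the install paths and $HADOOP_HOME location are assumptions; adjust to your layout):

```shell
# Confirm the namenode address in core-site.xml (fs.defaultFS, or the older
# fs.default.name key) actually points at hdfs://localhost:9000.
grep -A1 'fs.default' "$HADOOP_HOME/etc/hadoop/core-site.xml"

# Start HDFS and confirm the daemons are actually up.
"$HADOOP_HOME/sbin/start-dfs.sh"   # starts NameNode, DataNode, SecondaryNameNode
jps                                # NameNode should appear in this list
lsof -i :9000                      # the namenode should now be listening
```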

NOTE: This is not a telnet or iptables issue; it's TCP/IP basics. Please retitle the question to something like "Cannot connect to port 9000".

Update

You wrote: "i need 9000 port for hadoop; I can't access fs without opened 9000 port". Based on that context, my understanding is that HDFS's namenode is supposed to listen on port 9000. So check your Hadoop/HDFS configuration files and get the services started.

iptables is irrelevant at this stage because the port is not in use at all.

You can run iptables -L -vn to make sure no rules are in effect. To flush the filter table's INPUT chain and accept everything:

sudo iptables -P INPUT ACCEPT && sudo iptables -F -t filter


If it helps anyone, I solved my similar problem by formatting the namenode again:

hdfs namenode -format
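A hedged sketch of the full sequence that worked for me (paths assume $HADOOP_HOME points at your install; note that formatting the namenode erases HDFS metadata, so only do this on a fresh or disposable cluster):

```shell
# WARNING: formatting the namenode wipes existing HDFS metadata.
"$HADOOP_HOME/sbin/stop-dfs.sh"    # stop any half-started daemons first
hdfs namenode -format              # re-initialize the namenode storage directory
"$HADOOP_HOME/sbin/start-dfs.sh"   # start NameNode/DataNode again
nc -vz localhost 9000              # the connection should now succeed
```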

I had this same problem. At first it worked fine with 'hdfs namenode -format', but only once. It turned out I had been using '/usr/hadoop-2.9.0/sbin/start-dfs.sh' (and stop-dfs.sh) to start Hadoop. Only when I switched to '/usr/hadoop-2.9.0/sbin/start-all.sh' (after having run 'hdfs namenode -format') did Hadoop begin to use port 9000. "Stranger things!"
