ERROR when starting Hadoop


I am installing Hadoop, following Michael Noll's tutorial.

hduser@ARUL-PC:/usr/local/hadoop$ sbin/start-all.sh

I got the following output:

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
14/05/03 12:36:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [OpenJDK 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
localhost]
sed: -e expression #1, char 6: unknown option to 's'
-c: Unknown cipher type 'cd'
64-Bit: ssh: Could not resolve hostname 64-bit: Name or service not known
Server: ssh: Could not resolve hostname server: Name or service not known
recommended: ssh: Could not resolve hostname recommended: Name or service not known
hduser@localhost's password: link: ssh: Could not resolve hostname link: No address associated with hostname
OpenJDK: ssh: Could not resolve hostname openjdk: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
now.: ssh: Could not resolve hostname now.: Name or service not known
VM: ssh: Could not resolve hostname vm: Name or service not known
guard: ssh: Could not resolve hostname guard: Name or service not known
loaded: ssh: Could not resolve hostname loaded: Name or service not known
that: ssh: Could not resolve hostname that: Name or service not known
The: ssh: Could not resolve hostname the: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
or: ssh: Could not resolve hostname or: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
<libfile>',: ssh: Could not resolve hostname <libfile>',: Name or service not known
'-z: ssh: Could not resolve hostname '-z: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
to: ssh: Could not resolve hostname to: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
will: ssh: Could not resolve hostname will: Name or service not known
it: ssh: Could not resolve hostname it: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
which: ssh: Could not resolve hostname which: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
disabled: ssh: Could not resolve hostname disabled: Name or service not known
might: ssh: Could not resolve hostname might: Name or service not known
VM: ssh: Could not resolve hostname vm: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
You: ssh: Could not resolve hostname you: Name or service not known
you: ssh: Could not resolve hostname you: Name or service not known
It's: ssh: Could not resolve hostname it's: Name or service not known
highly: ssh: Could not resolve hostname highly: Name or service not known
'execstack: ssh: Could not resolve hostname 'execstack: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
try: ssh: Could not resolve hostname try: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
warning:: ssh: Could not resolve hostname warning:: Name or service not known

Please tell me where I went wrong...

When I check the configuration folder, I see the following:

root@ARUL-PC:/usr/local/hadoop/etc/hadoop# ls
capacity-scheduler.xml  hadoop-metrics2.properties  httpfs-site.xml             ssl-client.xml.example
configuration.xsl       hadoop-metrics.properties   log4j.properties            ssl-server.xml.example
container-executor.cfg  hadoop-policy.xml           mapred-env.cmd              yarn-env.cmd
core-site.xml           hdfs-site.xml               mapred-env.sh               yarn-env.sh
core-site.xml~          hdfs-site.xml~              mapred-queues.xml.template  yarn-site.xml
hadoop-env.cmd          httpfs-env.sh               mapred-site.xml.template
hadoop-env.sh           httpfs-log4j.properties     mapred-site.xml.template~
hadoop-env.sh~          httpfs-signature.secret     slaves

I added lines to hadoop-env.sh and core-site.xml as the tutorial describes. For those files, extra files were created automatically with the same names but ending in '~'. Is that normal, or is it a problem?
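For comparison, the kind of lines that tutorial has you add look roughly like this. This is only a sketch: the JAVA_HOME path and the HDFS port are assumptions for a typical Ubuntu/OpenJDK setup, so match them to your own machine.

# hadoop-env.sh -- point Hadoop at your JVM (this path is an assumption; adjust it)
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

<!-- core-site.xml: the default filesystem URI (the port here is an assumption) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>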

I open those files with 'gedit', like this:

root@ARUL-PC:/usr/local/hadoop/etc/hadoop# gedit hadoop-env.sh~

How can I solve this problem?

by A J 03.05.2014 / 09:26

2 answers


Regarding the files ending in ~: gedit creates a backup copy of a file when you save it, named after the file with a trailing ~. If you don't want this behavior, you can turn it off under Preferences -> Editor -> Create a backup copy of files before saving.
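The name~ files are therefore harmless copies; Hadoop only reads the original file names, so it ignores them. If you want to clear them out, a one-liner like this works (check that the glob matches only the backups before you run it):

rm /usr/local/hadoop/etc/hadoop/*~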

by Buri 03.05.2014 / 11:19

The tutorial was written for Hadoop 1.x and your environment is set up with Hadoop 2.x... the 1.x JobTracker/TaskTracker model is different in 2.x: the JobTracker was split into the ResourceManager and the ApplicationMaster, and each data node now runs a NodeManager... I'm not sure whether the 1.x TaskTracker's role is part of the 2.x NodeManager... an up-to-date Hadoop 2.x installation tutorial would help (I used 2.5.0); this one helped me: link. YARN is the 2.x addition that replaced the JobTracker, etc.
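As a quick sketch, starting 2.x the recommended way (per the deprecation message in the question's output) looks like this; the path assumes the /usr/local/hadoop install from the question:

cd /usr/local/hadoop
sbin/start-dfs.sh    # starts the HDFS daemons: NameNode, DataNode, SecondaryNameNode
sbin/start-yarn.sh   # starts the YARN daemons: ResourceManager and NodeManager
jps                  # lists running Java processes so you can verify the daemons came up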

by bern 04.09.2014 / 04:46