OS : CentOS 6.4 64-bit
Java : OpenJDK 1.7
Hadoop : 1.2.1
1. Installation Servers
- Name Node : 172.27.106.48 (name.odp.kt.com)
- Data Node : 172.27.233.144 (data01.odp.kt.com)
2. Install OpenJDK
- Run as : root
- Target servers : all servers (name node, data node)
yum -y install java-1.7.0-openjdk
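A quick way to confirm the JDK is in place before moving on:
java -version
rpm -qa | grep openjdk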
3. Add the Hadoop Account
- Run as : root
- Target servers : all servers (name node, data node)
groupadd hadoop
useradd -g hadoop hadoop
passwd hadoop
4. Configure Hosts
- Run as : root
- Target servers : all servers (name node, data node)
vi /etc/hosts (edit the hosts file)
== Add the following at the bottom
172.27.106.48 name.odp.kt.com
172.27.233.144 data01.odp.kt.com
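Each server should now resolve the other by name; a quick check:
ping -c 1 name.odp.kt.com
ping -c 1 data01.odp.kt.com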
5. Firewall Configuration
- Run as : root
- Target servers : all servers
service iptables stop
chkconfig iptables off
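Stopping iptables outright is fine for a closed test cluster. On a shared network, a sketch of opening just the ports this guide configures (9000, 9001, 50070) instead of disabling the firewall; Hadoop uses additional data-transfer ports as well, so extend the list to match your setup:
iptables -I INPUT -p tcp --dport 9000 -j ACCEPT    # fs.default.name (NameNode RPC)
iptables -I INPUT -p tcp --dport 9001 -j ACCEPT    # mapred.job.tracker (JobTracker RPC)
iptables -I INPUT -p tcp --dport 50070 -j ACCEPT   # dfs.http.address (NameNode web UI)
service iptables save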
[NameNode Server Configuration]
1. Create the Data Directories
- Run as : hadoop
- Target server : name.odp.kt.com
mkdir $HOME/data
mkdir $HOME/data/name
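The dfs.name.dir setting below also points at /home/hadoop/data/backup, so create that directory here as well:
mkdir $HOME/data/backup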
2. Configure SSH Access
- Run as : hadoop
- Target server : name.odp.kt.com
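start-all.sh connects to every node over SSH, so the hadoop account on the name node must be able to log in to itself and to data01.odp.kt.com without a password. A minimal sketch using OpenSSH defaults:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa           # generate a passwordless key pair
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # allow SSH to localhost
chmod 600 ~/.ssh/authorized_keys
ssh-copy-id hadoop@data01.odp.kt.com               # copy the public key to the data node
ssh data01.odp.kt.com date                         # verify: should run with no password prompt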
3. Install Hadoop
- Run as : hadoop
- Target server : name.odp.kt.com
- Hadoop Version : 1.1.2, 1.2.1
tar xvf hadoop-1.x.x.tar.gz
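If the tarball is not already on the server, the Apache archive still hosts the 1.x releases (URL is an assumption; any mirror works):
wget https://archive.apache.org/dist/hadoop/core/hadoop-1.2.1/hadoop-1.2.1.tar.gz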
4. Configure Hadoop
- Run as : hadoop
- Target server : name.odp.kt.com
vi conf/hadoop-env.sh
export HADOOP_HOME=/home/hadoop/hadoop-1.2.1
export HADOOP_HOME_WARN_SUPPRESS="TRUE"
# export JAVA_HOME=/usr/lib/jvm/jre-1.6.0-openjdk.x86_64
export JAVA_HOME=/usr/lib/jvm/jre-1.7.0-openjdk.x86_64
export HADOOP_OPTS=-server
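For convenience, the same variables can be exported in the hadoop account's shell profile so the bin/ scripts work from any directory (file choice is an assumption; ~/.bashrc works too):
vi ~/.bash_profile
export HADOOP_HOME=/home/hadoop/hadoop-1.2.1
export PATH=$PATH:$HADOOP_HOME/bin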
vi conf/core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://name.odp.kt.com:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop-${user.name}</value>
  </property>
</configuration>
vi conf/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/data/name,/home/hadoop/data/backup</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/data/node01,/home/hadoop/data/node02</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>30</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.support.broken.append</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.web.ugi</name>
    <value>hadoop,supergroup</value>
  </property>
  <property>
    <name>dfs.permissions.supergroup</name>
    <value>supergroup</value>
  </property>
  <property>
    <name>dfs.upgrade.permission</name>
    <value>0777</value>
  </property>
  <property>
    <name>dfs.umaskmode</name>
    <value>022</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>name.odp.kt.com:50070</value>
  </property>
</configuration>
vi conf/mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <!-- host:port only; no hdfs:// scheme -->
    <value>name.odp.kt.com:9001</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/home/hadoop/data/mapred/system</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/hadoop/data/mapred/local</value>
  </property>
</configuration>
vi conf/masters <= No changes needed. This file normally lists the secondary name node.
vi conf/slaves
== Add the line below
data01.odp.kt.com
Distribute the Hadoop installation directory
scp -r /home/hadoop/hadoop-1.2.1 data01.odp.kt.com:/home/hadoop/hadoop-1.2.1
Distribute the configuration
rsync -av /home/hadoop/hadoop-1.2.1/conf hadoop@data01.odp.kt.com:/home/hadoop/hadoop-1.2.1
Run Hadoop (the commands below are run from $HADOOP_HOME/bin)
== Format the NameNode
./hadoop namenode -format
== Start Hadoop
./start-all.sh
== Check the Hadoop console report
./hadoop dfsadmin -report
== Stop Hadoop
./stop-all.sh
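After ./start-all.sh, a quick sanity check with jps (shipped with the JDK) should show the expected daemons on each server, and the NameNode web UI runs at the dfs.http.address set above:
jps    # on name.odp.kt.com: NameNode, SecondaryNameNode, JobTracker
jps    # on data01.odp.kt.com: DataNode, TaskTracker
== Web UI: http://name.odp.kt.com:50070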
[DataNode Server Configuration]
- Run as : hadoop
- Target server : data01.odp.kt.com
mkdir $HOME/data
mkdir $HOME/data/node01
mkdir $HOME/data/node02
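With both servers configured and the cluster started, a small HDFS smoke test from $HADOOP_HOME/bin on the name node confirms end-to-end operation (paths are arbitrary examples):
./hadoop fs -mkdir /test
./hadoop fs -put /etc/hosts /test/hosts
./hadoop fs -ls /test
./hadoop fs -cat /test/hosts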