Even when a fierce wind rises,
never give up on life.

Hadoop single-node cluster configuration



====================  Hadoop cluster setup  ====================

IP: 192.168.75.7    Netmask: 255.255.255.0    Gateway: 192.168.75.2    MAC: 00:50:56:27:0C:F1

---------------------- Basic VM configuration -------------------
1. Edit the virtual hardware and set up a shared folder.
2. Add the hosts header (below).

---------------------- hosts header -----------------------
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
----------------------------------------------------------

-------------------- VMware Tools installation (as root) -----------------
1. VM menu > Install VMware Tools; open the VMware Tools tar package and extract it to /tmp, then run:
/tmp/vmware-tools-distrib/vmware-install.pl
reboot

-------------------- System eth0 network configuration (as root) --------------------
1. Set the adapter to bridged mode and note the physical (MAC) address:
vi /etc/sysconfig/network
2. In the network panel, delete all connection profiles so only System eth0 remains:
rm -rf /etc/udev/rules.d/70-persistent-net.rules
cp /mnt/hgfs/setup/hosts /etc/hosts
reboot
vi /etc/udev/rules.d/70-persistent-net.rules
vi /etc/sysconfig/network-scripts/ifcfg-eth0    # write the MAC address in uppercase

------------------------- ifcfg-eth0 ---------------------
DEVICE="eth0"
BOOTPROTO=none
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR=192.168.1.120
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME="System eth0"
HWADDR=00:50:56:2A:C2:8D
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
--------------------------------------------------------
service iptables stop
chkconfig iptables off
service network restart

-------------------- JDK installation (as root) -------------------
cp /mnt/hgfs/setup/jdk-8u211-linux-x64.rpm /opt/
rpm -ivh /mnt/hgfs/setup/jdk-8u211-linux-x64.rpm
which java
ll /usr/java/jdk1.8.0_161/bin/java    # Java path: /usr/java/jdk1.8.0_161 (adjust to the version the rpm actually installed)
vi /etc/profile

---------------------- /etc/profile ---------------
export JAVA_HOME=/usr/java/jdk1.8.0_161
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
-----------------------------------------------
*********************** Uninstalling an old JDK ***********************
rpm -qa | grep jdk              # shows e.g. jdk-1.6.0_22-fcs
rpm -e --nodeps jdk-1.6.0_22-fcs
***************************************************************
source /etc/profile
java -version

===============  Hadoop installation (as root first, then hand ownership to the hadoop user)  =========
cp /mnt/hgfs/setup/hadoop-2.7.6.tar.gz /opt/
tar -zxvf /opt/hadoop-2.7.6.tar.gz -C /opt/
vi /etc/profile

---------------------- /etc/profile ---------------
export HADOOP_DEV_HOME=/opt/hadoop-2.7.6
export PATH=$PATH:$HADOOP_DEV_HOME/bin
export PATH=$PATH:$HADOOP_DEV_HOME/sbin
export HADOOP_MAPARED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
-----------------------------------------------
source /etc/profile
vi /opt/hadoop-2.7.6/etc/hadoop/hadoop-env.sh

---------------------- hadoop-env.sh ---------------
export JAVA_HOME=/usr/java/jdk1.8.0_161
------------------------------------------------------------
vi /opt/hadoop-2.7.6/etc/hadoop/core-site.xml

-------- core-site.xml -------- (first create the directory /opt/hadoop-2.7.6/data/tmp) ---
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://cMater:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-2.7.6/data/tmp</value>
  </property>
</configuration>
-----------------------------------------------------------------------------
vi /opt/hadoop-2.7.6/etc/hadoop/hdfs-site.xml

------ hdfs-site.xml ------ (first create the directories /opt/hadoop-2.7.6/data/cMater, data/namenode and data/checkpoint)
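The data directories referenced by core-site.xml and hdfs-site.xml must exist before formatting HDFS. As a compact equivalent of the four `mkdir -p` commands in these notes, the sketch below creates them in a loop; `HADOOP_BASE` is a made-up variable for this note (the real base here is /opt/hadoop-2.7.6), and it falls back to a scratch directory so the sketch can be rehearsed without root:

```shell
#!/bin/sh
# Create the Hadoop data directories used by core-site.xml / hdfs-site.xml.
# In these notes the base is /opt/hadoop-2.7.6; a temp dir is used as a
# fallback so this runs without root.
base=${HADOOP_BASE:-$(mktemp -d)}
for d in tmp cMater namenode checkpoint; do
    mkdir -p "$base/data/$d"
done
ls "$base/data"
```

Set `HADOOP_BASE=/opt/hadoop-2.7.6` (as root, or after the `chown` step below) to create the real directories.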
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/hadoop-2.7.6/data/cMater</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/hadoop-2.7.6/data/namenode</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/opt/hadoop-2.7.6/data/checkpoint</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
------------------------------------------------------------
mkdir -p /opt/hadoop-2.7.6/data/tmp
mkdir -p /opt/hadoop-2.7.6/data/cMater
mkdir -p /opt/hadoop-2.7.6/data/namenode
mkdir -p /opt/hadoop-2.7.6/data/checkpoint

vi /opt/hadoop-2.7.6/etc/hadoop/yarn-site.xml

---------------------- yarn-site.xml ---------------
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>cMater</value>
  </property>
</configuration>
------------------------------------------------------------
cp /opt/hadoop-2.7.6/etc/hadoop/mapred-site.xml.template /opt/hadoop-2.7.6/etc/hadoop/mapred-site.xml
vi /opt/hadoop-2.7.6/etc/hadoop/mapred-site.xml

---------------------- mapred-site.xml ---------------
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.staging.root.dir</name>
    <value>/user</value>
  </property>
</configuration>
------------------------------------------------------------

======================== Users and permissions ======================
useradd hadoop
passwd hadoop
chmod u+w /etc/sudoers
vi /etc/sudoers
------------ /etc/sudoers ------------
Below the line "root    ALL=(ALL)       ALL", add:
hadoop  ALL=(ALL)       ALL
----------------------------------------------------------------------
chmod u-w /etc/sudoers
chown -R hadoop:hadoop /opt/hadoop-2.7.6

============ Starting Hadoop (switch to the hadoop user) =============
su hadoop
****** Format HDFS ******
hdfs namenode -format
****** Reformatting a single node ******
Before reformatting, clear the caches with rm -rf under /opt/hadoop-2.7.6/data/tmp, data/cMater, data/namenode and data/checkpoint.
****** If single-node startup fails ******
Check the syntax of core-site.xml, hdfs-site.xml, yarn-site.xml and mapred-site.xml.
(Running "source /etc/hosts" fails — /etc/hosts is not a shell script.)

-------------------- Single-node start/stop --------------------
---- namenode ----
# hadoop-daemon.sh start secondarynamenode
hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
yarn-daemon.sh start resourcemanager
yarn-daemon.sh start nodemanager
mr-jobhistory-daemon.sh start historyserver
yarn-daemon.sh start proxyserver

# hadoop-daemon.sh stop secondarynamenode
hadoop-daemon.sh stop namenode
hadoop-daemon.sh stop datanode
yarn-daemon.sh stop resourcemanager
yarn-daemon.sh stop nodemanager
mr-jobhistory-daemon.sh stop historyserver
yarn-daemon.sh stop proxyserver
----------------
/opt/hadoop-2.7.6/sbin/hadoop-daemon.sh start namenode

---- datanode ---- (if Hadoop reports a class not found, check that "nodemanager" is spelled correctly)
hadoop-daemon.sh start datanode
yarn-daemon.sh start nodemanager
hadoop-daemon.sh stop datanode
yarn-daemon.sh stop nodemanager

------------------- Ports -----------------
HDFS           50070    HTTP (web UI) port
YARN           8088     HTTP (web UI) port
proxyserver    WebAppProxyServer
history        JobHistoryServer
===========================================

-------------------- HDFS permissions ---------------------
useradd 20160216048                              # as root
passwd 20160216048                               # as root
sudo usermod -a -G hadoop 20160216048            # as root: add 20160216048 to the hadoop group
hadoop fs -put /opt/hadoop-2.7.6.tar.gz /root/   # as hadoop; root cannot upload
hadoop fs -chown -R 20160216048:hadoop /root/    # as hadoop
hadoop dfs -chmod -R 755 /abc                    # as hadoop
hadoop dfsadmin -safemode leave                  # leave HDFS safe mode
hadoop dfs -chmod -R 777 /abc                    # as hadoop: open permissions so users outside the group can operate

__________________ Permission weights __________________
drwxr-xr-x       -rw-r--r--
d: directory     r: 4     w: 2     x: 1
Note: hadoop fs -rm -r /tmp/ and hadoop fs -rm -r /tmp behave identically.
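The r:4 / w:2 / x:1 weights above are all you need to convert a symbolic mode such as rwxr-xr-x into its octal form (755). As a runnable illustration, the POSIX-sh sketch below does the conversion mechanically; `sym2oct` is a made-up helper for this note, not a standard tool:

```shell
#!/bin/sh
# sym2oct PERMS: convert a 9-character symbolic mode (without the leading
# file-type character) into octal, using the weights r=4, w=2, x=1.
sym2oct() {
    perms=$1    # e.g. rwxr-xr-x
    out=''
    for start in 1 4 7; do                              # user, group, other
        triple=$(echo "$perms" | cut -c${start}-$((start + 2)))
        n=0
        case $triple in r??) n=$((n + 4));; esac        # read
        case $triple in ?w?) n=$((n + 2));; esac        # write
        case $triple in ??x) n=$((n + 1));; esac        # execute
        out="$out$n"
    done
    echo "$out"
}

sym2oct rwxr-xr-x    # drwxr-xr-x  -> 755
sym2oct rw-r--r--    # -rw-r--r--  -> 644
```

This is why the `chmod -R 755 /abc` above grants full access to the owner and read/execute to everyone else.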
==========================
-------------------  Using HDFS  ----------------------

------------------ Killing a running Hadoop job ---------------
List the jobs Hadoop is currently running:    hadoop job -list
Kill a job, e.g. job_201212111628_11166:
[hadoop@192.168.10.11 bin]$ ./hadoop job -kill job_201212111628_11166

----------------------- Custom Linux commands ----------------------
vi /root/.bashrc
alias <your-command>='<original command>'    # e.g.: alias hf='hadoop fs'

----------------------- Copying files back out of HDFS ----------------------
hadoop fs -get /user/root/ds_out /mnt/hgfs/setup/data/

***************** Linux extras: passwordless SSH login *****************
Method 1 (run once in each direction between master and slaves):
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub cSlave01
Method 2 (send to all sessions):
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id node11/node12/node13    # run once per node

------------------------------ Disabling IPv6 -----------------------
## Disable IPv6
echo " " >> /etc/modprobe.d/dist.conf
echo "alias net-pf-10 off" >> /etc/modprobe.d/dist.conf
echo "alias ipv6 off" >> /etc/modprobe.d/dist.conf
## Check that the lines were appended
tail /etc/modprobe.d/dist.conf

--------------------- Open-file and max-process limits ---------------
## Set the user limits
vim /etc/security/limits.conf
## Append at the end:
* soft nofile 32768
* hard nofile 1048576
* soft nproc 65536
* hard nproc unlimited
* soft memlock unlimited
* hard memlock unlimited

--------------------------- Timezone -------------------------
Standardize on China Standard Time, UTC+8 (+0800):
# Check the current timezone
date -R; cat /etc/sysconfig/clock
If it is not Beijing time, set the timezone as follows:
# Set UTC+8 as the current timezone
rm -rf /etc/localtime
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Sync the clock over the network
ntpdate -u cn.pool.ntp.org
vim /etc/sysconfig/clock
ZONE="Asia/Shanghai"
Check again:
date -R; cat /etc/sysconfig/clock

---------------------------- System language -------------------------
------ Switch to English ------
# Check the current system language
echo $LANG
# Change the system language
vim /etc/sysconfig/i18n
LANG="en_US.UTF-8"
------ Switch to Chinese ------
# Check the current system language
echo $LANG
# Change the system language
vim /etc/sysconfig/i18n
LANG="zh_CN.UTF-8"
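The disable-IPv6 step above uses a common append-then-verify pattern (echo with `>>`, then `tail` to confirm). The sketch below rehearses exactly that pattern against a scratch file, so it can be run without root; the real target in these notes is /etc/modprobe.d/dist.conf:

```shell
#!/bin/sh
# Append-then-verify, as used for disabling IPv6 above, but against a
# temp file instead of /etc/modprobe.d/dist.conf so no root is needed.
conf=$(mktemp)
echo " "                   >> "$conf"
echo "alias net-pf-10 off" >> "$conf"
echo "alias ipv6 off"      >> "$conf"
tail "$conf"                          # verify the appended lines
count=$(grep -c '^alias' "$conf")     # number of alias lines added
echo "$count"                         # -> 2
```

Swap `"$conf"` for /etc/modprobe.d/dist.conf (as root) to apply it for real, then reboot for the module aliases to take effect.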

Reposted from: https://www.cnblogs.com/Raodi/p/11053149.html

The original author is an interesting person; if anything here infringes your rights, please ask for it to be taken down.
