Hadoop HA Cluster Setup (HDFS NameNode HA + YARN ResourceManager HA with ZooKeeper)

2023-01-07   ES  

0) Environment preparation

(1) Set the IP address

(2) Configure the hostname and the hostname-to-IP-address mappings

(3) Turn off the firewall

(4) Configure passwordless SSH login

(5) Install the JDK, configure environment variables, etc.

(6) Configure the ZooKeeper cluster
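
For step (6), a minimal zoo.cfg fragment for the three-node ensemble planned below might look like this (the dataDir path and ports are common defaults, not taken from the post; each node's dataDir must also contain a myid file matching its server number):

```
# zoo.cfg (fragment)
dataDir=/opt/module/zookeeper/zkData
clientPort=2181
server.2=hadoop102:2888:3888
server.3=hadoop103:2888:3888
server.4=hadoop104:2888:3888
```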

1) Cluster plan

    hadoop102          hadoop103          hadoop104
    NameNode           NameNode
    JournalNode        JournalNode        JournalNode
    DataNode           DataNode           DataNode
    ZK                 ZK                 ZK
    ResourceManager    ResourceManager
    NodeManager        NodeManager        NodeManager

2) Specific configuration







(1) Configure yarn-site.xml:

        <configuration>

            <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
            </property>

            <!-- Enable ResourceManager HA -->
            <property>
                <name>yarn.resourcemanager.ha.enabled</name>
                <value>true</value>
            </property>

            <!-- Declare the ids and addresses of the two ResourceManagers -->
            <property>
                <name>yarn.resourcemanager.cluster-id</name>
                <value>cluster-yarn1</value>
            </property>

            <property>
                <name>yarn.resourcemanager.ha.rm-ids</name>
                <value>rm1,rm2</value>
            </property>

            <property>
                <name>yarn.resourcemanager.hostname.rm1</name>
                <value>hadoop102</value>
            </property>

            <property>
                <name>yarn.resourcemanager.hostname.rm2</name>
                <value>hadoop103</value>
            </property>

            <!-- Specify the address of the ZooKeeper cluster -->
            <property>
                <name>yarn.resourcemanager.zk-address</name>
                <value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
            </property>

            <!-- Enable automatic recovery -->
            <property>
                <name>yarn.resourcemanager.recovery.enabled</name>
                <value>true</value>
            </property>

            <!-- Store the ResourceManager state information in the ZooKeeper cluster -->
            <property>
                <name>yarn.resourcemanager.store.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
            </property>

        </configuration>

(2) Synchronize the configuration to the other nodes
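
One way to push the edited yarn-site.xml from hadoop102 to the other two nodes is a small scp loop. A minimal sketch, assuming Hadoop is installed under /opt/module/hadoop (a hypothetical path, not from the post); the copy command is a parameter so the loop can be dry-run with echo:

```shell
#!/usr/bin/env bash
# Copy yarn-site.xml to the remaining cluster nodes.
# The install path below is an assumption; adjust it to your layout.
sync_conf() {
  local copy="${1:-scp}"   # pass `echo` for a dry run without SSH
  local conf=/opt/module/hadoop/etc/hadoop/yarn-site.xml
  local host
  for host in hadoop103 hadoop104; do
    "$copy" "$conf" "$host:$conf"
  done
}

# On a live cluster:
#   sync_conf
```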

3) Start HDFS

(1) On each JournalNode host, start the JournalNode service:

      sbin/hadoop-daemon.sh start journalnode

(2) On [nn1], format it and start:

       bin/hdfs namenode -format

       sbin/hadoop-daemon.sh start namenode

(3) On [nn2], synchronize the metadata from [nn1]:

  bin/hdfs namenode -bootstrapStandby

(4) Start [nn2]:

  sbin/hadoop-daemon.sh start namenode

(5) Start all DataNodes:

 sbin/hadoop-daemons.sh start datanode

(6) Switch [nn1] to Active

       bin/hdfs haadmin -transitionToActive nn1
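
After step (6), you can verify that the transition took effect by querying both NameNodes with the standard `hdfs haadmin` CLI. A small sketch, wrapped in a function so the CLI can be stubbed with echo for a dry run (the relative `bin/hdfs` path assumes you run from the Hadoop home directory):

```shell
#!/usr/bin/env bash
# Query the HA state of both NameNodes.
check_nn_states() {
  local hdfs="${1:-bin/hdfs}"   # pass `echo` for a dry run without a cluster
  local nn
  for nn in nn1 nn2; do
    printf '%s: ' "$nn"
    "$hdfs" haadmin -getServiceState "$nn"
  done
}

# On a live cluster, expect nn1 to report active and nn2 standby:
#   check_nn_states
```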

4) Start YARN

(1) On hadoop102, start YARN:

       sbin/start-yarn.sh

(2) On hadoop103 (the second ResourceManager in the plan above), start its ResourceManager manually:

       sbin/yarn-daemon.sh start resourcemanager

(3) Check the state of each ResourceManager:

       bin/yarn rmadmin -getServiceState rm1

       bin/yarn rmadmin -getServiceState rm2


Congratulations, the HA cluster setup is complete.


Extension: HDFS Federation architecture design

  1. NameNode architecture limitations

(1) Namespace restrictions

Since metadata is held in NameNode memory, the number of objects (files + blocks) a single NameNode can store is limited by the heap size of the NameNode's JVM. A 50 GB heap can store roughly 200 million objects, which supports about 4,000 DataNodes and 12 PB of storage (assuming an average file size of 40 MB). With the rapid growth of data, storage demand keeps increasing: single-DataNode capacity has grown from 4 TB to 36 TB, cluster sizes have grown to 8,000 DataNodes, and storage demand has grown from 12 PB to more than 100 PB.

(2) isolation problem

Since HDFS has only one NameNode, programs cannot be isolated from one another; a single experimental job running on HDFS can affect every other job on the cluster.

(3) Performance bottleneck

Because the entire file system is served through a single NameNode, the throughput of the whole HDFS cluster is limited by the throughput of that one NameNode.
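
HDFS Federation addresses these limits by running several independent NameNodes, each serving its own portion of the namespace while sharing the same pool of DataNodes. A minimal hdfs-site.xml sketch with two nameservices (the service names and hosts here are illustrative assumptions, not from the post):

```xml
<configuration>
    <!-- Two independent namespaces, each served by its own NameNode -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1,ns2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1</name>
        <value>hadoop102:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns2</name>
        <value>hadoop103:8020</value>
    </property>
</configuration>
```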

——— Stay hungry, keep learning


