Integrating LVM with Hadoop and providing Elasticity to DataNode Storage

Task 7

Benjamin Francis
Mar 8, 2021

Task description

Task 7.1 :- Integrating LVM with Hadoop and providing Elasticity to DataNode Storage

To integrate Hadoop with LVM, we first need a Hadoop cluster.

I already have a Hadoop cluster running, so I will show how to integrate LVM with it.

Step 1

Attach two hard disks to the DataNode.

Disk 1 = /dev/sdb

Disk 2 = /dev/sdc
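Before creating PVs, you can confirm the kernel sees the new disks (a quick check sketch; the device names match my setup and may differ on yours):

```shell
# List the two new block devices; each should show its full size
# and no existing partitions.
lsblk /dev/sdb /dev/sdc

# fdisk also reports each disk's size.
fdisk -l /dev/sdb /dev/sdc
```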

Step 2

Create a PV (physical volume) on each hard disk:

cmd:- pvcreate /dev/sdb /dev/sdc

To see the created PVs:

cmd:- pvdisplay /dev/sdb /dev/sdc

Step 3

Create a volume group from both PVs:

cmd:- vgcreate LVM /dev/sdb /dev/sdc
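To confirm the VG was created and see its pooled capacity (both disks combined), a quick sketch:

```shell
# Detailed view: VG Size should be roughly the sum of /dev/sdb and /dev/sdc.
vgdisplay LVM

# One-line summary of the same volume group.
vgs LVM
```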

Step 4

Create a logical volume from the volume group:

cmd:- lvcreate --size 5G --name TASK LVM

5G is the amount of storage you want to contribute to the DataNode.

To see the logical volume:

cmd:- lvdisplay LVM/TASK

Step 5

Format the logical volume

cmd:- mkfs.ext4 /dev/LVM/TASK

Step 6

Mount the logical volume on the DataNode directory /dn

cmd:- mount /dev/LVM/TASK /dn
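This mount is lost on reboot. One way to persist it, assuming the LV path and mount point used above, is an /etc/fstab entry (a sketch; verify it with `mount -a` before rebooting):

```
/dev/LVM/TASK  /dn  ext4  defaults  0  0
```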

Step 7

Start the DataNode service.

To check the cluster's storage:

cmd:- hadoop dfsadmin -report
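For the report to include this LV's capacity, the DataNode must be configured to store its blocks under /dn. A minimal hdfs-site.xml sketch (the property name is the standard one; the path matches the mount point used above):

```xml
<configuration>
  <!-- Directory where the DataNode stores HDFS blocks; /dn is the LV mount point -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/dn</value>
  </property>
</configuration>
```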

How to give elasticity to DataNode storage

To increase/expand the storage:

cmd:- lvextend --size +2G /dev/<vg name>/<lv name>

(+2G grows the LV by 2 GB.)

After extending the LV, grow the filesystem to use the new space (resize2fs resizes ext4 in place; no reformatting is needed):

cmd:- resize2fs /dev/<vg name>/<lv name>
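As an alternative to running the two commands separately, lvextend can resize the filesystem in the same step via its -r/--resizefs flag (a sketch using the VG/LV names from this setup):

```shell
# Grow the LV by 2 GB and resize the ext4 filesystem in one step.
# This works online: no need to unmount /dn or stop the DataNode.
lvextend -r -L +2G /dev/LVM/TASK
```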

To check whether the space was added:

cmd:- hadoop dfsadmin -report

Now we can see that 2 GB has been added; last time we had only 5 GB.
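To compare capacity before and after without reading the whole report, you can extract the byte count with awk. A sketch against a made-up excerpt (the "Configured Capacity:" line follows the dfsadmin report format, but the numbers here are illustrative only):

```shell
# Illustrative excerpt of 'hadoop dfsadmin -report' output (numbers are made up).
report='Configured Capacity: 7516192768 (7 GB)
Present Capacity: 7000000000 (6.52 GB)'

# Grab the byte count after "Configured Capacity:".
capacity=$(printf '%s\n' "$report" | awk -F': ' '/^Configured Capacity/ {print $2}' | awk '{print $1}')
echo "$capacity"
```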

To decrease the storage:

cmd:- lvreduce -L <size> /dev/<vg name>/<lv name>

Warning: shrink the ext4 filesystem first (or use lvreduce -r), otherwise reducing the LV will destroy data.

To check, run hadoop dfsadmin -report again.

Now the LV is 2 GB; it was reduced from 5 GB.
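Shrinking is destructive if done in the wrong order: ext4 must be reduced before the LV, or the end of the filesystem gets cut off. A safe sequence sketch (sizes follow the example above; stop the DataNode first, since ext4 cannot be shrunk while mounted):

```shell
# 1. Unmount (ext4 cannot be shrunk online).
umount /dn

# 2. Check the filesystem, then shrink it to the target size first.
e2fsck -f /dev/LVM/TASK
resize2fs /dev/LVM/TASK 2G

# 3. Now reduce the LV to match the smaller filesystem.
lvreduce -L 2G /dev/LVM/TASK

# 4. Remount and restart the DataNode service.
mount /dev/LVM/TASK /dn
```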

Thanks for reading 🙏
