Experience with RAID and Ephemeral Devices

I have been doing cloud deployments for, what, maybe over three years now. I have always ignored the ephemeral storage that Amazon promises with its instances in favor of EBS, for easy management: EBS volumes can be detached from one instance and attached to another, and clients usually prefer something they can play with, since EBS volumes can be managed via the AWS console.

So I have been doing a deployment for this client and wanted to give ephemeral storage and RAID a shot after a long discussion on IRC (thanks, flashmanbahadur). Here is a command-by-command account of what I did:

First, unmount /dev/sdb (mounted on /mnt by default), since we will be using it in the RAID setup:

# umount /mnt

The `yes` is there because /dev/sdb already carries a filesystem, and we want to create the array even though mdadm will warn about an existing fs:

# yes | mdadm --create /dev/md0 --level=0 -c256 --raid-devices=2 /dev/sdb /dev/sdc
mdadm: /dev/sdb appears to contain an ext2fs file system
    size=440366080K  mtime=Thu May 26 08:45:58 2011
mdadm: /dev/sdc appears to contain an ext2fs file system
    size=440366080K  mtime=Thu Jan  1 00:00:00 1970
Continue creating array? mdadm: array /dev/md0 started.
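
A quick sanity check at this point (not something I ran at the time, but harmless): /proc/mdstat shows whether the kernel actually brought the array up, and md0 should be listed as an active raid0 across sdb and sdc.

# cat /proc/mdstat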

Set up /etc/mdadm.conf so the array persists across boots and mdadm knows which arrays exist:

# echo 'DEVICE /dev/sdb /dev/sdc' > /etc/mdadm.conf
# mdadm --detail --scan >> /etc/mdadm.conf

Let's have a look at what the above commands did:

# cat /etc/mdadm.conf 
DEVICE /dev/sdb /dev/sdc
ARRAY /dev/md0 level=raid0 num-devices=2 metadata=00.90 UUID=f4317a48:33e4e94b:69a35f8e:7fada538

Next, bump the read-ahead on the array for better sequential performance; blockdev takes the value in 512-byte sectors, so 65536 means 32 MB:

# blockdev --setra 65536 /dev/md0
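
To verify the new value took effect, blockdev can read it back (it reports read-ahead in 512-byte sectors):

# blockdev --getra /dev/md0
65536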

Time to clean it up real nice with a fresh filesystem. Why XFS? Let's discuss that some other time.

# mkfs.xfs -f /dev/md0
meta-data=/dev/md0               isize=256    agcount=32, agsize=6880704 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=220182528, imaxpct=25
         =                       sunit=64     swidth=128 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=107520, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Note that mkfs.xfs picked up the stripe geometry from md automatically: sunit=64 blocks × 4096 bytes = 256 KB, matching the -c256 chunk size we gave mdadm, and swidth is twice that for the two devices. Now modify /etc/fstab so that /dev/md0 is mounted on boot:

# cat /etc/fstab 
# /etc/fstab: static file system information.
proc                                            /proc           proc    nodev,noexec,nosuid 0       0
LABEL=uec-rootfs                                       /               ext4    defaults,noatime,nodiratime        0       0

#/dev/sdb, not needed anymore as we are using it as part of RAID
#/dev/sdb       /mnt    auto    defaults,nobootwait,comment=cloudconfig 0       2

# RAID0 volume
/dev/md0        /mnt    xfs     defaults,nobootwait,noatime,comment=ephemeral-raid0     0       2

Let's see if we set up /etc/fstab correctly:

# mount -a

No errors, sweet. What do we have? It should be 840G mounted on /mnt, since I was using an m1.large (two 420 GB ephemeral devices):

# df -Th
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/sda1     ext4    7.9G  1.3G  6.2G  18% /
none      devtmpfs    3.7G  120K  3.7G   1% /dev
none         tmpfs    3.7G     0  3.7G   0% /dev/shm
none         tmpfs    3.7G   68K  3.7G   1% /var/run
none         tmpfs    3.7G     0  3.7G   0% /var/lock
/dev/md0       xfs    840G   34M  840G   1% /mnt
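
To double-check that XFS really picked up the stripe geometry, xfs_info against the mount point should echo the same sunit/swidth values that mkfs printed:

# xfs_info /mnt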

Awesome. Create a test file:

# touch /mnt/testfile

And list the files to see what is there:

# ls -al /mnt/
total 4
drwxr-xr-x  2 root root   21 2011-05-26 08:57 .
drwxr-xr-x 26 root root 4096 2011-05-19 14:33 ..
-rw-r--r--  1 root root    0 2011-05-26 08:57 testfile

Now, after a reboot, I don't see the md array mounted and the DB server is complaining. Let's see what we have.

Scan for all defined RAID arrays:

#  mdadm --detail --scan
mdadm: md device /dev/md/d0 does not appear to be active.
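
Worth a look at /proc/mdstat here as well; the error mentions /dev/md/d0, so the kernel may have auto-assembled something under a different name as an inactive array:

# cat /proc/mdstat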

Is the conf even there, and is it correct?

# cat /etc/mdadm.conf 
DEVICE /dev/sdb /dev/sdc
ARRAY /dev/md0 level=raid0 num-devices=2 metadata=00.90 UUID=f4317a48:33e4e94b:69a35f8e:7fada538
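
One thing I did not get around to trying: this is an Ubuntu image (note the uec-rootfs label in fstab), and on Ubuntu mdadm reads /etc/mdadm/mdadm.conf rather than /etc/mdadm.conf, with the config baked into the initramfs so arrays can be assembled early at boot. Something along these lines might have made a difference (untested on my part, so take it as a guess):

# cp /etc/mdadm.conf /etc/mdadm/mdadm.conf
# update-initramfs -u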

Are the ephemeral devices still available?

# ls -al /dev/sd*
brw-rw---- 1 root disk 202,  1 2011-05-26 08:58 /dev/sda1
brw-rw---- 1 root disk 202, 16 2011-05-26 08:58 /dev/sdb
brw-rw---- 1 root disk 202, 32 2011-05-26 08:58 /dev/sdc
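
Another useful check would be to read the md superblock directly off the raw devices, to see whether the metadata written at --create time is still there. If this prints the UUID we created with, the metadata survived and the problem is in assembly; if not, the superblock itself was lost:

# mdadm --examine /dev/sdb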

Is /dev/md0 even defined? A redundant check; the first test in this series already answered it.

# ls -al /dev/md0
ls: cannot access /dev/md0: No such file or directory

Try to assemble it manually:

# mdadm  --assemble  --uuid=f4317a48:33e4e94b:69a35f8e:7fada538 /dev/md0 /dev/sdb /dev/sdc
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: no devices found for /dev/md0

What? The UUIDs changed? Maybe because they are 'ephemeral' after all…

I also tried the same setup with unformatted partitions on /dev/sdb and /dev/sdc, with exactly the same result. So, due to time constraints, this concludes my experiment with RAID0 on ephemeral storage. Had I been able to survive reboots and keep the array together, I would have used it for at least my database server to improve read/write performance, but as the name says: ephemeral.
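
For what it is worth, since the data is disposable anyway, a workaround I have seen suggested is to stop fighting reassembly and simply rebuild the array from scratch on every boot. A rough, untested sketch of such a boot script, using the same devices and settings as above; note it wipes whatever is on the devices:

#!/bin/sh
# Recreate the RAID0 array from scratch on every boot; we treat the data as disposable.
umount /mnt 2>/dev/null             # drop the default ephemeral mount if present
mdadm --stop /dev/md0 2>/dev/null   # clear any half-assembled array
yes | mdadm --create /dev/md0 --level=0 -c256 --raid-devices=2 /dev/sdb /dev/sdc
blockdev --setra 65536 /dev/md0     # same read-ahead tweak as before
mkfs.xfs -f /dev/md0                # fresh filesystem
mount -o noatime /dev/md0 /mnt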

What bothers me is that if I drop a file onto /mnt (the default mount point of /dev/sdb, and keep in mind that /dev/sdb is ephemeral) and then reboot, the file is still there with no data loss; the raw device survives the reboot even though the array does not.

[A big thanks to http://www.gabrielweinberg.com/blog/2011/05/raid0-ephemeral-storage-on-aws-ec… for the write-up, tru_tru for support, and flashmanbahadur for planting the idea on ##aws]
