FreeBSD Software RAID Howto
How to set up disk partitions, labels, and software RAID on FreeBSD systems.
Start out with a FreeBSD 6.2 system installed on a regular disk. Add four empty SCSI, SATA, or whatever drives.
This doc uses http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-vinum.html as a reference.
Overview
After having been bitten by my PCI-X SATA RAID controller only working in a few systems because it sticks out too far, I realized that software RAID may be a better way to go, due to its hardware independence. Vinum can't boot off striped or concatenated sets, but it can boot off RAID1. This system acts as a low-volume file, web, mail, VPN, jukebox, gateway, and DHCP server in my home... you get the picture. Speed is nice but not paramount. Reliability is.
Stability
I started out trying this on 6-RELEASE and found gvinum to be very unstable. 6.2 turned out to be not much better. I ended up getting another hardware RAID controller, this time a 3ware 4x PCI-E. I decided to post this howto anyway, as I only saw little pieces on the net and thought a step-by-step guide might be of use to someone. I actually tried this several times using VMWare. The instabilities came up when making changes to the geom partitions, something that is normally not done: usually they're created and used, not created, removed, and then changed around again. In the absence of such changes gvinum is probably fairly stable, as various reports on the net suggest. I just didn't want to take the chance and threw some hardware at it instead.
The disk architecture
There are four 250G disks. All the static data such as music collections, photos, etc. will be stored on a large RAID5 partition, which provides a nice blend of speed and redundancy. The boot partition, as well as most of the system, will reside on two RAID1 partitions. On my system this makes about 12G altogether, which splits in half nicely by moving /usr/ports and /usr/local onto the second RAID1 partition while the remainder stays on the first RAID1 partition.
NOTE: Cleaning out the ports tree would reduce the space needed by a lot, but leaving it like this provides extra room should it be needed for large upgrades.
The following is the layout:
  da0        da1        da2        da3
+-----+    +-----+    +-----+    +-----+
| 15G |    | 15G |    | 15G |    | 15G |
|     |    |     |    |     |    |     |
|  +--- RAID1 ---+    |  +--- RAID1 ---+
|  |    boot     |    |  |     d2      |
|     |    |     |    |     |    |     |
| .5G |    | .5G |    | .5G |    | .5G |
|     |    |     |    |     |    |     |
|  +------------- RAID0 ------------+  |
|  |             swap               |  |
|     |    |     |    |     |    |     |
| rem |    | rem |    | rem |    | rem |
+-----+    +-----+    +-----+    +-----+
   |          |          |          |
   +--------- RAID5 ----------------+
                 data
NOTE: The numbers below don't actually use the 250GB example but rather the smaller sample values that I used on my VMWare test box. Size adjustments will probably need to be made anyway.
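For the real 250G disks, the resulting volume sizes work out with a bit of back-of-the-envelope shell arithmetic. This is only a sketch of the math, using the 15G/.5G per-disk split from the diagram above:

```shell
# Rough capacity math for the 250G layout: four disks, a 15G mirror
# area and a .5G (500M) swap area on each, remainder going to RAID5.
disks=4
disk_g=250
mirror_g=15

raid1_usable=$((2 * mirror_g))       # two RAID1 volumes, 15G usable each
swap_m=$((disks * 500))              # RAID0 swap sums all four 500M parts
rem_g=$((disk_g - mirror_g))         # per-disk remainder (ignoring the .5G)
raid5_g=$(( (disks - 1) * rem_g ))   # RAID5 loses one disk's worth to parity
echo "${raid1_usable}G ${swap_m}M ${raid5_g}G"
```

So roughly 30G of mirrored system space, 2G of striped swap, and around 705G of RAID5 data space.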
Set up the slices and labels
This can all be done from any writable directory. I used /root as root.
- Set up the partitions and slices. Do this for disks 1-4 (X=0-3)
- fdisk -BI daX && bsdlabel -wB daXs1
- Make the first two disks bootable (X=0-1)
- fdisk -B daX
- Or, to get the boot manager instead
- boot0cfg -B daX
- Edit the first disk's bsdlabel
- bsdlabel -e da0s1
# /dev/da0s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:     750M       16     vinum
  b:     125M        *      swap
  c:  4192902        0    unused        0     0   # "raw" part, don't edit
  e:        *        *     vinum
Remember to leave c: untouched; see man bsdlabel for details. After saving the label, rerun the edit command so bsdlabel can do its calculations. Now set up the a: partition to be bootable: move the old a: partition to d:, then recreate a: with its offset increased by 265 and its size reduced by 265 (vinum keeps its on-disk configuration in those first 265 sectors, so the filesystem has to start after them).
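The 265-sector arithmetic can be double-checked mechanically; this little sketch derives the new a: values from the d: partition's size and offset, and should match the a: line in the resulting label:

```shell
# Verify the bootable-a: arithmetic: a: sits inside d:, shifted past
# the 265 sectors that vinum reserves at the start of the partition.
vinum_hdr=265
d_offset=16
d_size=1536000

a_offset=$((d_offset + vinum_hdr))
a_size=$((d_size - vinum_hdr))
echo "a: size=$a_size offset=$a_offset"
```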
# /dev/da0s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:  1535735      281    4.2BSD        0     0     0
  b:   256000  1536016      swap
  c:  4192902        0    unused        0     0   # "raw" part, don't edit
  d:  1536000       16     vinum
  e:  2400886  1792016     vinum
- Then dump this into a file by issuing
- bsdlabel da0s1 > label
- Now I can write this label to disks 2-4 (X=1-3) using
- bsdlabel -R daXs1 label
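The per-disk steps above can be collected into one loop. The sketch below only prints the commands (a dry run), since running them for real wipes the disks; replace the variable collection with direct execution once the output looks right:

```shell
# Dry run: build and print the slice/label commands for all four disks
# instead of executing them (the real commands are destructive).
cmds=""
for X in 0 1 2 3; do
  cmds="${cmds}fdisk -BI da$X && bsdlabel -wB da${X}s1
"
done
# da0's hand-edited label then gets replicated to the other three disks:
for X in 1 2 3; do
  cmds="${cmds}bsdlabel -R da${X}s1 label
"
done
printf '%s' "$cmds"
```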
Set up the "boot" and "d2" root mirrors
This is the conf file for the "boot" volume. It's called "boot" and its path should be ./boot; the same goes for all these files. They're not needed after the devices have been created.
drive b0 device /dev/da0s1d
drive b1 device /dev/da1s1d
volume boot
 plex org concat
  sd drive b0
 plex org concat
  sd drive b1
- Run this to create the entry
- gvinum create boot
- This can be removed by issuing
- gvinum rm -r b0 b1
This is the conf file for the "d2" volume.
drive b2 device /dev/da2s1d
drive b3 device /dev/da3s1d
volume d2
 plex org concat
  sd drive b2
 plex org concat
  sd drive b3
- Create the entry
- gvinum create d2
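The conf files can also be written inline with a here-document instead of an editor, which makes the whole procedure scriptable. A sketch for the "d2" file (the gvinum call itself is left commented out here, since it only makes sense on the real system):

```shell
# Write the "d2" config from a here-document in the current (writable)
# directory, then feed it to gvinum on the real machine.
cat > d2 <<'EOF'
drive b2 device /dev/da2s1d
drive b3 device /dev/da3s1d
volume d2
 plex org concat
  sd drive b2
 plex org concat
  sd drive b3
EOF
# gvinum create d2    # run this on the real system
cat d2
```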
Create the filesystems
- The boot partition
- newfs /dev/gvinum/boot
- And "d2"
- newfs /dev/gvinum/d2
- Mount the boot partition
- mount /dev/gvinum/boot /mnt
- Copy all the files but /usr/local and /usr/ports over
- rsync -aSxHv --exclude /usr/local --exclude /usr/ports / /mnt/
- Mount "d2"
- mkdir /mnt/d2 && mount /dev/gvinum/d2 /mnt/d2/
- Now copy the files for /usr/local and /usr/ports over
- rsync -aSxHv /usr/local /usr/ports /mnt/d2/
- Hook everything together
- cd /mnt/usr && ln -s ../d2/local . && ln -s ../d2/ports .
Set up swap
The conf file for the swap partition
drive s0 device /dev/da0s1b
drive s1 device /dev/da1s1b
drive s2 device /dev/da2s1b
drive s3 device /dev/da3s1b
volume swap
 plex org striped 512k
  sd drive s0
  sd drive s1
  sd drive s2
  sd drive s3
- Make the entry
- gvinum create swap
- Turn on swap
- swapctl -a /dev/gvinum/swap
Set up raid5
The conf file for the "raid5" partition
drive r0 device /dev/da0s1e
drive r1 device /dev/da1s1e
drive r2 device /dev/da2s1e
drive r3 device /dev/da3s1e
volume raid5
 plex org raid5 512k
  sd drive r0
  sd drive r1
  sd drive r2
  sd drive r3
- Make the entry
- gvinum create raid5
- Create the filesystem
- newfs /dev/gvinum/raid5
- Mount raid5
- mkdir /mnt/data && mount /dev/gvinum/raid5 /mnt/data/
Adjust /mnt/etc/fstab
# Device              Mountpoint  FStype  Options    Dump  Pass#
/dev/gvinum/swap      none        swap    sw         0     0
/dev/gvinum/boot      /           ufs     rw         1     1
/dev/gvinum/d2        /d2         ufs     rw         1     2
/dev/gvinum/raid5     /data       ufs     rw         1     3
/dev/acd0             /cdrom      cd9660  ro,noauto  0     0
- Set up the loader and friends
- echo 'geom_vinum_load="YES"' >> /mnt/boot/loader.conf
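Blindly appending adds a duplicate line if this step is ever re-run. A slightly more careful sketch only appends when the knob is missing; CONF points at a scratch file here for illustration, and would be /mnt/boot/loader.conf on the real system:

```shell
# Idempotent version: only append geom_vinum_load if it isn't set yet.
CONF=./loader.conf.sketch      # real system: /mnt/boot/loader.conf
touch "$CONF"
grep -q '^geom_vinum_load=' "$CONF" || echo 'geom_vinum_load="YES"' >> "$CONF"
# Running the same line a second time appends nothing:
grep -q '^geom_vinum_load=' "$CONF" || echo 'geom_vinum_load="YES"' >> "$CONF"
cat "$CONF"
```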
Try Booting
Now shutdown and try booting off the newly created partitions, either by changing the boot disk in the BIOS, using the FreeBSD boot manager on the old disk, or by unplugging the old disk altogether.
I found it to be least confusing to unplug the old disk.
Feedback
- I got this from Cyrus on 12/18/07
I thought I'd link you to a page about choosing RAID for your swap needs on FreeBSD on your http://www.schmut.com/howto/freebsd-software-raid-howto page. http://www.bsdforums.org/forums/archive/index.php/t-31272.html For OpenBSD... I'd suggest you be careful or reconsider as if a drive dies, upon reboot you will have _no_ swap. FreeBSD can span swap partitions itself without needing, and further complicated by, RAID. The Book "Absolute BSD: The Ultimate Guide to FreeBSD" http://books.google.com/books?id=vebgS-r9fP8C also has some little tips on choosing your swap configuration. (I think near page 16) FreeBSD can optimize on 4 disks as per Michael W. Lucas. Just thought I'd point that out. I am looking at configuring a server myself - probably splitting swap among 2 or more 15k SCSI drives. Disk layout is always a fun challenge :)
Cheers, Cyrus
Thanks Cyrus, I knew Linux did this but wasn't sure about FreeBSD. Thanks for pointing that out. The funny thing is, I actually have that book. It's on page 12 :). So yes, I would definitely recommend not setting up the stripe set and letting the kernel handle this itself. mario;>
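For the record, the kernel-spanned alternative Cyrus describes would drop the gvinum swap volume entirely and instead list each disk's b: partition in fstab, letting the kernel interleave them itself. A sketch, using the b: partitions from the label above:

```
# Device       Mountpoint  FStype  Options  Dump  Pass#
/dev/da0s1b    none        swap    sw       0     0
/dev/da1s1b    none        swap    sw       0     0
/dev/da2s1b    none        swap    sw       0     0
/dev/da3s1b    none        swap    sw       0     0
```

This way a dead drive only costs you that drive's share of swap instead of all of it.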