With the help of a couple of friends, we’ve put a 4.5T RAID-5 machine on our network and I’m trying to figure out how to share the storage with the rest of the hosts. In the past, I have used NFS and CIFS/Samba to provide access to remote hosts. This has generally worked okay so long as the server stays online.
I don’t know if the results will be much different, but I’m now trying a different approach. The plan is to run an iSCSI server eventually, but I’ve started with AoE (ATA over Ethernet): I’ve exported a block device onto the network segment and mounted it on a remote host. This was pretty easy to configure. There is already a bit of documentation on the internet, but I’ll give another quick overview.
I gave the storage server the unoriginal name ‘san0’. This host is running Debian lenny. I am testing the configuration from my Debian sid development host, which has the similarly unoriginal name ‘dev0’. So, think server when you see ‘san0’ and client when you see ‘dev0’.
I assume that you’ve already got an LVM volume group set up; mine is called ‘vg0’. Adjust the following examples to account for any differences. You can also use plain disk partitions instead of LVM logical volumes.
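If you don’t have a volume group yet, something along these lines should do it (/dev/sdb1 here is just a stand-in for whatever disk or partition you’re dedicating to LVM):
cjac@san0:~$ sudo pvcreate /dev/sdb1
cjac@san0:~$ sudo vgcreate vg0 /dev/sdb1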
Create a logical volume to be exported:
cjac@san0:~$ sudo lvcreate /dev/vg0 -n e0.1 -L 5G
Load the AoE kernel module:
cjac@san0:~$ sudo modprobe aoe
Install the package containing the vblade block device export server:
cjac@san0:~$ sudo apt-get install vblade
Export the block device. Note that the Ethernet bridge on which I export the device is called ‘loc’:
cjac@san0:~$ sudo vbladed 0 1 loc /dev/vg0/e0.1
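Note that this vbladed invocation won’t survive a reboot. One crude way to make it persistent on Debian (just a sketch; your setup may have a nicer place for this) is to load the module at boot and re-run the export from /etc/rc.local:
cjac@san0:~$ echo aoe | sudo tee -a /etc/modules
cjac@san0:~$ sudoedit /etc/rc.local   # add the vbladed line above, before the final ‘exit 0’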
Install the AoE discovery tools on the client:
cjac@dev0:~$ sudo apt-get install aoetools
Load the AoE kernel module:
cjac@dev0:~$ sudo modprobe aoe
Probe for exported AoE devices:
cjac@dev0:~$ sudo aoe-discover
Verify that our exported device was discovered:
cjac@dev0:~$ test -e /dev/etherd/e0.1 && echo "yep"
yep
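The aoetools package also ships aoe-stat, which should print a one-line summary of each discovered device, if you’d rather have that than testing for device nodes by hand:
cjac@dev0:~$ sudo aoe-stat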
You can now treat /dev/etherd/e0.1 as you would any other block device. You can format it directly, partition it and format a partition, use it as a member of a software RAID array, use it as swap space (ha), or do something completely different.
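For example, formatting it and mounting it on the client works just like a local disk (ext3 and the mount point are simply what I’d pick; use whatever suits you):
cjac@dev0:~$ sudo mkfs.ext3 /dev/etherd/e0.1
cjac@dev0:~$ sudo mkdir -p /mnt/e0.1
cjac@dev0:~$ sudo mount /dev/etherd/e0.1 /mnt/e0.1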
Now to figure out this iSCSI stuff…
5 responses to “SAN configuration (AoE)”
AoE is fun stuff. I’ve used it quite a bit for personal projects. It works really well as a storage medium for hot-migrating Xen instances. I’ve even hacked around on the CentOS/Fedora initrd scripts to make boxes boot off AoE. I <3 it.
Ha. I was going to look into AoE root, so I googled for it. I guess you know what you’re doing, or at least how to fool Google into thinking you do.
I also tested AoE and iSCSI. My conclusion is not to use AoE, because it has very bad performance if you use more than one blade. Just try creating 100 AoE blades and then use one of them from the client side to test performance!
iSCSI, on the other hand, seems to be more expensive in terms of protocol overhead, but it is highly optimized, sometimes even in hardware. If you use one iSCSI target with 100 LUNs on it, you will get the same performance as with only one LUN (in contrast to AoE). Don’t use a lot of iSCSI targets (e.g. one target per export), because that needs a lot of RAM; using a lot of LUNs doesn’t cost much RAM.
My recommendation is to test the performance of AoE and iSCSI for your setup.
Setting up an iSCSI target:
aptitude install iscsitarget-modules-2.6
aptitude install iscsitarget
vi /etc/ietd.conf
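A minimal target definition in ietd.conf looks roughly like this (the IQN and backing device are only examples):
Target iqn.2009-01.org.example:san0.e0.2
    Lun 0 Path=/dev/vg0/e0.2,Type=blockio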
Setting up an iSCSI initiator:
aptitude install open-iscsi
vi /etc/iscsi/iscsid.conf
iscsiadm --mode discovery --type sendtargets --portal IP_OF_ISCSI_TARGET
iscsiadm --mode node -l
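Once the login succeeds, the target’s LUNs show up as ordinary SCSI disks (check dmesg for the device name; it depends on what you already have). To list the active sessions:
iscsiadm --mode session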
Thanks for this. I eventually got it working, and I’m dumping a bunch of data across the VPN from one site to another. I’m currently wishing that Comcast offered better than 2M links…
What I like about AoE is that it is trivially easy to use. What bugs me is that it does not support authentication. This makes it practically unusable for me in a safe manner, and rather than jumping through a lot of hoops with filtering, VLANs, etc., I’ll be moving on to iSCSI too…