iSCSI is an increasingly popular way to connect storage with servers, without the need for expensive, dedicated links such as Fibre Channel.
If you have a Gigabit Ethernet network which is woefully underused, you might well wonder why you need to install a second network alongside for the storage traffic.
This is a mini HOWTO, which shows how to export a disk on a server, and mount it on another server somewhere else using iSCSI.
My server is a Debian box.
You can export a whole disk, a RAIDed disk, an LVM LV, or a file on an existing system.
Let's assume that /dev/sdb is the device you want to export.
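If you'd rather back the export with a file than a raw device, you can create a sparse file first. A sketch (the path and size are placeholders — pick your own):

```shell
# Create a sparse 10 GB backing file (blocks are only allocated as they are written)
dd if=/dev/zero of=/tmp/iscsi-disk.img bs=1M count=0 seek=10240
```

You'd then point the Path= in the config below at that file instead of /dev/sdb.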
server# apt-get install iscsitarget
server# vi /etc/iet/ietd.conf

Add a section like this:
Target iqn.2011-06.com.domain.servername:10GBiSCSI
        Lun 0 Path=/dev/sdb,Type=fileio,ScsiId=xyz,ScsiSN=xyz
        # IncomingUser username s3cretP4ssword

The target name is an iSCSI Qualified Name. For the sake of testing, make it of the format: iqn.yyyy-mm.reversed.domain.name:someidentifier.
The next line dictates what is exported as that IQN. Change the Path to the relevant device or file.
If you like, you can set a username and password. This means that random people can't mount your storage. If you're on a private network, or you firewall the port to only one machine, you probably don't need to bother, but it's up to you.
Allow the packets
iptables -I INPUT -s ip.of.client -p tcp --dport 3260 -j ACCEPT

needs to be added to your firewall rules (change ip.of.client to your client's address).
Run /etc/init.d/iscsitarget restart, and ps auxw | grep ietd to check it's running.
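As an extra sanity check, you can confirm that something is actually listening on the standard iSCSI port (3260):

```shell
# Show listening TCP sockets and look for the iSCSI port
netstat -tln | grep 3260
```

If nothing shows up, check the daemon's logs before moving on to the client.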
Time to move to the client.
My client is a Fedora 14 box.
client# yum install iscsi-initiator-utils.x86_64
client# iscsiadm --mode discoverydb --type sendtargets --portal ip.of.server --discover

If all is running well, you should see something like this:
ip.of.server:3260,1 iqn.2011-06.com.domain.servername:10GBiSCSI

(If your server has multiple interfaces, you might see a line for each interface.)
Usernames and passwords
If you used a username and password in the conf file on the server, edit /var/lib/iscsi/nodes/iqn.2011-06.com.domain.servername:10GBiSCSI/ip.of.server\,3260\,1/default
and add the following two lines:
node.session.auth.username = username
node.session.auth.password = s3cretP4ssword
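Alternatively, you can set the same values with iscsiadm instead of editing the file by hand. This is a sketch; note that open-iscsi also wants the auth method set to CHAP for the credentials to take effect:

```shell
# Use CHAP authentication for this node record
iscsiadm --mode node --targetname iqn.2011-06.com.domain.servername:10GBiSCSI \
  --portal ip.of.server:3260 --op update \
  --name node.session.auth.authmethod --value CHAP
# Username and password, matching IncomingUser on the server
iscsiadm --mode node --targetname iqn.2011-06.com.domain.servername:10GBiSCSI \
  --portal ip.of.server:3260 --op update \
  --name node.session.auth.username --value username
iscsiadm --mode node --targetname iqn.2011-06.com.domain.servername:10GBiSCSI \
  --portal ip.of.server:3260 --op update \
  --name node.session.auth.password --value s3cretP4ssword
```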
Multiple interfaces on the server
If you have multiple interfaces on the server, then you will have seen them all during the discovery phase.
This is so that your client can have multiple routes or "paths" to the server. (Think of a different ethernet network to each interface - that provides redundancy).
If you don't have the multiple paths set up, you should disable automatic login to all but one portal. This will prevent delays in the startup script.
Go through all the "default" files in /var/lib/iscsi/nodes/iqn..., and set node.startup to manual.
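The same setting can be applied with iscsiadm rather than editing each file by hand. A sketch — substitute the portal address of each interface you want to skip:

```shell
# Don't log in to this portal automatically at boot
iscsiadm --mode node --targetname iqn.2011-06.com.domain.servername:10GBiSCSI \
  --portal ip.of.unused.interface:3260 --op update \
  --name node.startup --value manual
```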
Log in, and out
Now you need to log in and out.
iscsiadm --mode node --targetname iqn.2011-06.com.domain.servername:10GBiSCSI --portal ip.of.server:3260 --login
iscsiadm --mode node --targetname iqn.2011-06.com.domain.servername:10GBiSCSI --portal ip.of.server:3260 --logout

If that all went well, you're ready to use your iSCSI block device.
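While you're logged in, you can confirm the session is up and spot the new device:

```shell
# List active iSCSI sessions; your target's IQN should appear
iscsiadm --mode session
# The device node shows up under /dev/disk/by-path/
ls -l /dev/disk/by-path/
```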
Your iSCSI device should now be visible on your client system in /dev/disk/by-path/.
You can now mkfs it, mount it, pvcreate it, set up software raid on it, add it in your /etc/crypttab and set up encryption on it, use multipath, or do anything else with it that you can do with a locally attached disk.
If it's a brand new device, you can:

mkfs.ext3 /dev/disk/by-path/path-to-device
mount /dev/disk/by-path/path-to-device /mnt/mountpoint

If you put it permanently in /etc/fstab, make sure you use _netdev as an option, so that Linux doesn't try to mount it until the networking is running.
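An /etc/fstab line for it might look like this (the device path and mountpoint are placeholders; note the _netdev option):

```
/dev/disk/by-path/path-to-device  /mnt/mountpoint  ext3  defaults,_netdev  0 0
```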
Obviously, the speed of it is limited to the speed of your network. If you're running it over a home ADSL connection, well, you'll suffer :)
If you're on a Gigabit ethernet network, then your disks will probably be slower than the network.
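If you want a rough number for sequential throughput over the link, a simple — and destructive, so only on a device with nothing on it — test looks like this:

```shell
# Write 1 GB to the iSCSI device, bypassing the page cache, and report MB/s
# WARNING: this destroys any data on the device
dd if=/dev/zero of=/dev/disk/by-path/path-to-device bs=1M count=1024 oflag=direct
```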
Let me know how you got on - problems, questions, criticisms.