iPXE on Dell C6100 (BIOS Mod)

My step-by-step instructions on modifying the Dell PowerEdge C6100 BIOS firmware to replace the stock Intel iSCSI/PXE bootloader with the much more versatile iPXE firmware. iPXE's features include booting from ATA over Ethernet, FCoE, Infiniband networks, an HTTP web server and, of course, iSCSI. It supports an advanced scripting environment that can alter the boot process by simply uploading a new script file to a web server or TFTP server, among others. A huge improvement over the standard Intel PXE loader.

This is a non-standard, risky procedure because we’re modifying embedded network interfaces, not add-on cards. The goal is to run the Dell C6100 as a completely diskless system with the ability to boot from various sources over the network without having to be physically at the server.

ipxe2

First the usual disclaimer: I hold no responsibility if you hose your motherboard doing this. Anything in this article is very far from “supported operation”. Just because it worked for me, doesn’t guarantee that it’ll work for you.

With that out of the way…

The ROM that I’ll be building will be capable of chain loading a dynamic PXE/iSCSI script from a TFTP server. The script file will be unique to each server, as the script filename will be based on the MAC address of the iPXE interface.
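As a sketch, a per-server script fetched this way could look like the one below. Everything in it is purely illustrative — the filename, the target IP, and the IQN are placeholders, not values from my setup:

```text
#!ipxe
# Served from TFTP as e.g. 001517aabbcc.ipxe (named after the NIC's MAC).
# Boot from a hypothetical iSCSI target:
sanboot iscsi:192.168.1.10::::iqn.2013-06.local.lab:c6100-node1
```

Because the server fetches this file on every boot, changing how a node boots is just a matter of replacing one small text file on the TFTP server.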

If you want to skip building the whole thing yourself, you can just download the ROM file. All files referenced in this doc are at the end of this post.

Prerequisites:

  • Flash the BIOS to the latest version (1.7 in my case)
  • Enable the Option ROM in the BIOS for the network cards
  • A Linux machine with a gcc/make environment to build our iPXE ROM
  • A Windows machine to modify the stock C6100 BIOS and inject the iPXE firmware
  • A bootable USB stick to flash the firmware

In order to build the proper ROM firmware, we need to identify the NIC’s specific Vendor ID and Device ID. To do this, simply boot a Linux Live CD on the C6100 and issue the “lspci” command to show all devices:

[root@localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 5500 I/O Hub to ESI Port (rev 22)
00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 22)
00:03.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 3 (rev 22)
00:07.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 (rev 22)
00:13.0 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub I/OxAPIC Interrupt Controller (rev 22)
00:14.0 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub System Management Registers (rev 22)
00:14.1 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers (rev 22)
00:14.2 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub Control Status and RAS Registers (rev 22)
00:14.3 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub Throttle Registers (rev 22)
00:16.0 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.1 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.2 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.3 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.4 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.5 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.6 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:16.7 System peripheral: Intel Corporation 5520/5500/X58 Chipset QuickData Technology Device (rev 22)
00:1d.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
00:1d.1 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
00:1d.2 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
00:1d.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
00:1f.0 ISA bridge: Intel Corporation 82801JIR (ICH10R) LPC Interface Controller
00:1f.2 SATA controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller
00:1f.3 SMBus: Intel Corporation 82801JI (ICH10 Family) SMBus Controller
01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
05:04.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 10)

We can see the Ethernet controller is at address 01:00.0. Let’s get some details:

[root@localhost ~] lspci -n -s 01:00.0
01:00.0 0200: 8086:10c9 (rev 01)

We have our Vendor and Device IDs. We combine the two to get our ROM filename, but that will come later.
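The combination is simply the vendor ID concatenated with the device ID. A quick sketch using the IDs found above:

```shell
# lspci reported "8086:10c9" (vendor:device); iPXE names its build
# target for this NIC by concatenating the two IDs.
ids="8086:10c9"
rom="bin/${ids%:*}${ids#*:}.rom"
echo "$rom"    # bin/808610c9.rom
```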

Now it’s time to set up a build environment. Assuming all the compiler tools are installed on our Linux machine, we download the source from the Git repository.

[root@localhost ~] cd /tmp
[root@localhost ~] git clone git://git.ipxe.org/ipxe.git
[root@localhost ~] cd ipxe/src
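From here, the usual iPXE approach is to embed a small chain-loading script into the ROM at build time. The sketch below writes such a script and shows the corresponding make target; the TFTP server address is a placeholder, and the build line is commented out since it needs the iPXE toolchain:

```shell
# Embedded script: bring up the NIC via DHCP, then chain load a
# per-server script named after the interface MAC from a TFTP server.
# 192.168.1.1 is a placeholder -- substitute your own TFTP server.
cat > chain.ipxe <<'EOF'
#!ipxe
dhcp
chain tftp://192.168.1.1/${net0/mac}.ipxe
EOF

# Build the ROM for the 82576 (vendor 8086, device 10c9) with the
# script embedded (run inside ipxe/src):
# make bin/808610c9.rom EMBED=chain.ipxe
```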

Upgrading Dell C6100 BIOS

These are my step-by-step instructions on how to safely upgrade the Dell PowerEdge C6100 without bricking the nodes.

A couple of points:

  • If your service tag is not on the Dell site, you cannot upgrade. Forcing the flash will likely brick your node.
  • Always upgrade the BMC first.
  • The BMC must be upgraded on ALL nodes in the chassis.
  • These instructions worked for me. I cannot be responsible if they do not work for you.
  • A lot of steps here might seem unnecessary; please follow them anyway.
  • There seems to be no difference when a chassis is hosting nodes with mixed BIOS/BMC versions.
  • When flashing a node, the powered state of the other nodes doesn’t matter. They can even be removed.

Using the following method, I’ve successfully upgraded the following firmware versions:

BMC 1.11 -> 1.32
BMC 1.22 -> 1.32
BMC 1.29 -> 1.32

BIOS 1.44 -> 1.70
BIOS 1.62 -> 1.70
BIOS 1.69 -> 1.70

FCB 1.08 -> 1.20
FCB 1.14 -> 1.20

First things first. We need to download the appropriate firmware files from the Dell Support Site here. To find your firmware files, enter the Service Tag number found on the back of the server. If the sticker is missing, you can try the tag reported in the BIOS, but be warned that not all nodes might be from the same chassis.
c6100bios1

If the Dell site comes up with an “Invalid Service Tag” message, STOP RIGHT HERE. Your server is a custom build and it does not support the C6100 firmware from the site. Proceeding will most likely end up bricking your nodes.

Once the drivers and files show up on the site, download the BIOS, BMC and FCB files.
c6100bios2

Run each downloaded EXE file and extract its contents to your local drive.
c6100bios3
c6100bios4

Once all files have been extracted, copy the three folders to a bootable DOS USB stick. There are many ways of making a DOS-bootable USB stick. Use the stick to boot into a DOS prompt.

We’re first going to flash the BMC. This is important because an older BMC might have trouble communicating with a new BIOS, but a new BMC will always support an older BIOS.
To flash the BMC, we’re using the SOCFLASH utility. Flashing is easy. Simply navigate to the SOCFLASH\DOS folder and execute flash8.bat.
IMG_0648

The rest of the BMC flash process is completely automated.
IMG_0650

Once the BMC has been flashed, power cycle the chassis. This means, unplug the server, wait a few seconds and plug it back in. This ensures a complete reset of the BMC. Power up the node and watch closely for POST messages.

If you see an “Error: BMC NOT Responding” message. REBOOT THE NODE.
IMG_0651

A simple soft reboot is enough to re-establish communication with the BMC.
IMG_0643

Once back at the DOS prompt, navigate to the folder where the C6100 BIOS files are located. At this point, attempting to flash using the automated FBIOS.bat file will most likely generate a “Different ROM ID between current system and update BIOS rom file” message. This is expected.
IMG_0641

To properly flash the BIOS, enter the following command at the prompt:

>afudos 6100v170.rom /p /b /n /c /x

Naturally, replace the .rom filename with whatever version you downloaded. The /X parameter skips the ROM ID check and forces the flash process.
IMG_0644

Once the flash process finishes, reboot the node (a soft boot will suffice). During the boot process, the F2 (BIOS Entry) option is not displayed. Not sure why that is. Once the boot process finishes, reboot the node one more time to re-enter the BIOS settings.
IMG_0646

Once the BIOS settings are configured, simply save and exit. At this point the flash process for the node is done and the next node can be flashed. For every remaining node, flash both the BMC and BIOS.

Once all the nodes are flashed, it might be a good idea to flash the Fan Controller Board (FCB). Some earlier firmware versions (prior to 1.14) run the fans at much higher speeds than necessary, making the server much noisier than it should be. If running the server in a data center this is typically not a problem, but it definitely makes a difference when running at home.

Flashing the Fan Controller is very easy. Simply navigate to the folder on the USB stick where the FCB files are located and run FCB.bat.

A quick note: the firmware is only compatible with the PIC-18 controller board. If your server came with the PIC-16 board, you’re out of luck. The firmware cannot be flashed, and the only way to slow down the fans is to swap them for something slower/quieter.

The FCB only needs to be flashed once per chassis. That means once the FCB has been flashed from a single node, the other nodes will see the upgraded firmware.
IMG_0659

And that’s it. The C6100 is now running the latest firmware.

New ZFS Server

I bought a new server to replace my Dell PowerEdge R710. The new server is an HP DL180 G6, equipped with an E5620 CPU and 48GB of RAM.
IMG_0604

The server also came with twelve 300GB 15,000 RPM SAS drives. These drives will be split between the server backplane and a Dell PowerVault MD1000 to increase throughput.
IMG_0605

The new server will also act as a backup server, so I’ll add three 3TB drives in a RAIDZ-1 configuration.

First order of business is to replace the stock HP P400 RAID card with a non-RAID HBA. For this, I’ve chosen the popular IBM M1015 SAS card. It’s a great 6Gbps PCIe 2.0 card that can be easily reflashed to plain HBA mode.
IMG_0606

I’ve also added a temporary Intel PRO/1000 quad-port NIC for round-robin iSCSI. In the next few weeks this will be replaced with a 40Gbps Infiniband card. But that’s not ready yet.
IMG_0607

The stock RAID card has its SFF-8087 connector at the back of the card; the M1015 has the connectors on top.
IMG_0608

Fortunately, HP provided enough slack on the cable to reach the top of the new card.
IMG_0609

Before the M1015 can be used with ZFS (NexentaStor in my case), I need to reflash the card to the “IT” firmware. The process is relatively simple.

Download the IT firmware here – this should be compatible with any LSI SAS2008-based card.

1. Extract the contents of the RAR file to a DOS bootable USB Flash drive.
2. Boot the server using the flash drive. Make sure only one card is connected.

3. Clear the firmware from the card

> megarec -writesbr 0 empty.bin

4. Erase the flash

> megarec -cleanflash 0

5. Reboot the box.
6. Flash the new IT Firmware to the card

> sas2flsh -o -f 2108it.bin -b mptsas2.rom

7. Set the card’s SAS address. Use the “500605bxxxxxxxxx” SAS address from the sticker on the card, without dashes or quotes.

> sas2flsh -o -sasadd 500605bxxxxxxxxx

8. Final reboot and the card is good to go.

Last step is to rack up the server, populate the drives and install NexentaStor.
IMG_0613

Once NexentaStor is installed, create the proper datasets and we’re done. The current config runs 14x300GB drives in mirrored groups, giving me about 2TB of usable, high-performance storage.
nexenta
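The capacity math works out because mirrored groups halve raw capacity. A quick sketch (the zpool line is illustrative only — device names will differ on your system):

```shell
# Mirrored (two-way) vdevs halve raw capacity: 14 drives form
# 7 mirrors, so usable space is 7 x 300GB.
drives=14
size_gb=300
usable_gb=$(( drives / 2 * size_gb ))
echo "${usable_gb} GB usable"    # 2100 GB usable

# The pool itself would be created along these lines
# (device names are illustrative):
#   zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 ...
```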

New Mail Server – The Search

Looking to set up a new mail server. Currently running Exchange Server, but it’s a pain to manage multiple domains, and the spam filtering is not great. I have a few dozen domains and I’d like to consolidate them. I need a server that’s easy to use and manage, where I don’t have to spend too much time on it.

Required Features:

* Free (or really cheap)
* Webmail
* Push via Activesync or IMAP Idle
* Anti-Spam Filtering
* Multi-Domain / Aliases
* All-In-One Solution
* Web Based Admin / No manual file editing

The ability to easily create email aliases is great for registering at web sites. It also helps determine which web site is selling email addresses.
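The idea is simply one alias per site, so a leaked address points straight at the leaker. A tiny sketch (the domain is a placeholder):

```shell
# One alias per web site makes a leaked address traceable.
# "mydomain.example" is a placeholder domain, not a real one.
site="somewebstore"
alias_addr="${site}@mydomain.example"
echo "$alias_addr"    # somewebstore@mydomain.example
```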

Product             | Free                         | WebMail | Anti-Spam | Push            | Notes
--------------------|------------------------------|---------|-----------|-----------------|----------------------------
Axigen              | Almost Free (100 User Limit) | Yes     | No*       | No*             | * Available in full version
Kolab 3.0           | Free                         | ?       | ?         | ?               | Failed to install
hMailServer         | Free                         | No      | Yes       | Yes (IMAP Idle) | No web admin
SoGo                | Free                         | ?       | ?         | ?               | Steep learning curve
Blue Mind           | Free                         | Yes     | No        | ?               | Missing features
Horde               | Free                         | ?       | ?         | ?               | Steep learning curve
Zimbra Community    | Free                         | Yes     | Yes       | Yes             | Worth a second look
Synovel Collabsuite | Free                         | Yes     | Yes       | ?               | Not sure if Push works

It’s quite a list, but I wanted to make sure I touched on all the popular products.

First things first: configure a virtual machine for testing. Most of the products are Linux-based, and I’m fine with that, but some products run only under Windows, so a Windows VM will also be required.

CentosInstall

Once both VMs were configured and all the updates had been installed, I took a snapshot of each VM to make moving on to the next product easy.

WinInstall

I also created a local hosts entry to see how the product responds when accessed “remotely”.

# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host

# localhost name resolution is handled within DNS itself.
#	127.0.0.1       localhost
#	::1             localhost

192.168.77.35 linux.mailtest.com
192.168.77.36 windows.mailtest.com

On To Testing