Why Bother with PCIe Bifurcation on the X10SRi-F?
The Supermicro X10SRi-F is a solid single-socket server board built around the Intel C612 chipset. It packs six PCIe slots, an onboard Intel i350-AM2 dual-port gigabit NIC (wired directly to the CPU over PCIe 2.0 x4, shared with Slot 5), and support for the Xeon E5-2600 v3/v4 series. If you’re building a homelab NAS, a GPU compute node, or a high-density NVMe storage server, you’ve probably asked yourself: can I run multiple NVMe drives on this thing without buying a pricey PCIe switch card?
The answer is yes — but only if you understand how the bifurcation (slot splitting) works on this board. And here’s the thing: there is almost no documentation online about the X10SRi-F’s PCIe bifurcation. Most Supermicro guides cover the X10SRD or X10DR series, but the SRi-F is different enough that generic advice will lead to hours of blind BIOS menu hunting.
This post covers everything I discovered the hard way.
Overview: The 6 PCIe Slots
The board has 6 physical PCIe slots. Here’s the map:
| Slot | Physical Size | Electrical Lanes | Controlled By | Bifurcable? |
|---|---|---|---|---|
| Slot 1 (PCI-E x8) | x8 | PCIe 2.0 x2 | PCH (C612) | No |
| Slot 2 (PCI-E x8) | x8 | PCIe 2.0 x4 | PCH (C612) | No |
| Slot 3 (PCI-E x8) | x8 | x8 | CPU (IOU0) | Yes — x4x4 |
| Slot 4 (PCI-E x8) | x8 | x8 | CPU (IOU1) | Yes — multiple options |
| Slot 5 (PCI-E x4) | x4 (open-ended) | x4 | CPU (IOU1) | No (but see below) |
| Slot 6 (PCI-E x16) | x16 | x16 | CPU (IOU2) | Yes — x8x8, x4x4x4x4, etc. |
The Critical Detail: Slot 5 Shares Lanes with the i350-AM2 NIC
Here’s the gotcha that caught me off guard. The onboard Intel i350-AM2 dual-port gigabit controller is connected directly from the CPU via PCIe 2.0 x4 lanes, and shares those lanes with Slot 5.
What this means in practice:
- When Slot 5 is empty, the i350 NIC gets all 4 lanes and works normally.
- When you plug a device into Slot 5, those lanes must be shared between the NIC and the slot, so this segment of IOU1 now carries two devices: whatever you put in Slot 5, plus the i350 NIC.
This is the root cause of the “device not recognized” problem many people encounter.
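You can check for this from a running Linux system before and after populating Slot 5. Here is a minimal sketch of mine (not a Supermicro tool) that lists every NVMe and Ethernet controller the kernel enumerated, using standard sysfs attributes; if the Slot 5 drive is absent from the output while the two i350 ports show up, you have hit exactly the conflict described below.

```python
#!/usr/bin/env python3
"""List NVMe and Ethernet controllers visible on the PCI bus (Linux sysfs)."""
from pathlib import Path

# PCI class codes with the prog-if byte stripped: 0x0108 = NVMe, 0x0200 = Ethernet
INTERESTING = {0x0108: "NVMe controller", 0x0200: "Ethernet controller"}

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    cls = int((dev / "class").read_text(), 16) >> 8  # e.g. 0x010802 -> 0x0108
    if cls in INTERESTING:
        vendor = (dev / "vendor").read_text().strip()  # 0x8086 = Intel
        device = (dev / "device").read_text().strip()
        print(f"{dev.name}  {INTERESTING[cls]:20}  [{vendor}:{device}]")
```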
Slots 1 & 2: PCH-Connected, PCIe 2.0, No Bifurcation
Slot 1 and Slot 2 hang off the C612 PCH (Platform Controller Hub). Slot 1 is electrically PCIe 2.0 x2; Slot 2 is PCIe 2.0 x4. Neither can be bifurcated. Use them for SATA controllers, low-bandwidth cards, or other non-NVMe devices. A single NVMe drive on an adapter may work in either slot, but performance will be capped by PCIe 2.0 bandwidth, and you cannot split the slot into multiple devices.
Slot 3: CPU-Attached, x4x4 Only
Slot 3 is a x8 slot routed from the CPU via IOU0. It supports one bifurcation mode only: x4x4. This lets you run two NVMe drives (each x4) in a single x8 slot, provided you use an x8 to dual-M.2 adapter like the ASUS Hyper M.2 x8 Mini (NOT the full-height x16 version — that won’t fit in an x8 slot).
The BIOS setting for Slot 3 lives under Advanced → PCIe/PCI-X/PnP Configuration → CPU1 Slot3 IOU0 (IIO0). Set it to x4x4.
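To verify the split actually took effect, check the negotiated link on each drive. A quick sketch, assuming a Linux host and the kernel's standard PCIe sysfs attributes; with x4x4 working, both drives should report x4 at 8.0 GT/s:

```python
#!/usr/bin/env python3
"""Print negotiated PCIe link speed/width for every NVMe controller (Linux)."""
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    if int((dev / "class").read_text(), 16) >> 8 != 0x0108:  # 0x0108 = NVMe
        continue
    speed = (dev / "current_link_speed").read_text().strip()  # e.g. "8.0 GT/s PCIe"
    width = (dev / "current_link_width").read_text().strip()  # e.g. "4"
    print(f"{dev.name}: x{width} @ {speed}")
```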
Slot 6: The Big One — x16 with Full Bifurcation Options
Slot 6 is a full x16 slot on IOU2. Its bifurcation options are the most flexible:
- x16 — no split, for a single GPU or one x16 NVMe adapter
- x8x8 — two x8 devices (e.g. two GPUs or dual x8 adapters)
- x4x4x4x4 — up to 4 NVMe drives in a single x16 slot
For a high-density NVMe setup, x4x4x4x4 on Slot 6 is the popular choice: four x4 NVMe drives from a single slot, whether that's M.2 sticks on a quad-M.2 adapter or U.2/U.3 drives on a breakout card.
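With four drives on one card it gets easy to lose track of which /dev/nvmeX sits on which x4 segment. A small sketch of mine (assuming Linux's standard /sys/class/nvme layout) maps each controller name to its PCI address and model:

```python
#!/usr/bin/env python3
"""Map each NVMe controller (nvme0, nvme1, ...) to its PCI address (Linux)."""
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").iterdir()):
    pci_addr = (ctrl / "device").resolve().name   # e.g. "0000:02:00.0"
    model = (ctrl / "model").read_text().strip()  # drive model string
    print(f"{ctrl.name}: {pci_addr}  {model}")
```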
Slot 4: The Tricky One — IOU1
Now we get to the most confusing slot on the board. Slot 4 is wired as x8 (PCIe 3.0), controlled by IOU1. The same IOU1 port also controls Slot 5 and the onboard i350-AM2 NIC.
Slot 4 offers several bifurcation options in the BIOS:
- x8x8 — Slot 4 stays a single x8 endpoint (the remaining x8 covers the Slot 5 / i350 pair)
- x8x4x4 — one x8 endpoint plus two x4 endpoints (unusual, rarely used)
- x4x4x8 — the same split in reverse order
- x4x4x4x4 — four x4 endpoints
- Auto — the default, which maps Slot 4 as a single x8 endpoint
The Hidden Trap with Auto Mode on Slot 4
Here’s where most people hit a wall. Consider this common setup:
- Slot 4 — x8 to dual-M.2 adapter with 2 NVMe drives
- Slot 5 — single NVMe via x4 M.2 adapter
If you leave Slot 4’s bifurcation on Auto, the BIOS maps Slot 4 as a single x8 endpoint. IOU1 now has three devices to serve: the endpoint in Slot 4, the NVMe in Slot 5, and the i350 NIC. But the Auto mapping doesn't expose an x4 root port for that third device, so Slot 5's drive simply won't be detected.
The fix: set Slot 4 bifurcation to x4x4x4x4.
Why? Because with x4x4x4x4, IOU1 provides exactly 4 x4 endpoints. Two are consumed by the NVMe drives in Slot 4 (via your dual-M.2 adapter), one by the NVMe in Slot 5, and one by the onboard i350. 4 devices on 4 endpoints — no conflict.
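If you want to see the endpoint accounting for yourself, you can group PCI endpoints by the top-level root port they descend from; Slot 4's two drives, the Slot 5 drive, and the i350 should all land under the same IOU1 port once x4x4x4x4 is set. A rough Linux-only sketch (the specific root-port addresses like 0000:00:03.0 will vary by board and population):

```python
#!/usr/bin/env python3
"""Group PCI endpoints by the top-level root port they sit behind (Linux)."""
from collections import defaultdict
from pathlib import Path

groups = defaultdict(list)
for dev in Path("/sys/bus/pci/devices").iterdir():
    # Resolved path looks like .../pci0000:00/0000:00:03.0/0000:02:00.0
    addrs = [p for p in dev.resolve().parts if p.count(":") == 2]
    if len(addrs) >= 2:  # endpoint behind a root port/bridge
        groups[addrs[0]].append(addrs[-1])

for root_port, endpoints in sorted(groups.items()):
    print(root_port)
    for ep in sorted(endpoints):
        print(f"    {ep}")
```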
BIOS Setting: IOU1 (PCIe/PCI-X/PnP Configuration)
To change Slot 4 bifurcation, enter the BIOS and navigate to:
Advanced → PCIe/PCI-X/PnP Configuration → CPU1 Slot4 IOU1 (IIO1)
Set the desired bifurcation mode. The available options depend on your BIOS version. The X10SRi-F typically runs BIOS version 3.0 or 3.2 (latest is 3.3 as of writing). If you don’t see all the bifurcation options listed above, check that you’re running the latest BIOS — earlier versions have fewer split modes.
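You can confirm which BIOS build you're running without rebooting into setup. A tiny sketch using Linux's standard DMI attributes under /sys/class/dmi/id:

```python
#!/usr/bin/env python3
"""Print board and BIOS identity from Linux DMI attributes (no reboot needed)."""
from pathlib import Path

dmi = Path("/sys/class/dmi/id")
for attr in ("board_vendor", "board_name", "bios_version", "bios_date"):
    print(f"{attr:14} {(dmi / attr).read_text().strip()}")
```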

Summary: Recommended Configuration for Max NVMe Density
| Slot | What to Put | Bifurcation Setting |
|---|---|---|
| Slot 1 | Low-bandwidth card (SATA controller, etc.) | N/A (PCH PCIe 2.0 x2) |
| Slot 2 | SATA controller or leave empty | N/A (PCH PCIe 2.0 x4) |
| Slot 3 | Dual NVMe (x8→dual M.2) | x4x4 (IOU0) |
| Slot 4 | Dual NVMe (x8→dual M.2) | x4x4x4x4 (IOU1) |
| Slot 5 | Single NVMe | Inherits from IOU1 |
| Slot 6 | Quad NVMe (x16→quad M.2) or GPU | x4x4x4x4 or x8x8 |
With this configuration, you can run up to 10 NVMe drives on a single X10SRi-F board: 4 from Slot 6 + 2 from Slot 4 + 2 from Slot 3 + 1 from Slot 5 makes 9, and a tenth fits on a single-M.2 adapter in Slot 1 or 2 at PCIe 2.0 speed. All without a PCIe switch card.
Final Notes
- BIOS version matters. Update to the latest (3.3) before attempting bifurcation.
- Use proper adapters. The ASUS Hyper M.2 x16 Gen 3 (V2) works well in Slot 6. For Slot 3 and 4, use the ASUS Hyper M.2 x8 Mini or a generic x8 to dual-M.2 adapter.
- PCIe 3.0 limits. Each x4 link gives ~3.9 GB/s (see the quick math after this list), plenty for most NVMe drives. PCIe 4.0 drives will train at 3.0 speeds on this platform, so don't expect their full sequential bandwidth.
- If Slot 5 doesn’t detect your drive, don’t waste time reseating cables — change Slot 4’s bifurcation to x4x4x4x4 first.
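For reference, here is the arithmetic behind those bandwidth numbers as a small Python sketch, covering both the PCIe 3.0 CPU slots and the PCIe 2.0 PCH slots (line encoding only; real-world throughput lands a bit lower due to protocol overhead):

```python
#!/usr/bin/env python3
"""Usable PCIe bandwidth: transfer rate x lanes x line encoding / 8 bits per byte."""

def pcie_gb_s(gt_s: float, lanes: int, enc_num: int, enc_den: int) -> float:
    return gt_s * lanes * enc_num / enc_den / 8

# PCIe 3.0 (CPU slots): 8 GT/s per lane, 128b/130b encoding
print(f"PCIe 3.0 x4: {pcie_gb_s(8.0, 4, 128, 130):.2f} GB/s")  # ~3.94
# PCIe 2.0 (PCH Slots 1 & 2): 5 GT/s per lane, 8b/10b encoding
print(f"PCIe 2.0 x4: {pcie_gb_s(5.0, 4, 8, 10):.2f} GB/s")     # 2.00
print(f"PCIe 2.0 x2: {pcie_gb_s(5.0, 2, 8, 10):.2f} GB/s")     # 1.00
```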