{"id":38,"date":"2026-05-01T16:36:24","date_gmt":"2026-05-01T08:36:24","guid":{"rendered":"https:\/\/blog.surgeonbay.com\/?p=38"},"modified":"2026-05-01T21:56:09","modified_gmt":"2026-05-01T13:56:09","slug":"pcie-bifurcation-on-the-supermicro-x10sri-f-a-complete-guide","status":"publish","type":"post","link":"https:\/\/blog.surgeonbay.com\/?p=38","title":{"rendered":"PCIe Bifurcation on the Supermicro X10SRi-F: A Complete Guide"},"content":{"rendered":"<h2>Why Bother with PCIe Bifurcation on the X10SRi-F?<\/h2>\n<p>The Supermicro X10SRi-F is a solid single-socket server board built around the Intel C612 chipset. It packs six PCIe slots, an onboard Intel i350-AM2 dual-port gigabit NIC (direct from CPU, PCIe 2.0 x4 shared with Slot 5), and support for the Xeon E5-2600 v3\/v4 series. If you&#8217;re building a homelab NAS, a GPU compute node, or a high-density NVMe storage server, you&#8217;ve probably asked yourself: <em>can I run multiple NVMe drives on this thing without buying a pricey PCIe switch card?<\/em><\/p>\n<p>The answer is <strong>yes<\/strong> \u2014 but only if you understand how the bifurcation (slot splitting) works on this board. And here&#8217;s the thing: <strong>there is almost no documentation online about the X10SRi-F&#8217;s PCIe bifurcation.<\/strong> Most Supermicro guides cover the X10SRD or X10DR series, but the SRi-F is different enough that generic advice will lead to hours of blind BIOS menu hunting.<\/p>\n<p>This post covers everything I discovered the hard way.<\/p>\n<h2>Overview: The 6 PCIe Slots<\/h2>\n<p>The board has 6 physical PCIe slots. Here&#8217;s the map:<\/p>\n<table>\n<tr>\n<th>Slot<\/th>\n<th>Physical Size<\/th>\n<th>Electrical Lanes<\/th>\n<th>Controlled By<\/th>\n<th>Bifurcable?<\/th>\n<\/tr>\n<tr>\n<td>Slot 1 (PCI-E x8)<\/td>\n<td>x8<\/td>\n<td>PCIe 2.0 x2<\/td>\n<td>PCH (C612)<\/td>\n<td>No<\/td>\n<\/tr>\n<tr>\n<td>Slot 2 (PCI-E x8)<\/td>\n<td>x8<\/td>\n<td>PCIe 2.0 x4<\/td>\n<td>PCH (C612)<\/td>\n<td>No<\/td>\n<\/tr>\n<tr>\n<td>Slot 3 (PCI-E x8)<\/td>\n<td>x8<\/td>\n<td>x8<\/td>\n<td>CPU (IOU0)<\/td>\n<td>Yes \u2014 x4x4<\/td>\n<\/tr>\n<tr>\n<td>Slot 4 (PCI-E x8)<\/td>\n<td>x8<\/td>\n<td>x8<\/td>\n<td>CPU (IOU1)<\/td>\n<td>Yes \u2014 multiple options<\/td>\n<\/tr>\n<tr>\n<td>Slot 5 (PCI-E x4)<\/td>\n<td>x4 (open-ended)<\/td>\n<td>x4<\/td>\n<td>CPU (IOU1)<\/td>\n<td>No (but see below)<\/td>\n<\/tr>\n<tr>\n<td>Slot 6 (PCI-E x16)<\/td>\n<td>x16<\/td>\n<td>x16<\/td>\n<td>CPU (IOU2)<\/td>\n<td>Yes \u2014 x8x8, x4x4x4x4, etc.<\/td>\n<\/tr>\n<\/table>\n<h2>The Critical Detail: Slot 5 Shares Lanes with the i350-AM2 NIC<\/h2>\n<p>Here&#8217;s the gotcha that caught me off guard. <strong>The onboard Intel i350-AM2 dual-port gigabit controller is connected directly from the CPU via PCIe 2.0 x4 lanes, and shares those lanes with Slot 5.<\/strong><\/p>\n<p>What this means in practice:<\/p>\n<ul>\n<li>When Slot 5 is <strong>empty<\/strong>, the i350 NIC gets all 4 lanes and works normally.<\/li>\n<li>When you plug a device into <strong>Slot 5<\/strong>, the 4 lanes are split between the NIC and the slot \u2014 making the total number of devices on this IOU1 port equals 2: the NVMe or whatever you put in Slot 5, plus the i350 NIC.<\/li>\n<\/ul>\n<p>This is the root cause of the &#8220;device not recognized&#8221; problem many people encounter.<\/p>\n<h2>Slots 1 &#038; 2: PCH-Connected, PCIe 2.0, No Bifurcation<\/h2>\n<p>Slot 1 and Slot 2 come from the C612 PCH (Platform Controller Hub). Slot 1 is electrically PCIe 2.0 x2, Slot 2 is PCIe 2.0 x4. 
<p>Neither slot <strong>can be bifurcated<\/strong>. Use them for SATA controllers, low-bandwidth cards, or other non-NVMe devices. If you plug a single NVMe drive into either via an adapter, it may work \u2014 but performance will be limited by PCIe 2.0 bandwidth, and you cannot split either slot into multiple devices.<\/p>\n<h2>Slot 3: CPU-Attached, x4x4 Only<\/h2>\n<p>Slot 3 is an x8 slot routed from the CPU via IOU0. It supports one bifurcation mode only: <strong>x4x4<\/strong>. This lets you run two NVMe drives (each x4) in a single x8 slot, provided you use an x8 to dual-M.2 adapter like the ASUS Hyper M.2 x8 Mini (NOT the full-height x16 version \u2014 that won&#8217;t fit in an x8 slot).<\/p>\n<p>The BIOS setting for Slot 3 is under PCIe\/PCI-X\/PnP \u2192 CPU1 Slot3 IOU0 (IIO0). Set it to x4x4.<\/p>\n<h2>Slot 6: The Big One \u2014 x16 with Full Bifurcation Options<\/h2>\n<p>Slot 6 is a full x16 slot on IOU2. Its bifurcation options are the most flexible:<\/p>\n<ul>\n<li><strong>x16<\/strong> \u2014 no split, for a single GPU or one x16 NVMe adapter<\/li>\n<li><strong>x8x8<\/strong> \u2014 two x8 devices (e.g. two GPUs or dual x8 adapters)<\/li>\n<li><strong>x4x4x4x4<\/strong> \u2014 up to 4 NVMe drives in a single x16 slot<\/li>\n<\/ul>\n<p>For a high-density NVMe setup, the x4x4x4x4 mode on Slot 6 combined with a quad-M.2 adapter is a popular choice \u2014 you get 4 U.2\/U.3 NVMe drives from a single slot.<\/p>\n<h2>Slot 4: The Tricky One \u2014 IOU1<\/h2>\n<p>Now we get to the most confusing slot on the board. <strong>Slot 4 is wired as x8 (PCIe 3.0), controlled by IOU1.<\/strong> The same IOU1 port also controls Slot 5 and the onboard i350-AM2 NIC.<\/p>\n<p>Slot 4 offers several bifurcation options in the BIOS:<\/p>\n<ul>\n<li><strong>x8x8<\/strong> \u2014 two x8 ports; Slot 4 itself remains a single x8 endpoint<\/li>\n<li><strong>x8x4x4<\/strong> \u2014 one x8 port plus two x4 ports (unusual, rarely used)<\/li>\n<li><strong>x4x4x8<\/strong> \u2014 the same split in reverse order<\/li>\n<li><strong>x4x4x4x4<\/strong> \u2014 four x4 endpoints<\/li>\n<li><strong>Auto<\/strong> \u2014 the default, which maps the slot as a single x8 endpoint<\/li>\n<\/ul>\n<h3>The Hidden Trap with Auto Mode on Slot 4<\/h3>\n<p>Here&#8217;s where most people hit a wall. Consider this common setup:<\/p>\n<ul>\n<li><strong>Slot 4<\/strong> \u2014 x8 to dual-M.2 adapter with 2 NVMe drives<\/li>\n<li><strong>Slot 5<\/strong> \u2014 single NVMe via x4 M.2 adapter<\/li>\n<\/ul>\n<p>If you leave Slot 4&#8217;s bifurcation on <strong>Auto<\/strong>, the BIOS treats Slot 4 as a single x8 endpoint. Now IOU1 sees <strong>three devices<\/strong>: the single endpoint on Slot 4, the NVMe in Slot 5, and the i350 NIC. But the bifurcation mapping doesn&#8217;t account for three devices properly \u2014 so <strong>Slot 5&#8217;s device simply won&#8217;t be detected<\/strong>.<\/p>\n<p>The fix: <strong>set Slot 4 bifurcation to x4x4x4x4<\/strong>.<\/p>\n<p>Why? Because with x4x4x4x4, IOU1 provides exactly four x4 endpoints. Two are consumed by the NVMe drives in Slot 4 (via your dual-M.2 adapter), one by the NVMe in Slot 5, and one by the onboard i350. <strong>4 devices on 4 endpoints \u2014 no conflict.<\/strong> A quick way to confirm the enumeration from Linux is sketched below.<\/p>
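<p>Once IOU1 is set to x4x4x4x4 and the system has been rebooted (the exact BIOS menu path is in the next section), a quick sanity check from Linux is to count the NVMe controllers and confirm that the onboard i350 still enumerates. The following is a minimal sketch using standard sysfs entries; 8086:1521 is the usual PCI ID for the copper i350, so verify yours with <code>lspci -nn<\/code> if it differs.<\/p>\n<pre><code>#!\/usr\/bin\/env python3\n# Minimal sketch: after enabling x4x4x4x4 on IOU1, check that every endpoint\n# shows up: the two NVMe drives in Slot 4, the drive in Slot 5, and the\n# onboard i350 NIC (assumes a Linux host).\nfrom pathlib import Path\n\nnvme = sorted(p.name for p in Path('\/sys\/class\/nvme').glob('nvme*'))\nprint('NVMe controllers found:', ', '.join(nvme) if nvme else 'none')\n\n# The i350 ports appear as ordinary PCI network functions. 0x8086\/0x1521 is the\n# common ID for the copper i350; confirm yours with lspci -nn if it differs.\nfor dev in sorted(Path('\/sys\/bus\/pci\/devices').iterdir()):\n    vendor = (dev \/ 'vendor').read_text().strip()\n    device = (dev \/ 'device').read_text().strip()\n    if (vendor, device) == ('0x8086', '0x1521'):\n        print('i350 port present at', dev.name)\n<\/code><\/pre>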
<h2>BIOS Setting: IOU1 (PCIe\/PCI-X\/PnP Configuration)<\/h2>\n<p>To change Slot 4 bifurcation, enter the BIOS and navigate to:<\/p>\n<pre><code>Advanced \u2192 PCIe\/PCI-X\/PnP Configuration \u2192 CPU1 Slot4 IOU1 (IIO1)<\/code><\/pre>\n<p>Set the desired bifurcation mode. The available options depend on your BIOS version. The X10SRi-F typically runs BIOS version 3.0 or 3.2 (the latest is 3.3 as of writing). If you don&#8217;t see all the bifurcation options listed above, <strong>check that you&#8217;re running the latest BIOS<\/strong> \u2014 earlier versions have fewer split modes.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/blog.surgeonbay.com\/wp-content\/uploads\/placeholder-x10srif-bios.jpg\" alt=\"X10SRi-F BIOS IOU1 Bifurcation Settings\" style=\"max-width:100%;height:auto;\" \/><\/p>\n<h2>Summary: Recommended Configuration for Max NVMe Density<\/h2>\n<table>\n<tr>\n<th>Slot<\/th>\n<th>What to Put<\/th>\n<th>Bifurcation Setting<\/th>\n<\/tr>\n<tr>\n<td>Slot 1<\/td>\n<td>Low-bandwidth card (SATA controller, etc.)<\/td>\n<td>N\/A (PCH PCIe 2.0 x2)<\/td>\n<\/tr>\n<tr>\n<td>Slot 2<\/td>\n<td>SATA controller or leave empty<\/td>\n<td>N\/A (PCH PCIe 2.0 x4)<\/td>\n<\/tr>\n<tr>\n<td>Slot 3<\/td>\n<td>Dual NVMe (x8\u2192dual M.2)<\/td>\n<td>x4x4 (IOU0)<\/td>\n<\/tr>\n<tr>\n<td>Slot 4<\/td>\n<td>Dual NVMe (x8\u2192dual M.2)<\/td>\n<td>x4x4x4x4 (IOU1)<\/td>\n<\/tr>\n<tr>\n<td>Slot 5<\/td>\n<td>Single NVMe<\/td>\n<td>Inherits from IOU1<\/td>\n<\/tr>\n<tr>\n<td>Slot 6<\/td>\n<td>Quad NVMe (x16\u2192quad M.2) or GPU<\/td>\n<td>x4x4x4x4 or x8x8<\/td>\n<\/tr>\n<\/table>\n<p>With this configuration, you can run <strong>up to 10 NVMe drives<\/strong> on a single X10SRi-F board (4 from Slot 6 + 2 from Slot 4 + 2 from Slot 3 + 1 from Slot 5 + 1 more from a single-M.2 adapter in Slot 2 at PCIe 2.0 speed) \u2014 all without a PCIe switch card.<\/p>\n<h2>Final Notes<\/h2>\n<ul>\n<li><strong>BIOS version matters.<\/strong> Update to the latest (3.3) before attempting bifurcation.<\/li>\n<li><strong>Use proper adapters.<\/strong> The ASUS Hyper M.2 x16 Gen 3 (V2) works well in Slot 6. For Slots 3 and 4, use the ASUS Hyper M.2 x8 Mini or a generic x8 to dual-M.2 adapter.<\/li>\n<li><strong>PCIe 3.0 limits.<\/strong> Each x4 link gives ~3.9 GB\/s \u2014 plenty for most NVMe drives. 
If you&#8217;re using PCIe 4.0 drives, they&#8217;ll run at 3.0 speeds on this platform, so don&#8217;t expect their full sequential bandwidth.<\/li>\n<li><strong>If Slot 5 doesn&#8217;t detect your drive,<\/strong> don&#8217;t waste time reseating cables \u2014 change Slot 4&#8217;s bifurcation to x4x4x4x4 first.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Everything you need to know about PCIe bifurcation on the Supermicro X10SRi-F motherboard \u2014 including IOU1, the Slot 5\/i350 NIC lane sharing trap, and BIOS configuration for up to 10 NVMe drives.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5],"tags":[7,6,8],"class_list":["post-38","post","type-post","status-publish","format-standard","hentry","category-hardware","tag-pcie-bifurcation","tag-supermicro","tag-x10sri-f"],"blocksy_meta":[],"_links":{"self":[{"href":"https:\/\/blog.surgeonbay.com\/index.php?rest_route=\/wp\/v2\/posts\/38","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.surgeonbay.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.surgeonbay.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.surgeonbay.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.surgeonbay.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=38"}],"version-history":[{"count":2,"href":"https:\/\/blog.surgeonbay.com\/index.php?rest_route=\/wp\/v2\/posts\/38\/revisions"}],"predecessor-version":[{"id":49,"href":"https:\/\/blog.surgeonbay.com\/index.php?rest_route=\/wp\/v2\/posts\/38\/revisions\/49"}],"wp:attachment":[{"href":"https:\/\/blog.surgeonbay.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=38"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.surgeonbay.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=38"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.surgeonbay.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=38"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}