2. Bandwidth: if you've wanted something faster than 10GbE over a single link, until now you've had little choice. While 40GbE exists, many view it as an expensive alternative. Recent pushes by two industry groups to flesh out 25GbE and 50GbE ahead of the IEEE have prompted that standards body to step up its efforts. All of this has accelerated the industry's approach toward a unified 100GbE server solution for 2017. Add to this Arista and others pushing Intel to provide CLR4 as an affordable four-channel 25G (100G total) optical transceiver, and things get even more interesting.
3. Latency: this has always been a strong reason for selecting Infiniband. Much of its advantage comes from moving the communications stack into user space and accelerating the wire-to-PCIe-bus path. These tricks are not unique to Infiniband; others have played them all for Ethernet, delivering high-performance Ethernet controllers and OS-bypass stacks that now offer similar latencies when compared at similar speeds. This is why nearly all securities traded worldwide pass through systems using Solarflare adapters and their OS-bypass stack, OpenOnload, while using standard UDP and TCP protocols. Low latency is no longer the exclusive domain of RDMA; it can now be achieved more easily, and transparently, with existing code over the UDP and TCP transport layers on industry-standard Ethernet.
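To make the "transparently, with existing code" point concrete, here is a minimal sketch of ordinary Berkeley-sockets UDP code. Nothing in it is vendor-specific; the same unmodified program can be launched under OpenOnload (for example via its `onload` wrapper, which intercepts the standard socket calls) to gain kernel bypass. The loopback round trip below is purely illustrative and assumes no particular hardware.

```python
import socket

# Plain UDP sockets: no RDMA verbs, no vendor API. A stack such as
# OpenOnload accelerates exactly this kind of code transparently,
# because it intercepts the standard libc socket calls at launch time;
# the application source never changes.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))        # let the OS pick a free port
port = rx.getsockname()[1]

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"tick", ("127.0.0.1", port))

data, _ = rx.recvfrom(64)        # receives b"tick"
rx.close()
tx.close()
```

The design point is the whole argument: because the acceleration happens beneath the sockets API rather than through a new verbs-style API, existing TCP/UDP applications pick it up without a rewrite.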
4. Single vendor: if you want Infiniband, there is really only one vendor offering an end-to-end solution. End-to-end solution providers are great because they give you a single throat to choke when things eventually don't work right. Conversely, many customers avoid adopting technologies with only a single provider, because it removes competition and choice from the equation. And when that vendor stumbles, as they always do, you're stuck. Ethernet, the open industry standard, affords you options while also providing interoperability.
5. Return to a single network: ever since Fibre Channel intruded into the data center nearly two decades ago, network engineers have been looking for ways to remove it. Then along came exascale (HPC by another name), and Infiniband was also pulled into the data center. Some will say Infiniband can carry all three workloads, but those people have clearly never dealt with bridging real-world Ethernet traffic with Infiniband traffic. At 100Gbps, Ethernet should have what it needs, in both features and performance, to provide a pipeline for all three protocols over a single generic network fabric.
Given all the above, it should be interesting to revisit this post in 2018 to see how the market reacted. For some perspective: back in December 2012 I wrote "How Ethernet Won the West" in this blog, where I predicted that both Fibre Channel and Infiniband would eventually disappear. Fibre Channel would fall to Fibre Channel over Ethernet (FCoE), which never really took off, and Infiniband because everyone else was abandoning it, including Jim Cramer. It turns out that while I've yet to be right about either, Cramer nailed it: since January 2013, adjusted for splits and dividends, Mellanox stock has dropped 14%.