The cluster is then tested for performance using native IB, native Ethernet, and RoCE via OpenMPI 1.10.3 built with Mellanox OFED 3.3's MXM libraries. Running native IB, the nodes communicate across an MSB7700-ES2F EDR Mellanox switch; when running as an Ethernet link layer, they communicate across a 100Gb Juniper QFX5200 Data Center switch.

I have the problem that the Intel card does not provide the expected performance when benchmarking. The other three nodes have Mellanox ConnectX-5 100 GbE network cards, and all four are connected to an Alcatel 100 GbE switch. Between two Mellanox cards I achieve the expected 93 gigabit, but from a Mellanox card to the Intel card I only obtain about 75 to 80 gigabit.
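The benchmarks above are normally run with dedicated tools such as ib_write_bw (from the perftest package) or iperf3. As a rough illustration of what a point-to-point throughput measurement does, below is a minimal Python sketch (a hypothetical helper, not the tool used here, and with an assumed port number); a single-threaded TCP stream in Python will not saturate a 100 GbE link, so it only demonstrates the measurement idea, not line-rate testing.

```python
# Minimal point-to-point TCP throughput sketch (illustrative only).
# Start "server" on one node, then "client <server-host>" on the other,
# and compare the Mellanox-to-Mellanox path with the Mellanox-to-Intel path.
import socket
import sys
import time

PORT = 50007             # arbitrary test port (assumption)
CHUNK = 4 * 1024 * 1024  # 4 MiB per send
DURATION = 10            # seconds the client streams data

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            total = 0
            start = time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            elapsed = max(time.time() - start, 1e-6)
            print(f"received {total / 1e9:.2f} GB from {addr[0]} "
                  f"at {total * 8 / elapsed / 1e9:.2f} Gbit/s")

def client(host):
    payload = b"\0" * CHUNK
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, PORT))
        end = time.time() + DURATION
        sent = 0
        while time.time() < end:
            cli.sendall(payload)
            sent += len(payload)
    print(f"sent roughly {sent * 8 / DURATION / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: bench.py server | bench.py <server-host>")
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[1])
```

Running the same measurement over each pair of nodes is what exposes the asymmetry: the Mellanox-to-Mellanox path reaches the expected throughput while the path involving the Intel card falls short.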