For fibre you'll need a pair of 10GbE cards with SFP+ ports, fibre patch leads, and two SFP+ optical modules for each patch lead (one for each end). For a direct connection between two PCs it's basically 10GbE card -> optical module -> patch lead <- optical module <- 10GbE card (except that the optical modules actually sit inside the SFP+ ports on the cards). It's exactly like patching two PCs together with a normal Ethernet crossover cable, only with a few more bits of kit to break.
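Once the two cards are patched together it's worth a quick sanity check that the link actually negotiated at 10Gb before you start copying data. On a Linux box the kernel exposes the negotiated speed through sysfs, so a rough sketch looks like this (the interface name is just a placeholder for whatever your card enumerates as; on Windows you'd check the adapter status instead):

```python
from pathlib import Path

# Minimal link-speed check on Linux: sysfs reports the negotiated speed in Mbit/s.
# "enp3s0" is a placeholder; substitute whatever your 10GbE card shows up as.
iface = "enp3s0"
speed = Path(f"/sys/class/net/{iface}/speed").read_text().strip()
print(f"{iface} negotiated at {speed} Mbit/s")  # expect 10000 on a healthy link
```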
For copper it's a bit simpler: you just need a good CAT6 cable and a pair of 10GbE cards with copper (RJ45) ports. Bear in mind plain CAT6 is only rated for 10GbE up to around 55m, so go for CAT6a if you need a longer run.
Lastly, if you go for the SFP+ cards and your two devices are literally no more than a couple of feet apart, you can use direct attach (DAC) cables. These are heavy-duty copper cables with the transceiver module built in on each end. You're very limited by length and bend radius, and signal degradation becomes a problem beyond ~10m (according to some Mellanox docs I read a while back).
For my 'lab' I connect my workstation to each ESXi host with fibre patch leads and connect the hosts to each other with a 0.5m direct attach cable. I can then pull data from my SAN server (which is itself connected to each ESXi host over 4Gb Fibre Channel) via a VM at around 380MB/s (close to the ~400MB/s ceiling of a 4Gb FC link, so the FC side is the bottleneck rather than the 10GbE), and vMotion between hosts at 10Gbps.
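If you want to put a number on the link itself (rather than the storage behind it), iperf is the usual tool, but a rough sketch of a raw TCP throughput test looks something like this. The IPs, port and transfer size are placeholders (it assumes you've given each end of the direct link a static address), and a single Python stream won't necessarily saturate 10GbE, so treat it as a sanity check rather than a benchmark:

```python
import socket
import time

def receive(port=5001):
    """Run on one end of the link; reports how fast data arrives."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.time()
    while True:
        chunk = conn.recv(1 << 20)   # read in 1 MiB chunks
        if not chunk:
            break
        total += len(chunk)
    secs = time.time() - start
    print(f"{total / secs / 1e6:.0f} MB/s ({total * 8 / secs / 1e9:.2f} Gbit/s)")

def send(host="10.0.0.2", port=5001, gib=5):
    """Run on the other end; pushes roughly `gib` GiB of zeros at the receiver."""
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((host, port))
    block = b"\x00" * (1 << 20)      # 1 MiB per send
    for _ in range(gib * 1024):
        cli.sendall(block)
    cli.close()
```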
Hope that's useful. I'm just a lowly software dev playing with infrastructure, so don't take this as expert advice.