29th May 2017
Lots of Intel processors feature "Hyper-Threading". This presents two virtual cores for each physical core, allowing two instruction streams (threads) to queue up for one core. I think this improves performance because whenever one thread is stalled, the core can run instructions from the other in the meantime.
In other words, instead of two threads each running at 50% speed, as would normally happen when time-slicing two threads on one core, they each run at maybe 60%. So the core is doing 120% as much work as usual, because it spends less time sitting idle waiting on memory accesses or pipeline stalls.
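That back-of-envelope arithmetic can be spelled out (the 50% and 60% per-thread figures are illustrative guesses, not measurements):

```python
# Two threads time-sliced on one core: each runs at ~50% of full
# speed, so the core's total throughput is 1.0x normal.
timesliced = 2 * 0.50

# With Hyper-Threading, suppose each thread runs at ~60% instead
# (illustrative figure): the core does ~1.2x the work, because
# stall cycles get filled with the other thread's instructions.
hyperthreaded = 2 * 0.60

gain = hyperthreaded / timesliced - 1
print(f"time-sliced: {timesliced:.0%}, hyper-threaded: {hyperthreaded:.0%}")
print(f"extra throughput: {gain:.0%}")  # -> 20%, within Wikipedia's 15-30% range
```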
The Wikipedia article mentions 15-30% extra performance. But is it really true? Let's find out.
I will compile a large C program (Ryzom) using 1, 2 and 4 build threads per physical core, each with and without Hyper-Threading enabled (I can disable it in the BIOS). The processor is an Intel Core i3-6320.
| Build threads | HT off | HT on |
|---|---|---|
| 1 thread/core | 47 mins | 47 mins |
| 2 threads/core | 49 mins | 35 mins |
| 4 threads/core | 52 mins | 35 mins |
As expected, HT makes no difference when running 1 thread per core (the Hyper-Threading is basically not being used). But with two threads per core, Hyper-Threading is 40% FASTER: 49 minutes down to 35.
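The 40% figure comes straight from the 2-threads-per-core row of the table:

```python
# Build times from the 2 threads/core row, in minutes.
ht_off = 49
ht_on = 35

# Speedup: how much faster the build completes with HT on.
speedup = ht_off / ht_on - 1
print(f"Hyper-Threading speedup: {speedup:.0%}")  # -> 40%
```

The 4-threads-per-core row gives an even bigger ratio (52 vs 35 minutes), but there the HT-off time is also worse than its own 1-thread baseline, so the 2-thread comparison is the fairer one.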
So there you have it. Hyper-Threading can give 40% extra performance with a real workload.
Update: I've now tested AMD's Hyper-Threading equivalent: "simultaneous multithreading". Processor used is a Ryzen 5 1600X.