CPU

Cache

Block sizes in code

When choosing a block size in your code (e.g. when reading a file in chunks), make it smaller than the L2 or L3 cache. This way the block is more likely to stay in cache while you operate on it. If the block is too big, the CPU has to continuously evict and reload parts of it between cache and RAM, which makes things much slower.
Keep in mind that other data will be in the cache too (like your program's stack). Around 10% of the smallest cache size among the target machines will probably be fine.
The best size can be determined by testing: measure performance with increasing block sizes and look for the sharp drop once the block no longer fits in cache.
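A minimal sketch in C of the chunked approach (assuming a standard C toolchain; BLOCK_SIZE and the byte-sum "work" are just illustrative placeholders): read the file block by block and do all per-block work while the data is still cache-resident.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Example value only: 256 KiB, comfortably below a typical L2/L3 size.
 * Sweep this (64 KiB, 256 KiB, 1 MiB, ...) and time the runs. */
#define BLOCK_SIZE (256 * 1024)

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned char *block = malloc(BLOCK_SIZE);
    if (!block) { fclose(f); return 1; }

    unsigned long long checksum = 0;
    size_t n;
    clock_t start = clock();

    /* Process the file one block at a time; each block should stay in
     * cache while the inner loop works on it. */
    while ((n = fread(block, 1, BLOCK_SIZE, f)) > 0) {
        for (size_t i = 0; i < n; i++)
            checksum += block[i];   /* placeholder for real per-block work */
    }

    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("block=%d KiB checksum=%llu time=%.3f s\n",
           BLOCK_SIZE / 1024, checksum, secs);

    free(block);
    fclose(f);
    return 0;
}

Compile with optimizations (e.g. -O2), run it repeatedly against the same large file while varying BLOCK_SIZE, and keep in mind that disk I/O and the OS page cache can mask the CPU cache effect, so warm (second-run) timings give cleaner numbers.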

Pointers

Assembly

Instruction latency, throughput and port usage tables: https://uops.info/table.html

Modulus (%)

Language Performance Comparisons Are Junk

Napkin Math

Throughput
DDR4 RAM Read/Write (2133 MHz): 17 GiB/s
DDR4 RAM Read/Write (3200 MHz, 2020 Standard): 25.6 GiB/s
DDR5 RAM Read/Write (5500 MHz, 2025 Standard): 45 GiB/s
DDR5 RAM Read/Write (8800 MHz): 70 GiB/s
SSD Sequential Read/Write (PCIe Gen 3, 2020 Standard): 3 GiB/s
SSD Sequential Read/Write (PCIe Gen 4/5, 2025 Standard): 6 GiB/s
SSD Sequential Read/Write (SATA III): 600 MiB/s
SSD Random Read/Write (depends on file size): 50 - 500 MiB/s
HDD Sequential Read: 250 MiB/s
HDD Sequential Write: 200 MiB/s
HDD Random Read: <20 MiB/s
HDD Random Write: 1 MiB/s
SD Card (UHS-I) Sequential Read/Write: 150 MiB/s
SD Card (UHS-I) Random Read/Write: <5 MiB/s
USB Stick (USB 3.0) Sequential Read/Write: 150 MiB/s
USB Stick (USB 3.0) Random Read/Write: <5 MiB/s
USB 2.0 Peak Read/Write: 50 MiB/s
USB 3.0 Peak Read/Write: 330 MiB/s
USB 3.1 Gen 2 / 3.2 Gen 1x2 / 3.2 Gen 2x1 Peak Read/Write (10 Gbit/s): 800 MiB/s
Home Network (1 Gbit/s): 100 MiB/s
Fiber Internet (100 - 1000 Mbit/s): 20 - 100 MiB/s
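A quick sanity check with these numbers: copying a 10 GiB file over a 1 Gbit/s home network at ~100 MiB/s takes roughly 10 * 1024 / 100 ≈ 100 seconds, reading it sequentially from a PCIe Gen 4 SSD at ~6 GiB/s takes under 2 seconds, and streaming it from DDR4 at ~25 GiB/s takes well under a second.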
Notes:

Cost of CPU operations: Infographics: Operation Costs in CPU Clock Cycles - IT Hare on Soft.ware

GitHub repo with programming-related "ballpark" numbers: GitHub - sirupsen/napkin-math: Techniques and numbers for estimating system's performance from first-principles

Great visualization of latency numbers (with historical data): https://colin-scott.github.io/personal_website/research/interactive_latency.html