Probably a very basic question, but it has confused the hell out of me. Say I have 100 Mbps internet at home. Scenario one: a router with 100 Mbps port speed, and I connect two PCs to it, each with a 100 Mbps NIC. Is it true that, ignoring other factors, I should be able to get close to, if not, a 100 Mbps connection on each of the PCs? Scenario two: if I have an (unmanaged) switch and connect the PCs to the switch, would I only end up getting 50 Mbps on each of the PCs (i.e., the switch essentially "halves" my internet speed if I connect 2 PCs to it, cuts it to a third if I connect 3 PCs, etc.)?
No switch or router does load balancing; you won't get 5 × 20 Mbit, it will be all over the place…
You could actually expect less than 20 Mbps because of congestion issues, assuming no QoS, and you're right that any port might get more at any particular moment. This is meant to be an illustration of bottlenecks, not an implication of layer-2 load balancing. The traffic just can't be more than what the bottleneck will allow.
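If it helps, here's the arithmetic as a minimal sketch. The `per_host_ceiling` helper is purely illustrative (it assumes an idealized even split, which, as noted above, real traffic won't give you), but it shows why the WAN link, not the switch, is the cap:

```python
# Illustrative only: aggregate throughput through a path can never exceed
# its slowest link, no matter how fast the LAN ports or NICs are.
# Assumes a hypothetical perfectly even split across hosts.

def per_host_ceiling(wan_mbps: float, lan_port_mbps: float, hosts: int) -> float:
    """Upper bound on what one host sees if all hosts pull at once."""
    fair_share = wan_mbps / hosts          # bottleneck split across hosts
    return min(fair_share, lan_port_mbps)  # a host can't beat its own NIC

for n in (1, 2, 3):
    print(f"{n} host(s) downloading: <= {per_host_ceiling(100, 100, n):.0f} Mbps each")
# 1 host(s) downloading: <= 100 Mbps each
# 2 host(s) downloading: <= 50 Mbps each
# 3 host(s) downloading: <= 33 Mbps each
```

With one PC active, the full 100 Mbps is available to it in either scenario; the "halving" only happens when both PCs saturate the WAN link at the same time.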
There's another variable here, which is the effect TCP and UDP flows have on each other. A number of TCP congestion control algorithms have been developed over the years. This paper, for example, shows that BBR congestion control is very unfair to CUBIC. IOW, if one PC is using BBR and another CUBIC, the first PC will hog most of the bandwidth.
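For anyone who wants to experiment with this: on Linux the kernel lets you pick the congestion control algorithm per socket via the `TCP_CONGESTION` option. A minimal sketch (Linux-only; setting `"bbr"` assumes the `tcp_bbr` module is loaded on your kernel, otherwise the call fails):

```python
import socket

# Which algorithms are available depends on the kernel:
# see /proc/sys/net/ipv4/tcp_available_congestion_control.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")  # or b"cubic"

# Read it back to confirm which algorithm the socket is using.
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16).strip(b"\x00"))
sock.close()
```

Run two downloads side by side, one socket on BBR and one on CUBIC, and you can reproduce the unfairness the paper describes on your own bottleneck link.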
Similarly, QUIC, which is a UDP-based alternative to TCP originally developed by Google and used heavily by Chrome, is quite unfair to TCP, as the images show.
Anyway, this is a bit off topic. The main point that the network is only as fast as the slowest link is correct.