Check out this thread. One of the contributors (Frennzy) describes it very well. I'll quote:
The "real" speed of gigabit ethernet is...
1Gbps.
That is to say, it will transfer bits at the rate of 1 billion per second.
How much data throughput you get is related to various and sundry factors:
NIC connection to system (PCI vs PCIe vs Northbridge, etc).
HDD throughput.
Bus contention.
Layer 3/4 protocol and associated overhead.
Application efficiency (FTP vs. SMB/CIFS, etc)
Frame size.
Packet size distribution (as relates to total throughput efficiency)
Compression (hardware and software).
Buffer contention, windowing, etc.
Network infrastructure capacity and architecture (number of ports, backplane capacity, contention, etc)
In short, you won't really know, until you test it. NetCPS is a good tool for this, as are many others.
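Since the quoted advice boils down to "measure it", here is a rough, minimal sketch of what tools like NetCPS do, using only Python's standard library: one side listens, the other blasts data for a few seconds, and the receiver reports the rate it actually saw. The port number, duration, and buffer size are arbitrary choices for illustration; a real tool handles multiple streams, warm-up, and socket tuning.

```python
# Rough single-stream TCP throughput check (standard library only).
# Run "python tput.py server" on one machine, then
# "python tput.py client <server-ip>" on another.
import socket
import sys
import time

PORT = 5001          # arbitrary test port
DURATION = 5.0       # seconds the client sends data
CHUNK = 64 * 1024    # 64 KiB send/receive buffer

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            total = 0
            start = time.monotonic()
            while True:
                data = conn.recv(CHUNK)
                if not data:          # client closed the connection
                    break
                total += len(data)
            elapsed = time.monotonic() - start
            mbits = total * 8 / 1e6 / elapsed
            print(f"received {total / 1e6:.1f} MB from {addr} "
                  f"in {elapsed:.2f}s -> {mbits:.0f} Mbit/s")

def client(host: str) -> None:
    payload = b"\x00" * CHUNK
    with socket.create_connection((host, PORT)) as sock:
        end = time.monotonic() + DURATION
        while time.monotonic() < end:
            sock.sendall(payload)

if __name__ == "__main__":
    if sys.argv[1:2] == ["server"]:
        server()
    elif len(sys.argv) == 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: tput.py server | tput.py client <host>")
```

Whatever number this prints, it will reflect your NICs, switch, TCP windowing, and so on, which is exactly the point being made in the thread.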
And this, later in the thread (my emphasis):
Stop thinking like this. Stop it now. All of you.
As much as you would like to figure out kilo- or mega-BYTE per second transfer, the fact is that it is variable, even when network speed remains constant. Network "speed" (bits per second) is absolute. Network throughput (actual payload data per second) is not.
To the OP: will you, in general, see faster data transfers when switching from 100Mbps to 1000Mbps? Almost definitely. Will it be anywhere close to the theoretical maximum? No. Will it be worth it? That's for you to decide.
If you want to talk about network speeds, talk about network speeds. If you want to talk about data throughput, talk about data throughput. The two are not tied together in a 1-1 fashion.
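To make the speed-versus-throughput gap concrete, here is a back-of-the-envelope calculation of the best case for a single TCP stream on gigabit Ethernet with standard 1500-byte MTU frames. The per-frame overhead figures (preamble, headers, FCS, interframe gap) are the standard Ethernet/IP/TCP values; the result deliberately ignores everything else in the list above (disk, SMB/CIFS overhead, bus contention, windowing).

```python
# Best-case payload throughput for one TCP stream on gigabit Ethernet,
# assuming back-to-back 1500-byte MTU frames and no TCP/IP options.
LINE_RATE = 1_000_000_000      # bits per second on the wire
MTU = 1500                     # IP packet size carried by each frame

# Per-frame overhead on the wire, in bytes:
PREAMBLE_SFD = 8
ETH_HEADER = 14                # dst MAC + src MAC + EtherType
FCS = 4
INTERFRAME_GAP = 12
wire_bytes = PREAMBLE_SFD + ETH_HEADER + MTU + FCS + INTERFRAME_GAP  # 1538

# Inside the 1500-byte IP packet: 20-byte IP header + 20-byte TCP header.
payload_bytes = MTU - 20 - 20  # 1460 bytes of actual data per frame

efficiency = payload_bytes / wire_bytes
payload_bits_per_s = LINE_RATE * efficiency

print(f"efficiency: {efficiency:.1%}")                      # ~94.9%
print(f"payload: {payload_bits_per_s / 1e6:.0f} Mbit/s")    # ~949
print(f"payload: {payload_bits_per_s / 8 / 1e6:.0f} MB/s")  # ~119
```

So even under ideal conditions you top out around 119 MB/s of actual data, and real transfers land below that for all the reasons listed in the quote, which is why the thread keeps insisting that bits per second and payload per second are not the same thing.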