As usual, I did the first run with OpenSpeedTest, which was installed in a Docker container as shown above.

Despite a 10 Gigabit connection on both server and client, and despite the installation on the M.2 SSD, only rather slow values could be achieved. I even reinstalled the container from scratch, but the results did not improve, and I could not find a solution.

The YABS benchmark doesn't work either, which is probably due to its lack of ARM support.

My next attempt to utilize as much bandwidth as possible went a little better: around 6 gigabits at peak via SMB from a Windows system, which is quite decent. Without explicit optimization and fine-tuning, reaching the full 10 gigabits under Windows is not easy due to protocol overhead and other limitations.
However, 10 gigabits can be achieved in the optimal case, as a detailed test with iPerf3 shows. With the following command, I reached 10 gigabits in both directions, minus some unavoidable protocol overhead:
.\iperf3.exe -c [QNAP-IP] -p 5201 -t 600 -P 4 -i 60 -w 2M
[SUM] 0.00-600.00 sec 655 GBytes 9.38 Gbits/sec sender
[SUM] 0.00-599.95 sec 655 GBytes 9.38 Gbits/sec receiver
To achieve these values, it was necessary to use four parallel streams (-P 4) and a larger TCP window (-w 2M). As mentioned, this is a known requirement when testing from Windows.
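The "unavoidable overhead" mentioned above can be estimated from the frame structure of Ethernet and TCP/IP alone. The following sketch assumes a standard 1500-byte MTU and TCP without options; with those assumptions, the theoretical maximum goodput on a 10 Gbit link works out to roughly 9.49 Gbit/s, so the measured 9.38 Gbit/s is already very close to the physical limit:

```python
# Estimate the maximum TCP goodput on 10GbE with a standard 1500-byte MTU.
# Per-frame overhead on the wire (bytes): preamble + SFD 8, Ethernet
# header 14, FCS 4, inter-frame gap 12 -> 38 bytes beyond the IP packet.
MTU = 1500
WIRE_OVERHEAD = 8 + 14 + 4 + 12              # 38 bytes per frame
IP_HEADER = 20
TCP_HEADER = 20                              # assumes TCP without options

frame_on_wire = MTU + WIRE_OVERHEAD          # 1538 bytes per frame
tcp_payload = MTU - IP_HEADER - TCP_HEADER   # 1460 bytes of useful data

goodput_gbit = 10.0 * tcp_payload / frame_on_wire
print(f"Theoretical max TCP goodput: {goodput_gbit:.2f} Gbit/s")
# -> about 9.49 Gbit/s
```

Jumbo frames (MTU 9000) would push this limit above 9.9 Gbit/s, which is one of the optimizations alluded to above.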
Power consumption
The consumption was measured at the socket, so it also includes the two installed NVMe SSDs. I removed all other drives in order to get as close as possible to the value of the NAS alone. The 10 Gigabit connection was made via a DAC cable. The actual energy requirement ultimately depends primarily on the hard disks or SSDs used in your setup.
| Off / WoL active | Standby | Data transfer |
|---|---|---|
| 0.7 watts | 11.6 watts | 16.2 watts |
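To put the standby figure into perspective, here is a quick back-of-the-envelope calculation. The 24/7 operation and the electricity price of 0.30 EUR/kWh are my own assumptions, not values from the measurement:

```python
# Rough annual cost of the measured 11.6 W standby draw,
# assuming 24/7 operation and an assumed price of 0.30 EUR/kWh.
standby_watts = 11.6
hours_per_year = 24 * 365            # 8760 hours
price_per_kwh = 0.30                 # EUR, assumption

kwh_per_year = standby_watts * hours_per_year / 1000
cost_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.1f} kWh/year -> {cost_per_year:.2f} EUR/year")
# -> about 101.6 kWh/year, roughly 30 EUR/year
```

At 0.7 watts in the off/WoL state, the same calculation yields only about 6 kWh per year, so shutting the NAS down when idle pays off noticeably.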