LINUX.ORG.RU

Low network speed


I installed a gigabit switch.
Copying files from one machine to the other over NFS
starts at 85 MB/s, and after about 5 seconds the speed drops sharply:
60 / 55 / 47 / 29 / 18.
I am copying with mc (Midnight Commander).
I checked the throughput with iperf:
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-71.2 sec 4.88 GBytes 589 Mbits/sec

Any ideas why this might be happening?
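
For reference, a run like the one above can be reproduced with a plain iperf client/server pair (a sketch; the address is made up, and the interval in the output suggests a ~70-second run):

# On the destination machine:
iperf -s

# On the source machine:
iperf -c 192.168.0.2 -t 70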

I checked the throughput with iperf:

[ ID] Interval Transfer Bandwidth

[ 5] 0.0-71.2 sec 4.88 GBytes 589 Mbits/sec

A perfectly normal speed for gigabit, especially if the NIC on one of the ends sits on the PCI bus.

Deleted
()
In reply to: comment by Deleted

[ ID] Interval Transfer Bandwidth
[ 5] 0.0-71.2 sec 4.88 GBytes 589 Mbits/sec

The speed in iperf is fine, I am not disputing that.
But in mc it is very unstable for some reason.
The gigabit NICs are not PCI cards; they are integrated on the motherboards.

ANGELOS
() topic author
In reply to: comment by amorpher

# lspci on the machine I am copying to

00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
00:06.1 Audio device: nVidia Corporation MCP55 High Definition Audio (rev a2)
00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
00:0a.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
00:0d.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
00:0e.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
07:00.0 VGA compatible controller: ATI Technologies Inc RV630 [Radeon HD 2600XT]
07:00.1 Audio device: ATI Technologies Inc RV630/M76 audio device [Radeon HD 2600 Series]

# lspci on the machine I am copying from

00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller (rev 02)
00:01.0 PCI bridge: Intel Corporation 82G33/G31/P35/P31 Express PCI Express Root Port (rev 02)
00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02)
00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02)
00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02)
00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02)
00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02)
00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02)
00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 02)
00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02)
00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02)
00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02)
00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92)
00:1f.0 ISA bridge: Intel Corporation 82801IB (ICH9) LPC Interface Controller (rev 02)
00:1f.2 IDE interface: Intel Corporation 82801IB (ICH9) 2 port SATA IDE Controller (rev 02)
00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
00:1f.5 IDE interface: Intel Corporation 82801I (ICH9 Family) 2 port SATA IDE Controller (rev 02)
01:00.0 VGA compatible controller: nVidia Corporation G86 [GeForce 8400 GS] (rev a1)
03:00.0 IDE interface: JMicron Technology Corp. JMB368 IDE controller
04:04.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8110SC/8169SC Gigabit Ethernet (rev 10)

ANGELOS
() topic author
In reply to: comment by ANGELOS

[:||||||||||||||||:]

Actually, here: http://nfs.sourceforge.net/

B10. Sometimes my server gets slow or becomes unresponsive, then comes back to life. I'm using NFS over UDP, and I've noticed a lot of IP fragmentation on my network. Is there anything I can do?

A. UDP datagrams larger than the IP Maximum Transfer Unit (MTU) must be divided into pieces that are small enough to be transmitted. If, for example, your network's MTU is 1524 bytes, the Linux IP layer must break a UDP datagram larger than 1524 bytes into separate packets, all of which must be smaller than the MTU. These separated packets are called fragments.

The Linux IP layer transmits each fragment as it is breaking up a UDP datagram, encoding enough information in each fragment so that the receiving end can reassemble the individual fragments into the original UDP datagram. If something happens that prevents a client from continuing to fragment a packet (e.g., the output socket buffer space in the IP layer is exceeded), the IP layer stops sending fragments. In this case, the receiving end has a set of fragments that is incomplete, and after a certain time window, it will drop the fragments if it does not receive enough to assemble a complete datagram. When this occurs, the UDP datagram is lost. Clients detect this loss when they have not received a reply from the server after a certain time interval, and recover by retransmitting the datagram.

Under heavy write loads, the Linux NFS client can generate many large UDP datagrams. This can quickly exhaust output socket buffer space on the client. If this occurs many times in a short time, the client sends the server a large number of fragments, but almost never gets a whole datagram's worth of fragments to the server. This fills the server's IP reassembly queue, causing it to become unreachable via UDP until it expels the useless fragments from the queue.

Note that the same thing can occur on servers that are under a heavy read load. If the server's output socket buffers are too small, large reads will cause them to overflow during IP fragmentation. The client's IP reassembly queue then fills with worthless fragments, and little UDP traffic can get to the client.

Here are some symptoms of this problem:

* You use NFS over UDP with a large wsize (relative to the network's MTU), and your application workload is write-intensive, or with a large rsize with a read-intensive application.
* You may see many fragmentation errors on your server or clients (netstat -s will tell the story; see the check after this list).
* Your server may periodically become very slow or unreachable.
* Increasing the number of threads on your server has no effect on performance.
* One or a small number of clients seem to make the server unusable.
* The network path between your client and server may have a router or switch with small port buffers, or the path may contain links that run at different speeds (100Mb/s and GbE).
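
A minimal version of that check (a sketch; the exact wording of the counters differs between kernel versions):

# Run on both client and server; rising counters such as
# "fragments dropped after timeout" or "packet reassemblies failed"
# point at exactly this problem:
netstat -s | grep -i -B1 -A2 frag

# UDP-level errors (receive buffer overruns, send errors) are another hint:
netstat -su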

The fix is to make Linux's IP fragmentation logic continue fragmenting a datagram even when output socket buffer space is over its limit. This fix appears in kernels newer than 2.4.20. You can work around this problem in one of several ways:

1. Use NFS over TCP. TCP does not use fragmentation, so it does not suffer from this problem. Using TCP may not be possible with older Linux NFS clients and servers that only support NFS over UDP.
2. If you can't use NFS over TCP, upgrade your clients to 2.4.20 or later.
3. If you can't upgrade your clients, increase the default size of your client's socket buffers (see below). 2.4.20 and later kernels do this automatically for the NFS client's socket buffers. See Section 5.3 of the NFS How-To for more information.
4. If your rsize or wsize is very large, reduce it. This will reduce the load on your client's and server's output socket buffers.
5. Reduce network congestion by ensuring your GbE links use full flow control, that your switch and router ports use adequate buffer sizes, and that all links are negotiating their fastest settings.
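
Workarounds 1, 3 and 4 translate into something like this (a sketch only; the server name, export path and buffer values are placeholders, and per the FAQ, 2.4.20+ kernels size the NFS client's socket buffers automatically):

# 1. Mount the export over TCP with a moderate rsize/wsize:
mount -t nfs -o tcp,rsize=8192,wsize=8192 nfs-server:/export /mnt/nfs

# 3. Raise the default and maximum socket buffer sizes:
sysctl -w net.core.rmem_default=262144
sysctl -w net.core.rmem_max=262144
sysctl -w net.core.wmem_default=262144
sysctl -w net.core.wmem_max=262144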

amorpher ★★★★★
()

Check sequential reads from the disk on the sending side.
Check sequential writes to the disk on the receiving side.
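
For example (a rough sketch; device and file names are placeholders, and the write test creates a 1 GB file):

# Sending side: raw sequential read from the disk
hdparm -t /dev/sda
# or via the filesystem, after flushing the page cache:
echo 3 > /proc/sys/vm/drop_caches
dd if=/path/to/large/file of=/dev/null bs=1M

# Receiving side: sequential write, forcing data to the platters
dd if=/dev/zero of=/mnt/testfile bs=1M count=1024 conv=fdatasync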

sdio ★★★★★
()

The real transfer rate is capped by the slowest component taking part in the process, and here that is the hard disk: the first few seconds land in the receiving side's page cache at near wire speed, and once the cache fills up, the rate drops to what the disk can actually sustain.

post-factum ★★★★★
()
In reply to: comment by ANGELOS

That is most likely raw sequential reading. Given the filesystem and fragmentation, low figures are entirely realistic.

post-factum ★★★★★
()