The local network is slow. Big problems of small networks, or Why is my network slow? Briefly about local computer network standards

Recently, users and administrators have increasingly been complaining that the new 1C configurations built on the managed application work slowly, in some cases unacceptably slowly. It is clear that the new configurations contain new functions and capabilities and are therefore more resource-hungry, but most users do not understand what primarily affects the performance of 1C in file mode. Let's try to fill this gap.

In a previous article we already touched on the impact of disk subsystem performance on the speed of 1C, but that study concerned local use of the application on a separate PC or a terminal server. Meanwhile, most small deployments involve working with a file database over a network, where one of the users' PCs acts as the server, or a dedicated file server built on an ordinary, most often also inexpensive, computer is used.

A small survey of Russian-language resources on 1C showed that this question is diligently avoided; if problems arise, the usual advice is to switch to client-server or terminal mode. It has also become almost common knowledge that configurations on the managed application work much slower than the ordinary ones. As a rule, the argument is "ironclad": "Accounting 2.0 just flew, and the 'troika' barely crawls." There is some truth in these words, so let's try to figure it out.

Resource consumption, first glance

Before we began this study, we set ourselves two goals: to find out whether managed application-based configurations are actually slower than conventional configurations, and which specific resources have the primary impact on performance.

For testing, we took two virtual machines running Windows Server 2012 R2 and Windows 8.1 respectively, giving them 2 cores of the host Core i5-4670 and 2 GB of RAM, which corresponds roughly to an average office machine. The server was placed on a RAID 0 array of two disks, and the client on a similar array of general-purpose disks.

As test bases, we took several Accounting 2.0 databases, release 2.0.64.12, which were then updated to 3.0.38.52; all configurations were run on platform 8.3.5.1443.

The first thing that catches the eye is the significantly increased size of the "troika" information base, as well as its much greater appetite for RAM:

We are ready to hear the usual "what did they stuff into that 'three'", but let's not rush. Unlike users of the client-server version, which requires a more or less qualified administrator, users of the file version rarely think about database maintenance. Nor do the employees of the specialized companies that service (read: update) these databases think about it very often.

Meanwhile, the 1C information base is a full-fledged DBMS of its own format, and it too requires maintenance; there is even a tool for this called Testing and Correcting the Information Base. Perhaps the name played a cruel joke here, implying that this is a tool for troubleshooting problems, but low performance is also a problem, and restructuring and reindexing, along with table compression, are well-known database optimization tools for any DBMS administrator. Shall we check?

After applying the selected actions, the database sharply “lost weight”, becoming even smaller than the “two”, which no one had ever optimized, and RAM consumption also decreased slightly.

Later, after loading new classifiers and directories, creating indexes, etc., the size of the base will grow again; on the whole, "three" bases are larger than "two" bases. However, that is not the main point: if the second edition was content with 150-200 MB of RAM, the new edition needs half a gigabyte, and this value should be taken into account when planning the resources needed to work with the program.

Network

Network bandwidth is one of the most important parameters for network applications, especially for 1C in file mode, which moves significant amounts of data across the network. Most networks in small enterprises are built on inexpensive 100 Mbit/s equipment, so we began testing by comparing 1C performance in 100 Mbit/s and 1 Gbit/s networks.

What happens when you open a 1C file database over the network? The client downloads a fairly large amount of data into its temporary folders, especially on the first, "cold" start. At 100 Mbit/s we expectedly run into the channel width, and loading can take a significant amount of time, in our case about 40 seconds (one division on the graph corresponds to 4 seconds).
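
A quick back-of-the-envelope check of these numbers: at 100 Mbit/s the theoretical ceiling is about 12.5 MB/s, so a cold start of roughly 40 seconds corresponds to several hundred megabytes of data pulled from the server. The sketch below only illustrates this arithmetic; the 500 MB payload is an assumed figure, since the article does not state exactly how much data the client downloads.

```python
# Rough estimate of load time when it is limited purely by network bandwidth.
# The 500 MB payload is an illustrative assumption, not a measured value.

def transfer_time_seconds(payload_mb: float, link_mbit_s: float) -> float:
    """Time to move payload_mb megabytes over a link of link_mbit_s megabits per second."""
    effective_mb_per_s = link_mbit_s / 8.0  # bits -> bytes, ignoring protocol overhead
    return payload_mb / effective_mb_per_s

payload = 500  # MB, assumed for illustration
for speed in (100, 1000):  # Mbit/s
    print(f"{speed:>4} Mbit/s: ~{transfer_time_seconds(payload, speed):.0f} s")
# 100 Mbit/s -> ~40 s, 1000 Mbit/s -> ~4 s; real figures also depend on protocol
# overhead, disk speed on both ends and the client-side cache.
```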

The second launch is faster, since some of the data is stored in the cache and remains there until a reboot. Switching to a gigabit network significantly speeds up program loading, both "cold" and "hot", and the ratio between the values is preserved. We therefore decided to express the result in relative values, taking the largest value of each measurement as 100%:

As the graphs show, Accounting 2.0 loads twice as fast at any network speed, and the move from 100 Mbit/s to 1 Gbit/s speeds up loading roughly fourfold. In this mode there is no difference between the optimized and non-optimized "troika" databases.

We also checked the influence of network speed on operation in heavy modes, for example during group reposting of documents. The result is again expressed in relative values:

Here it gets more interesting: the optimized "three" base in a 100 Mbit/s network works at the same speed as the "two", while the non-optimized one is twice as slow. On gigabit the proportions remain the same: the unoptimized "three" is also twice as slow as the "two", and the optimized one lags behind by a third. In addition, the move to 1 Gbit/s cuts the execution time threefold for edition 2.0 and twofold for edition 3.0.

In order to evaluate the impact of network speed on everyday work, we used Performance measurement, performing a sequence of predetermined actions in each database.

For everyday tasks, network throughput is in fact not a bottleneck: an unoptimized "three" is only 20% slower than the "two", and after optimization it turns out to be about as much faster - the advantages of working in thin client mode are evident. The move to 1 Gbit/s gives the optimized base no advantage, while the unoptimized one and the "two" start working faster, showing only a small difference between themselves.

From the tests performed, it is clear that the network is not a bottleneck for the new configurations, and the managed application runs even faster than the ordinary one. You can also recommend switching to 1 Gbit/s if heavy tasks and database loading speed are critical for you; in other cases, the new configurations allow you to work effectively even in slow 100 Mbit/s networks.

So why is 1C slow? We'll look into it further.

Server disk subsystem and SSD

In the previous article we achieved an increase in 1C performance by placing the databases on an SSD. Perhaps the performance of the server's disk subsystem is insufficient? We measured the performance of the server's disks during group reposting in two databases at once and got a rather optimistic result.

Despite the relatively large number of input/output operations per second (IOPS) - 913 - the queue length did not exceed 1.84, which is a very good result for a two-disk array. From this we can assume that a mirror made of ordinary disks will be enough for the normal operation of 8-10 network clients in heavy modes.
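
For readers who want to repeat this kind of measurement: the figures above come from the server's performance counters. Below is a minimal sketch of how IOPS can be sampled with the psutil library; this is only an assumed, convenient way to get similar numbers, the article itself does not say which tool was used, and queue length is not exposed by psutil (on Windows it is the "Avg. Disk Queue Length" counter in Performance Monitor).

```python
# Minimal IOPS sampler: take two snapshots of the disk counters and divide the
# delta by the interval. Requires the psutil package (pip install psutil).
import time
import psutil

def sample_iops(interval: float = 5.0) -> float:
    """Average read+write operations per second over `interval` seconds (all disks)."""
    before = psutil.disk_io_counters()
    time.sleep(interval)
    after = psutil.disk_io_counters()
    ops = (after.read_count - before.read_count) + (after.write_count - before.write_count)
    return ops / interval

if __name__ == "__main__":
    print(f"~{sample_iops():.0f} IOPS over the last 5 seconds")
    # Queue length is not available through psutil; on Windows look at the
    # "Avg. Disk Queue Length" counter in Performance Monitor instead.
```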

So is an SSD needed on the server? The best way to answer this question is by testing, which we carried out using the same method; the network connection is 1 Gbit/s everywhere, and the result is again expressed in relative values.

Let's start with the loading speed of the database.

It may seem surprising to some, but the SSD on the server does not affect the loading speed of the database. The main limiting factor here, as the previous test showed, is network throughput and client performance.

Let's move on to group reposting:

We have already noted above that disk performance is quite sufficient even for heavy modes, so SSD speed has no effect here either, except for the unoptimized base, which on the SSD caught up with the optimized one. Actually, this once again confirms that optimization operations organize the information in the database, reducing the number of random I/O operations and increasing the speed of access to it.

In everyday tasks the picture is similar:

Only the non-optimized database benefits from the SSD. You can, of course, buy an SSD, but it would be much better to think about timely database maintenance. And do not forget to defragment the partition with the infobases on the server.

Client disk subsystem and SSD

We analyzed the influence of an SSD on the speed of locally installed 1C in a previous article; much of what was said there is also true for working in network mode. 1C does use disk resources quite actively, including for background and routine tasks. In the figure below you can see how Accounting 3.0 quite actively accesses the disk for about 40 seconds after loading.

But at the same time, you should be aware that for a workstation where active work is carried out with one or two infobases, the performance of a regular mass-market HDD is quite sufficient. Buying an SSD can speed up some processes, but you will not notice a radical acceleration in everyday work, since, for example, database loading will be limited by network bandwidth.

A slow hard drive can slow down some operations, but in itself cannot cause a program to slow down.

RAM

Despite the fact that RAM is now indecently cheap, many workstations continue to work with the amount of memory that was installed when they were purchased. This is where the first problems lie in wait. Given that the average "troika" needs about 500 MB of memory, we can assume that a total of 1 GB of RAM will not be enough to work with the program.

We reduced the system memory to 1 GB and launched two information databases.

At first glance everything is not so bad: the program has curbed its appetite and fit into the available memory, but let's not forget that the need for operational data has not changed, so where did it go? It was pushed out to disk: cache, swap file, etc. The essence of this operation is that data not needed at the moment is sent from fast RAM, of which there is not enough, to slow disk memory.
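
Whether this is happening on a particular workstation is easy to check by looking at memory and swap usage. The sketch below uses psutil purely as a convenient way to read the same numbers Task Manager shows, and the 500 MB threshold is just the rule of thumb from this article.

```python
# Quick check for memory pressure on a 1C workstation: little available RAM plus
# an actively used swap file means data is being pushed out to slow disk memory,
# exactly the scenario described above. psutil is an assumed convenience here.
import psutil

mem = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"RAM : {mem.total / 2**20:.0f} MB total, {mem.available / 2**20:.0f} MB available ({mem.percent}% used)")
print(f"Swap: {swap.total / 2**20:.0f} MB total, {swap.used / 2**20:.0f} MB used ({swap.percent}% used)")

if mem.available < 500 * 2**20:
    print("Less than 500 MB of RAM available: a 'troika' infobase will start swapping.")
```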

Where does this lead? Let's see how system resources are used in heavy operations, for example, let's start group reposting in two databases at once. First on a system with 2 GB of RAM:

As we can see, the system actively uses the network to receive data and the processor to process it; disk activity is insignificant; during processing it increases occasionally, but is not a limiting factor.

Now let's reduce the memory to 1 GB:

The situation is changing radically, the main load now falls on the hard drive, the processor and network are idle, waiting for the system to read the necessary data from the disk into memory and send unnecessary data there.

At the same time, even subjectively, working with two open databases on a system with 1 GB of memory turned out to be extremely uncomfortable: catalogs and journals opened with a significant delay and heavy disk access. For example, opening the Sales of Goods and Services journal took about 20 seconds and was accompanied all this time by high disk activity (highlighted with the red line).

To objectively evaluate the impact of RAM on the performance of configurations based on the managed application, we carried out three measurements: the loading speed of the first database, the loading speed of the second database, and group reposting in one of the databases. Both databases are completely identical and were created by copying the optimized database. The result is expressed in relative units.

The result speaks for itself: while the loading time grows by about a third, which is still quite tolerable, the time to perform operations in the database triples - there can be no talk of comfortable work in such conditions. By the way, this is a case where buying an SSD can improve the situation, but it is much easier (and cheaper) to deal with the cause rather than the consequences and simply buy the right amount of RAM.

A lack of RAM is the main reason why working with the new 1C configurations turns out to be uncomfortable. A machine with 2 GB of RAM should be considered the minimum suitable configuration. At the same time, keep in mind that in our case "greenhouse" conditions were created: a clean system with only 1C and Task Manager running. In real life, a work computer usually also has a browser and an office suite open, an antivirus running, and so on, so budget 500 MB per database plus some reserve, so that during heavy operations you do not run into a memory shortage and a sharp drop in performance.
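
The rule of thumb above is easy to turn into a small calculation. In the sketch below only the 500 MB per open infobase comes from this article; the other per-application figures are rough assumptions for a typical office machine.

```python
# Back-of-the-envelope RAM sizing for a file-mode 1C workstation:
# 500 MB per open infobase (from the article) plus typical office software.
# All figures other than the 500 MB per infobase are assumptions.
OTHER_SOFTWARE_MB = {
    "Windows and services": 1024,  # assumption
    "browser": 500,                # assumption
    "office suite": 300,           # assumption
    "antivirus": 200,              # assumption
}

def recommended_ram_mb(open_infobases: int, reserve_mb: int = 512) -> int:
    """Recommended RAM in MB for a workstation with `open_infobases` databases open."""
    return sum(OTHER_SOFTWARE_MB.values()) + 500 * open_infobases + reserve_mb

for n in (1, 2):
    print(f"{n} infobase(s): ~{recommended_ram_mb(n) / 1024:.1f} GB recommended")
```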

CPU

Without exaggeration, the central processor can be called the heart of the computer, since it is the CPU that ultimately performs all the computation. To evaluate its role, we ran another set of tests, the same as for RAM, reducing the number of cores available to the virtual machine from two to one; the test was performed twice, with 1 GB and with 2 GB of memory.

The result turned out to be quite interesting and unexpected: a more powerful processor took on the load quite effectively when resources were lacking, but the rest of the time it gave no tangible advantage. 1C Enterprise in file mode can hardly be called an application that actively uses processor resources; it is rather undemanding. And in difficult conditions the processor is loaded not so much by the calculations of the application itself as by servicing overhead: additional input/output operations and so on.

Conclusions

So why is 1C slow? First of all, because of a lack of RAM; in that case the main load falls on the hard drive and the processor. And if they are not particularly fast, as is usually the case in office configurations, then we get the situation described at the beginning of the article: the "two" worked fine, but the "three" is ungodly slow.

In second place is network performance; a slow 100 Mbit/s channel can become a real bottleneck, but at the same time, the thin client mode is able to maintain a fairly comfortable level of operation even on slow channels.

Then you should pay attention to the disk drive; buying an SSD is unlikely to be a good investment, but replacing the drive with a more modern one would be a good idea. The difference between generations of hard drives can be assessed from the following material: .

And finally the processor. A faster model, of course, will not be superfluous, but there is little point in increasing its performance unless this PC is used for heavy operations: group processing, heavy reports, month-end closing, etc.

We hope this material will help you quickly understand the question “why 1C is slow” and solve it most effectively and without extra costs.

01.04.2004, 19:16

:virus: I'm not a very experienced admin, sorry for the awkward question. I suspect the local network (and Telnet too) is slowing down because of broadcasts (all the lights on the switch are blinking and 25% of packets constantly fail to get through!!!). Does anyone know a program or a way to track down which machine they are being sent from, or how to block them?

-----
changed my hat
PrayeR

01.04.2004, 20:14

I suspect the local network (and Telnet too) is slowing down because of broadcasts (all the lights on the switch are blinking and 25% of packets constantly fail to get through!!!)
Why did you decide it was because of broadcasts?
Describe what kind of network it is, whether there is a domain, what operating systems...

And the topic should have been called something else

01.04.2004, 20:30

snake2005

Try sniffing - you will see what packets are running around the network. And what kind of network is it, anyway? If it's an ordinary LAN under load, then losing only 25% is not even that bad.
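
(A modern aside to this advice: the "sniff and see" approach takes only a few lines today. The sketch below, which assumes the scapy package and administrator/root privileges, simply counts broadcast frames per source MAC for 30 seconds so the chattiest machine stands out; Wireshark or tcpdump show the same picture interactively.)

```python
# Count broadcast frames per source MAC address for 30 seconds to find the
# machine flooding the segment. Requires scapy and admin/root privileges
# (on Windows also Npcap); Wireshark gives the same picture interactively.
from collections import Counter
from scapy.all import Ether, sniff

senders = Counter()

def note(pkt):
    if pkt.haslayer(Ether):
        senders[pkt[Ether].src] += 1

# BPF filter: only frames addressed to ff:ff:ff:ff:ff:ff
sniff(filter="ether broadcast", prn=note, store=False, timeout=30)

for mac, count in senders.most_common(10):
    print(f"{mac}  {count} broadcast frames")
```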

02.04.2004, 00:25

Original message from sky7
snake2005
It would be nice to change the title of the topic.

Yeah, and we have a separate section on networks... For now I’ve moved it there...

05.04.2004, 18:53

1. What network equipment?
2. types of links between switches?
3. IP static or dynamic?
4. how many switches are in the network and how are they connected?

The way you asked the question, it is simply impossible to answer it.
Once you answer these questions, I’ll ask the next ones, only then will it be possible to understand what’s going on.

05.04.2004, 22:47

Do some sniffing, look at the source MAC of the broadcasts (or rather, of the requests) and check the ARP tables to see who is doing this strange thing. A similar case is possible when two switch ports are shorted to each other (not on all switches). Or a client installed a switch and looped a couple of its ports - Windows broadcasts are still forwarded, so they multiply, and go figure out who is to blame.

06.04.2004, 11:58

asdus:
and go figure out who is to blame here

I agree - the problems you mention can cause a broadcast storm, which is why we need to know what equipment is in use; then it will be clear what to do next. A sniffer will show you the broadcast storm, but in most cases it will not let you track it down, especially in cases like: "A similar case is possible when two switch ports are shorted to each other (not on all switches). Or a client installed a switch and looped a couple of ports."

06.04.2004, 13:42

snake2005, what switch is it? You can look at the statistics on it, including broadcasts - you can see which port they are coming from.

06.04.2004, 16:18

snake2005
Um... well, for example, this can happen if you have a 100 Mbit network with about 80-90 computers, the groups are connected through simple hubs, everything runs on DHCP, and all of it goes over SSH with key exchange after every connection =))))
I've just described some kind of hell... =)

But seriously, such a thing can happen if some computers in a switched segment have 10 Mbit cards and others have 100...

06.04.2004, 16:56

A case from my practice (a month ago):
The department has a network of 10 machines, a Win2000 domain and a bunch of subnets, everything on switches. The machines are old, mostly still on Win98, and the network works fine. Then the machines were replaced with Celeron 2000 boxes (Asus P4S533 motherboards with built-in SiS 900-based network cards) and at the same time the OS was changed to WinXP... and it began... the network slowed down terribly, it became almost impossible to copy anything over the network to another machine, connections between machines dropped at any moment, the speed was extremely low...
No matter what we did, it got to the point where we even raised the domain to Win2003... no difference. I should say that everyone had a fixed IP. We decided to set up DHCP - the situation was the same...
To avoid changing the TCP/IP settings on the machines back, I reserved the IPs in DHCP by MAC address - and discovered that all the machines had the same MAC address!!!

06.04.2004, 17:40

Drill:
MAC address is the same!!!

Drill:
the default drivers for the network cards

The MAC address can be changed programmatically (but this will not physically change it), but drivers do not do this.

A MAC address is meant to be a unique serial number assigned to each Ethernet network adapter at manufacture.
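
(A modern aside: a duplicate-MAC situation like the one described in this thread can also be spotted from a single machine with an ARP sweep. The sketch below assumes the scapy package, admin/root privileges and a 192.168.0.0/24 subnet, which is only an example; the DHCP server's lease table, as used above, works just as well.)

```python
# ARP-scan the local subnet and report MAC addresses that answer for more than
# one IP - the symptom described in this thread. Requires scapy and admin/root
# privileges; the subnet below is an example and must be adjusted.
from collections import defaultdict
from scapy.all import ARP, Ether, srp

answered, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.0.0/24"),
    timeout=3,
    verbose=False,
)

by_mac = defaultdict(list)
for _, reply in answered:
    by_mac[reply.hwsrc].append(reply.psrc)

for mac, ips in by_mac.items():
    marker = "  <-- duplicate!" if len(ips) > 1 else ""
    print(f"{mac}: {', '.join(ips)}{marker}")
```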

06.04.2004, 19:53

Appz_newS:
What is the relationship between drivers and the MAC address of the hardware? Does MAC depend on drivers?

On very old network cards the MAC address had to be set manually, and that is exactly what the driver did, but this faded into oblivion about 10 years ago.

A similar situation could occur on Realtek cards and the like if the system was installed by cloning the system disk. This also happened to me once; unfortunately I don't remember the exact model of the card, but I'm sure it was a Realtek.

Appz_newS:
The problem is that when installing WinXP everyone used the default network card drivers from the WinXP package. The problem was solved by installing the drivers from the motherboard's CD... Now the network flies...

It seems to me that it would have been enough to simply remove the driver and install it again (no matter where from).

06.04.2004, 21:13

Unfortunately (I say this as a networker by profession), 99% of Windows systems let you change the MAC address of the network adapter without going far - right in the properties of the network card.
As for cloning the system, it depends very little on the model of the network interface ;-) In principle, not at all.

06.04.2004, 21:20

Appz_newS
What is the relationship between drivers and the MAC address of the hardware? Does MAC depend on drivers?
Don't believe it?
The address is "00-E0-06-09-55-66" on all the machines. I googled and got this answer:
Q. Why many of my P4S533-VM motherboard all use the same MAC address "00-E0-06-09-55-66"? Is there an utility to recover it?
A. The problem was caused by customer using WinXP default driver. Please use the driver updated from support CD-disk or download site to resolve this problem.

The only difference is the motherboard - mine is a P4S533-MX

Here's another one (http://maryno.net/forum/viewthread.php?tid=1174)

07.04.2004, 00:31

In 90% of cases it is a speed/duplex mismatch between the switch and the network cards; this is especially noticeable with large packets.

08.04.2004, 11:49

titano:
In 90% of cases it is a speed/duplex mismatch between the switch and the network cards; this is especially noticeable with large packets
Or the novice administrator's favourite: set 100 Mbit/full duplex on the card and leave auto-negotiation on the switch. I've run into this disease about 30 times in different offices and with different switches :) :) :). The cure is a slap on the hands.

08.04.2004, 13:23

Alexs-B

08.04.2004, 13:36

SSTOP:
What exactly is the problem, if the switch supports 100 Mbit?
The problem is that in this case you get 100/full duplex on one side and 100/half duplex on the other, and 80-90% dropped packets.
Read up on how auto-negotiation determines the port mode and why it was developed.

08.04.2004, 13:59

And what does snake2005 himself have to say?
The questions addressed to him remain unanswered, although the round-table debate has become serious...
It is still unclear whether he has solved his problem, or whether he no longer needs to...

08.04.2004, 14:37

Alexs-B

08.04.2004, 15:41

SSTOP:
And where does half-duplex come from on a switch, if it’s not a secret?
Well, from the Fast Ethernet specification, apparently

08.04.2004, 15:53

Alexs-B
I realize I'm being a pest, but I would like a more detailed answer. Still, why, when you force the network card to 100/full duplex, will the switch end up at 100/half duplex?

08.04.2004, 16:35

How to set Speed/Duplex correctly

We configure -> we get:

Card        Switch       Card (result)    Switch (result)    Outcome
10/half     auto/auto    10/half          10/half            OK
10/full     auto/auto    10/full          10/half            bad
10/auto     auto/auto    10/full          10/full            OK
100/half    auto/auto    100/half         100/half           OK
100/full    auto/auto    100/full         100/half           bad
100/auto    auto/auto    100/full         100/full           OK
auto/half   auto/auto    100/half         100/half           OK
auto/full   auto/auto    100/full         100/full           OK
auto/auto   auto/auto    100/full         100/full           OK

Source of information: any book on entry-level network administration, or the switch manual (not just any one - the decent ones describe this).
The reason is the procedure for determining the speed and duplex mode on the interfaces. If you are interested, I can tell you in more detail, but this is a separate topic.
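
(A modern aside to this table: the speed and duplex mode a link actually negotiated can be read programmatically. The sketch below uses the psutil library as an assumed convenience; on Linux `ethtool <iface>` and on a managed switch its port statistics show the same thing.)

```python
# Print the negotiated speed and duplex mode of every active network interface,
# which is exactly what the speed/duplex argument above is about. psutil is an
# assumed convenience; ethtool or the switch's port statistics work as well.
import psutil

DUPLEX = {
    psutil.NIC_DUPLEX_FULL: "full",
    psutil.NIC_DUPLEX_HALF: "half",
    psutil.NIC_DUPLEX_UNKNOWN: "unknown",
}

for name, stats in psutil.net_if_stats().items():
    if not stats.isup:
        continue
    print(f"{name}: {stats.speed} Mbit/s, {DUPLEX[stats.duplex]} duplex")
    if stats.duplex == psutil.NIC_DUPLEX_HALF:
        print("  half duplex on a switched link usually means a speed/duplex mismatch")
```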

08.04.2004, 18:03

Alexs-B
Yes, perhaps I’m interested... in a separate topic, ok?

08.04.2004, 18:15

Alexs-B
Quite a strange table, in my opinion... Is Cisco's "Internetworking Technologies Handbook" an authoritative enough book for you? It states exactly the opposite: with auto-negotiation, full duplex has priority over half duplex, so the result will not be "card 100/full, switch auto -> 100/full vs 100/half, bad", but "card 100/full, switch auto -> 100/full vs 100/full, everything fine". Which, by the way, is far more logical: why choose what is obviously not the best of all the really possible connection options?
P.S. As for a separate topic, I don’t mind at all. :)

08.04.2004, 20:38

SSTOP:
Is Cisco's "Internetworking Technologies Handbook" an authoritative enough book for you?
And for you, is "Basics of Configuring Cisco Routers" - course 1 of the standard training - authoritative enough?
Open a new topic on Monday and let's discuss it!

08.04.2004, 21:53

Alexs-B
Cisco (http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/ethernet.htm#xtocid29) it is, then... There's no point in starting a new topic. Once you turn to the original source (http://www.ieee802.org/3/ab/public/feb98/an1.pdf), everything falls into place. So you were slapping the admins' hands in vain. :)

09.04.2004, 11:55

SSTOP:
So you were slapping the admins' hands in vain.
Why in vain? This information is fairly common knowledge, and how a link decides which mode to come up in was taught to me 10 years ago, in my fourth year at the institute, for 10 Mbit networks - and in much more detail than Cisco describes it.

09.04.2004, 19:22

Alexs-B
That is, despite the links given, you insist that the switch will have half-duplex, do I understand correctly?

12.04.2004, 12:02

I suggest you first run an experiment. Take a switch (preferably a managed one, so the port status can be viewed) or two computers and a crossover cable, reproduce the situation and look at the result. That will be somewhat faster than arguing, and the result will be immediate.

12.04.2004, 13:19

Alexs-B
An eccentric man... Is the official standard not enough for you? :)

12.04.2004, 13:39

Is it hard to check?

12.04.2004, 13:55

Alexs-B

12.04.2004, 14:22

I've been watching your argument for a while. Here are the results of an experiment: card on auto, switch on auto - the result is full duplex.

12.04.2004, 14:27

Appz_newS
If it's auto-auto, there are no questions. We are arguing about what happens when the network card is forced to 100/full and the switch is left on auto.

12.04.2004, 14:38

SSTOP, ok, I'll check it now. I'll write the result directly in this post so as not to flood.

I checked. Card set to 100/full, switch on auto, computer rebooted. Result: full duplex on the switch.
Intel network card, HP J4813A ProCurve Switch 2524.

12.04.2004, 15:00

SSTOP:
Why would it? Here you go: built-in Intel network card, 3Com 4300 switch. On the network card we change Auto to 100/full, and on the switch we get 100/full - everything as it should be.
Did you remember to bring the link down and up? ;) (physically)

12.04.2004, 16:57

Alexs-B
Who do you take me for? :rolleyes:

12.04.2004, 17:14

Well, maybe this works on modern cards (especially integrated ones, where the link is already up before the computer finishes booting), but on older ones it does not.
And this is for fans of primary sources, SSTOP: http://www.cisco.com/warp/public/473/46.html#gen_tr_10_100 (I searched for 7 minutes :))

:cool:
I'm talking about the classic cards. This time I checked it on a new Asus with an integrated card - and indeed everything is fine. On a P II with a 3Com 905B - the classic picture.
So most likely this is because the integrated card comes up before the system starts and does not look at its settings.

Added after 4 minutes:
Alexs-B:
Who do you take me for?
Nobody takes you for anything. It's just that everyone expresses their own opinion :), and if the opinions don't coincide, they argue!

Big problems of small networks, or Why is my network slow?

It is safe to say that the computer network today is one of the key components of any successful business. The computer has long since turned from a luxury into an indispensable tool. But at the same time, surprisingly, most enterprises pay minimal attention to the quality of computer network installation. Entrepreneurs believe that the performance of computing systems depends on the power of the computers and willingly spend money on expensive, fast models, forgetting that the means of interaction between those computers are no less important. The build quality of the computers themselves has already come to matter: consumers are no longer satisfied with cheap models assembled "in a back room on someone's knee"; they prefer to buy more expensive computers from well-known brands. It is all the more surprising that deploying a local area network is still often treated as something trivial, menial work that any student with a crimping tool can do. As a result, that is exactly who ends up laying the network.

While there are only 5-10 computers on the network, the shortcomings of this approach are usually invisible, but as soon as it begins to grow and servers, network services, and databases appear in it, the apparent savings turn into a time bomb. The network begins to slow down and freeze, fast computers turn into useless hardware because the network cannot move the required volumes of information quickly enough, and it even becomes impossible to keep the computers' clocks synchronized. In every such case it turns out that failures and downtime caused by the network cost far more than a quality installation would have. Why does this happen, and how do you deal with it?

Briefly about local computer network standards

The design, installation and operation of structured cabling systems (SCS) have been standardized for more than 15 years. There are American (ANSI/TIA/EIA), European (EN) and international (ISO/IEC) families of standards. A local network built to the ISO standard increases the value of a company, so many large enterprises either build their networks to the standard from the start or upgrade existing ones and obtain a certificate of conformity - a significant help when looking for investors. Small businesses do not need to certify their networks, but in any case the deployment of a local network should be entrusted to specialists who know the standards and the rules for network installation - this will greatly increase its reliability.

A little about the structure of a computer network

Modern local networks are mostly built on Ethernet technology (not to be confused with the Internet!). In the most common case, a network is a set of network concentrators (also called hubs and switches) - devices that bring the cables from the computers together in one node - plus the cables themselves. Network hubs differ in the number of network connectors (ports), data transfer rates, and ability to manage network traffic. An Ethernet cable consists of eight copper conductors in one sheath, twisted into four pairs. Each end of the cable is either crimped into a network plug (connector) or mechanically terminated in a device with network connectors (a socket or a patch panel). For computers to work on an Ethernet network, they are fitted with network cards - devices with a connector for the network cable. The basic data transfer speed on the network depends on the network hubs and the computers' network cards and is 100 megabits per second for 100BaseT networks and 1000 Mbit/s for Gigabit Ethernet networks.

8 typical networking mistakes, or why 1C slows down over the network

So, powerful modern computers have been purchased and installed, the local network has been assembled, the Internet is connected, and you can start working. And suddenly, out of nowhere, a host of small (and sometimes not so small) problems appears: now the server stops responding, now the accountants complain that 1C works terribly slowly, now the Internet drops, now the computers stop seeing the network altogether. What happened? Most likely, mistakes were made during network installation that hurt both the speed and the reliability of data transmission. The most typical of them are described below. Check your network to see whether anything from this list is present.

  1. The local network cable is laid together with the electrical one.

    To save on cable channels, local network cables are sometimes laid together with electrical ones (even worse - when they are woven or fastened together). The electromagnetic field around the electrical cable induces noise in the local network cable, the number of errors increases, and the data transfer speed decreases. In the worst case, network equipment may fail - a network hub or network card in a computer may burn out.

  2. One cable for two sockets.

    As stated above, a twisted pair cable consists of eight conductors braided in pairs. In gigabit (1000-megabit) networks, data is transmitted over all four pairs, and in 100-megabit networks, only two. This becomes fertile ground for the “brilliant” ideas of some home-grown specialists: why in a 100-megabit network run two cables to two sockets, when you can use two pairs for one and two for another socket? And in order to save money, the cables are mercilessly shredded: twisted pairs are removed from the sheath, unraveled, wrapped with electrical tape, etc. These manipulations violate the most important principle of building information networks: one device - one cable. In no case should you save on cable - it always results in additional costs. In this case, the wave characteristics of the cable deteriorate, and the amount of noise in the connection increases. The number of errors increases and the actual data transfer rate decreases. In addition, it becomes impossible to use this cable later to migrate to the faster Gigabit Ethernet standard.

  3. Computer and telephone networks in one cable.

    A blatant violation of the rules for laying cable networks, and unfortunately a very widespread one. Home-grown "specialists" use two of the four pairs for the computer network and connect telephones through the other two. In addition to the consequences described in the previous point, this adds interference from the "telephone" pairs, where the nominal voltage reaches 60 V, and up to 120 V while ringing (in the "computer" pairs it is up to 5 V). The effect of the interference is especially noticeable on long cables. Distortion caused by noise and interference can force a 100-megabit network to transmit data at 10-megabit speeds. Keep in mind that the network speed Windows shows when the connection comes up is the speed of the connection standard, not the actual data transfer rate. The exact value can be found with special programs, for example SiSoft Sandra; a rough estimate can be made yourself with Windows Explorer by measuring how long it takes to transfer a large file (more than 500 MB) over the network - a minimal sketch of such a measurement is given right after this list.

  4. Extending the local network line with additional sockets.

    When the cable is being pulled, it sometimes turns out that its length was miscalculated and a few tens of centimeters are missing to reach the right place. Should the whole cable be re-laid because of that? The "specialist" finds a way out: he mounts a socket on the end of the cable and plugs another cable into it, with yet another socket on its end. If the computer is later moved further away, a new piece of cable with a socket is added to the line in the same way... The worst option is when the cable is extended without sockets, splicing the conductors by hand with a soldering iron or twists, which is completely unacceptable. Keep in mind that cable connectors are the main source of interference; there should be no more than four of them on one line.

  5. Poorly crimped connectors.

    A very common mistake. The Ethernet connector is designed to securely hold not only the copper conductors, but also the insulation. Twisted pairs should not be unraveled by more than 1 cm. Unfortunately, it is often possible to observe how these requirements are ignored, the insulation is cut off much more than necessary, and the untwisted twisted pairs are crimped with a connector. Such a cable has significantly worse wave characteristics, and careless mechanical impact can cause conductor breakage.

  6. Cables lying on the floor.

    You can be sure that any network cable lying on the floor in a walkway will be snagged at least once. If in your offices the computer network cables lie on the floor (worse still, across an aisle), then sooner or later someone will catch a foot on them. At a minimum this means a cable yanked out of the network card, but it can also mean a broken cable followed by a failed network hub. If possible, all local network cables, except the patch cord from the socket to the computer, should be laid in prepared channels: corrugated conduit, plastic trunking, or trays behind wall or ceiling panels. If a cable cannot be moved out of the walkway, it should at least be covered with a metal or plastic protective casing.

  7. Cables near fluorescent lamps.

    A very common option is to lay network cables behind a suspended ceiling - it is fast, neat and cheap. However, it is not enough to simply hide the cables behind the ceiling panels; one must take into account that fluorescent lamps are the strongest sources of electromagnetic interference. Cables should be laid as far as possible from fluorescent lamps, even better - in grounded wire trays attached to the ceiling. Unfortunately, this principle is violated everywhere. The cables behind the ceiling lie haphazardly, often right on the lamps. A large amount of noise and a decrease in network speed are guaranteed.

  8. A large number of network hubs.

    The length of one Ethernet line built on twisted pair does not exceed 100 meters. When a network line has to be extended over a greater distance, the optimal solution is fiber optic cable, but it seriously increases the cost of installation, so system administrators resort to various tricks. For example, they lay the line in 100-meter pieces with so-called repeaters (signal amplifiers) between them; most often, ordinary, cheapest network hubs are used as repeaters. As a rule, this is how separate buildings are connected, each of which already has its own cabling and its own hubs. The design rules require that there be no more than 4 hubs between any two computers, and with this approach to network expansion that rule is very easy to break, which can lead to slowdowns and failures. The next common mistake is installing a large number of network hubs with a small number of ports. Sooner or later the hub runs out of free ports and there is nowhere to plug new computers in. The cheapest and fastest way out is to buy another small hub and connect it to the old one - and it is also the most incorrect one. It is worth remembering that one hub with many ports always works better than several small ones. Another error occurs when a network is laid in several adjacent rooms, each with its own network hub. The optimal solution in such cases is to connect each hub directly to the root hub of the network, but system administrators, wanting to reduce the amount of cabling work, connect the hubs to each other in a cascade, lining them up in a chain. If, on top of that, the cheapest hub models are chosen, unable to cope with significant traffic, a slowdown of the network is almost inevitable.
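
As promised in point 3, here is a minimal sketch of such a throughput measurement: copy a large file to a network share and divide its size by the elapsed time. The paths are placeholders, and any file larger than about 500 MB will do.

```python
# Rough measurement of real network throughput: copy a large file to a network
# share and compute MB/s from the elapsed time. The paths below are placeholders.
import os
import shutil
import time

SRC = r"C:\temp\bigfile.bin"           # any local file larger than ~500 MB
DST = r"\\server\share\bigfile.bin"    # destination on the network share

size_mb = os.path.getsize(SRC) / 2**20
start = time.perf_counter()
shutil.copyfile(SRC, DST)
elapsed = time.perf_counter() - start

print(f"{size_mb:.0f} MB in {elapsed:.1f} s -> {size_mb / elapsed:.1f} MB/s "
      f"(~{size_mb / elapsed * 8:.0f} Mbit/s actually achieved)")
```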

How to fix the network and speed up 1C

If a quick inspection of your network shows that at least some of the typical mistakes listed above are present, you should think about putting the network in order. In most cases a qualified specialist can correct the situation or suggest solutions, but the most effective and correct approach is always to entrust the work to specialists, design the computer network carefully before deployment, and follow the installation rules.

Investing in the network infrastructure pays off. The cost of a "correct" cable system rarely exceeds 5% of the cost of the entire computer network, while penny-wise savings in this area can turn into serious losses from failures and downtime.

Alex Tsemik,

network security and server infrastructure partner