Please tell me: what should a novice Linux network administrator know, besides the RFCs? Linux for beginners.

Michael
Drogomeretsky

This was my first remote course in system administration. Expectations were more than fully met! Many thanks to the teachers and fellow students!
To the point.
What I liked:
1. Time of lectures. I easily made it home to the start after work.
2. The ability to review lectures afterwards at any time.
3. Homework! Besides re-watching the lectures, it forced me to read documentation. I really enjoyed reading the documentation! I'm not kidding. Previously I hated doing this and looked for quick how-tos on Google. Now, before setting up any software, I make sure to read the docs, and I enjoy it. I noticed how much it opens my eyes. In addition to the material needed to complete an assignment, I had to deal with related topics, which naturally broadened my knowledge. Overall, homework gave me 80% of everything I learned during the course.
4. Responsiveness of the teaching staff and fellow students. No moralizing, everything is to the point.

Vladimir
Revyakin

The course is very necessary and important, especially for beginners. You will learn a ton of necessary and important information that you wouldn't pick up on your own; the lectures are detailed, questions that arise on a topic are explained immediately, and the homework reinforces the new knowledge. I found a job in my second month. I highly recommend Alexey Tsykunov and Alexander Rumyantsev!!!

Alexander
Samusev

When I was considering the course, I had doubts; after all, the price tag was rather high. I was lucky - my employer paid for it - but after completing the course I came to the conclusion that even if I had paid for it out of my own pocket, I still wouldn't have regretted it.

I have very little experience with Linux - I worked for six months as a junior Linux administrator at an outsourcing company. And I really lacked depth: you do something every day, set some parameters, but why it is done that way is not entirely clear.

The Linux Administrator course puts everything in order. It gives you confidence in your abilities. The course covers theoretical and practical questions that are asked during interviews and which are then encountered in practice. It’s worth saying that I changed jobs halfway through the course.

Quite detailed lectures on fundamental principles and tools - very cool! But what's even cooler is that you are then given homework, which you need to do not only on the basis of the knowledge gained in the lecture, but also by digging through a lot of man pages, docs, and forums yourself.

In the course, all home stands are deployed in Vagrant, so you will become familiar with this tool during the course. In addition, it is advisable to post homework on github in the form of code - Vagrantfile + scripts and other project files. This allows you to get better at working with git, if you haven’t had this practice before. Also, the course deals with an administrator tool such as Ansible and, after studying it in the course, home stands are deployed using Vagrant and configured using Ansible.

Thus, I believe that if you see your professional future as a Linux engineer, then this course is a must-have! After it, you should definitely take the "DevOps Practices and Tools" course. These two courses are the basis of your high value on the market as a specialist.

Artem
Morales

I have very little experience with Linux. I took the course with the goal of gaining fundamental knowledge and quickly acquiring practical skills. Honestly, at first I thought that the course was no different from the others. But after the first week my opinion changed dramatically...

The first thing is the lectures. They are long, but you don't even notice how time flies. In addition to dry theory (which is also taught well), the teachers season the lectures with jokes, anecdotes, and practical advice from their own experience. During a lecture you can get an answer to any of your questions.

Secondly, teachers. Without a doubt, they are professionals in their field. The material is taught confidently, plus, as I wrote above, they are willing to share their experience.

Third - homework. Expecting everything to be chewed up for you? This is not the place. Everything is as in real life: you are given a task and supplementary material that will help you complete it, and you are expected to figure it out. If something doesn't work out, you can always ask, but you will get a hint rather than a ready-made solution. And this is a huge plus!

Total. The course left a pleasant impression. I'm still a junior, but at heart I feel like a middle :)

Artyom
P.

The course provides a good theoretical basis, supported by homework assignments that allow you to immediately try out the acquired knowledge in practice.

The range of issues considered is quite wide: from assembling the kernel to deploying a fault-tolerant web cluster using Ansible.

Lectures are given by experienced teachers, and guest specialists are periodically invited, so you can get answers from people with extensive experience operating a given technology/service/application in a production environment.

It is worth documenting completed homework assignments in detail; the result is a mini-wiki that you can look at more than once to refresh your memory of some details.

It is possible to watch a recorded lecture, which is very convenient, especially if you are located in a different time zone.

Personally, the course helped me get rid of my unwieldy bash scripts and switch to Ansible.

Maksim
Datskevich

For me the course is difficult; the knowledge and experience I have are not sufficient, and I spend a lot of time studying basic things. Overall the course is very interesting, the teachers are great, the presentation of the material is excellent, and there is a lot of supplementary material. Respect to Alexey Tsykunov for the practical examples and for the material posted ahead of each lecture. I have undoubtedly expanded my knowledge base, but I still lack practice.

Unfortunately, I could not find enough free time to work through the assignments. If you do your homework conscientiously, the result will exceed your expectations!

Dmitriy
Boone

An excellent course filled with practice and experience.
I am sure that every participant of the course will find something new for themselves, learn something new, and find support from the community.

You need to understand that no one will study for you, and the course is definitely not for the lazy. The large number of practical classes closes gaps in knowledge and fills the voids with a monolith :)

I express my deep gratitude to all the teachers, especially Alexey, Alexander and Leonid.

Vladimir
Eliseev
(Kislovodsk)

Experience:
Windows2008(AD,Exchange,Zabbix...) 10 years,
FreeBSD(LAMP,LEMP,Zabbix,Bacula) 3 years (can be compressed to 2 years)

I would like to highlight two reasons for coming to the Linux Administrator course:
1. To leave Windows Servers && Desktops behind and find a full-time job with relocation as a Linux administrator, or a full-time remote Linux engineer position;
2. To improve my Linux administration skills and pull together an understanding of how the Linux kernel components and the GNU userland interact, in order to migrate my current workplace from Windows platforms to Linux (Rosa(Cobalt)||Astra||Alt) and then change jobs.

I had the pleasure of communicating with highly qualified teachers:
- Alexander, a person working in the high-end industry with a large background with providers, hosters and corporations; he could explain in detail, down to system calls and the C language. Lesson time flew by with great interest, not least because the theoretical material was tied to practice and to Alexander's stories of how he implemented it in production. A C, Bash, PHP, Perl, Java and Python programmer.
Responses to questions in chat and help came within anywhere from 5 minutes to 3-4 hours (I understood and appreciated what kind of teachers I had!), and homework was accepted with guidance and a discussion of possible solutions.
- Alexey, a person who has implemented many startups, a system architect (with data center experience), an Oracle DBA who has worked for providers and telecoms. Experienced in distributed storage systems and VoIP billing.
Responses to questions in chat and help came instantly; homework was accepted with detailed comments on adjustments and guidance.
A Python, Perl and Bash programmer.

The course gave me a clear understanding of how the OS works internally, and confidence (during interviews and discussions with colleagues about modern things in a unix-like environment for implementing projects) in the knowledge I acquired through theory from the PDFs + URLs (the links given by the teachers significantly saved time searching for up-to-date information for understanding and solving a problem). I want to highlight an important feature of remote study: I had to find a lot of time (often sitting until 2 am) to solve the assignments, because I had only about 2 years of experience with FreeBSD and a year of theory thanks to YouTube with the keyword "preparation for LPIC". I almost forgot to mention the team help in the Slack chat: we discussed assignments and upcoming classes, voted, asked each other for help, and described interviews and desired job changes.

I recommend that before starting the course you have minimal hardware: an SSD, a Core i3 CPU, 8 GB of RAM. With a fast machine I didn't spend much time setting up the stands with Vagrant+Ansible and ssh access (I regularly had to edit the Vagrantfile and the Ansible playbook to debug roles or the start order of virtual hosts); 5-12 virtual machines can be spinning in RAM at the same time. The most interesting Ansible projects: MySQL (Master_Slave), PostgreSQL (Master_Slave), bash (writing daemons for SysV and systemd), Bacula, the ELK stack, Zabbix|Grafana+Prometheus.
Separately, I will highlight the project at the end of the course: within one month, plus 2 weeks afterwards, you had to build a Web HA Cluster using any technologies you chose. I chose iptables, nginx+HAProxy, php-fpm, MariaDB_galera (Master_Master), Pacemaker+Corosync, iSCSI (mdraid60), plus Elasticsearch_Logstash_Kibana (ELK) and Bacula, all driven by an Ansible playbook.
I would also suggest watching preparation courses for LPIC 1 and 2, or Kirill Semaev's channel with its LPIC 1 and 2 preparation.

After the course: a manager called and offered to pick desired employers to send my resume to, on behalf of and with a recommendation from OTUS (I chose 7 out of 12, but did not receive any calls). In addition to internal growth of knowledge and confidence, I received two offers (during the course I added the new skills to my HH profile), but they involved Windows & Linux with relocation. During the course, which lasted a productive 5 months, I had about 15-20 technical interviews.
A low, grateful bow to Alexey, Alexander and the OTUS team!


What I didn't like:
1. I think this course should be split into two large parts, with the clustering material moved into its own 2-3 month part. Perhaps this is just a personal feeling, because the topic of clustering was completely new and unknown to me.
2. Teachers need to improve their teaching skills. That is, the ability to give a lecture or conduct a seminar. By the way, at the end of the course, I liked the format when the teacher (Alexey Tsykunov) asked questions to the students about the material they had just covered. This is closer to the concept of a seminar.

Conclusion: would I recommend this course? Definitely yes! Will I continue my studies at OTUS in the technologies that interest me? Yes, I’m just waiting for the course that interests me to open.

Basil
Strukov

This course opened up a lot for me.

Although I have been working with Linux for quite a long time, I still learned something new in every lesson.
Both in fundamental knowledge of Linux systems and in the operation of services. Some solutions I had never even heard of.
The course covers a very large sphere of knowledge in the profession of system administrator.
And each module is unique in its own way, and each solves its own problems.
It definitely won't be boring!
What is special about this course? The fact that, starting from the first lessons, you are immediately taught to automate all the tasks you have to solve.
The level of knowledge of the Teachers is Very High and they do not stand still, but continue to improve their knowledge and skills while teaching students of this course.
It is also immediately clear that they have extensive experience in this field and experience in solving most of the problems encountered on the path of a System Administrator.
I found a lot for myself in these courses. Especially the 5th module. For me, everything connected with the word "cluster" used to be some mega-machine: it was unclear what was happening there and how to approach it.
It turned out that not everything is so scary and you can approach it step by step without fear.
I will say a huge THANK YOU to the teachers. Alexander and Alexey, you are simply unreal. Thank you for all the knowledge, advice and life experience that you gave us in this course. Health, strength and creative success to you. Leonid, you too - always ready to help those in need.
Happy New Year to the entire OTUS Team.
I wish more smart students.
Health, Strength and Desire to solve the assigned tasks.

Whenever possible and in any chat, I always recommend taking these courses. At times, I even help those in need with the solutions that Our Teachers gave us, both in manuals and in lectures.

P.S. With great desire I will come to you for courses on Ceph clusters and everything connected with them.


The huge amount of material is both a big plus and minus of the course. The advantage is that the information is structured, which makes it much easier to perceive and assimilate. The presentation of the lectures is good, the teachers constantly communicate with the audience, answer questions from students if they arise during the lecture. Also, the lessons come with materials that will help you save time if you have to return to the topic you covered - this especially helps when you come across this at work after some time.

The downside of having so much information is that some lectures can last 3-4 hours; it would, of course, be better to split them in two (for example, the lessons on the Linux kernel and on PostgreSQL).

Having homework is great, especially since it is checked not for show but quite conscientiously (you will even be made to redo it if the result is not achieved :). But there is a nuance: if you have a job, you are unlikely to manage to do all of the assignments on time and with high quality (by high quality I mean, first of all, actually absorbing the material). So, first of all, you end up doing the ones that are useful for work here and now, or that you really want to study.

Conclusions: for complete beginners, I probably would not recommend this course (only if you have a lot of free time), but for people even with little experience - definitely yes.

User administration in Linux is both very similar to and different from administration in Windows. Both systems are multi-user, and resource access control is based on user identification. Both systems allow you to group users so that access control is simplified and each change does not have to affect many users. And then the differences begin.

Superuser
In Linux, the superuser is called root. The root user can control every process, access every file, and perform any function on the system. Nothing can be hidden from root. In administrative terms, root is the supreme being. It is therefore very important that the root account be protected by a secret password. You should not use root for everyday tasks.

Other users can be given superuser privileges, but this must be done with caution. Typically you will configure individual programs so that certain users can run them as root, instead of giving everyone superuser access.
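For example, on systems with sudo installed, this is typically done with a narrowly scoped sudoers entry; the group name and command below are hypothetical, shown only to illustrate the idea:

```
# /etc/sudoers.d/webadmins  -- always edit sudoers files with visudo
# Allow members of the webadm group to restart nginx as root, and nothing else
%webadm ALL=(root) /usr/sbin/service nginx restart
```

This way users get exactly the one privileged operation they need, rather than full superuser access.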


Creating new users

New users can be created either from the console or using a tool such as Webmin. A user is added using the useradd command. From the console this is done, for example, like this:

useradd -c "normal user" -d /home/userid -g users \
    -G webadm,helpdesk -s /bin/bash userid

This command creates a new user named "userid" (the last parameter in the command). A comment says that "userid" is a "normal user". A home directory /home/userid will be created for him. His primary group will be users, but he will also be a member of the "webadm" and "helpdesk" groups. The new user's login shell will be /bin/bash.
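Creating users requires root, but you can check an account's fields without any privileges. A quick sanity check (using root here only because that account exists on every system):

```shell
# Show the /etc/passwd entry for an account
getent passwd root
# Show its numeric UID and the groups it belongs to
id root
```

Running the same two commands against a freshly created user lets you confirm the home directory, shell and group memberships you asked useradd for.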

Webmin makes creating a new user easy and intuitive. Sign in to Webmin with your favorite browser and go to the System section. Select the "Users and Groups" tool, then click on "Create a new user".

Enter details about the user and click Create. The user will be created.

Changing passwords

From the console, a user's password is changed with the passwd command:

passwd userid

Only root can change another user's password this way. After entering the command, you will be asked to enter and confirm the new password. If the two entries match, the user's record is updated and the password is changed. A user can also change his own password by typing passwd at the command line; in that case, he will need to enter the old password before entering the new one.

Most Linux distributions are installed with a password cracker module that is called when a password is changed. This module checks how good the password is. If it is weak, a warning appears telling the user that the password is bad. Depending on the configuration, the user may be required to come up with a stronger password before it is accepted; root may simply be notified, with the password set anyway.

In Webmin, the password is changed using the "Change Passwords" module in the System section. Select a user from the list and enter the new password into the form's empty fields.

Removing users

Users are deleted from the console using the userdel command.

userdel -r userid

The optional -r switch deletes, in addition to the user, his home directory with all its contents. If you want to keep the directory, omit -r. This switch does not automatically delete every file on the system that belongs to the user - only his home directory.

How users are organized

Linux configuration is text based, and all the users in Linux are listed in a file called /etc/passwd. With the more command you can view this file page by page:

more /etc/passwd

The structure of this file is quite clear. Each line contains a new user with options separated by a colon.

userid:x:75000:75000::/home/userid:/bin/bash

The first column contains the username. The second contains his password. The third holds the user's numeric id, and the fourth the id of the user's primary group. The fifth is the user's full name. The sixth is the location of the user's home directory; typically this directory lives in /home and is named after the username. The seventh column contains the default shell.
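Because the format is plain colon-separated text, standard tools can pull fields out of it directly; for instance, printing every account's name, numeric id and shell (fields 1, 3 and 7):

```shell
# username, UID and login shell for every account
awk -F: '{print $1, $3, $7}' /etc/passwd
```

The same one-liner, with a pattern added, is handy for answering questions like "which accounts use /bin/bash?".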

Password file structure

Note that in the example above there is an "x" in the password column. This does not mean that the user's password is "x". At one time, passwords were stored inside this file in plain text. That configuration is still possible, but it is rare because of the consequences. Instead, so-called shadow passwords were introduced: an "x" is written in place of the password in /etc/passwd, and the encrypted version of the password goes into the /etc/shadow file. This improves security by separating user information from passwords. The MD5 password hashing algorithm improved security further by allowing longer passwords. Below is an example of a shadow password entry:

userid:$1$z2NXZR19$PZpyL84DmPKBXMeURaXXM.:12138:0:186:7:::
The entire shadow password feature is behind the scenes, and you'll rarely need to do anything more than enable it.
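The colon-separated fields of the sample entry above can be decoded with a small script (reading the real /etc/shadow requires root; this just splits the sample line shown in the text):

```shell
# Decode the fields of a shadow-format entry
entry='userid:$1$z2NXZR19$PZpyL84DmPKBXMeURaXXM.:12138:0:186:7:::'
echo "$entry" | awk -F: '{
    print "user:                           " $1
    print "last change (days since 1970):  " $3
    print "min / max password age (days):  " $4 " / " $5
    print "warn before expiry (days):      " $6
}'
```

Field 2 is the password hash itself; the trailing empty fields (inactivity period, expiry date, reserved) are simply unset in this entry.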

Groups

Groups in Linux are almost the same as in Windows. You create a group and add members to its list. Resources can have rights assigned to a group. Members of a group have access to the resource associated with that group.

Creating a group is easy with the groupadd console command:

groupadd mygroup

This command will create a group called "mygroup" with no members. Groups live in a file called /etc/group. Each group gets its own line, like this:

mygroup:x:527:userid,otherid

The first column shows the group name. The second is the password; again, "x" means that the real password is stored in the shadow file /etc/gshadow. The third column is the numeric group id, and the fourth holds a comma-separated list of the group's members.

To add a group member, use the gpasswd command with the -a switch and the id of the user you want to add:

gpasswd -a userid mygroup

You can remove users from a group using the same command, but with the -d switch instead of -a:

gpasswd -d userid mygroup
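You can check the result of such changes without opening /etc/group directly; for example (using the root group and user, since they exist on every system):

```shell
# Show a group's /etc/group entry
getent group root
# List every group a given user belongs to
id -Gn root
```

After a gpasswd -a, the user should appear in the group's member list and in the output of id -Gn.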

You can also make changes to groups by directly editing the /etc/group file. Groups can be created, edited and destroyed in Webmin using the same tool that was used above to work with users.

User and group applications

How do users and groups relate to files? If you look at the long-format listing of a directory, you will see something like the following:

-rw-r--r-- 1 userid mygroup 2048 Jan 10 12:00 report.txt

Ignoring the other columns for now, look at the third, the fourth and the last. The third column contains the name of the file's owner, userid. The fourth column contains the group associated with the file, mygroup. The last column is the file name. Each file has exactly one owner and one group. Rights can also be given to Other - users who fall into neither category. Think of Other as the Linux equivalent of the Windows Everyone group.

A single owner per file is common in operating systems, but a single owning group seems restrictive to administrators unfamiliar with the approach. It isn't. Since users can be members of any number of groups, new groups can easily be created to keep a resource secure. On Linux, group definitions are based on the access that resources require rather than on company divisions. If resources are organized logically in the system, simply create more groups and use them to configure access to the resources.

More complete information about users and groups is listed in the Resources section at the end of this article. For details on how to change file permissions, see man chmod.
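As a small illustration of the permission bits (the file name here is hypothetical; changing the owner or group with chown additionally requires root):

```shell
# Create a scratch file and restrict it: owner rw-, group r--, other ---
touch /tmp/report.txt
chmod 640 /tmp/report.txt
ls -l /tmp/report.txt
# Changing the owner or group needs root, e.g.:
#   chown userid:mygroup /tmp/report.txt
```

The three digits of the mode correspond exactly to the owner, group and Other columns discussed above.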

Conclusion

In principle, users and groups in Linux work much the same as in Windows, the main difference being that a resource can have only one owning group. When dealing with groups on Linux, consider them "cheap" and don't be afraid to create many of them for a complex environment. When creating groups, base them on the need to access resources rather than on company divisions.

User and group information is stored in the /etc/passwd and /etc/group files, respectively. Your system may also contain /etc/shadow and /etc/gshadow files, which contain encrypted passwords for added security. It is possible to work with users and groups by directly editing files, but this must be done with great care.

All operations with users and groups can be performed from the console, which makes it possible to include these operations in scripts. There are also programs, such as Webmin, that provide a graphical interface for working with users and groups.


As reader response shows, interest in Linux-based solutions is very high, while the level of administrator training in this area leaves much to be desired. Evidence of this is the endlessly repeated simple questions in the comments. In many ways this is a consequence of the fact that our instructions can be followed "verbatim" to get a working result. But there is another side to this coin: such an approach does not foster systemic knowledge, leaving understanding of the subject at a fragmentary level.

Yes, in addition to practical materials, we always try to publish overviews devoted to a technology in general, or to make extensive theoretical digressions, so that the reader has the minimum required knowledge. However, they all assume that the reader has basic knowledge of the system he is working in.

But what about those who are just taking their first steps? Unfortunately, there is a certain snobbery in the IT community: why talk about that, everyone already knows it, or "Google to the rescue" - forgetting that each of us was once a beginner who stared with mystical horror at the black screen of the Linux console, with no idea of where he had ended up or what to do.

As a result, a beginner who hits his first difficulties is forced to look for knowledge elsewhere, and it is lucky if such a place can be found quickly. We have therefore decided to release a short series of materials laying out the basics of Linux system administration at an accessible level, literally explaining "well-known things" on our fingers. Experienced users can skip this series, or read it and refresh their knowledge along the way.

So you've decided to become a Linux administrator...

To paraphrase Mayakovsky a little: "I would go be a Linux admin, let them teach me" - this is exactly how things stand in most cases. There is a need, there is a desire, and there is a basic set of Windows administration skills - all of which will be useful when working with Linux systems. It is much worse if any of these components is missing; then it is probably worth wondering whether you have chosen the wrong profession.

Right away, here is what you need to forget once and for all: "religious wars" and "religious fanaticism". It is equally bad to deny the capabilities of Linux systems as to extol them, trying to move everything needed and not needed onto Linux. Remember: an operating system is a tool. A good specialist picks the one best suited to each task; a fanatic will hammer nails with a microscope, because his "religion does not allow" him to pick up a hammer.

Even more: an operating system by itself has no value; it is just an environment for launching and running services. Without software the system is dead. Take the BeOS clone Haiku as an example: well, we installed it, we looked at it - cool... And then what?

So, you have decided to become one... First of all, be ready to take in new things - in particular, a new approach to administration - and try for a while to forget your existing habits. For a long time, your main tool will be the console.

For a Windows administrator accustomed to graphical tools, this may seem difficult. But one truth should be firmly understood: the console is the only full-fledged Linux administration tool, and this in no way means that the system is limited in capabilities or inferior. On the contrary, the command line lets you perform many tasks much faster and more simply than graphical administration tools.

But there are graphical administration tools, another reader will say, there are different panels, or you can install a graphical shell. It's possible, but not necessary. Why? Take a close look at the diagram below:

Linux, created in the image and likeness of UNIX systems, is a full-fledged system even without a graphical shell; moreover, we can start, stop or even change the graphical shell without any impact on the operation of the system, and even without rebooting it. We ended a Gnome session, launched KDE, then dropped back to the console. Therefore, all system management tools are designed for use on the command line, and all the panels and graphical tools are merely a superstructure on top of them.

Windows was long developed around a fundamentally different approach: the graphical shell was placed at the core of the system, and for a long time it even ran at the kernel level (the Win 9x family). Therefore, all administration tools were originally graphical, and command line tools complemented rather than replaced them. Anyone who has done Windows recovery knows that the capabilities of the command line tools there are significantly limited and are intended primarily for recovering the system, not administering it.

The situation began to change with the release of PowerShell and Core versions of Windows Server. Despite the fact that today the graphical shell continues to play a significant role in Windows systems, administrators now have an alternative tool in their hands - the PowerShell console, which allows you to fully administer Windows in command line mode. At the same time, the capabilities of PowerShell immediately gained popularity among specialists, as they allow you to perform many tasks faster and easier than graphical tools.

And command line mode gives you unlimited possibilities for creating your own scripts and scenarios, allowing complex sequences of actions to be performed automatically or on a schedule.

After this, we think, you will look at the Linux console from a completely different angle. As for panels and graphical tools, here there are significant differences from Windows systems. In Windows, the graphical tools are a complete alternative to PowerShell. In Linux, graphical tools are an add-on over the console - in fact using the same tools, but through an extra layer. We therefore categorically advise against using various panels and other graphical tools, at least until you have mastered the console. After that you can decide for yourself whether you need a panel, or whether everything is easier and faster through the console.

Getting attached to panels at an early stage of acquaintance with the system leads to system administration skills being replaced by panel-driving skills, which is fraught with problems when the panel for some reason becomes inaccessible but you still need to work with the system. It is like driving: a person who learned on a manual transmission will switch to an automatic without any problems, but a person who only ever drove an automatic is unlikely to manage a manual without additional training.

If you haven’t changed your mind about becoming a Linux administrator, then let’s move on and look at the differences in the system architecture.

Kernel and drivers

At the core of any operating system is the kernel. There are several kernel architectures. Linux, like the vast majority of UNIX systems, uses a monolithic kernel; Windows, on the contrary, follows the microkernel concept - although the Windows architecture is not truly microkernel, it is generally considered a hybrid kernel.

A feature of a monolithic kernel is that all hardware drivers are part of the kernel. Previously, when hardware changed, the kernel had to be rebuilt; today's monolithic kernels use a modular design, i.e. they allow the modules responsible for particular functionality to be loaded dynamically. So, having added a new device to the system, we must load the corresponding kernel module, and if no such module exists, working with the device will be impossible. As a workaround, we can build the module ourselves, but in that case the module is compiled against the current kernel version, and when the kernel changes, the module will need to be recompiled.

In microkernel and hybrid architectures, drivers, although they can work at the kernel level, are not part of it and do not depend on the kernel version. Therefore, we can update the kernel without problems or use the same driver across all versions of systems with a common kernel structure. For example, in Windows the same driver is often used for the entire family of modern operating systems, from Windows Vista to Windows 8.1.

This does not mean that Linux is worse in this regard; a different architecture simply implies different approaches. In practice, it means only one thing: you need to be more careful when choosing server hardware, making sure that all major devices are supported by your distribution's kernel. This is especially true for network cards. It would be very unpleasant if, after each kernel update, you had to run to the server room, connect a monitor and keyboard to the server, and rebuild the kernel module.

In fact, there is no such thing as a driver in Linux systems. The hardware is either supported by the kernel or it is not. The undoubted advantage of a monolithic kernel is that it is self-sufficient. If all the hardware is supported, you set it up and forget it; recall, by contrast, the situation in Windows when there is no network card driver and the disk with the drivers is lost.

File system

We will not touch on specific file systems; there should be no problems here: if an administrator has worked with Windows systems, he knows what a file system is and how FAT differs from NTFS, so understanding the difference between ext3, ext4 and, say, ReiserFS will not be much of a problem for him. Let's talk about the fundamental differences instead. Unlike Windows, Linux has a single hierarchical file system. It starts from the root, which is denoted by the / (slash) character, and has a tree-like structure. It does not matter at all that individual parts of the file system may be located on other partitions or even physical disks.

Let's look at another diagram.

In Windows, each partition has its own file system and its own letter. All paths to files and folders begin with a letter, i.e. from the root of the partition. So if we had a DATA folder on the second logical partition of the first physical disk, the path to it would be something like D:\DATA; if we want to move it to the second hard disk, the path changes to E:\DATA. In some cases this is terribly inconvenient, since the path must be changed everywhere it is used, and there are even special utilities for doing so.

In Linux, the approach is radically different. It's time to get acquainted with the term mount point, which means the place in the file system where a storage device is attached. For example, we want to move the users' home directories to a separate partition, as in the diagram above; for this we need to mount the second logical partition of the first physical disk, sda2, at /home and then transfer all the user data there. This happens absolutely transparently for the system and programs: they used an absolute path, say /home/andrey/data, and they will continue to use it. We added another disk and want to move the /var directory there? No problem: stop the services using the directory, mount sdb1 at /var, transfer the data, and start the services.
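The /var move just described can be sketched roughly like this (device and service names are illustrative, systemctl is assumed as the service manager, and all of this requires root):

```shell
# Stop a service that writes to /var (rsyslog is just an example;
# use whatever tool your init system provides)
sudo systemctl stop rsyslog

# Mount the new partition at a temporary location and copy the data,
# preserving permissions, ownership and links
sudo mkdir -p /mnt/newvar
sudo mount /dev/sdb1 /mnt/newvar
sudo cp -a /var/. /mnt/newvar/

# Remount the partition at its final mount point
sudo umount /mnt/newvar
sudo mount /dev/sdb1 /var

# Start the service again
sudo systemctl start rsyslog
```

To make the mount survive a reboot, an entry would also need to be added to /etc/fstab.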

Everything is a file

Another fundamental principle inherited from UNIX systems. In Linux, everything is a file: devices, disks, sockets, etc. For example, opening /var/run we will see pid files corresponding to each service running in the system, and in /dev there are files for each device connected to the system:
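You can see this for yourself: in ls -l output the first character shows the file type, c for a character device, b for a block device. /dev/null exists on any system; /dev/sda is given as an example and may be absent on your machine:

```shell
# 'c' at the start of the line marks a character device
ls -l /dev/null

# 'b' marks a block device (only present if such a disk exists)
ls -l /dev/sda 2>/dev/null || echo "no /dev/sda on this machine"
```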

What does this give us? We will not go into details, but will look at a few simple examples. Let's say you need to create an image of an optical disk. In Windows we would need specialized software for this; in Linux everything is simpler: a CD-ROM is a block device, but at the same time it is a file, a block device file. We take the appropriate tool and copy the contents of the device file into an ISO image file:

dd if=/dev/cdrom of=/home/andrey/image.iso

Want to replace a hard disk? Nothing could be simpler: copy the contents of one block device file into the file of another block device:

dd if=/dev/sda of=/dev/sdb

And you don't need any Partition Magic.

Another situation: some software insistently looks for the library lib-2-0-1.so, while we have a compatible but newer library, lib-2-1-5.so. What to do? Create a symbolic link to lib-2-1-5.so with the name lib-2-0-1.so, and everything will work, because everything is a file, and a symbolic link is also a type of file. Now try slipping a Windows application a lib-2-0-1.lnk shortcut pointing to lib-2-1-5.dll...
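This trick is easy to reproduce in a scratch directory (the library names are taken from the example above; the files here are dummies, not real libraries):

```shell
# Work in a scratch directory so we don't touch real libraries
demo="${TMPDIR:-/tmp}/symlink-demo"
mkdir -p "$demo" && cd "$demo"

# Pretend this is the newer, compatible library
echo "library contents" > lib-2-1-5.so

# Create a symbolic link under the old name the program expects
# (-f overwrites the link if the demo is run again)
ln -sf lib-2-1-5.so lib-2-0-1.so

# Reading through the link gives the real file's contents
cat lib-2-0-1.so
```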

Let's look at one more example. The command

ifconfig

will display information about the system's network adapters:

Now remember that everything is a file, including the display device (the screen), so we'll simply redirect the standard output stream from the screen to the file we need:

ifconfig > ~/123.txt

After which the command output will be saved to the file 123.txt in the user's home directory:

Streams and pipelines

In the previous example we touched on the standard output stream. In Linux, every process has standard I/O streams: standard input stdin, standard output stdout, and the standard error stream stderr. What does this mean? At a minimum, the exchange of data between different processes is standardized. This allows you to build pipelines in which the standard output of one command is passed to the standard input of another. For example, we want to see the list of packages installed in the system, in particular the squid packages. There is a command for this purpose:

Uh... What is this, and how can you make sense of it? Information about all the packages installed in the system quickly flashed across the screen, and all we can see is the "tail" of this output:

But we don’t need the entire output of this command, we are only interested in squid packages. Therefore, we will direct the output of this command to the input of another, which will already select and show what we need:

dpkg -l | grep squid

This is a completely different matter!

Moreover, the pipeline can be as long as you like; the result of one command can be passed to a second, from the second to a third, and so on. Another example from real life. You need to get all the lines of your squid configuration file, but without comments and empty lines, so that, for example, you can post it on a forum or send it to a friend. You could, of course, copy everything, but it's unlikely anyone will want to help you by scrolling through the wall of text of the standard squid.conf file, most of which is comments and examples. Let's make it simpler:

cat /etc/squid3/squid.conf | grep -v "^#" | sed "/^$/d" > ~/mysquid.conf

And this is what we got:

Simple and clear, all the options are at your fingertips. This became possible thanks to a pipeline of three commands: the first output the contents of the file into the stream, the second selected all lines except comments, and the third deleted the empty ones; we then sent the result to a file.

Large letters, small letters

Linux, like UNIX, is a case-sensitive system. And we must remember this! Because, unlike in Windows, myfile.txt, Myfile.txt and myfile.TXT are three different files. For the sake of compatibility with other systems, you should not abuse this by storing files whose names differ only in case, and it is considered good form to use only lowercase letters in names.
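This is easy to check in a scratch directory (on a case-sensitive file system, which Linux uses by default):

```shell
# Work in a scratch directory
demo="${TMPDIR:-/tmp}/case-demo"
mkdir -p "$demo" && cd "$demo"

# Three names that differ only in case - three separate files
touch myfile.txt Myfile.txt myfile.TXT

# Count them: on Linux this shows 3 files
ls | wc -l
```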

Extensions and file types

In Windows systems the file type is determined by its extension: if we rename an exe file to jpg, it will not start, and the system will try to process it as a picture. In Linux, the file type is determined by its content, and the extension is used solely for compatibility with other systems or for the user's convenience. The ability to execute a file is granted by setting the appropriate attribute. So where in Windows, to make a script executable, you had to change the extension from txt to bat, in Linux you need to make the file executable. Misunderstanding this point leads to situations where a novice administrator does not understand why his script myscript.sh is not being executed. The .sh extension is needed only for convenience, so that it is immediately clear that this is a Bash shell script; for it to work, it must be given the executable attribute, and it can be called anything, even myscript.pupkin-vasya.
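A minimal illustration, using the arbitrary name from the text:

```shell
# Work in a scratch directory
demo="${TMPDIR:-/tmp}/exec-demo"
mkdir -p "$demo" && cd "$demo"

# A tiny shell script; the name and its "extension" are arbitrary
printf '#!/bin/sh\necho it works\n' > myscript.pupkin-vasya

# Without the executable attribute the system refuses to run it...
./myscript.pupkin-vasya 2>/dev/null || echo "not executable yet"

# ...so we set the attribute, and the name no longer matters
chmod +x myscript.pupkin-vasya
./myscript.pupkin-vasya
```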

Too shy to ask...

Excuse me, another reader will say, there is so much to remember: command syntax, keys, options, and so on. You'd need to buy a reference book or always keep the Internet at hand... Not at all; it is enough to remember the names of the commands, and that is not difficult: by the traditions established in UNIX, commands are given short and convenient names. Everything else you can ask the system itself. Contrary to popular belief, Linux systems are well documented. You can view the syntax and keys of any command by running it with the --help key, and since the description usually does not fit on one screen, you should redirect the help output to the more utility, which will display the information screen by screen. Let's say we are interested in the grep command:

grep --help | more

More detailed information can be obtained using the man command:

man grep

Unfortunately, the information is in English, but knowledge of technical English, at least at the level of "reading with a dictionary", is a necessary requirement for a system administrator. Does the last screenshot remind you of anything? That's right, OpenNET.

Without in any way belittling the importance of that resource, we can say that having mastered the man command and some basic English, you will visit OpenNET much less often.

Conclusion

We hope that after reading this article, novice administrators will have a better understanding of the structure of Linux systems and their fundamental differences from the Windows they are used to. This will make it possible in the future to correctly interpret the information they receive and assemble a complete picture of how the system functions, so that it is no longer a "black box" and its commands no longer look like Chinese.

We would also like to point out that in our examples we used only standard tools, which once again shows how rich the administration toolkit is, even though these tools work only on the command line. Return to the last example, the output of the squid config, and now think about how this could be done using graphical tools and how long it would take.

There is no need to be afraid of the command line; Linux puts in the administrator's hands a very powerful set of tools that allows you to successfully solve all emerging problems without involving third-party tools. Once you master at least some of these capabilities, Linux will no longer seem difficult to you, and the console will no longer seem gloomy; on the contrary, even with a graphical shell installed you will launch the terminal, plunging into a familiar and understandable environment, knowing that it is you who control the system and do exactly what you want, not what the developers of the latest panel intended.


There would not be much difference between a Linux administrator and a Windows administrator. Networks are usually mixed (especially in Russia, where they love Windows for being "free" :-)).

It was said above that for practical work you need to know the commands. But in order to use them, you need a little theory on general administration and on Linux administration.

Additionally, you need to know:

    Fundamentals of local computer networks.

    Types of networks by size and purpose. Topologies: classification, application, distribution. Standardization of local networks.

    Technical equipment of the local network.

    Network adapters: wired and wireless. Repeaters, hubs, switches, routers. IP cameras, IP phones, IP printers, access points. Gateways, bridges, firewalls, NAS and RAID arrays.

    Communication lines.

    Shielded and unshielded twisted pair. Fiber optic communication lines. Radio frequency devices. Practical work: installing a communication line.

    Logical structure of the network.

    Protocols for logical interaction in a local network. TCP/IP protocol v4 and v6. Classful and classless addressing, subnet mask.

    Building a peer-to-peer network.

    Basic methods of network construction. From workgroups to homegroups. Managing the user list. Resource sharing in a peer-to-peer network.

    Disk subsystem and printing subsystem.

    Work with hard drives. Working with printers. Restriction of access to network resources.

    Building a wireless network.

    Basic methods of building wireless networks. Setting up a wireless access point. Security protocols used when forming a wireless network.

    General information about DNS. Creating a domain zone and connecting it. DNS server monitoring. Practical work: setting up a DNS server.

    General information about DHCP. Installing and configuring a DHCP server. Managing DHCP scopes, pools, leases and reservations.

For Linux, the following program is offered to novice administrators:

  1. Introduction.

    1. Brief history of UNIX and Linux. GNU Project.
    2. General information about Linux system architecture.
    3. Basic concepts - operating system, shell, console, terminal.
    4. Review of existing Linux distributions.
  2. Installation and getting started.

    1. What you need to know before installation.
    2. Installing Debian GNU/Linux.
    3. Getting started in Linux.
    4. Local login. Virtual terminals.
  3. Linux management basics.

    1. Command line interface.
    2. Bourne Shell basics (sh).
    3. Bash: interactive shell.
    4. What are shell scripts?
    5. Process and task management.
  4. Beginning of work.

    1. How to get help: man and info.
    2. Files and directories.
    3. Searching for files.
    4. Text processing. The vi text editor.
    5. File management with Midnight Commander.
  5. Installing and removing programs.

    1. The make, diff and patch utilities.
    2. Installing programs from source code.
    3. The RPM package management system and its YUM front-end.
    4. The APT package management system.
  6. Working with disks and file systems.

    1. Disk drives in Linux.
    2. Creating disk partitions: fdisk, cfdisk.
    3. File systems in the file: loop device.
    4. Virtual memory (swap).
    5. Linux file systems: Ext2, Ext3, Ext4, ReiserFS, XFS.
    6. Support for non-native file systems: NTFS, FAT.
    7. Virtual file systems.
  7. Administration of user and group accounts. Authorization in Linux.

    1. Access rights.
    2. User authorization.
    3. User administration.
    4. Password management: passwd.
  8. Logging and the Linux kernel.

    1. Logs, their location, registration of system messages and events.
    2. Interacting with a running kernel - configuring the kernel.
    3. Access to equipment.
    4. Boot loaders: LILO, GRUB.
    5. Kernel module management: modprobe, rmmod, lsmod and modinfo.
  9. Backup and recovery.

    1. General issues. Terminology.
    2. Backup strategies.
    3. Archiver tar.
    4. Direct access to devices - dd.
    5. The gzip and bzip2 compressors.
  10. The Linux boot process and the X Window System graphics system.

    1. The boot sequence. The init program and its functions.
    2. The rc scripts and the SystemV initialization system.
    3. Service concept. Service management.
    4. GUI architecture.
    5. Setting up X.org.
    6. Launching X.
    7. Access to remote X servers.
  11. Basics of networking.

    1. Equipment and network topology.
    2. The network protocol hierarchy. The ISO/OSI reference model.
    3. The TCP/IP protocol family.
    4. Basics of IP addressing, routing, classes and subnet masks.
  12. Linux networking tools.

    1. Setting up network interfaces.
    2. Setting up static routing.
    3. Diagnostic tools: ping, traceroute, netstat, tcpdump, lsof.
    4. Remote access: secure shell (OpenSSH).
    5. Synchronizing files with the RSync utility.

In general, take the program of a training course and read up on those topics on the Internet. You can also watch a couple of webinars.