Redirecting I/O streams. Program channels and streams, redirection


Redirecting "2>&1", ">/dev/null" or output streams on Unix (bash/sh)


Output streams

A script's messages are written to well-defined channels called output streams. So what we print with

echo "Hello, world!"

is not just displayed on the screen: from the point of view of the system, and specifically of the sh and bash interpreters, it is written to a specific output stream. In the case of echo, that is stream number 1 (stdout), which is associated with the screen.

Some programs and scripts also use another output stream, number 2 (stderr), where they write their error messages. This makes it possible to separate regular output from error messages and to route and process them independently.

For example, you can suppress the informational messages, leaving only the errors. Or you can send the error messages to a separate file for logging.

What is ">somefile"

In Unix (in the sh and bash interpreters), this notation specifies a redirection of an output stream.

In the following example we redirect all the informational (regular) messages of the ls command to the file myfile.txt, so that the file ends up containing the directory listing:

$ ls > myfile.txt


In this case, after pressing Enter you will see nothing on the screen, but the file myfile.txt will contain everything that would otherwise have been displayed.

However, let's do a deliberately erroneous operation:

$ ls /masdfasdf > myfile.txt


So what happens? Since the directory masdfasdf does not exist in the root of the file system (I assume it doesn't on your machine either), the ls command reports an error. However, it sends this error not through the regular stdout stream (1) but through the error stream stderr (2), and the redirection ("> myfile.txt") applies only to stdout.

Because we did not redirect the stderr (2) stream anywhere, the error message appears on the screen and does NOT end up in the file myfile.txt.

Now let's run the ls command so that the information data is written to the file myfile.txt and the error messages are written to the file myfile.err, with nothing appearing on the screen during execution:

$ ls >myfile.txt 2>myfile.err


Here we see, for the first time, a stream number used in a redirection. The entry "2>myfile.err" says that stream number 2 (stderr) should be redirected to the file myfile.err.

Of course, we can direct both streams to the same file or to the same device.

2>&1

You will often find this notation in scripts. It means: "redirect stream number 2 to stream number 1", or "send stderr to wherever stdout points". In other words, all error messages are routed through the stream normally used for regular, non-error messages.

$ ls /asfasdf 2>&1


Here's another example in which all messages are redirected to the file myfile.txt:

$ ls /asfasdf >myfile.txt 2>&1


In this case all messages, both errors and regular output, are written to myfile.txt, because we first redirected the stdout stream to the file and then directed errors into stdout, and therefore into myfile.txt as well.

/dev/null

Sometimes, however, we just need to hide all messages without saving them, i.e. simply suppress the output. The virtual device /dev/null serves this purpose. In the following example we direct all regular output of the ls command to /dev/null:

$ ls > /dev/null


Nothing but error messages will appear on the screen. In the next example, the error messages are suppressed as well:

$ ls > /dev/null 2>&1


This form is equivalent to:

$ ls >/dev/null 2>/dev/null


And in the following example we will block only error messages:

$ ls 2>/dev/null

Note that "2>&1" would not help here: stream (1) is not redirected anywhere, so the error messages would simply appear on the screen.

Which comes first, the chicken or the egg?

I will give you 2 examples here.

Example 1)

$ ls >/dev/null 2>&1


Example 2)

$ ls 2>&1 >/dev/null


It looks as if swapping the terms should not change the sum. But the order of the redirections matters!

The point is that interpreters read and apply redirections from left to right. Let's walk through both examples.

Example 1

1) ">/dev/null" - we direct stream 1 (stdout) to /dev/null. All messages entering stream (1) will be sent to /dev/null.

2) "2>&1" - we redirect stream 2 (stderr) to stream 1 (stdout). But, because thread 1 is already associated with /dev/null - all messages will still end up in /dev/null.

Result: the screen is empty.

Example 2

1) "2>&1" - we redirect the error stream stderr (2) to the stdout stream (1). At the same time, because thread 1 is associated with the terminal by default - we will successfully see error messages on the screen.

/dev/null" - and here">
2) ">/dev/null" - only now is stream 1 redirected to /dev/null, so the regular messages disappear.

Result: We will see error messages on the screen, but will not see normal messages.

Conclusion: redirect the stream first, then link to it.
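The two examples can be checked from the shell itself. Here is a small sketch (the directory name is made up and assumed not to exist):

```shell
# Example 1: ">file 2>&1" - stdout goes to the file first, stderr follows it,
# so the error message lands in the file.
ls /nonexistent-dir-12345 > both.txt 2>&1

# Example 2: "2>&1 >file" - stderr is duplicated onto the *current* stdout
# (captured here into $msg) before stdout is moved to the file, so the error
# shows up on the terminal and the file stays empty.
msg=$(ls /nonexistent-dir-12345 2>&1 > only-out.txt)
echo "$msg"          # the ls error message
cat only-out.txt     # nothing: only (empty) regular output reached the file
```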

In the system, three "files" are always open by default: stdin (the keyboard), stdout (the screen), and stderr (error messages, also on the screen). These, and any other open files, can be redirected. In this context, "redirection" means taking the output of a file, command, program, script, or even a single block within a script (see Example 3-1 and Example 3-2) and passing it as input to another file, command, program, or script.

Each open file has a file descriptor associated with it. The descriptors for stdin, stdout, and stderr are 0, 1, and 2, respectively. When additional files are opened, descriptors 3 through 9 remain available. It is sometimes useful to assign one of these extra descriptors to stdin, stdout, or stderr as a temporary copy. This simplifies restoring the descriptors to their normal state after complex redirection and swapping manipulations (see Example 16-1).
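For instance, a spare descriptor can hold a copy of stdout while stdout is redirected. A minimal sketch (the log file name is illustrative):

```shell
exec 3>&1            # save stdout in descriptor 3
exec > quiet.log     # stdout now goes to quiet.log
echo "logged"        # this line lands in quiet.log
exec 1>&3 3>&-       # restore stdout from 3, then close 3
echo "back on the terminal"
```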

COMMAND_OUTPUT >
    # Redirect stdout to a file.
    # Creates the file if it is missing, otherwise overwrites it.

    ls -lR > dir-tree.list
    # Creates a file containing a listing of the directory tree.

    : > filename
    # The > operation truncates the file "filename" to zero length.
    # If the file did not exist before the operation, a new zero-length file
    # is created (the "touch" command has the same effect).
    # The : acts as a placeholder here, producing no output.

    > filename
    # The > operation truncates the file "filename" to zero length.
    # If the file did not exist before the operation, a new zero-length file
    # is created (the "touch" command has the same effect).
    # (Same result as ": >" above, but this form does not work in some shells.)

COMMAND_OUTPUT >>
    # Redirect stdout to a file.
    # Creates the file if it is missing, otherwise appends to it.

    # Single-line redirections
    # (affect only the line they appear on):
    # --------------------------------------

    1>filename   # Redirect stdout to the file "filename".
    1>>filename  # Redirect stdout to "filename", opening the file in append mode.
    2>filename   # Redirect stderr to the file "filename".
    2>>filename  # Redirect stderr to "filename", opening the file in append mode.
    &>filename   # Redirect both stdout and stderr to the file "filename".

    #==========================================================================
    # Redirecting stdout, one line at a time.
    LOGFILE=script.log

    echo "This line will be written to the file \"$LOGFILE\"." 1>$LOGFILE
    echo "This line will be appended to the end of \"$LOGFILE\"." 1>>$LOGFILE
    echo "This line will also be appended to the end of \"$LOGFILE\"." 1>>$LOGFILE
    echo "This line will be printed to the screen and will not end up in \"$LOGFILE\"."
    # The redirection is automatically reset after each line.

    # Redirecting stderr, one line at a time.
    ERRORFILE=script.errors

    bad_command1 2>$ERRORFILE    # The error message is written to $ERRORFILE.
    bad_command2 2>>$ERRORFILE   # The error message is appended to $ERRORFILE.
    bad_command3                 # The error message goes to stderr
                                 #+ and does not end up in $ERRORFILE.
    # Here, too, the redirection is reset after each line.
    #==========================================================================

2>&1
    # Redirects stderr to stdout.
    # Error messages are sent to the same place as standard output.

i>&j
    # Output to the file with descriptor i is passed to the file with descriptor j.

>&j
    # Redirects descriptor 1 (stdout) to the file with descriptor j.
    # Output to stdout is sent to file descriptor j.

0< FILENAME
 < FILENAME
    # Input from a file.
    # The companion of ">", often used in combination with it.
    #
    # grep search-word <filename

[j]<>filename
    # The file "filename" is opened for reading and writing and associated
    # with descriptor "j".
    # If "filename" is missing, it is created.
    # If descriptor "j" is not specified, descriptor 0 (stdin) is used by default.
    #
    # One use of this is writing at a specific position in a file.
    echo 1234567890 > File    # Write a string to the file "File".
    exec 3<> File             # Open "File" and associate descriptor 3 with it.
    read -n 4 <&3             # Read 4 characters.
    echo -n . >&3             # Write a dot character.
    exec 3>&-                 # Close descriptor 3.
    cat File                  # ==> 1234.67890
    # Random access, no less!

|
    # Pipe.
    # A universal tool for chaining commands together.
    # Looks like ">", but is in fact more general.
    # Used to chain commands, scripts, files, and programs into a pipeline.

    cat *.txt | sort | uniq > result-file
    # The contents of all the .txt files are sorted, duplicate lines are
    # removed, and the result is saved in "result-file".

Redirection operations and/or pipelines can be combined on the same command line.

command < input-file > output-file

command1 | command2 | command3 > output-file

See Example 12-23 and Example A-17.

It is possible to redirect multiple streams to one file.

ls -yz >> command.log 2>&1
# The message about the invalid "yz" option of "ls" is written to "command.log",
# because stderr has been redirected to the file.

Closing file descriptors

0<&-, <&-

Close the input file descriptor.

1>&-, >&-

Close the output file descriptor.

Child processes inherit the open file descriptors of their parent. This is what makes pipelines work. To prevent a descriptor from being inherited, close it before starting the child process.

# Only stderr reaches the pipeline.
exec 3>&1                            # Save the current "state" of stdout.
ls -l 2>&1 >&3 3>&- | grep bad 3>&-  # Close desc. 3 for "grep" (but not for "ls").
#              ^^^^   ^^^^
exec 3>&-                            # Now close it for the remainder of the script.
# Thanks, S.C.

For more information about I/O redirection, see Appendix D.

16.1. Using the command exec

The command exec <filename redirects stdin to a file. From that point on, all input, instead of coming from stdin (normally the keyboard), is read from that file. This makes it possible to read a file line by line, parsing each line with sed and/or awk.

Example 16-1. Redirecting with exec

#!/bin/bash
# Redirecting stdin using "exec".

exec 6<&0          # Attach descriptor #6 to stdin,
                   #+ saving stdin.

exec < data-file   # stdin is replaced by the file "data-file".

read a1            # Read the first line of "data-file".
read a2            # Read the second line of "data-file".

echo
echo "The following lines were read from the file."
echo "--------------------------------------------"
echo $a1
echo $a2

echo; echo; echo

exec 0<&6 6<&-
# Restore stdin from descriptor #6, where it was saved,
#+ and close descriptor #6 (6<&-), freeing it for other processes.
#
# <&6 6<&- gives the same result.

echo -n "Enter a line: "
read b1   # "read" now, as expected, takes its data from the usual stdin.
echo "Line read from stdin."
echo "---------------------"
echo "b1 = $b1"

echo

exit 0

Similarly, the construct exec >filename redirects output to the specified file. From then on, all command output that would normally go to stdout is written to this file.

Example 16-2. Redirecting with exec

#!/bin/bash
# reassign-stdout.sh

LOGFILE=logfile.txt

exec 6>&1           # Attach descriptor #6 to stdout,
                    #+ saving stdout.

exec > $LOGFILE     # stdout is replaced by the file "logfile.txt".

# ----------------------------------------------------------- #
# All output from the commands in this block goes to $LOGFILE.

echo -n "Logfile: "
date
echo "-------------------------------------"
echo

echo "Output of \"ls -al\""
echo
ls -al
echo; echo
echo "Output of \"df\""
echo
df

# ----------------------------------------------------------- #

exec 1>&6 6>&-      # Restore stdout and close descriptor #6.

echo
echo "== stdout restored to default =="
echo
ls -al
echo

exit 0

Example 16-3. Simultaneous redirection of stdin and stdout using the exec command

#!/bin/bash
# upperconv.sh
# Converts the characters of an input file to uppercase.

E_FILE_ACCESS=70
E_WRONG_ARGS=71

if [ ! -r "$1" ]     # Is the file readable?
then
  echo "Cannot read this file!"
  echo "Usage: $0 input-file output-file"
  exit $E_FILE_ACCESS
fi                   # The exit code is the same even if the
                     #+ input file ($1) is not specified at all.

if [ -z "$2" ]
then
  echo "An output file must be specified."
  echo "Usage: $0 input-file output-file"
  exit $E_WRONG_ARGS
fi


exec 4<&0
exec < $1            # Take input from the input file.

exec 7>&1
exec > $2            # Send output to the output file.
                     # Assumes the output file is writable
                     #+ (add a check?).

# ----------------------------------------------------------- #
cat - | tr a-z A-Z   # Convert to uppercase.
#   ^^^^^              Reads from stdin.
#           ^^^^^^^^   Writes to stdout.
# However, both stdin and stdout have been redirected.
# ----------------------------------------------------------- #

exec 1>&7 7>&-       # Restore stdout.
exec 0<&4 4<&-       # Restore stdin.

# After restoration, the following line goes to stdout, as expected.
echo "The characters of \"$1\" have been converted to uppercase; the result is in \"$2\"."

exit 0


Redirecting output to a file

To redirect standard output to a file, use the `>' operator.


Follow the command name with the > operator, followed by the name of the file that will serve as the destination for the output.

If you redirect standard output to an existing file, the file is overwritten from the beginning. To append standard output to the contents of an existing file, use the `>>' operator instead; a second run of the program can then add its results to the end of the same file.
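For instance (the file name here is only an illustration):

```shell
echo "first run"  > results.txt   # '>' creates or overwrites the file
echo "second run" > results.txt   # results.txt now holds only "second run"
echo "third run" >> results.txt   # '>>' appends instead of overwriting
cat results.txt
# second run
# third run
```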

Alex Otwagin 2002-12-16

A program is usually valuable because it can process data: it accepts one thing and produces another, and almost anything can serve as data: text, numbers, sound, video... A command's incoming and outgoing data streams are called its input and output. Each program can have several input and output streams. Every process, when created, necessarily receives a so-called standard input (stdin), standard output (stdout), and standard error (stderr).

Standard input/output streams are intended primarily for exchanging textual information. It does not even matter who communicates through text, a person with a program or two programs with each other; the main thing is that they share a data channel and speak "the same language."

The textual principle of working with the machine lets us abstract away from specific parts of the computer, such as the system keyboard or the video card with its monitor, and consider instead a single device through which the user enters text (commands) and passes it to the system, and through which the system prints the data and messages (diagnostics and errors) the user needs. Such a device is called a terminal. In general, a terminal is the user's entry point into the system, capable of transmitting textual information. A terminal can be a separate external device connected to the computer through a serial data port (on a personal computer it is called a "COM port"). A program (for example, xterm or ssh) can also act as a terminal, with some support from the system. Finally, virtual consoles are terminals too, only implemented in software using suitable devices of a modern computer.

When working at the command line, the shell's standard input is attached to the keyboard, and its standard output and error output to the monitor screen (or a terminal emulator window). Let's take the simplest command, cat, as an example. Normally cat reads data from all the files named as its parameters and sends what it reads directly to standard output (stdout). Therefore the command

/home/larry/papers# cat history-final masters-thesis

will display the contents of the file history-final first, followed by masters-thesis.

However, if the file name is not specified, the program cat reads input from stdin and immediately returns it to stdout (without modifying it in any way). Data passes through cat like through a pipe. Let's give an example:

/home/larry/papers# cat
Hello there.
Hello there.
Bye.
Bye.
Ctrl-D
/home/larry/papers#

Every line entered from the keyboard is immediately echoed to the screen by cat. When entering from standard input, the end of input is signaled with a special key combination, usually Ctrl-D.

Here is another example. The sort command reads lines of text (again from stdin if no file name is given) and prints them on stdout in sorted order. Let's see it in action.

/home/larry/papers# sort
bananas
carrots
apples
Ctrl-D
apples
bananas
carrots
/home/larry/papers#

As you can see, after Ctrl-D was pressed, sort printed the lines in alphabetical order.

Standard input and standard output

Suppose you want to route the output of sort to a file, to save the alphabetized list on disk. The shell lets you redirect a command's standard output to a file using the > symbol. Here is an example:

/home/larry/papers# sort > shopping-list
bananas
carrots
apples
Ctrl-D
/home/larry/papers#

The output of sort is not printed on the screen; instead it is saved in a file named shopping-list. Let's display the contents of this file:

/home/larry/papers# cat shopping-list
apples
bananas
carrots
/home/larry/papers#

Now suppose the original, unsorted list is in a file named items. The list can be sorted with sort by telling it to read from that file rather than from its standard input, while also redirecting standard output to a file, as was done above. Example:

/home/larry/papers# sort items > shopping-list
/home/larry/papers# cat shopping-list
apples
bananas
carrots
/home/larry/papers#

However, you can do it differently, redirecting not only the standard output but also the utility's standard input from a file, using the < symbol:

/home/larry/papers# sort < items
apples
bananas
carrots
/home/larry/papers#

The result of sort < items is the same as that of sort items, but the first form demonstrates something important: with sort < items, the system behaves as if the data contained in the file had been typed on standard input. The redirection is performed by the shell; the sort command is never given the file name and reads from its standard input exactly as if we had typed the data at the keyboard.

Let's introduce the notion of a filter. A filter is a program that reads data from standard input, processes it in some way, and sends the result to standard output. With redirection, files can serve as standard input and output. As noted above, stdin and stdout default to the keyboard and the screen, respectively. sort is a simple filter: it sorts its input and sends the result to standard output. A very simple filter is cat: it does nothing to the input data and simply passes it on to the output.
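Another classic filter, used here purely as an illustration, is tr, which transforms the characters flowing through it from stdin to stdout:

```shell
echo "make me louder" | tr a-z A-Z
# MAKE ME LOUDER
```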

We've already demonstrated how to use the sort program as a filter above. These examples assumed that the source data was in some file or that the source data would be entered from the keyboard (standard input). However, what if you want to sort data that is the result of some other command, for example, ls?

We will sort the data in reverse alphabetical order, which is what the -r option of sort does. To list the files of the current directory in reverse alphabetical order, one way would be the following.

I/O redirection

Let's first use the command ls:

/home/larry/papers# ls
english-list history-final masters-thesis notes
/home/larry/papers#

Now we redirect the output of ls to a file named file-list:

/home/larry/papers# ls > file-list
/home/larry/papers# sort -r file-list
notes
masters-thesis
history-final
english-list
/home/larry/papers#

Here the output of ls is saved in a file, and the file is then processed by sort. But this approach is inelegant and requires a temporary file to hold the output of ls.

A better solution is to join the commands into a pipeline. The joining is done by the shell, which connects the stdout of the first command to the stdin of the second. Here we want to send the stdout of ls to the stdin of sort. The | symbol is used for joining, as in the following example:

/home/larry/papers# ls | sort -r
notes
masters-thesis
history-final
english-list
/home/larry/papers#

This command is shorter than the sequence of separate commands and easier to type.

Let's consider another useful example. The command

/home/larry/papers# ls /usr/bin

returns a long list of files, most of which fly across the screen too quickly to be read. Let's use the more command to display the list page by page:

/home/larry/papers# ls /usr/bin | more

Now you can “turn through” this list.

You can go further and chain more than two commands. Consider the command head, a filter with the following property: it prints the first lines of its input stream (in our case, the input is the output of several chained commands). If we want to print the last file name of the current directory in alphabetical order, we can use this long command:

/home/larry/papers# ls | sort -r | head -1
notes
/home/larry/papers#

Here head prints the first line of the stream of lines it receives (in our case, the output of ls sorted in reverse alphabetical order).

Using stacked commands (pipeline)

Redirecting output to a file with the > symbol is destructive; in other words, the command

/home/larry/papers# ls > file-list

will destroy the contents of file-list if the file already exists, creating a new file in its place.

If the redirection is done with the >> symbol instead, the output is appended to the end of the specified file without destroying its original contents. For example, the command

/home/larry/papers# ls >> file-list

appends the output of ls to the end of file-list.

Note that input and output redirection and command chaining are performed by the shell, which interprets the symbols <, > and |. The commands themselves cannot see or interpret these symbols.
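This is easy to verify: quoting a redirection symbol hides it from the shell, and the command then receives it as an ordinary argument. A quick sketch (the file name is illustrative):

```shell
echo one > two     # the shell creates the file "two"; echo never sees '>'
echo one ">" two   # prints: one > two
cat two            # prints: one
```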

Non-destructive output redirection

Something like this should do what you need?

Check it out: wintee

No need for cygwin.

However, I have encountered and reported some issues.

Also you might want to check out http://unxutils.sourceforge.net/ because it contains tee (and doesn't need cygwin), but be careful that EOL output is UNIX-like.

Last but not least, if you have PowerShell, you can try Tee-Object; check its help in a PowerShell console for more information.

This works, although it's a little ugly:

It's a little more flexible than some other solutions in that it works line by line, so you can also use it to append.

I use this quite a lot in batch files to log and display messages:

Yes, you could simply repeat the ECHO statement (once for the screen and a second time redirecting to the log file), but that looks just as bad and is a maintenance issue.


At least this way you don't have to make changes to messages in two places.

Note that _ is just a short file name, so you will need to delete the file at the end of your batch script (if you are using one).

This will create a log file stamped with the current date and time, and you can still see the console output during the process.

If you have cygwin in your Windows environment path, you can use:

A simple C# console application would do the trick:

To use it, you simply pipe the source command into the program and specify the paths of any files that should receive a copy of the output. For example:

This will display the search results and also save them in files1.txt and files2.txt.

Note that there is not much in the way of error handling (none!), and multi-file support may not even be needed.

I was looking for the same solution; after a little experimenting I managed to do it on the command line. Here is my solution:

It even hijacks any PAUSE command.

An alternative is to tee stdout to stderr in your program:

Then in your dos batchfile:

Stdout will go to the log file and stderr (same data) will be shown on the console.

How do I display output and redirect it to a file at the same time? Suppose I use the DOS command dir > test.txt; it redirects the output to test.txt without displaying the results. How do I write a command that both prints the output and redirects it to a file using the Windows command line, not UNIX/Linux?

You may find these commands in biterscripting (http://www.biterscripting.com) useful.

This is a variation on MTS's earlier answer, but it adds some features that may be useful to others. Here is the method I used:

  • The command is stored in a variable, so it can be used later in the code both to print to the command window and to append to the log file
    • inside the variable, the redirection is escaped with the caret symbol (^) so the commands are not evaluated when the variable is set
  • A temporary file is created whose name resembles the batch file's, using the command-line parameter expansion syntax to obtain the batch file's name
  • The output is appended to a separate log file

Here is the sequence of commands:

  1. Output and error messages are sent to the temporary file
  2. The contents of the temporary file are then:
    • appended to the log file
    • printed to the command window
  3. The temporary file holding the message is deleted

Here's an example:

This way the command can simply be appended after other commands in the batch file, which looks much cleaner:

It can be added at the end of other commands as well. As far as I can tell, it works when a message has multiple lines; for example, the following command prints two lines if there is an error message:

I agree with Brian Rasmussen, the unxutils port is the easiest way to do this. In the Batch Files section of his scripting pages, Rob van der Woude offers a wealth of information on MS-DOS and CMD commands. I thought he might have his own solution to your problem, and after some digging I found TEE.BAT, which seems to be exactly that, written in the MS-DOS batch language. It is a fairly complex batch file, and I would still be inclined to use the unxutils port.

I install perl on most of my machines, so the answer is using perl: tee.pl

dir | perl tee.pl or dir | perl tee.pl dir.bat

raw and untested.

One of the most interesting and useful topics for system administrators and for new users just coming to grips with the terminal is the redirection of Linux input/output streams. This terminal feature lets you redirect command output to a file, feed a file's contents to a command's input, join commands together, and build command pipelines.

In this article, we will look at how I/O stream redirection is performed in Linux, what operators are used for this, and where all this can be used.

All the commands we execute return us three types of data:

  • The result of the command, usually text data requested by the user;
  • Error messages - inform about the process of command execution and unexpected circumstances that have arisen;
  • The return code is a number that allows you to evaluate whether the program worked correctly.
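All three kinds can be seen with a couple of commands. A sketch, assuming /nonexistent-dir-12345 does not exist:

```shell
ls /tmp > result.txt 2> errors.txt   # results and errors go to separate files
echo $?                              # return code: 0, the command succeeded

ls /nonexistent-dir-12345 2>> errors.txt
echo $?                              # non-zero return code: the command failed
```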

In Linux everything is a file, including the standard I/O streams. Every program has three main stream files available to it; they are set up by the shell and identified by file descriptor number:

  • STDIN, descriptor 0: this file is associated with the keyboard, and most commands read their input from here;
  • STDOUT, descriptor 1: the standard output, to which a program sends all the results of its work. It is associated with the screen, or more precisely with the terminal in which the program runs;
  • STDERR, descriptor 2: all error messages go to this file.

I/O redirection lets you substitute your own file for any of these. For example, you can make a program read its data from a file in the file system rather than the keyboard, or write its errors to a file rather than the screen, and so on. All of this is done with the "<" and ">" symbols.

Redirect output to file

It is all very simple. You can redirect output to a file using the > symbol. For example, let's save the output of the top command:

top -bn 5 > top.log

The -b option makes the program run in non-interactive batch mode, and -n 5 repeats the operation five times, collecting information on all processes. Now let's see what was written, using cat:

cat top.log

" overwrites">
The ">" symbol overwrites the file's contents if the file already holds something. To append data to the end, use ">>". For example, let's append more top output to the same file:

top -bn 5 >> top.log

By default, redirection uses the standard output descriptor, but you can specify it explicitly. This command gives the same result:

top -bn 5 1>top.log

Redirect errors to file

To redirect error output to a file, you need to state explicitly which descriptor you are redirecting. For errors it is number 2. For example, when trying to access the superuser's directory, ls reports an error:

ls -l /root/

You can redirect standard error to a file like this:

ls -l /root/ 2> ls-error.log
cat ls-error.log

To append data to the end of the file, use the append form of the same redirection:

ls -l /root/ 2>>ls-error.log

Redirect standard output and errors to file

You can also send everything, both errors and standard output, to a single file. There are two ways to do this. The first, older one, is to pass both descriptors:

ls -l /root/ >ls-error.log 2>&1

First the output of ls is sent to ls-error.log by the first redirection; then all errors are sent to the same file. The second method is simpler:

ls -l /root/ &> ls-error.log

You can also use appending instead of rewriting:

ls -l /root/ &>> ls-error.log

Standard input from file

Most programs, apart from services, receive the data they work on through standard input. By default standard input is the keyboard, but you can make a program read its data from a file with the "<" operator:

cat

You can also redirect the output to a file at the same time. For example, let's sort a list and save the result:

sort sort.output

Thus we redirect both input and output in a single command.

Using pipes

You can work not only with files, but also redirect the output of one command as the input of another. This is very useful for performing complex operations. For example, let's display five recently modified files:

ls -lt | head -n 5

With the xargs utility, you can combine commands so that standard input is passed as parameters. For example, let's copy one file to several folders:

echo test/ tmp/ | xargs -n 1 cp -v testfile.sh

Here the -n 1 option tells xargs to pass only one parameter per command invocation, and the -v option makes cp print details about each copy. Another useful command in such cases is tee. It reads data from standard input and writes it to standard output and to files. For example:

echo "Tee operation test" | tee file1

In combination with other commands, these can be used to create complex multi-command instructions.
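As a sketch of such a combination, tee can sit in the middle of a pipeline to capture an intermediate result while the data flows on (the log file name here is a temporary file created for the example):

```shell
#!/bin/sh
# Sketch: tee saves the full sorted list while the pipeline continues to head.
log=$(mktemp)
printf '3\n1\n2\n' | sort | tee "$log" | head -n 1   # prints: 1
grep -c . "$log"                                     # prints: 3 (whole list saved)
rm "$log"
```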

Conclusions

In this article, we covered the basics of Linux I/O stream redirection. Now you know how to redirect output to a file in Linux or read input from a file. It's very simple and convenient. If you have any questions, ask in the comments!

Linux's built-in redirection capabilities provide you with a wide range of simplified tools for all kinds of tasks. The ability to manage various I/O streams will significantly increase productivity, both when developing complex software and when managing files using the command line.

I/O streams

Input and output in a Linux environment are distributed among three streams:

  • Standard input (stdin, stream number 0)
  • Standard output (stdout, stream number 1)
  • Standard error, or diagnostic stream (stderr, stream number 2)

When a user interacts with a terminal, standard input is fed from the user's keyboard. Standard output and standard error are displayed as text on the user's terminal. Together, these three streams are called the standard streams.

Standard input

The standard input stream typically passes data from the user to the program. Programs that expect standard input typically receive input from a device (such as a keyboard). Standard input stops when it reaches EOF (end-of-file). EOF indicates that there is no more data to read.

To see how standard input works, run the cat program. The name of this tool is short for "concatenate" (to link or combine things). Typically the tool is used to combine the contents of two files. When run without arguments, cat simply waits for data on standard input:

cat

Now enter some numbers:

1
2
3
ctrl-d

By entering a number and pressing Enter, you send standard input to the running cat program, which accepts the data. In turn, cat writes the received input to standard output.

The user can signal EOF by pressing Ctrl-D, which causes the cat program to stop.
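When cat's input comes from a pipe rather than the keyboard, end-of-file arrives automatically when the writing side finishes, so no Ctrl-D is needed. A minimal sketch:

```shell
# Sketch: the end of the pipe plays the role of Ctrl-D.
printf '1\n2\n3\n' | cat   # cat copies stdin to stdout and stops at EOF
```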

Standard output

Standard output records the data generated by the program. If standard output has not been redirected, it will output text to the terminal. Try running the following command as an example:

echo Sent to the terminal through standard output

The echo command, without any additional options, displays on the screen all the arguments passed to it on the command line.

Now run echo without any arguments:

echo

The command prints an empty line.

Standard error

This standard stream records error messages generated by programs. Like standard output, it sends data to the terminal.

Let's look at an example of an ls command error stream. The ls command displays the contents of directories.

Without arguments, this command returns the contents of the current directory. If you specify a directory name as an argument, ls returns its contents. Try passing it a nonexistent directory name, for example %:

ls %

Since the directory % does not exist, the command sends the following message to standard error:

ls: cannot access %: No such file or directory

Stream redirection

Linux provides special operators to redirect each stream. These operators write a stream to a file. If the stream is redirected to a nonexistent file, the operator creates a new file with that name and saves the redirected output to it.

Operators with one angle bracket overwrite the existing content of the target file:

  • > - standard output
  • < - standard input
  • 2> - standard error

Operators with double angle brackets do not overwrite the contents of the target file:

  • >> - standard output
  • << - standard input
  • 2>> - standard error

Consider the following example:

cat > write_to_me.txt
a
b
c
ctrl-d

This example uses the cat command to write the output to a file.

Review the contents of write_to_me.txt:

cat write_to_me.txt

The command should return:

a
b
c

Redirect cat to write_to_me.txt again and enter three numbers.

cat > write_to_me.txt
1
2
3
ctrl-d

Now check the contents of the file.

cat write_to_me.txt

The command should return:

1
2
3

As you can see, the file only contains the latest output because the command that redirected the output used a single angle bracket.

Now try running the same command with two angle brackets:

cat >> write_to_me.txt
a
b
c
ctrl-d

Open write_to_me.txt:

1
2
3
a
b
c

Commands with double angle brackets do not overwrite existing content, but rather append to it.

Pipelines

Pipes redirect the output of one command to the input of another. In this case, the data transferred to the second program is not displayed in the terminal. The data will appear on the screen only after processing by the second program.

Pipelines in Linux are represented by the vertical bar character (|).

For example:

ls | less

This command passes the output of ls (the contents of the current directory) to less, which displays the data one screen at a time. Normally ls prints directory contents side by side in columns; when its output goes into a pipe, each entry is written on its own line, and less lets you scroll through them.

As you can see, a pipeline can redirect the output of one command to the input of another, unlike > and >>, which only redirect data to files.

Filters

Filters are commands that process and transform the data flowing through a pipeline.

Note: Filters are also standard Linux commands that can be used without a pipeline.

  • find – searches for files by name.
  • grep – searches text for lines matching a given pattern.
  • tee – copies standard input to standard output and to one or more files.
  • tr – translates or deletes characters.
  • wc – counts characters, lines, and words.
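The filters above can be chained together in a single pipeline. A small sketch with a hypothetical fruit list:

```shell
#!/bin/sh
# Sketch: grep selects matching lines, tr transforms them, wc counts them.
printf 'apple\npear\nbanana\n' | grep 'an' | tr 'a-z' 'A-Z'   # prints: BANANA
printf 'apple\npear\nbanana\n' | grep 'p' | wc -l             # prints: 2
```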

Examples of I/O redirection

Now that you are familiar with the basic concepts and mechanisms of redirection, let's look at some basic examples of their use.

command > file

This pattern redirects the command's standard output to a file.

ls ~ > home_dir_contents.txt

This command sends the contents of the user's home directory (~) to standard output and writes that output to the file home_dir_contents.txt. Since a single angle bracket is used, any previous content of the file is overwritten.

command > /dev/null

/dev/null is a special file (called a "null device") that is used to suppress standard output or diagnostics to avoid unwanted console output. All data that ends up in /dev/null is discarded. Redirection to /dev/null is commonly used in shell scripts.

ls > /dev/null

This command discards the standard output returned by ls by sending it to /dev/null.
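A common shell-script idiom is to silence a command completely and keep only its exit status (the path below is a deliberately nonexistent example path):

```shell
#!/bin/sh
# Sketch: discard both streams and branch on the exit status alone.
if ls /surely/not/a/real/path > /dev/null 2>&1; then
    echo "path exists"
else
    echo "path missing"    # prints: path missing
fi
```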

command 2> file

This pattern redirects the command's standard error stream to a file, overwriting its current contents.

mkdir "" 2> mkdir_log.txt

This command will redirect the error caused by an invalid directory name and write it to log.txt. Please note: the error still appears in the terminal.

command >> file

This pattern redirects the standard output of a command to a file without overwriting the current contents of the file.

echo "Written to a new file" > data.txt
echo "Appended to an existing file's contents" >> data.txt

This pair of commands first redirects the text entered by the user to a new file, then appends more text to the existing file without overwriting its contents.

command 2>>file

This pattern redirects the command's standard error stream to a file without overwriting the file's existing contents. It is suitable for creating error logs for programs or services, since the existing log contents are never wiped out.

find "" 2> stderr_log.txt
wc "" 2>> stderr_log.txt

The above command redirects the error message caused by the invalid find argument to the stderr_log.txt file and then appends the error message caused by the invalid wc argument to the stderr_log.txt file.

command1 | command2

This pattern redirects the standard output of the first command to the standard input of the second command.

find /var | grep deb

This command searches the /var directory and its subdirectories for path names containing deb and returns those paths, with grep highlighting the matched pattern.

command | tee file

This pattern redirects the command's standard output to a file, overwriting its contents, and at the same time displays the redirected output in the terminal. If the specified file does not exist, tee creates it.

In this pattern, the tee command is typically used to view the output of a program and save it to a file at the same time.

wc /etc/magic | tee magic_count.txt

This command passes the number of characters, lines, and words in the /etc/magic file (which Linux uses to determine file types) to the tee command, which sends this data both to the terminal and to the magic_count.txt file.

command | command | command >> file

This pattern redirects the standard output of the first command, filters it through the next two commands, and then appends the final result to a file.

ls ~ | grep tar | tr e E >> ls_log.txt

This command sends the ls listing of the home directory to grep, which picks out the entries containing tar. The result of grep is then passed to tr, which replaces every e character with E. The final result is appended to the file ls_log.txt (if the file does not exist, the command creates it automatically).

Conclusion

Linux I/O redirection functions seem overly complex at first. However, working with redirection is one of the most important skills of a system administrator.

To find out more about a particular command, use:

man command | less

For example:

man tee | less

This command displays the full manual page for the tee command, one screen at a time.

Learn Linux, 101: Streams, program channels and redirects

Learning the basics of Linux pipelines

Overview

In this article, you will learn the basic techniques for redirecting standard I/O streams in Linux. You will learn:

  • Redirect standard input/output streams: standard input, standard output, and standard error.
  • Direct the output of one command to the input of another command.
  • Send output simultaneously to standard device output (stdout) and to a file.
  • Use the output of a command as arguments to another command.

This article will help you prepare for the LPI entry-level administrator (LPIC-1) exam 101, and covers material from Objective 103.4 of Topic 103. The objective has a weight of 4.

About this series

This series of articles will help you master Linux operating system administration tasks. You can also use the material in these articles to prepare for the certification exams.

To view descriptions of the articles in this series and get links to them, please refer to our series listing. The list is continually updated with new articles as they become available and reflects the most current (as of April 2009) LPIC-1 certification exam objectives. If an article is missing from the list, you can find an earlier version, consistent with the pre-April-2009 LPIC-1 objectives, in our archive.

Prerequisites

To get the most out of these articles, you should have a basic knowledge of Linux and a working Linux computer on which you can run all the commands you encounter. Sometimes different versions of a program produce output differently, so the contents of the listings and figures may differ from what you see on your computer.

Preparing to Run the Examples

How to contact Ian

Ian is one of our most popular and prolific authors. You can find his other articles published on developerWorks, along with his contact information, and connect with him and other contributors through My developerWorks.

To run the examples in this article, we will use some of the files created earlier in a previous article in this series. If you haven't read that article or saved the files, don't worry! Start by creating a new directory, lpi103-4, and all the necessary files. To do this, open a terminal window and navigate to your home directory. Copy the contents of Listing 1 into the terminal; the commands will create the lpi103-4 subdirectory in your home directory along with all the files we will use in the examples.

Listing 1. Creating the files needed for this article's examples
mkdir -p lpi103-4 && cd lpi103-4 && (
echo -e "1 apple\n2 pear\n3 banana" > text1
echo -e "9\tplum\n3\tbanana\n10\tapple" > text2
echo "This is a sentence. " !#:* !#:1->text3
split -l 2 text1
split -b 17 text2 y; )

Your window should look like Listing 2, and your current working directory should be the newly created lpi103-4 directory.

Listing 2. Results of creating the necessary files
$ mkdir -p lpi103-4 && cd lpi103-4 && (
> echo -e "1 apple\n2 pear\n3 banana" > text1
> echo -e "9\tplum\n3\tbanana\n10\tapple" > text2
> echo "This is a sentence. " !#:* !#:1->text3
echo "This is a sentence. " "This is a sentence. " "This is a sentence. ">text3
> split -l 2 text1
> split -b 17 text2 y; )
$

Redirecting standard input/output

A Linux shell, such as Bash, receives input and sends output as sequences, or streams, of characters. Each character is independent of the characters before and after it. The characters are not organized into structured records or fixed-size blocks. Streams are accessed using file I/O mechanisms, regardless of where the character streams come from or are sent to (a file, keyboard, window, screen, or other I/O device). Linux shells use three standard I/O streams, each of which is assigned a specific file descriptor.

  1. stdout - standard output; displays command output and has descriptor 1.
  2. stderr - standard error; displays command errors and has descriptor 2.
  3. stdin - standard input; passes input to commands and has descriptor 0.

Input streams provide input (usually from the keyboard) to commands. Output streams print text characters, usually to the terminal. The terminal was originally an ASCII printing device or display terminal, but now it is usually just a window on the computer desktop.

If you have already read the earlier tutorial in this series, some of the material in this article will be familiar to you.

Output redirection

There are two ways to redirect output to a file:

  • n> redirects output from file descriptor n to a file. You must have write permission to the file. If the file does not exist, it is created. If it does exist, its contents are usually destroyed without warning.
  • n>> also redirects output from file descriptor n to a file. Again, you must have write permission to the file. If the file does not exist, it is created. If it does exist, the output is appended to its contents.

The symbol n in the n> or n>> operators is a file descriptor. If it is omitted, standard output is assumed. Listing 3 demonstrates using redirection to separate the standard output and standard error of the ls command, using files that were created earlier in the lpi103-4 directory. Appending command output to existing files is also demonstrated.

Listing 3. Output redirection
$ ls x* z*
ls: cannot access z*: No such file or directory
xaa  xab
$ ls x* z* >stdout.txt 2>stderr.txt
$ ls w* y*
ls: cannot access w*: No such file or directory
yaa  yab
$ ls w* y* >>stdout.txt 2>>stderr.txt
$ cat stdout.txt
xaa
xab
yaa
yab
$ cat stderr.txt
ls: cannot access z*: No such file or directory
ls: cannot access w*: No such file or directory

We've already said that redirecting output using the n> operator usually overwrites existing files. You can control this behavior using the noclobber option of the set builtin. If the option is set, you can override it using the n>| operator, as shown in Listing 4.

Listing 4. Redirecting output using the noclobber option
$ set -o noclobber
$ ls x* z* >stdout.txt 2>stderr.txt
-bash: stdout.txt: cannot overwrite existing file
$ ls x* z* >|stdout.txt 2>|stderr.txt
$ cat stdout.txt
xaa
xab
$ cat stderr.txt
ls: cannot access z*: No such file or directory
$ set +o noclobber #restore original noclobber setting

Sometimes you may want to redirect both standard output and standard error to a file. This is often done in automated processes or background jobs so that the results can be reviewed later. To redirect standard output and standard error to the same place, use the &> or &>> operator. An alternative is to redirect file descriptor n and then redirect file descriptor m to the same place using the construction m>&n or m>>&n. The order in which the streams are redirected is important. For example, the command
command 2>&1 >output.txt
is not the same as the command
command >output.txt 2>&1
In the first case, stderr is redirected to the current location of stdout (the terminal), and only then is stdout redirected to output.txt; the second redirection affects only stdout, not stderr. In the second case, stdout is first redirected to output.txt, and then stderr is redirected to the current location of stdout, that is, to output.txt. These redirections are illustrated in Listing 5. Notice that in the last command the standard output was redirected after the standard error, and as a result the error stream is still output to the terminal window.

Listing 5. Redirecting two streams to one file
$ ls x* z* &>output.txt
$ cat output.txt
ls: cannot access z*: No such file or directory
xaa
xab
$ ls x* z* >output.txt 2>&1
$ cat output.txt
ls: cannot access z*: No such file or directory
xaa
xab
$ ls x* z* 2>&1 >output.txt # stderr does not go to output.txt
ls: cannot access z*: No such file or directory
$ cat output.txt
xaa
xab

In other situations, you may want to ignore standard output or standard error entirely. To do this, redirect the corresponding stream to the empty file /dev/null. Listing 6 shows how to discard the error stream of the ls command, and how to use the cat command to verify that /dev/null is indeed empty.

Listing 6. Ignoring standard error by using /dev/null
$ ls x* z* 2>/dev/null
xaa  xab
$ cat /dev/null
$

Input redirection

Just as we can redirect stdout and stderr, we can redirect stdin from a file using the < operator. If you have read the earlier tutorial in this series, you may remember that the tr command was used there to replace the spaces in file text1 with tab characters. In that example, we used the output of the cat command to create the standard input for tr. Now, instead of a needless call to cat, we can use input redirection to convert the spaces to tabs, as shown in Listing 7.

Listing 7. Input redirection
$ tr " " "\t"

Command interpreters, including bash, implement the here-document concept, which is one of the ways to redirect input. It uses the << construction and some word, such as END, that serves as a marker, or sentinel, indicating the end of the input. This concept is demonstrated in Listing 8.

Listing 8. Redirecting input using the here-document concept
$ sort -k2 <<END
> 1 apple
> 2 pear
> 3 banana
> END
1 apple
3 banana
2 pear

But why can't you just type the command sort -k2, enter the data, and press Ctrl-D to indicate the end of the input? You could run the command that way, but then you wouldn't learn about the here-document concept, which is very common in shell scripts (where there is no other way to specify which lines should be taken as input). Because tabs are widely used in scripts to indent text and make it easier to read, there is another variation of the here-document concept: when you use the <<- operator instead of <<, leading tab characters are removed.

In Listing 9, we use command substitution to create a tab character, and then create a small shell script containing two cat commands, each of which reads data from a here-document block. Notice that we use the word END as the sentinel for the here-document block we are reading from the terminal. If we used the same word inside the script, our input would end prematurely, so we use EOF in the script instead. Once the script is created, we use the . (dot) command to run it in the context of the current shell.

Listing 9. Redirecting input using the here-document concept
$ ht=$(echo -en "\t") $ cat<ex-here.sh > cat<<-EOF >apple > EOF > $(ht)cat<<-EOF >$(ht)pear > $(ht)EOF > END $ cat ex-here.sh cat<<-EOF apple EOF cat <<-EOF pear EOF $ . ex-here.sh apple pear

In future articles in this series, you'll learn more about command substitution and scripting. Links to all articles in this series can be found in.

Creating pipelines

Using the xargs command

The xargs command reads data from the standard input device and then constructs and executes commands that take the received input as parameters. If no command is specified, the echo command is used. Listing 12 shows a simple example of using our text1 file, which contains three lines of two words each.

Listing 12. Using the xargs command
$ cat text1
1 apple
2 pear
3 banana
$ cat text1 | xargs
1 apple 2 pear 3 banana

Why does the xargs output contain only one line? By default, xargs splits its input wherever it encounters delimiter characters, and each resulting fragment becomes a separate parameter. However, when xargs builds the command, it passes as many parameters as possible at one time. You can change this behavior with the -n or --max-args option. Listing 13 shows an example of both options; an explicit call to the echo command is also made for use with xargs.

Listing 13. Using xargs and echo commands
$xargs " args > 1 apple 2 pear 3 banana $ xargs --max-args 3 " args > 1 apple 2 args > pear 3 banana $ xargs -n 1 " args > 1 args > apple args > 2 args > pear args > 3 args > banana

If the input data contains spaces that are enclosed in single or double quotes (or escaped with backslashes), xargs will not break the input apart at those points. This is shown in Listing 14.

Listing 14. Using the xargs command and quotes
$ echo ""4 plum"" | cat text1 - 1 apple 2 pear 3 banana "4 plum" $ echo ""4 plum"" | cat text1 - | xargs -n 1 1 apple 2 pear 3 banana 4 plum

Until now, all arguments have been added to the end of the command. If you need other arguments to follow them, use the -I option to specify a replacement string. At each point in the command called via xargs where the replacement string appears, an argument is substituted. With this approach, only one argument is passed to each command, but the argument is built from an entire input line, rather than from a single fragment. You can also use the -L option of xargs, which makes it use whole lines as arguments, rather than individual fragments separated by blanks. Using the -I option implicitly sets -L 1. Listing 15 shows examples of using the -I and -L options.

Listing 15. Using the xargs command and input lines
$ xargs -I XYZ echo "START XYZ REPEAT XYZ END" " <9 plum> <3 banana><3 banana> <10 apple><10 apple>$ cat text1 text2 | xargs -L2 1 apple 2 pear 3 banana 9 plum 3 banana 10 apple

Although our examples use simple text files, you will rarely use xargs on input like this. Typically, you'll be dealing with a long list of files produced by commands such as ls, find, or grep. Listing 16 shows one way to pass a directory listing via xargs to a command such as grep.

Listing 16. Using xargs command and file list
$ ls |xargs grep "1" text1:1 apple text2:10 apple xaa:1 apple yaa:1

In the last example, what happens if one or more of the file names contains spaces? If you try to use the command as in Listing 16, you will get an error. In a real situation, the list of files may come not from ls but, for example, from a user script or command, or you may want to process it at other stages of the pipeline for additional filtering. So we'll set aside the fact that you could simply have used grep "1" * instead of this construction.

In the case of the ls command, you could use the --quoting-style option to force file names containing spaces to be quoted (or escaped). A better solution (when possible) is to use the -0 option of the xargs command, which makes null characters (\0) separate the input arguments. Although the ls command does not have an option to produce null-terminated file names as output, many commands do.

In Listing 17, we first copy file text1 to "text 1" and then show some examples of using a list of file names containing spaces with the xargs command. These examples help convey the idea, since xargs can be tricky to master fully. In particular, the last example, converting newlines to null characters, would not work if some file names already contained newlines. In the next section of this article, we look at a more robust solution: using the find command to generate output that uses null characters as delimiters.

Listing 17. Using the xargs command and files containing spaces in their names
$ cp text1 "text 1" $ ls *1 |xargs grep "1" # error text1:1 apple grep: text: No such file or directory grep: 1: No such file or directory $ ls --quoting-style escape * 1 text1 text\ 1 $ ls --quoting-style shell *1 text1 "text 1" $ ls --quoting-style shell *1 |xargs grep "1" text1:1 apple text 1:1 apple $ # Illustrate -0 option of xargs $ ls *1 | tr "\n" "\0" |xargs -0 grep "1" text1:1 apple text 1:1 apple

The xargs command does not build arbitrarily long commands. In Linux, up to kernel 2.6.23, the maximum command length was limited. If you try to run a command such as rm somepath/* on a directory containing many files with long names, it may fail with an error stating that the argument list is too long. If you're running older versions of Linux or UNIX that have this limitation, it is helpful to know how xargs can work around it.

You can use the --show-limits option to view the default limits for the xargs command, and the -s option to set the maximum length of the command line it builds. You can learn about other options from the man pages.
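The batching behavior that makes this work can be sketched with a handful of throwaway files in a temporary directory (echo stands in for a real command such as rm, so nothing is actually deleted):

```shell
#!/bin/sh
# Sketch: xargs -n splits a long argument list into several short commands.
dir=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$dir/file$i"; done
ls "$dir" | xargs -n 2 echo deleting
# deleting file1 file2
# deleting file3 file4
# deleting file5
rm -r "$dir"
```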

Using the find command with the -exec option or in conjunction with the xargs command

In the " " tutorial, you learned how to use the find command to find files based on their names, modification times, sizes, and other characteristics. Usually, certain actions must be performed on the found files - delete, copy, rename them, and so on. We'll now look at the -exec option of the find command, which works similar to the find command and then passing the output to the xargs command.

Listing 18. Using find with -exec option
$ find text[12] -exec cat text3 {} \;
This is a sentence. This is a sentence. This is a sentence.
1 apple
2 pear
3 banana
This is a sentence. This is a sentence. This is a sentence.
9	plum
3	banana
10	apple

Comparing the results of Listing 18 with what you already know about xargs reveals a few differences.

  1. You must use the {} construct in the command to mark the substitution point where the file name will be inserted. It is not automatically added to the end of the command.
  2. You must terminate the command with a semicolon, which must be escaped (\;, ';', or ";").
  3. The command is executed once for each input file.
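The once-per-file behavior of \; (versus the batching + form covered later) can be sketched with two throwaway files; echo stands in for a real command:

```shell
#!/bin/sh
# Sketch: \; runs the command once per file, + batches the files together.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"
find "$dir" -type f -exec echo found {} \; | grep -c found   # prints: 2
find "$dir" -type f -exec echo found {} +  | grep -c found   # prints: 1
rm -r "$dir"
```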

Try running find text[12] | xargs cat text3 yourself to see the differences.

Now let's go back to the case where the filename contains spaces. In Listing 19, we tried using the find command with the -exec option instead of the ls and xargs commands.

Listing 19. Using the find command with the -exec option and files containing spaces in their names
$find. -name "*1" -exec grep "1" () \; 1 apple 1 apple

So far so good. But don't you think something is missing here? Which files contained the lines found by grep? The file names are missing, because find calls grep once for each file, and grep, being a smart command, knows that if it was given only one file name, it does not need to tell you which file the match came from.

In this situation, we could use the xargs command, but we already know about the problem with file names containing spaces. We also mentioned that the find command can generate a null-delimited list of names thanks to its -print0 option. Modern versions of find can terminate the -exec command with a + sign instead of a semicolon; in that case, the maximum possible number of names is passed in a single invocation, just as with xargs. Needless to say, in this form the {} construct can be used only once, and it must be the last parameter of the command. Listing 20 demonstrates both of these methods.

Listing 20. Using find , xargs and files containing spaces in their names
$find. -name "*1" -print0 |xargs -0 grep "1" ./text 1:1 apple ./text1:1 apple $ find . -name "*1" -exec grep "1" () + ./text 1:1 apple ./text1:1 apple

Both of these methods work, and the choice between them is often a matter of personal preference. Be aware that you may run into problems when piping lists with unprocessed delimiters and whitespace, so if you pass output to xargs, use the -print0 option of find together with the -0 option of xargs, which tells it to expect null-delimited input. Other commands, including tar, can also work with null-delimited input, so you should always use this option for commands that support it, unless you are 100% sure the input list will not cause problems.

Our last comment concerns working with a list of files. It's a good idea to always check your list and commands carefully before performing batch operations (such as deleting or renaming multiple files). Having an up-to-date backup when needed can also be invaluable.

Output Splitting

To wrap up this article, we'll take a quick look at one more command. Sometimes you need to view output on the screen and save it to a file at the same time. You could redirect the command's output to a file in one window and then use tail -fn1 to follow it in another window, but the easiest way is to use the tee command.

The tee command is used in a pipeline; its arguments are the names of one or more files to which the standard output will be copied. The -a option appends the data to the end of a file instead of replacing the file's old contents. As discussed earlier when covering pipelines, if you want to save both standard output and the error stream, you must redirect stderr to stdout before piping the data into tee. Listing 21 shows tee being used to save output to two files, f1 and f2.

Listing 21. Splitting stdout using the tee command
$ ls text[1-3] | tee f1 f2
text1
text2
text3
$ cat f1
text1
text2
text3
$ cat f2
text1
text2
text3