
About bash: How can I use a file in a command and redirect output to the same file without truncating it?

Question Detail

Basically, I want to take text from a file as input, remove a line from it, and send the output back to the same file. Something along these lines, if that makes it any clearer:

grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name > file_name

However, when I do this I end up with a blank file.
Any thoughts?

Question Answer

Use sponge for this kind of task. It's part of moreutils.

Try this command:

 grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | sponge file_name
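If sponge isn't installed, moreutils is packaged for most platforms; for example (assuming a Debian/Ubuntu-based system, or Homebrew on macOS):

sudo apt install moreutils    # Debian/Ubuntu
brew install moreutils        # macOS (Homebrew)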

You cannot do that, because the shell processes the redirections before it executes the command: by the time grep opens file_name, the redirection has already truncated it to zero length.
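A minimal illustration of that ordering, using a hypothetical demo.txt:

printf 'one\ntwo\n' > demo.txt
grep two demo.txt > demo.txt   # the shell truncates demo.txt before grep runs
wc -c demo.txt                 # prints: 0 demo.txt

You can use a temporary file, though: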

#!/bin/sh
tmpfile=$(mktemp)
grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name > "$tmpfile"
cat "$tmpfile" > file_name
rm -f "$tmpfile"

Note that mktemp, used above to create the temporary file, is widely available but not specified by POSIX.
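A slightly more robust sketch of the same idea renames the temporary file over the original with mv and cleans up on exit. Note that mv replaces the file's inode, so hard links and custom permissions on the original are not preserved the way the cat approach above preserves them:

#!/bin/sh
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT
grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name > "$tmpfile" &&
    mv "$tmpfile" file_name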

Use sed instead:

sed -i '/seg[0-9]\{1,\}\.[0-9]\{1\}/d' file_name

Try this simple one:

grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | tee file_name

Your file will not be blank this time 🙂 and the output is also printed to your terminal. Be warned, though: this relies on grep reading the file before tee truncates it, which is a race; it can lose data, especially on larger files, so don't rely on it for anything important.

You can't redirect (> or >>) to the same file a command reads from, because the shell creates/truncates the target file before the command is even invoked. To avoid that, use a tool that buffers the data and writes the result to the file itself, such as tee, sponge, sed -i, or any other tool which can write its results to the file (e.g. sort file -o file).
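For example, sort is specified by POSIX to allow its -o output file to be one of its input files, because it reads all of its input before writing anything, so this in-place sort is safe:

sort -o file_name file_name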

Basically, redirecting output back into the original input file doesn't make sense, and you should use an appropriate in-place editor for that, for example the Ex editor (part of Vim):

ex '+g/seg[0-9]\{1,\}\.[0-9]\{1\}/d' -scwq file_name

where:

  • '+cmd'/-c – run any Ex/Vim command
  • g/pattern/d – remove lines matching a pattern using global (help :g)
  • -s – silent mode (man ex)
  • -c wq – execute :write and :quit commands

You may use sed to achieve the same (as already shown in other answers); however, in-place editing (-i) is a non-standard extension whose syntax differs between GNU and BSD implementations, and sed is fundamentally a stream editor, not a file editor. See: Does Ex mode have any practical use?
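For reference, the -i syntax difference looks like this: GNU sed takes an optional backup suffix attached to -i, while BSD/macOS sed requires a separate (possibly empty) suffix argument:

# GNU sed (Linux)
sed -i '/seg[0-9]\{1,\}\.[0-9]\{1\}/d' file_name

# BSD sed (macOS); the '' means "no backup file"
sed -i '' '/seg[0-9]\{1,\}\.[0-9]\{1\}/d' file_name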

A one-liner alternative: read the content of the file into a variable first:

VAR=$(cat file_name); echo "$VAR" | grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' > file_name

Since this question is the top result in search engines, here's a one-liner based on https://serverfault.com/a/547331 that uses a subshell instead of sponge (which often isn't part of a vanilla install, e.g. on OS X):

echo "$(grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name)" > file_name

The general case is:

echo "$(cat file_name)" > file_name

Edit: the above solution has some caveats:

  • printf '%s' <string> should be used instead of echo <string> so that files containing -n don’t cause undesired behavior.
  • Command substitution strips trailing newlines (this is a bug/feature of shells like bash) so we should append a postfix character like x to the output and remove it on the outside via parameter expansion of a temporary variable like ${v%x}.
  • Using a temporary variable $v stomps the value of any existing variable $v in the current shell environment, so we should nest the entire expression in parentheses to preserve the previous value.
  • Another bug/feature of shells like bash is that command substitution strips unprintable characters like null from the output. I verified this by appending a null byte with dd if=/dev/zero bs=1 count=1 >> file_name and viewing it in hex with cat file_name | xxd -p; after round-tripping the file through echo "$(cat file_name)" > file_name, the null byte is stripped. So this answer should not be used on binary files or anything containing unprintable characters, as Lynch pointed out.

The general solution (albeit slightly slower, more memory-intensive, and still stripping unprintable characters) is:

(v=$(cat file_name; printf x); printf '%s' "${v%x}" > file_name)

Test from https://askubuntu.com/a/752451:

printf "hello\nworld\n" > file_uniquely_named.txt && for ((i=0; i<1000; i++)); do (v=$(cat file_uniquely_named.txt; printf x); printf '%s' ${v%x} > file_uniquely_named.txt); done; cat file_uniquely_named.txt; rm file_uniquely_named.txt

Should print:

hello
world

Whereas calling cat file_uniquely_named.txt > file_uniquely_named.txt in the current shell:

printf "hello\nworld\n" > file_uniquely_named.txt && for ((i=0; i<1000; i++)); do cat file_uniquely_named.txt > file_uniquely_named.txt; done; cat file_uniquely_named.txt; rm file_uniquely_named.txt

Prints an empty string.

I haven’t tested this on large files (probably over 2 or 4 GB).

I have borrowed this answer from Hart Simha and kos.

This is very much possible; you just have to make sure that by the time you write the output, you're writing it to a different file. This can be done by removing the file after opening a file descriptor to it, but before writing to it:

exec 3<file; rm file; COMMAND <&3 >file; exec 3>&-

Or line by line, to understand it better:

exec 3<file       # open a file descriptor reading 'file'
rm file           # remove file (but fd3 will still point to the removed file)
COMMAND <&3 >file # run command, with the removed file as input
exec 3>&-         # close the file descriptor

It's still a risky thing to do, because if COMMAND fails to run properly, you'll lose the file contents. That can be mitigated by restoring the file if COMMAND returns a non-zero exit code (note that the restore can only write back the part of the input that COMMAND has not already read, since the duplicated descriptors share a file offset):

exec 3<file; rm file; COMMAND <&3 >file || cat <&3 >file; exec 3>&-

We can also define a shell function to make it easier to use:

# Usage: replace FILE COMMAND
replace() { exec 3<"$1"; rm -- "$1"; "${@:2}" <&3 >"$1" || cat <&3 >"$1"; exec 3>&-; }

Example:

$ echo aaa > test
$ replace test tr a b
$ cat test
bbb

Also, note that this keeps a full copy of the original file on disk (until the third file descriptor is closed). If you're using Linux and the file you're processing is too big to fit on the disk twice, you can check out this script, which pipes the file to the specified command block by block while deallocating the already-processed blocks. As always, read the warnings in the usage page.

The following will accomplish the same thing that sponge does, without requiring moreutils:

    shuf --output=file --random-source=/dev/zero 

The --random-source=/dev/zero part tricks shuf into doing its thing without doing any shuffling at all, so it will buffer your input without altering it.
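Applied to the original problem, the pipeline would look like this (a sketch; requires GNU shuf, which reads all of its input before opening the output file):

grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | shuf --output=file_name --random-source=/dev/zero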

However, it is true that using a temporary file is best, for performance reasons. So, here is a function that I have written that will do that for you in a generalized way:

# Pipes a file into a command, and pipes the output of that command
# back into the same file, ensuring that the file is not truncated.
# Parameters:
#    $1: the file.
#    $2: the command. (With $3... being its arguments.)
# See https://stackoverflow.com/a/55655338/773113

siphon()
{
    local tmp file rc=0
    [ "$#" -ge 2 ] || { echo "Usage: siphon filename [command...]" >&2; return 1; }
    file="$1"; shift
    tmp=$(mktemp -- "$file.XXXXXX") || return
    "[email protected]" <"$file" >"$tmp" || rc=$?
    mv -- "$tmp" "$file" || rc=$(( rc | $? ))
    return "$rc"
}
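Usage, applied to the question's example:

siphon file_name grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}'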

There’s also ed (as an alternative to sed -i):

# cf. http://wiki.bash-hackers.org/howto/edit-ed
printf '%s\n' H 'g/seg[0-9]\{1,\}\.[0-9]\{1\}/d' wq |  ed -s file_name
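The leading H just makes ed print verbose error messages. Equivalently, the commands can be fed via a here-document, which some find easier to read (a sketch of the same deletion):

ed -s file_name <<'EOF'
g/seg[0-9]\{1,\}\.[0-9]\{1\}/d
w
q
EOF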

You can also slurp the file with POSIX Awk:

!/seg[0-9]\{1,\}\.[0-9]\{1\}/ {
  q = q ? q RS $0 : $0
}
END {
  print q > ARGV[1]
}

Example
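For instance, assuming the program above is saved as strip_seg.awk (a hypothetical file name), you would run it like this; the END block rewrites the input file only after all of it has been read:

awk -f strip_seg.awk file_name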

This does the trick pretty nicely in most of the cases I faced:

cat <<< "$(do_stuff_with f)" > f

Note that while $(…) strips trailing newlines, <<< ensures a final newline, so generally the result is magically satisfying.
(Look for “Here Strings” in man bash if you want to learn more.)

Full example:

#! /usr/bin/env bash

get_new_content() {
    sed 's/Initial/Final/g' "${1:?}"
}

echo 'Initial content.' > f
cat f

cat <<< "$(get_new_content f)" > f

cat f

This does not truncate the file and yields:

Initial content.
Final content.

Note that I used a function here for the sake of clarity and extensibility, but that’s not a requirement.

A common use case is JSON editing:

echo '{ "a": 12 }' > f
cat f
cat <<< "$(jq '.a = 24' f)" > f
cat f

This yields:

{ "a": 12 }
{
  "a": 24
}

Try this:

echo -e "AAA\nBBB\nCCC" > testfile

cat testfile
AAA
BBB
CCC

echo "$(grep -v 'AAA' testfile)" > testfile
cat testfile
BBB
CCC

I usually use the tee program to do this:

grep -v 'seg[0-9]\{1,\}\.[0-9]\{1\}' file_name | tee file_name

Note, however, that tee does not create a temporary file: this only works when grep wins the race to read the file before tee truncates it, so it is not safe for larger files or important data.
