How do you delete a specific line in a file on Linux?

Delete specific line number(s) from a text file using sed?

I want to delete one or more specific line numbers from a file. How would I do this using sed?

  • 1 Can you give a more specific example of what you want? How will you decide which lines to remove? Jan 21, 2010 at 20:14
  • Maybe see also stackoverflow.com/questions/13272717/… and just apply it in reverse (print if key not in associative array). Apr 10, 2018 at 18:12

7 Answers


If you want to delete lines 5 through 10 and line 12:

sed -e '5,10d;12d' file

This will print the results to the screen. If you want to save the results to the same file:

sed -i.bak -e '5,10d;12d' file

This will store the unmodified file as file.bak, and delete the given lines.

Note: Line numbers start at 1. The first line of the file is 1, not 0.

  • 5 What if I wanted to delete the 5th line up to the last line? May 11, 2013 at 3:58
  • sed -e '5,$d' file May 11, 2013 at 20:12
  • sed -e '5d' file. The syntax is <address><command>, where <address> can be either a single line like 5 or a range of lines like 5,10, and the command d deletes the given line or lines. The addresses can also be regular expressions, or the dollar sign $ meaning the last line of the file.
  • 2 Note that the lines from the 5th to 10th are all inclusive. Nov 6, 2017 at 13:03
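A quick way to see the inclusive range behavior is to try the command on a numbered sample file (the file name sample.txt is made up for this illustration):

```shell
# Generate a 15-line sample file, then delete lines 5-10 (inclusive) and line 12.
seq 15 > sample.txt
sed -e '5,10d;12d' sample.txt
# Lines 1-4, 11, 13, 14 and 15 remain.
```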

You can delete a single line by its line number with

sed -i '33d' file

This will delete line 33 and save the updated file.

  • 2 In my case "sed" removed a wrong line, so I use this approach: sed -i '0,/<TARGET>/{/<NEW_VALUE>/d;}' '<SOME_FILE_NAME>'. Thanks! Oct 3, 2018 at 22:45
  • 1 Same here. I wrote a loop, and strangely some files lost the correct line but some files lost one other line too; I have no clue what went wrong. (GNU/Linux, bash 4.2) The awk command below worked fine in a loop. Oct 5, 2018 at 8:28
  • 1 To the comments about wrong lines being deleted within a loop: be sure to start with the largest line number, otherwise each deleted line will offset the line numbering… Jun 3, 2019 at 12:41
  • 1 On my system, when processing large files, sed appears an order of magnitude slower than a simple combination of head and tail. Here's an example of the faster way (without in-place mode): delete-line() { local filename="$1"; local lineNum="$2"; head -n $((lineNum-1)) "$filename"; tail +$((lineNum+1)) "$filename"; } Aug 16, 2020 at 8:13
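The head/tail combination from the last comment can be written out as a small standalone function (the function name and sample file are illustrative; note it prints to stdout rather than editing in place, and tail -n +N is the more portable spelling of the comment's tail +N):

```shell
# Print a file with one line removed, using head + tail instead of sed.
delete_line() {
  local filename="$1"
  local lineNum="$2"
  head -n $((lineNum - 1)) "$filename"    # everything before the target line
  tail -n +$((lineNum + 1)) "$filename"   # everything after the target line
}

seq 5 > sample.txt
delete_line sample.txt 3
# Prints 1, 2, 4, 5 - line 3 is gone.
```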

The shortest, deleting the first line in sed

sed -i '1d' file

As Brian states here, the form is <address><command>, where <address> is 1 and <command> is d.
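A minimal illustration (the file f and its contents are invented for the example), useful e.g. for stripping a header row:

```shell
# Remove the first line of a file, e.g. a CSV header.
printf 'header\ndata1\ndata2\n' > f
sed '1d' f
# Prints data1 and data2; the header line is gone.
```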

This is very often a symptom of an antipattern. The tool which produced the line numbers may well be replaced with one which deletes the lines right away. For example:

grep -nh error logfile | cut -d: -f1 | deletelines logfile

(where deletelines is the utility you are imagining you need) is the same as

grep -v error logfile
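Since deletelines is only an imagined utility, here is the runnable half of that comparison, on a made-up logfile:

```shell
# Any line containing "error" is dropped directly; no line numbers needed.
printf 'ok line\nerror: disk full\nanother ok line\n' > logfile
grep -v error logfile
# Prints the two lines that do not contain "error".
```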

Having said that, if you are in a situation where you genuinely need to perform this task, you can generate a simple sed script from the file of line numbers. Humorously (but perhaps slightly confusingly) you can do this with sed.

sed 's%$%d%' linenumbers

This accepts a file of line numbers, one per line, and produces, on standard output, the same line numbers with d appended after each. This is a valid sed script, which we can save to a file, or (on some platforms) pipe to another sed instance:

sed 's%$%d%' linenumbers | sed -f - logfile

On some platforms, sed -f does not understand the option argument - to mean standard input, so you have to redirect the script to a temporary file and clean it up when you are done, or maybe replace the lone dash with /dev/stdin or /proc/$pid/fd/1 if your OS (or shell) has that.

As always, you can add -i before the -f option to have sed edit the target file in place, instead of producing the result on standard output. On *BSDish platforms (including OSX) you need to supply an explicit argument to -i as well; a common idiom is to supply an empty argument; -i ''.
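Putting the pieces together on GNU sed (where sed -f - does read the script from standard input); logfile and linenumbers are sample names:

```shell
seq 10 > logfile                 # sample data: lines 1..10
printf '2\n5\n' > linenumbers    # line numbers to delete, one per line
sed 's%$%d%' linenumbers         # the generated script: 2d and 5d
sed 's%$%d%' linenumbers | sed -f - logfile
# Prints 1, 3, 4, 6, 7, 8, 9, 10.
```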

  • I don't quite agree with "symptom of an antipattern". Markup-based file types (e.g. XML or JSON) require specific lines at the end in order to be valid files. In that case, it's often the most reasonable approach to remove those lines, put into the file what you want to be added and then re-add those lines, because putting the lines in between straight away can be much more effort, and goes against the potential desire to avoid extra tools like sed as much as you can. Nov 12, 2017 at 12:02
  • I don't quite understand what sort of scenario you are imagining. There are scenarios where this is a legitimate approach, but the vast majority of cases I have seen are newbies who do more or less exactly what my first example demonstrates. (Perhaps they come from some really low-level language and are used to dividing their problem way past the molecular level, because you have to in asm or C.) Apr 10, 2018 at 18:07
  • What I basically mean by that, is that as the creator of such a file, you know what has to be at the end of the document (i.e. the set of closing braces/square brackets in the last few lines for JSON, or the exact closing tags for XML). Being aware of that, the most simple approach to extend such a document is 1) remove the last few lines, 2) add the new content, 3) re-add the last few lines. This way, the document can be valid both before and after it has been extended, without needing to find a way of adding lines mid-document. Apr 20, 2018 at 14:44
  • 1 So far this is the only answer with an appropriate solution for a large number of lines (i.e. provided by a file). And the foreword makes sense too. It deserves more upvotes. BTW, the same trick works if you want p instead of d, along with option -n (it won't work without -n, and !d won't work either). Jun 3, 2019 at 13:01

And with awk as well:

awk 'NR!~/^(5|10|25)$/' file
  • 3NB: That awk line worked more reliably for me than the sed variant (between OS-X and Ubuntu Linux) Feb 23, 2012 at 19:13
  • 4Note that this doesn't delete anything in the file. It just prints the file without these lines to stdout. So you also need to redirect the output to a temp file, and then move the temp file to replace the original. Jun 5, 2015 at 9:00
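Following the second comment, a sketch of the full redirect-and-replace cycle (the file names are illustrative):

```shell
seq 30 > file
# awk prints every line whose number is not 5, 10 or 25; since awk does not
# edit in place, write to a temp file and move it over the original.
awk 'NR!~/^(5|10|25)$/' file > file.tmp && mv file.tmp file
wc -l < file   # 27 lines remain
```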
$ cat foo
1
2
3
4
5
$ sed -e '2d;4d' foo
1
3
5
$

I would like to propose a generalization with awk.

When the file is made of fixed-size blocks, and the lines to delete repeat for each block, awk can handle it like this:

awk '{nl=((NR-1)%2000)+1; if ( (nl<714) || ((nl>1025)&&(nl<1029)) ) print $0}' OriginFile.dat > MyOutputCuttedFile.dat

In this example the block size is 2000, and within each block I want to print lines [1..713] and [1026..1028].

  • NR is the variable awk uses to store the current line number.
  • nl=((NR-1)%BLOCKSIZE)+1 stores in the variable nl the line number within the current block (see below).
  • || and && are the logical OR and AND operators.
  • print $0 writes the full line.
Why ((NR-1)%BLOCKSIZE)+1: (NR-1) We need a shift of one because 1%3=1, 2%3=2, but 3%3=0. +1 We add 1 again to restore the desired order.

+-----+------+----------+------------+
| NR  | NR%3 | (NR-1)%3 | (NR-1)%3+1 |
+-----+------+----------+------------+
|  1  |  1   |    0     |     1      |
|  2  |  2   |    1     |     2      |
|  3  |  0   |    2     |     3      |
|  4  |  1   |    0     |     1      |
+-----+------+----------+------------+
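A scaled-down, runnable version of the same idea (block size 5 instead of 2000, keeping lines 1-2 of each block; the file name is invented):

```shell
# Keep the first two lines of every 5-line block.
seq 12 > OriginFile.dat
awk '{nl=((NR-1)%5)+1; if (nl<3) print $0}' OriginFile.dat
# Prints 1, 2, 6, 7, 11, 12.
```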
  • 2I admire the way you live up to your madness-inducing name. Apr 23, 2015 at 8:09

Source: Stack Overflow
