Shell interpreters
Shell interpreters provided by Linux
[root@localhost bin]# cat /etc/shells
/bin/sh
/bin/bash
/sbin/nologin
/usr/bin/sh
/usr/bin/bash
/usr/sbin/nologin
The difference between bash and sh
[root@localhost bin]# clear
[root@localhost bin]# ll | grep bash
-rwxr-xr-x. 1 root root 964544 Apr 11 2018 bash
lrwxrwxrwx. 1 root root 10 Jan 9 02:57 bashbug -> bashbug-64
-rwxr-xr-x. 1 root root 6964 Apr 11 2018 bashbug-64
lrwxrwxrwx. 1 root root 4 Jan 9 02:57 sh -> bash
The default interpreter on CentOS is bash
[root@localhost bin]# echo $SHELL
/bin/bash
Shell scripts
Script format
Scripts begin with #!/bin/bash (this line specifies the interpreter).
How to run a script
Method 1
- interpreter (sh/bash) + path (absolute or relative)
sh helloworld.sh
bash /home/linux100/datas/helloworld.sh
Method 2
- First grant the script execute (+x) permission:
chmod 777 helloworld.sh
- Then run the script
- by relative path:
./helloworld.sh
- or by absolute path:
/home/linux100/datas/helloworld.sh
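The helloworld.sh used in both methods is never shown; a minimal sketch (the exact greeting text is an assumption) could be:

```shell
#!/bin/bash
# helloworld.sh - minimal script used to demonstrate the two invocation styles
echo "helloworld"
```

Either `sh helloworld.sh` or, after `chmod 777 helloworld.sh`, `./helloworld.sh` will print the greeting.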
Variables in the shell
System variables
$HOME, $PWD, $SHELL, $USER, etc.
Custom variables
1) Basic syntax
- Define a variable: variable=value
- Remove a variable: unset variable
- Declare a read-only variable: readonly variable (note: read-only variables cannot be unset)
2) Rules for defining variables
- Variable names may contain letters, digits, and underscores, but must not begin with a digit; environment variable names are conventionally uppercase.
- There must be no spaces on either side of the equals sign.
- In bash, every variable is a string by default, so arithmetic cannot be performed on it directly.
- If a value contains spaces, it must be wrapped in double or single quotes.
- A variable can be promoted to a global environment variable, available to other shell programs:
export variable_name
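A short session exercising the rules above (the variable names A, B, and G are arbitrary):

```shell
#!/bin/bash
A=5                  # define: no spaces around '='
echo "A=$A"          # prints: A=5
A="hello world"      # a value containing spaces must be quoted
echo "A=$A"          # prints: A=hello world
unset A              # remove the variable
echo "A=$A"          # prints: A=  (expands to empty)
readonly B=2         # read-only: cannot be reassigned or unset
export G=global      # promote to an environment variable
bash -c 'echo "G=$G"'   # the child shell sees it: prints G=global
```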
Special variables
$n
Basic syntax
n is a digit: $0 is the script's name, and $1-$9 are the first through ninth arguments; arguments from the tenth onward must be wrapped in braces, e.g. ${10}.
Example
# Print the script's file name and the values of arguments 1 and 2
touch parameter.sh
vim parameter.sh
# parameter.sh
#!/bin/bash
echo "$0 $1 $2"
chmod 777 parameter.sh
./parameter.sh cls xz
$#
Basic syntax
Gets the number of input arguments; commonly used in loops.
Example
vim parameters.sh
# parameters.sh
#!/bin/bash
echo "$0 $1 $2"
echo $#
$*, $@
Basic syntax
$* represents all of the command-line arguments, treating them as a single whole. $@ also represents all of the command-line arguments, but treats each argument individually.
Example
vim parameter.sh
#!/bin/bash
echo "$0 $1 $2"
echo $#
echo $*
echo $@
$?
Basic syntax
The exit status of the most recently executed command. If this variable is 0, the previous command ran successfully; if it is non-zero (the exact value is decided by the command itself), the previous command failed.
Example
# run a program, e.g. an .sh script, then:
echo $?
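A quick way to watch `$?` change (the directory names here are arbitrary):

```shell
#!/bin/bash
ls /tmp > /dev/null            # a command that should succeed
echo $?                        # prints 0
ls /no/such/dir 2> /dev/null   # a command that fails
echo $?                        # prints a non-zero value (the exact number is up to the command)
```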
Operators
Basic syntax
- $((expression)) or $[expression]
- expr with +, -, \*, /, % for addition, subtraction, multiplication, division, and modulo
- Note: expr requires spaces between its operators and operands.
Example
expr 2 + 3
expr 3 - 2
expr `expr 2 + 3` \* 4
s=$[(2+3)*4]
echo $s
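For comparison, all three arithmetic forms give the same result; a short sketch:

```shell
#!/bin/bash
a=$(expr \( 2 + 3 \) \* 4)   # expr: spaces required; *, ( and ) must be escaped
b=$[(2+3)*4]                 # $[ ] form (older bash syntax)
c=$(( (2+3)*4 ))             # $(( )) form (POSIX, generally preferred)
echo "$a $b $c"              # prints: 20 20 20
```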
Condition tests
Basic syntax
[ condition ] (note the spaces around condition). A non-empty condition is true: [ linux ] returns true, while [] returns false.
Common test conditions
| Condition | Meaning |
|---|---|
| = | string equality |
| -lt | less than |
| -le | less than or equal |
| -eq | equal |
| -gt | greater than |
| -ge | greater than or equal |
| -ne | not equal |
| File permission tests | |
| -r | has read permission |
| -w | has write permission |
| -x | has execute permission |
| File type tests | |
| -f | the file exists and is a regular file |
| -e | the file exists |
| -d | the file exists and is a directory |
Example
[ 23 -ge 22 ] # numeric comparison
[ -w helloworld.sh ] # does the file have write permission?
[ -e /home/linux100/cls.txt ] # does the file exist in the directory?
# Multi-condition tests: && runs the next command only if the previous one succeeded; || runs the next command only if the previous one failed
[ condition ] && echo OK || echo notok
[ condition ] && [ ] || echo notok
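With concrete conditions filled in, the `&& ... ||` chain above acts like a compact if/else:

```shell
#!/bin/bash
[ 23 -ge 22 ] && echo OK || echo notok   # prints: OK
[ 21 -ge 22 ] && echo OK || echo notok   # prints: notok
```

One caveat of this idiom: unlike a true if/else, the `||` branch also runs if the command after `&&` itself fails, so it is best kept to simple echoes like these.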
Flow control
if statements
Basic syntax
if [ condition ]
then
commands
fi
# or
if [ condition ];then
commands
fi
In [ condition ], the brackets and the condition must be separated by spaces, and there must be a space after if.
Example
#!/bin/bash
if [ $1 -eq "1" ]
then
echo "banzhang zhen shuai"
elif [ $1 -eq "2" ]
then
echo "cls zhen mei"
fi
case statements
Basic syntax
case $variable in
"value1")
commands
;;
"value2")
commands
;;
...other branches...
*)
commands to run if the variable matches none of the values above
;;
esac
- The case line must end with the word in, and every pattern must end with a right parenthesis ).
- The double semicolon ;; marks the end of a command sequence, like break in Java.
- The final *) is the default pattern, like default in Java.
Example
#!/bin/bash
case $1 in
"1")
echo "banzhang"
;;
"2")
echo "cls"
;;
*)
echo "renyao"
;;
esac
for loops
Basic syntax
for((initializer;loop condition;update))
do
commands
done
for variable in value1 value2 value3...
do
commands
done
Example
# sum 1 through 100
#!/bin/bash
s=0
for((i=0;i<=100;i++))
do
s=$[$s+$i]
done
echo $s
# print all input arguments
#!/bin/bash
for i in $*
do
echo "banzhang love $i "
done
for j in $@
do
echo "banzhang love $j "
done
#!/bin/bash
for i in "$*"
# "$*" treats all arguments as one whole, so this for loop runs only once
do
echo "banzhang love $i"
done
for j in "$@"
# "$@" treats each argument as separate, so the loop runs once per argument in "$@"
do
echo "banzhang love $j"
done
- Both $* and $@ represent all arguments passed to a function or script. When not wrapped in double quotes, both expand to all arguments as $1, $2 ... $n.
- When wrapped in double quotes, "$*" treats all arguments as a single whole and expands to "$1 $2 ... $n", while "$@" keeps each argument separate and expands to "$1" "$2" ... "$n".
while loops
Basic syntax
while [ condition ]
do
commands
done
Example
#!/bin/bash
s=0
i=1
while [ $i -le 100 ]
do
s=$[$s+$i]
i=$[$i+1]
done
echo $s
read: reading console input
Basic syntax
read (options) (parameter)
Options:
-p: specify a prompt to display when reading
-t: specify a timeout (in seconds) for reading
Parameter:
- variable: the name of the variable that receives the value read
Example
#!/bin/bash
read -t 7 -p "Enter your name within 7 seconds: " NAME
echo $NAME
Functions
System functions
basename
Basic syntax
[root@localhost ~]# basename --help
Usage: basename NAME [SUFFIX]
or: basename OPTION... NAME...
Print NAME with any leading directory components removed.
If specified, also remove a trailing SUFFIX.
Mandatory arguments to long options are mandatory for short options too.
-a, --multiple support multiple arguments and treat each as a NAME
-s, --suffix=SUFFIX remove a trailing SUFFIX
-z, --zero separate output with NUL rather than newline
--help display this help and exit
--version output version information and exit
Examples:
basename /usr/bin/sort -> "sort"
basename include/stdio.h .h -> "stdio"
basename -s .h include/stdio.h -> "stdio"
basename -a any/str1 any/str2 -> "str1" followed by "str2"
GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
For complete documentation, run: info coreutils 'basename invocation'
Example
[linux100@localhost datas]$ basename /home/linux100/datas/batch.sh
batch.sh
[linux100@localhost datas]$ basename /home/linux100/datas/batch.sh .sh
batch
dirname
Basic syntax
[linux100@localhost datas]$ dirname --help
Usage: dirname [OPTION] NAME...
Output each NAME with its last non-slash component and trailing slashes
removed; if NAME contains no /'s, output '.' (meaning the current directory).
-z, --zero separate output with NUL rather than newline
--help display this help and exit
--version output version information and exit
Examples:
dirname /usr/bin/ -> "/usr"
dirname dir1/str dir2/str -> "dir1" followed by "dir2"
dirname stdio.h -> "."
GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
For complete documentation, run: info coreutils 'dirname invocation'
- dirname absolute-file-path: removes the file name (the non-directory part) from the given absolute path and returns the remaining path (the directory part)
Example
[linux100@localhost datas]$ dirname /home/linux100/datas/batch.sh
/home/linux100/datas
Custom functions
Basic syntax
[ function ] funcname[()]
{
Action;
[return int;]
}
# call it
funcname
Tips
- A function must be declared before the point where it is called: shell scripts run line by line and are not compiled first the way other languages are.
- A function's return value can only be retrieved through the $? system variable. You may return explicitly with return; if you don't, the result of the last command becomes the return value. return takes a number n in the range 0-255.
Example
#!/bin/bash
function sum()
{
s=0
s=$[ $1 + $2 ]
echo "$s"
}
read -p "Please input the number1:" n1
read -p "Please input the number2:" n2
sum $n1 $n2
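Because `return` is limited to 0-255, echoing the result (as the sum function above does) and capturing it with command substitution is the usual way to get larger values back; a sketch:

```shell
#!/bin/bash
function sum()
{
    echo $(( $1 + $2 ))   # print the result instead of return-ing it
}
total=$(sum 200 300)      # command substitution captures the echoed value
echo "total=$total"       # prints: total=500
```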
Shell tools
cut
cut's job is exactly what its name suggests: it cuts data out of files. The cut command extracts bytes, characters, or fields from each line of a file and writes them to standard output.
Basic usage
The default delimiter is the tab character.
[linux100@localhost datas]$ cut --help
Usage: cut OPTION... [FILE]...
Print selected parts of lines from each FILE to standard output.
Mandatory arguments to long options are mandatory for short options too.
-b, --bytes=LIST select only these bytes
-c, --characters=LIST select only these characters
-d, --delimiter=DELIM use DELIM instead of TAB for field delimiter
-f, --fields=LIST select only these fields; also print any line
that contains no delimiter character, unless
the -s option is specified
-n with -b: don't split multibyte characters
--complement complement the set of selected bytes, characters
or fields
-s, --only-delimited do not print lines not containing delimiters
--output-delimiter=STRING use STRING as the output delimiter
the default is to use the input delimiter
--help display this help and exit
--version output version information and exit
Use one, and only one of -b, -c or -f. Each LIST is made up of one
range, or many ranges separated by commas. Selected input is written
in the same order that it is read, and is written exactly once.
Each range is one of:
N N'th byte, character or field, counted from 1
N- from N'th byte, character or field, to end of line
N-M from N'th to M'th (included) byte, character or field
-M from first to M'th (included) byte, character or field
With no FILE, or when FILE is -, read standard input.
GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
For complete documentation, run: info coreutils 'cut invocation'
Example
[linux100@localhost datas]$ cat cut.txt # prepare the data
dong shen
guan zhen
wo wo
lai lai
le le
[linux100@localhost datas]$ cut -d " " -f 1 cut.txt # cut out the first column of cut.txt
dong
guan
wo
lai
le
[linux100@localhost datas]$ cut -d " " -f 2,3 cut.txt # cut out the second and third columns of cut.txt
shen
zhen
wo
lai
le
[linux100@localhost datas]$ cat cut.txt | grep "guan" | cut -d " " -f 1 # cut "guan" out of the cut.txt file
guan
# select from the system PATH variable every path after the first ":"
[linux100@localhost datas]$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
[linux100@localhost datas]$ echo $PATH | cut -d : -f 2-
/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
# cut the IP address out of the ifconfig output
[linux100@localhost datas]$ ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.10.100 netmask 255.255.255.0 broadcast 192.168.10.255
inet6 fe80::a391:fd5e:fca1:f344 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:7e:31:5c txqueuelen 1000 (Ethernet)
RX packets 16294 bytes 1029077 (1004.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2063 bytes 349968 (341.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 8 bytes 696 (696.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 696 (696.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[linux100@localhost datas]$ ifconfig ens33 | grep "inet " | cut -d " " -f 10
192.168.10.100
# cut the IP address out of the ip addr output
[linux100@localhost datas]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:7e:31:5c brd ff:ff:ff:ff:ff:ff
inet 192.168.10.100/24 brd 192.168.10.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::a391:fd5e:fca1:f344/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[linux100@localhost datas]$ ip addr | grep inet | grep ens33 | cut -d " " -f 6 | cut -d "/" -f 1
192.168.10.100
sed
sed is a stream editor that processes input one line at a time. As each line is processed it is placed in a temporary buffer called the "pattern space"; the sed commands are applied to the buffer's contents, the result is sent to the screen, and then the next line is processed, repeating until the end of the file. The file's contents are not changed unless the output is saved with redirection.
Basic usage
sed [options] 'command' filename
[linux100@localhost datas]$ sed --help
Usage: sed [OPTION]... {script-only-if-no-other-script} [input-file]...
-n, --quiet, --silent
suppress automatic printing of pattern space
-e script, --expression=script
add the script to the commands to be executed
-f script-file, --file=script-file
add the contents of script-file to the commands to be executed
--follow-symlinks
follow symlinks when processing in place
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if SUFFIX supplied)
-c, --copy
use copy instead of rename when shuffling files in -i mode
-b, --binary
does nothing; for compatibility with WIN32/CYGWIN/MSDOS/EMX (
open files in binary mode (CR+LFs are not treated specially))
-l N, --line-length=N
specify the desired line-wrap length for the `l' command
--posix
disable all GNU extensions.
-r, --regexp-extended
use extended regular expressions in the script.
-s, --separate
consider files as separate rather than as a single continuous
long stream.
-u, --unbuffered
load minimal amounts of data from the input files and flush
the output buffers more often
-z, --null-data
separate lines by NUL characters
--help
display this help and exit
--version
output version information and exit
If no -e, --expression, -f, or --file option is given, then the first
non-option argument is taken as the sed script to interpret. All
remaining arguments are names of input files; if no input files are
specified, then the standard input is read.
GNU sed home page: <http://www.gnu.org/software/sed/>.
General help using GNU software: <http://www.gnu.org/gethelp/>.
E-mail bug reports to: <bug-sed@gnu.org>.
Be sure to include the word ``sed'' somewhere in the ``Subject:'' field
Commands and what they do
| Command | Function |
|---|---|
| a | append: the text after a is inserted on the following line |
| d | delete |
| s | find and replace |
Example
[linux100@localhost datas]$ cat sed.txt # prepare the data
dong shen
guan zhen
wo wo
lai lai
le le
# insert "mei nv" below the second line and print
[linux100@localhost datas]$ sed "2a mei nv" sed.txt
dong shen
guan zhen
mei nv
wo wo
lai lai
le le
[linux100@localhost datas]$ cat sed.txt
dong shen
guan zhen
wo wo
lai lai
le le
# delete every line in the file that contains wo
[linux100@localhost datas]$ sed '/wo/d' sed.txt
dong shen
guan zhen
lai lai
le le
# replace wo with ni throughout the file; g means global, i.e. replace all occurrences
[linux100@localhost datas]$ sed 's/wo/ni/g' sed.txt
dong shen
guan zhen
ni ni
lai lai
le le
# delete the second line of the file and replace wo with ni
[linux100@localhost datas]$ sed -e '2d' -e 's/wo/ni/g' sed.txt
dong shen
ni ni
lai lai
le le
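All the sed examples above only print the modified text. To change the file itself, the `-i` option edits in place (supplying a suffix keeps a backup). A sketch on a throwaway copy of the data:

```shell
#!/bin/bash
printf 'dong shen\nwo wo\n' > /tmp/sed_demo.txt
sed -i.bak 's/wo/ni/g' /tmp/sed_demo.txt   # edit in place, keeping the original as .bak
cat /tmp/sed_demo.txt                       # prints: dong shen / ni ni
cat /tmp/sed_demo.txt.bak                   # the untouched backup
```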
awk
A powerful text-analysis tool: it reads a file line by line, slices each line on a delimiter (whitespace by default), and then processes the resulting pieces.
Basic usage
awk [options] 'pattern1{action1} pattern2{action2}...' filename
pattern: what awk searches for in the data, i.e. the matching pattern
action: the sequence of commands executed when a match is found
[linux100@localhost datas]$ awk --help
Usage: awk [POSIX or GNU style options] -f progfile [--] file ...
Usage: awk [POSIX or GNU style options] [--] 'program' file ...
POSIX options: GNU long options: (standard)
-f progfile --file=progfile
-F fs --field-separator=fs
-v var=val --assign=var=val
Short options: GNU long options: (extensions)
-b --characters-as-bytes
-c --traditional
-C --copyright
-d[file] --dump-variables[=file]
-e 'program-text' --source='program-text'
-E file --exec=file
-g --gen-pot
-h --help
-L [fatal] --lint[=fatal]
-n --non-decimal-data
-N --use-lc-numeric
-O --optimize
-p[file] --profile[=file]
-P --posix
-r --re-interval
-S --sandbox
-t --lint-old
-V --version
To report bugs, see node `Bugs' in `gawk.info', which is
section `Reporting Problems and Bugs' in the printed version.
gawk is a pattern scanning and processing language.
By default it reads standard input and writes standard output.
Examples:
gawk '{ sum += $1 }; END { print sum }' file
gawk -F: '{ print $1 }' /etc/passwd
Options
| Option | Function |
|---|---|
| -F | specify the input field separator |
| -v | assign a user-defined variable |
awk's built-in variables
| Variable | Meaning |
|---|---|
| FILENAME | the name of the current file |
| NR | the number of records read so far (the line number) |
| NF | the number of fields in the current record (the number of columns after splitting) |
Example
[linux100@localhost datas]$ cp /etc/passwd ./ # prepare the data
[linux100@localhost datas]$ cat passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
polkitd:x:999:998:User for polkitd:/:/sbin/nologin
tss:x:59:59:Account used by the trousers package to sandbox the tcsd daemon:/dev/null:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
linux100:x:1000:1000:linux100:/home/linux100:/bin/bash
# search for lines beginning with the keyword root and print column 7 of each
[linux100@localhost datas]$ awk -F : '/^root/{print $7}' passwd
/bin/bash
# print columns 1 and 7 separated by ","; the action runs only for lines that match the pattern
[linux100@localhost datas]$ awk -F : '/^root/{print $1","$7}' passwd
root,/bin/bash
# print columns 1 and 7 separated by a comma, add the header line "user, shell" before all rows, and append "abc, /abc/abc" after the last row. BEGIN runs before any data line is read; END runs after all data lines have been processed.
[linux100@localhost datas]$ awk -F : 'BEGIN{print "user, shell"} {print $1", "$7} END{print "abc, /abc/abc"}' passwd
user, shell
root, /bin/bash
bin, /sbin/nologin
daemon, /sbin/nologin
adm, /sbin/nologin
lp, /sbin/nologin
sync, /bin/sync
shutdown, /sbin/shutdown
halt, /sbin/halt
mail, /sbin/nologin
operator, /sbin/nologin
games, /sbin/nologin
ftp, /sbin/nologin
nobody, /sbin/nologin
systemd-network, /sbin/nologin
dbus, /sbin/nologin
polkitd, /sbin/nologin
tss, /sbin/nologin
sshd, /sbin/nologin
postfix, /sbin/nologin
linux100, /bin/bash
abc, /abc/abc
# add 1 to each user id in the passwd file and print it
[linux100@localhost datas]$ awk -v i=1 -F : '{print $3+i}' passwd
1
2
3
4
5
6
7
8
9
12
13
15
100
193
82
1000
60
75
90
1001
# print the file name, line number, and column count for each line of passwd
[linux100@localhost datas]$ awk -F : '{print "filename:"FILENAME ", linenumber:" NR ", columns:" NF}' passwd
filename:passwd, linenumber:1, columns:7
filename:passwd, linenumber:2, columns:7
filename:passwd, linenumber:3, columns:7
filename:passwd, linenumber:4, columns:7
filename:passwd, linenumber:5, columns:7
filename:passwd, linenumber:6, columns:7
filename:passwd, linenumber:7, columns:7
filename:passwd, linenumber:8, columns:7
filename:passwd, linenumber:9, columns:7
filename:passwd, linenumber:10, columns:7
filename:passwd, linenumber:11, columns:7
filename:passwd, linenumber:12, columns:7
filename:passwd, linenumber:13, columns:7
filename:passwd, linenumber:14, columns:7
filename:passwd, linenumber:15, columns:7
filename:passwd, linenumber:16, columns:7
filename:passwd, linenumber:17, columns:7
filename:passwd, linenumber:18, columns:7
filename:passwd, linenumber:19, columns:7
filename:passwd, linenumber:20, columns:7
# find the line numbers of the blank lines in a file
[linux100@localhost datas]$ cat sed.txt
dong shen
guan zhen
wo wo
lai lai
le le
[linux100@localhost datas]$ awk '/^$/{print NR}' sed.txt
5
# cut out the IP address
[linux100@localhost datas]$ ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.10.100 netmask 255.255.255.0 broadcast 192.168.10.255
inet6 fe80::a391:fd5e:fca1:f344 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:7e:31:5c txqueuelen 1000 (Ethernet)
RX packets 19668 bytes 1269374 (1.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3400 bytes 492420 (480.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 8 bytes 696 (696.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 696 (696.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[linux100@localhost datas]$ ifconfig ens33 | grep "inet " | awk -F " " '{print $2}'
192.168.10.100
[linux100@localhost datas]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:7e:31:5c brd ff:ff:ff:ff:ff:ff
inet 192.168.10.100/24 brd 192.168.10.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet6 fe80::a391:fd5e:fca1:f344/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[linux100@localhost datas]$ ip addr | grep "inet " | grep ens33 | awk -F " " '{print $2}' | awk -F / '{print $1}'
192.168.10.100
sort
The sort command is very useful in Linux: it sorts the lines of a file and writes the sorted result to standard output.
Basic syntax
[linux100@localhost datas]$ sort --help
Usage: sort [OPTION]... [FILE]...
or: sort [OPTION]... --files0-from=F
Write sorted concatenation of all FILE(s) to standard output.
Mandatory arguments to long options are mandatory for short options too.
Ordering options:
-b, --ignore-leading-blanks ignore leading blanks
-d, --dictionary-order consider only blanks and alphanumeric characters
-f, --ignore-case fold lower case to upper case characters
-g, --general-numeric-sort compare according to general numerical value
-i, --ignore-nonprinting consider only printable characters
-M, --month-sort compare (unknown) < 'JAN' < ... < 'DEC'
-h, --human-numeric-sort compare human readable numbers (e.g., 2K 1G)
-n, --numeric-sort compare according to string numerical value
-R, --random-sort sort by random hash of keys
--random-source=FILE get random bytes from FILE
-r, --reverse reverse the result of comparisons
--sort=WORD sort according to WORD:
general-numeric -g, human-numeric -h, month -M,
numeric -n, random -R, version -V
-V, --version-sort natural sort of (version) numbers within text
Other options:
--batch-size=NMERGE merge at most NMERGE inputs at once;
for more use temp files
-c, --check, --check=diagnose-first check for sorted input; do not sort
-C, --check=quiet, --check=silent like -c, but do not report first bad line
--compress-program=PROG compress temporaries with PROG;
decompress them with PROG -d
--debug annotate the part of the line used to sort,
and warn about questionable usage to stderr
--files0-from=F read input from the files specified by
NUL-terminated names in file F;
If F is - then read names from standard input
-k, --key=KEYDEF sort via a key; KEYDEF gives location and type
-m, --merge merge already sorted files; do not sort
-o, --output=FILE write result to FILE instead of standard output
-s, --stable stabilize sort by disabling last-resort comparison
-S, --buffer-size=SIZE use SIZE for main memory buffer
-t, --field-separator=SEP use SEP instead of non-blank to blank transition
-T, --temporary-directory=DIR use DIR for temporaries, not $TMPDIR or /tmp;
multiple options specify multiple directories
--parallel=N change the number of sorts run concurrently to N
-u, --unique with -c, check for strict ordering;
without -c, output only the first of an equal run
-z, --zero-terminated end lines with 0 byte, not newline
--help display this help and exit
--version output version information and exit
KEYDEF is F[.C][OPTS][,F[.C][OPTS]] for start and stop position, where F is a
field number and C a character position in the field; both are origin 1, and
the stop position defaults to the line's end. If neither -t nor -b is in
effect, characters in a field are counted from the beginning of the preceding
whitespace. OPTS is one or more single-letter ordering options [bdfgiMhnRrV],
which override global ordering options for that key. If no key is given, use
the entire line as the key.
SIZE may be followed by the following multiplicative suffixes:
% 1% of memory, b 1, K 1024 (default), and so on for M, G, T, P, E, Z, Y.
With no FILE, or when FILE is -, read standard input.
*** WARNING ***
The locale specified by the environment affects sort order.
Set LC_ALL=C to get the traditional sort order that uses
native byte values.
GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
For complete documentation, run: info coreutils 'sort invocation'
| Option | Meaning |
|---|---|
| -n | sort by numeric value |
| -r | sort in reverse order |
| -t | set the separator character used when sorting |
| -k | specify the column to sort by |
Example
[linux100@localhost datas]$ cat sort.sh
bb:40:5.4
bd:20:4.2
xz:50:2.3
cls:10:3.5
ss:30:1.6
# sort by the third :-separated column in descending numeric order
[linux100@localhost datas]$ sort -t : -nrk 3 sort.sh
bb:40:5.4
bd:20:4.2
cls:10:3.5
xz:50:2.3
ss:30:1.6
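`-k` can also target a middle column; for example, sorting the same data in ascending numeric order by the second field (sketched on an inline copy of the data):

```shell
#!/bin/bash
printf 'bb:40:5.4\nbd:20:4.2\nxz:50:2.3\ncls:10:3.5\nss:30:1.6\n' > /tmp/sort_demo.txt
sort -t : -nk 2 /tmp/sort_demo.txt   # ascending by the numeric second field
# cls:10:3.5
# bd:20:4.2
# ss:30:1.6
# bb:40:5.4
# xz:50:2.3
```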
Interview questions
# Use a Linux command to find the line numbers of the blank lines in a file
[linux100@localhost datas]$ awk '/^$/{print NR}' sed.txt
5
# Use Linux commands to compute and print the sum of the second column
[linux100@localhost datas]$ cat chengji.txt
张三 40
李四 50
王五 60
[linux100@localhost datas]$ cat chengji.txt | awk -F " " '{sum+=$2} END{print sum}'
150
# How do you check in a shell script whether a file exists? What do you do if it doesn't?
#!/bin/bash
if [ -f file.txt ]
then
echo "File exists!"
else
echo "File does not exist!"
fi
# Write a shell script that sorts an unordered column of numbers in a text file
[linux100@localhost datas]$ cat test.txt
9
8
6
5
4
3
2
1
[linux100@localhost datas]$ sort -n test.txt | awk '{a+=$0; print $0} END{print "SUM="a}'
1
2
3
4
5
6
8
9
SUM=38
# Write shell commands that find, under the current directory (/home), the names of all text files whose contents contain the string "shen"
[linux100@localhost datas]$ grep --help
Usage: grep [OPTION]... PATTERN [FILE]...
Search for PATTERN in each FILE or standard input.
PATTERN is, by default, a basic regular expression (BRE).
Example: grep -i 'hello world' menu.h main.c
Regexp selection and interpretation:
-E, --extended-regexp PATTERN is an extended regular expression (ERE)
-F, --fixed-strings PATTERN is a set of newline-separated fixed strings
-G, --basic-regexp PATTERN is a basic regular expression (BRE)
-P, --perl-regexp PATTERN is a Perl regular expression
-e, --regexp=PATTERN use PATTERN for matching
-f, --file=FILE obtain PATTERN from FILE
-i, --ignore-case ignore case distinctions
-w, --word-regexp force PATTERN to match only whole words
-x, --line-regexp force PATTERN to match only whole lines
-z, --null-data a data line ends in 0 byte, not newline
Miscellaneous:
-s, --no-messages suppress error messages
-v, --invert-match select non-matching lines
-V, --version display version information and exit
--help display this help text and exit
Output control:
-m, --max-count=NUM stop after NUM matches
-b, --byte-offset print the byte offset with output lines
-n, --line-number print line number with output lines
--line-buffered flush output on every line
-H, --with-filename print the file name for each match
-h, --no-filename suppress the file name prefix on output
--label=LABEL use LABEL as the standard input file name prefix
-o, --only-matching show only the part of a line matching PATTERN
-q, --quiet, --silent suppress all normal output
--binary-files=TYPE assume that binary files are TYPE;
TYPE is 'binary', 'text', or 'without-match'
-a, --text equivalent to --binary-files=text
-I equivalent to --binary-files=without-match
-d, --directories=ACTION how to handle directories;
ACTION is 'read', 'recurse', or 'skip'
-D, --devices=ACTION how to handle devices, FIFOs and sockets;
ACTION is 'read' or 'skip'
-r, --recursive like --directories=recurse
-R, --dereference-recursive
likewise, but follow all symlinks
--include=FILE_PATTERN
search only files that match FILE_PATTERN
--exclude=FILE_PATTERN
skip files and directories matching FILE_PATTERN
--exclude-from=FILE skip files matching any file pattern from FILE
--exclude-dir=PATTERN directories that match PATTERN will be skipped.
-L, --files-without-match print only names of FILEs containing no match
-l, --files-with-matches print only names of FILEs containing matches
-c, --count print only a count of matching lines per FILE
-T, --initial-tab make tabs line up (if needed)
-Z, --null print 0 byte after FILE name
Context control:
-B, --before-context=NUM print NUM lines of leading context
-A, --after-context=NUM print NUM lines of trailing context
-C, --context=NUM print NUM lines of output context
-NUM same as --context=NUM
--group-separator=SEP use SEP as a group separator
--no-group-separator use empty string as a group separator
--color[=WHEN],
--colour[=WHEN] use markers to highlight the matching strings;
WHEN is 'always', 'never', or 'auto'
-U, --binary do not strip CR characters at EOL (MSDOS/Windows)
-u, --unix-byte-offsets report offsets as if CRs were not there
(MSDOS/Windows)
'egrep' means 'grep -E'. 'fgrep' means 'grep -F'.
Direct invocation as either 'egrep' or 'fgrep' is deprecated.
When FILE is -, read standard input. With no FILE, read . if a command-line
-r is given, - otherwise. If fewer than two FILEs are given, assume -h.
Exit status is 0 if any line is selected, 1 otherwise;
if any error occurs and -q is not given, the exit status is 2.
Report bugs to: bug-grep@gnu.org
GNU Grep home page: <http://www.gnu.org/software/grep/>
General help using GNU software: <http://www.gnu.org/gethelp/>
[linux100@localhost datas]$ grep -r "shen" /home
/home/linux100/datas/cut.txt:dong shen
/home/linux100/datas/sed.txt:dong shen
[linux100@localhost datas]$ grep -r "shen" /home | cut -d ":" -f 1
/home/linux100/datas/cut.txt
/home/linux100/datas/sed.txt
