Daily Work

Commonly used commands for daily work (Linux, Mac)

# Linux Daily

## 1. Basic rpm commands

List installed packages: ```rpm -qa | grep mysql```
Force-remove a package: ```rpm -e --nodeps mysql```
Install a package: ```rpm -ivh xxx.rpm```

## 2. yum repositories

cd /etc/yum.repos.d


> On RHEL, /etc/yum.repos.d contains no repositories by default. A repo can be installed with rpm -ivh epel*.rpm, which creates an epel.repo file under /etc/yum.repos.d.
> On CentOS, CentOS-Base.repo is already present. As long as the VM can ping the outside world, the repo files can be fetched directly with wget.

### Using the 163 mirror on RHEL

<http://boris05.blog.51cto.com/1073705/1439865>

Remove Red Hat's stock yum packages: ```rpm -qa | grep yum | xargs rpm -e --nodeps```

Download the yum packages (163 mirror, CentOS 6.5):
wget http://mirrors.163.com/centos/6.5/os/x86_64/Packages/yum-3.2.29-40.el6.centos.noarch.rpm 
wget http://mirrors.163.com/centos/6.5/os/x86_64/Packages/yum-metadata-parser-1.1.2-16.el6.x86_64.rpm
wget http://mirrors.163.com/centos/6.5/os/x86_64/Packages/yum-plugin-fastestmirror-1.1.30-14.el6.noarch.rpm
wget http://mirrors.163.com/centos/6.5/os/x86_64/Packages/python-iniparse-0.3.1-2.1.el6.noarch.rpm

Install yum:
rpm -ivh python*
rpm -ivh yum*

Switch the yum repository:
cd /etc/yum.repos.d/
vim /etc/yum.repos.d/rhel.repo

[base]
name=CentOS-$releasever - Base
baseurl=http://mirrors.163.com/centos/6.5/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-6
#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=http://mirrors.163.com/centos/6.5/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-6
#packages used/produced in the build but not released
#[addons]
#name=CentOS-$releasever - Addons
#baseurl=http://mirrors.163.com/centos/$releasever/addons/$basearch/
#gpgcheck=1
#gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-6
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=http://mirrors.163.com/centos/6.5/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-6
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=http://mirrors.163.com/centos/6.5/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-6

yum clean all
yum makecache
yum update

### EPEL-7 repository
# wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-1.noarch.rpm 
# rpm -ivh epel-release-7-1.noarch.rpm
# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

### EPEL-6 repository
# wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm   
# rpm -ivh epel-release-6-8.noarch.rpm
# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

Note: on CentOS, the final rpm --import must point to the CentOS-6 key instead.

### Using the Aliyun mirror on CentOS

### CentOS 163 mirror

On CentOS, /etc/yum.repos.d already contains CentOS-Base.repo; you can replace the default with another mirror, or add repositories alongside it.

# wget http://mirrors.163.com/.help/CentOS6-Base-163.repo 

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
mv /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo
yum makecache

Refresh the repositories:
# yum clean all 
# yum makecache
# yum update
# yum repolist

## 3. Installing basic software

1. Make sure the machine can reach the internet and that yum repolist returns data, e.g. install the 163 repo first. Once the server is stable, the repos can be disabled (just change the file suffix).
# yum install wget

2. English locale
# vi ~/.bashrc 
export LANG=en_US.UTF-8
# source ~/.bashrc
# vi /etc/sysconfig/i18n
LANG="en_US.UTF-8"

3. Install gcc, git, etc.
# yum group install "Development Tools"     --> on CentOS 6 this must be yum groupinstall, no space!
# yum grouplist
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Setting up Group Process
Loading mirror speeds from cached hostfile
Installed Groups:
Console internet tools
Development tools
E-mail server
Perl Support
Security Tools

Development tools is already installed here, so it appears under Installed Groups (note the command is yum grouplist, not yum group list).
If it were not installed, it would appear under Available Groups. The English locale from the previous step matters: in a Chinese locale the group names are localized and it is hard to tell which group to install.

4. Installing the MySQL client and server with yum
# yum info mysql 
# yum list | grep mysql
# yum groupinfo "MySQL Database server"
# yum groupinfo "MySQL Database client"
Mandatory Packages:
mysql
Default Packages:
MySQL-python
mysql-connector-odbc
Optional Packages:
libdbi-dbd-mysql
mysql-connector-java
perl-DBD-MySQL

The packages can of course also be installed one by one. Either way, /etc/yum.repos.d must contain the 163 or EPEL repos; with no repo configured, nothing can be installed.

http://www.cnblogs.com/xiaoluo501395377/archive/2013/04/07/3003278.html

# yum install mysql mysql-server mysql-devel 
# service mysqld start
# netstat -anpt | grep 3306
# chkconfig --add mysqld
# chkconfig --list | grep mysqld
# chkconfig mysqld on
# mysqladmin -u root password 'root'
# mysql -u root -p
> show databases;

5. nginx

Source install: http://network810.blog.51cto.com/2212549/1264669
yum install: nginx is not in the default repos; download the repo file from the official nginx site (it can be deleted or backed up afterwards).

CentOS 
# vi /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1

RHEL
# vi /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/rhel/$releasever/$basearch/
gpgcheck=0
enabled=1

# yum install nginx

If the server's firewall is enabled, open port 80:
# vi /etc/sysconfig/iptables 
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

Restart the firewall:
# service iptables restart

Ways to start nginx; the first is obviously the quickest:
# service nginx start 

# cd /usr/local/nginx/sbin
# ./nginx

# /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf

Like mysqld earlier, nginx can be added to chkconfig so it starts at boot.
If you get "nginx: unrecognized service", see http://www.01happy.com/centos-nginx-shell-chkconfig
Verify from the host's browser, or check whether port 80 is listening:

# netstat -na | grep 80
# ps -ef | grep nginx

Default configuration:
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    keepalive_timeout  65;
    include /etc/nginx/conf.d/*.conf;
}
user  www www;
worker_processes  2;
error_log  logs/error.log;
pid        logs/nginx.pid;
events {
    use epoll;
    worker_connections  2048;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    # gzip compression settings
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 6;
    gzip_types text/html text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
    gzip_vary on;

    # http_proxy settings
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 75;
    proxy_send_timeout 75;
    proxy_read_timeout 75;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;
    proxy_temp_path /usr/local/nginx/proxy_temp 1 2;

    # backend server list for load balancing
    upstream backend {
        server 192.168.10.100:8080 max_fails=2 fail_timeout=30s;
        server 192.168.10.101:8080 max_fails=2 fail_timeout=30s;
    }

    # the all-important virtual host configuration
    server {
        listen       80;
        server_name  itoatest.example.com;
        root   /apps/oaapp;
        charset utf-8;
        access_log  logs/host.access.log  main;

        # load balancing + reverse proxy for /
        location / {
            root /apps/oaapp;
            index  index.jsp index.html index.htm;

            proxy_pass        http://backend;
            proxy_redirect off;
            # the backend web servers can read the client's real IP from X-Forwarded-For
            proxy_set_header  Host  $host;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        }

        # static files are served by nginx itself instead of being proxied to the tomcat backend
        location ~* /download/ {
            root /apps/oa/fs;
        }
        location ~ .*\.(gif|jpg|jpeg|bmp|png|ico|txt|js|css)$
        {
            root /apps/oaapp;
            expires 7d;
        }
        location /nginx_status {
            stub_status on;
            access_log off;
            allow 192.168.10.0/24;
            deny all;
        }

        location ~ ^/(WEB-INF)/ {
            deny all;
        }

        error_page  500 502 503 504  /50x.html;
        location = /50x.html {
            root html;
        }
    }
    ## other virtual hosts: further server blocks
}

6. Copying files with scp

Copying a file from the host into a VM:
[host]# scp xxx root@h101:~/
bash: scp: command not found
lost connection
[VM]# yum install openssh-clients
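A related detail worth remembering: scp takes a capital -P for a non-default port, unlike ssh's lowercase -p. A sketch with a hypothetical port:

```
scp -P 2222 xxx root@h101:~/
```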

7. Serving a yum repo from the VM over http
# yum install -y httpd

Copy the iso file from the host into the VM:
[host]# scp **.iso root@h101:~/
[VM]# mount -o loop CentOS*.iso /var/www/html
# vi /etc/yum.repos.d/http-local.repo 
[http-local]
name=http-local-on
baseurl=http://192.168.56.101/CentOS_6.5_Final
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

8. Serving the yum repo from the host over http

The approach above copies the iso into the VM, eating into its already limited disk space.
Instead, install httpd (or nginx) on the host and let the VM access the repo directly.

9. FTP client in the VM

◇ Start the vsftpd service on the host, and install the ftp client in the VM:
# yum install ftp 
# ftp 192.168.56.1
Connected to 192.168.56.1 (192.168.56.1).
220 (vsFTPd 3.0.2)
Name (192.168.56.1:root): anonymous
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls     # list files in the server's current directory
ftp> !ls    # list files in the client's current directory

10. SVN

1). Check whether SVN is already installed on the server
[root@datanode01 svn]# svn --version
svn, version 1.6.11 (r934486), compiled Apr 12 2012, 11:09:11
The following repository access (RA) modules are available:
* ra_neon : Module for accessing a repository via WebDAV protocol using neon.
  - handles 'http' scheme
  - handles 'https' scheme
* ra_svn : Module for accessing a repository using the svn network protocol. - with Cyrus SASL authentication
  - handles 'svn' scheme
* ra_local : Module for accessing a repository on local disk.
  - handles 'file' scheme

So svn is installed and supports http access.

[root@datanode01 svn]# whereis httpd
httpd: /usr/sbin/httpd.event /usr/sbin/httpd /usr/sbin/httpd.worker /etc/httpd /usr/lib64/httpd /usr/include/httpd /usr/share/man/man8/httpd.8.gz
[root@datanode01 svn]# whereis svn
svn: /usr/bin/svn /usr/share/man/man1/svn.1.gz
[root@datanode01 svn]# whereis svnserve
svnserve: /usr/bin/svnserve /usr/share/man/man8/svnserve.8.gz

2). Create a repository

◆ Create a directory to hold all SVN data
mkdir /data/data8/svn

◆ Create a repository
svnadmin create /data/data8/svn/project

◆ Edit vi /data/data8/svn/project/conf/passwd to add users
[users]  
# harry = harryssecret
# sally = sallyssecret
admin = admin123
zhengqh = zhengqh

◆ Edit vi /data/data8/svn/project/conf/authz to set the user access policy
[groups]
# harry_and_sally = harry,sally
# harry_sally_and_joe = harry,sally,&joe
group1 = admin,zhengqh

[/]
@group1 = rw

◆ Edit vi /data/data8/svn/project/conf/svnserve.conf so the user and policy files take effect.
[general]  
anon-access = none
auth-access = write
password-db = /data/data8/svn/project/conf/passwd
authz-db = /data/data8/svn/project/conf/authz

◆ Start the server
svnserve -d -r /data/data8/svn

◆ Test checking out the repository
cd ~
svn co svn://172.17.212.69/project

Output like the following means it worked:
Authentication realm: <svn://172.17.212.69:3690> e296f93b-eec2-43dd-92b5-cc10ee55c901
Password for 'root':
Authentication realm: <svn://172.17.212.69:3690> e296f93b-eec2-43dd-92b5-cc10ee55c901
Username: admin
Password for 'admin': ***

3). Configure http access

◆ Create svn accounts or change passwords:
/usr/bin/htpasswd -b -c /data/data8/svn/svn-auth-file admin admin123
/usr/bin/htpasswd -b /data/data8/svn/svn-auth-file zhengqh zhengqh

-c creates the file if it does not exist. Use -c only for the first user; adding later users with -c would overwrite the file.

◆ Set the svn users' access policy
vi /data/data8/svn/svn-access-file
[project:/]
admin = rw
zhengqh = rw

◆ Fix the permissions on the svn directory
chmod -R 777 /data/data8/svn/project

◆ Check the svn modules httpd depends on:
# cd /etc/httpd/modules
# ls | grep svn
mod_authz_svn.so
mod_dav_svn.so

◆ Edit vi /etc/httpd/conf/httpd.conf and add:
LoadModule dav_svn_module     /etc/httpd/modules/mod_dav_svn.so
LoadModule authz_svn_module /etc/httpd/modules/mod_authz_svn.so

<Location /svn>
DAV svn
SVNParentPath /data/data8/svn
AuthType Basic
AuthName "Subversion repository"
AuthUserFile /data/data8/svn/svn-auth-file
Require valid-user
AuthzSVNAccessFile /data/data8/svn/svn-access-file
</Location>

◆ Start the Apache httpd service

Only one of the following commands is needed (the last /usr/local path is normally only used for a custom-built httpd):
# /usr/sbin/apachectl start
# /etc/init.d/httpd start
# /usr/local/apache2/bin/apachectl start

◆ Restart the svn service
# ps -ef | grep svn
root 41348 1 0 11:22 ? 00:00:00 svnserve -d -r /data/data8/svn
root 41971 40812 0 11:39 pts/2 00:00:00 grep svn
# kill -9 41348
# svnserve -d -r /data/data8/svn

4). User maintenance
vi /data/data8/svn/project/conf/passwd
    add: username = password
vi /data/data8/svn/project/conf/authz
    add the username to group1
run: /usr/bin/htpasswd -b /data/data8/svn/svn-auth-file username password
add the user's access policy:
vi /data/data8/svn/svn-access-file
    username = rw

## 4. Disk

### Checking file sizes

ll -h shows only file sizes; it cannot show the space a directory tree occupies.

Total size of a directory:
cd dir
du -h ./
du -sh *

The last line of du -h shows the directory's total size. Or skip the cd and run du -h dir directly.
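To rank subdirectories by size, du pairs well with GNU sort's -h (human-numeric) flag; a quick sketch:

```
du -sh * | sort -rh | head -10   # largest ten entries first
```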

### Expanding a volume

Check disk usage: df -mh
The root filesystem / is at 100%: /dev/mapper/vg_datanode01-LogVol00

Inspect the volume group: vgdisplay:
---Volume group ---
VG Name vg_datanode01

--- Logical volume ---
LV Path /dev/vg_datanode01/LogVol00

Extend it:
lvextend -L +10G  /dev/mapper/vg_datanode01-LogVol00
resize2fs /dev/mapper/vg_datanode01-LogVol00
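Before extending, it is worth confirming the volume group still has free extents; a minimal check against the VG named above:

```
vgdisplay vg_datanode01 | grep -i free   # look at the "Free  PE / Size" line
```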

## 5. System

### Open files and process limits
[midd@datanode01 ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 62700
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
[midd@datanode01 ~]$ ps -ef | grep ETL
midd 14369 1 0 Mar29 ? 00:00:00 ETL_ScheduleCenter
midd 14370 14369 99 Mar29 ? 5-06:14:46 ETL_ScheduleServer
midd 14442 14369 6 Mar29 ? 02:33:41 ETL_ServerManger
midd 41892 41839 0 10:19 pts/3 00:00:00 grep ETL

[midd@datanode01 ~]$ lsof -p 14369 | wc -l
20
[midd@datanode01 ~]$ lsof -p 14370 | wc -l
928
[midd@datanode01 ~]$ lsof -p 14442 | wc -l
4169
[midd@datanode01 ~]$ cat /proc/sys/fs/file-max
792049

[midd@datanode01 ~]$ su -
[root@datanode01 ~]# vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536

Add the lines above; * means any user. This must be done as root.

Alternatively, as root, set the file and process limits for a specific user:
echo '########################for ETL 4.1.0' >> /etc/security/limits.conf
echo 'midd soft nofile 65536' >> /etc/security/limits.conf
echo 'midd hard nofile 65536' >> /etc/security/limits.conf
echo 'midd soft nproc 131072' >> /etc/security/limits.conf
echo 'midd hard nproc 131072' >> /etc/security/limits.conf

Then verify ulimit as the midd user:
[midd@datanode01 ~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 62700
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 65536
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 131072
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Another approach: http://gaozzsoft.iteye.com/blog/1824824
1. Use ps -ef | grep java (substitute your own program) to find its process ID; suppose it is 12.
2. Use lsof -p 12 | wc -l to see how many files that process has open; say it reports 1052.
3. Use ulimit -a to check how many files a user may open; the system default is open files (-n) 1024, which is the problem.
4. Run ulimit -n 4096 to raise open files (-n) from 1024 to 4096, increasing the number of files the user may open.

### Memory
free
free -m    # in MB
free -g    # in GB

df
df -m      # in MB
df -h      # human-readable, i.e. GB

pstree -p | wc -l

### Automatic restart (jstat)
c=`/usr/install/jdk1.8.0_60/bin/jps -lm | grep CassandraDaemon | awk '{print $1}'`
old=`/usr/install/jdk1.8.0_60/bin/jstat -gc $c |tail -1 |awk '{print $8}'`
if [ ${old%.*} -gt 8388608 ]; then
echo "error: $old"
/usr/install/cassandra/bin/nodetool flush
/usr/install/cassandra/bin/nodetool stopdaemon
sleep 15s
/usr/install/cassandra/bin/cassandra
else
echo "normal: $old"
fi

#crontab -e
#*/1 * * * * sh gc_old.sh > gc_old.log 2>&1 &
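For a quick manual look at the same heap columns the script reads, jstat can also sample repeatedly (interval in ms, then a sample count); this sketch assumes the same JDK path and process as above:

```
pid=$(/usr/install/jdk1.8.0_60/bin/jps -lm | grep CassandraDaemon | awk '{print $1}')
/usr/install/jdk1.8.0_60/bin/jstat -gc $pid 1000 3   # 3 samples, one second apart; OU is old-gen used (KB)
```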

### Users and permissions

Add a user and set a password:
useradd -d /home/postgres postgres 
passwd postgres

Change read/write permissions; recursion uses an uppercase R (note that scp uses a lowercase r).
chmod 755 file
chmod -R 755 folder

Change the owner and group:
chown hadoop:hadoop -R folder

### Scheduled tasks with cron

1. Write the script
# vi /usr/lib/zookeeper/bin/cron.day 
#!/bin/sh
cd /usr/lib/zookeeper/bin
./zkCleanup.sh /opt/hadoop/zookeeper/version-2 5
echo 'clean up end...'

2. Make the script executable
# chmod 755 /usr/lib/zookeeper/bin/cron.day

3. Define the schedule
# cd /etc/cron.d
# vi /etc/cron.d/zk.cron
0 13 * * * root run-parts /usr/lib/zookeeper/bin/cron.day

4. Load the schedule
# crontab zk.cron

5. List the scheduled jobs
# crontab -l
0 13 * * * root run-parts /usr/lib/zookeeper/bin/cron.day

6. Check the cron log to confirm the job ran
# tail -f /var/log/cron
Apr 1 13:00:01 namenode02 CROND[41859]: (root) CMD (root run-parts /usr/lib/zookeeper/bin/cron.day)

2. A scheduled task via sudo -u admin crontab -e:
20 11 * * * /usr/install/sh/activity.sh >> /home/admin/output/cronlogs/do_activity.log 2>&1

3. Create the log file for the redirection:
sudo -u admin touch /home/admin/output/cronlogs/do_activity.log

4. Check the job's run log:
$ sudo tail -f /var/log/cron
Sep 9 11:20:02 spark047214 CROND[9191]: (admin) CMD (/usr/install/sh/activity.sh >> /home/admin/output/cronlogs/do_activity.log 2>&1)

If output is not redirected to a log, cron mails it by default:
$ sudo -u admin tail -200f /var/spool/mail/admin

Other notes:
# vi /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

# For details see man 4 crontabs

# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
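One extra guard worth knowing for cron jobs: flock (from util-linux) prevents overlapping runs when a job occasionally takes longer than its interval. A sketch reusing the zookeeper job above; the lock file path is arbitrary:

```
# /etc/cron.d/zk.cron: skip this run if the previous one is still going
0 13 * * * root flock -n /tmp/zk.lock /usr/lib/zookeeper/bin/cron.day
```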

## 6. Processes

### Processes and ports

Find the process listening on a port:
lsof -Pnl +M -i4[i6] | grep 20880

### top

Press M to sort by memory, P to sort by CPU; press u to show only a given user's processes.
top -p $(pidof mongod) shows only the specified process:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4735 midd 20 0 30.1g 748m 4944 S 2.0 9.5 102:39.94 mongod

### telnet
$ telnet 192.168.6.52 80
Trying 192.168.6.52...
Connected to 192.168.6.52.
Escape character is '^]'.
^]            ⬅️ on a Mac, press Control and ] together
telnet> quit  ⬅️ at this prompt, type quit to exit
Connection closed.

### kill

killall kills every process with the given name in one go; with ps -ef | grep you would kill them one at a time.
For example, ps -ef | grep ETL shows three related processes:
[midd@datanode01 bin]$ ps -ef | grep ETL
midd 4812 41955 0 16:01 pts/3 00:00:00 grep ETL
midd 42276 1 0 15:52 ? 00:00:00 ETL_ScheduleCenter
midd 42277 42276 99 15:52 ? 00:15:35 ETL_ScheduleServer
midd 42345 42276 7 15:53 ? 00:00:34 ETL_ServerManger

With killall a single line is enough: killall ETL_ScheduleCenter

Killing processes in bulk:
process="cross-partner"
ps aux|grep $process|grep -v grep|awk '{print $2}'|xargs kill -9

Or more simply (pkill is to kill what pssh is to ssh):
kill -9 $(pgrep amarok)
pkill -9 amarok
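pgrep/pkill match only the process name by default; -f matches against the full command line, which covers the cross-partner case above in a single step:

```
pkill -9 -f cross-partner
```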

### screen
screen

dstat -tlrvn 10

Ctrl+a d              # detach from the screen session

screen -ls            # list sessions
screen -r             # reattach

kill a screen: screen -X -S $session quit

### nohup

http://ora12c.blogspot.com/2012/04/how-to-put-scp-in-background.html

scp prompts for a password, but nohup runs the command in the background, so there is no way to type the password in:
[qihuang.zheng@cass047224 ~]$ nohup scp -l 100000 -r 1447314738524 192.168.47.219:~/snapshot/224_1114/ &
[1] 16169
nohup: ignoring input and appending output to 'nohup.out'
[qihuang.zheng@cass047224 ~]$ qihuang.zheng@192.168.47.219's password:

[1]+ Stopped nohup scp -l 100000 -r 1447314738524 192.168.47.219:~/snapshot/224_1114/

Per http://unix.stackexchange.com/questions/91065/nohup-sudo-does-not-prompt-for-passwd-and-does-nothing
and http://stackoverflow.com/questions/13147861/run-scp-in-background-and-monitor-the-progress:
do not append &; type the password, then Ctrl+Z to suspend the job and bg to resume it in the background.
[qihuang.zheng@cass047224 ~]$ nohup scp -l 100000 -r 1447314738524 192.168.47.219:~/snapshot/224_1114/
nohup: ignoring input and appending output to 'nohup.out'
qihuang.zheng@192.168.47.219's password:   # type the password here; only suspend AFTER it has been entered, never before the prompt appears!
^Z
[2]+ Stopped nohup scp -l 100000 -r 1447314738524 192.168.47.219:~/snapshot/224_1114/
[qihuang.zheng@cass047224 ~]$
[qihuang.zheng@cass047224 ~]$ ps -ef|grep scp
501 16169 11856 0 12:29 pts/0 00:00:00 scp -l 100000 -r 1447314738524 192.168.47.219:~/snapshot/224_1114/
501 16183 16169 0 12:29 pts/0 00:00:00 /usr/bin/ssh -x -oForwardAgent no -oPermitLocalCommand no -oClearAllForwardings yes 192.168.47.219 scp -r -t ~/snapshot/224_1114/
501 17492 11856 0 12:33 pts/0 00:00:00 scp -l 100000 -r 1447314738524 192.168.47.219:~/snapshot/224_1114/
501 17493 17492 0 12:33 pts/0 00:00:00 /usr/bin/ssh -x -oForwardAgent no -oPermitLocalCommand no -oClearAllForwardings yes 192.168.47.219 scp -r -t ~/snapshot/224_1114/
501 17718 11856 0 12:33 pts/0 00:00:00 grep scp
[qihuang.zheng@cass047224 ~]$ bg
[2]+ nohup scp -l 100000 -r 1447314738524 192.168.47.219:~/snapshot/224_1114/ &   # this line means the job is now running in the background!
[qihuang.zheng@cass047224 ~]$ bg
[1]+ nohup scp -l 100000 -r 1447314738524 192.168.47.219:~/snapshot/224_1114/ &

Does typing bg several times run the job several times? jobs shows what is actually happening:
[qihuang.zheng@cass047224 ~]$ jobs
[1]+ Stopped nohup scp -l 100000 -r 1447314738524 192.168.47.219:~/snapshot/224_1114/
[2]- Running nohup scp -l 100000 -r 1447314738524 192.168.47.219:~/snapshot/224_1114/ &

scp -l limits the transfer bandwidth (in Kbit/s), as used above.

Using &, jobs, fg, bg in Linux: http://blog.sina.com.cn/s/blog_673ee2b50100iywr.html

### disown

disown example 1 (if the command was already started in the background with "&", disown can be used directly):
[root@pvcent107 build]# cp -r testLargeFile largeFile &
[1] 4825
[root@pvcent107 build]# jobs
[1]+ Running cp -i -r testLargeFile largeFile &
[root@pvcent107 build]# disown -h %1
[root@pvcent107 build]# ps -ef |grep largeFile
root 4825 968 1 09:46 pts/4 00:00:00 cp -i -r testLargeFile largeFile
root 4853 968 0 09:46 pts/4 00:00:00 grep largeFile
[root@pvcent107 build]# logout

disown example 2 (if the command was not started with "&", use CTRL-Z and "bg" to move it to the background, then "disown"):
[root@pvcent107 build]# cp -r testLargeFile largeFile2

[1]+ Stopped cp -i -r testLargeFile largeFile2
[root@pvcent107 build]# bg %1
[1]+ cp -i -r testLargeFile largeFile2 &
[root@pvcent107 build]# jobs
[1]+ Running cp -i -r testLargeFile largeFile2 &
[root@pvcent107 build]# disown -h %1
[root@pvcent107 build]# ps -ef |grep largeFile2
root 5790 5577 1 10:04 pts/3 00:00:00 cp -i -r testLargeFile largeFile2
root 5824 5577 0 10:05 pts/3 00:00:00 grep largeFile2
[root@pvcent107 build]#

## 7. Files

### Checking a file's encoding
# vi .vimrc
:set fileencoding
fileencoding=utf8
set fileencodings=ucs-bom,utf-8,cp936,gb18030,big5,latin1

# vi XXX.file
:set fencs?
fileencodings=ucs-bom,utf-8,cp936,gb18030,big5,latin1
:set fenc?
fileencoding=cp936
:set enc?
encoding=utf-8

### Garbled cat output

Plain-text files created on Windows use GBK encoding and show up garbled on Ubuntu; convert them with iconv:
# iconv -f gbk -t utf8 source_file > target_file
iconv: illegal input sequence at position 5

### GBK conversion in practice
#!/bin/bash
if [ "$#" != "2" ]; then
echo "Usage: `basename $0` dir filter"
exit
fi
dir=$1
filter=$2
echo $1

for file in `find $dir -name "$2"`; do
echo "$file"
iconv -f gbk -t utf8 -o $file $file
done

Usage: the first argument is a directory, the second a filename filter:
~/ftp/GBK2UTF-8_batch.sh ./  M_EP_PD_AQI*
~/ftp/GBK2UTF-8_batch.sh ./ M_EP_PH_AQI*
~/ftp/GBK2UTF-8_batch.sh ./ M_METE_CITY_PRED*
~/ftp/GBK2UTF-8_batch.sh ./ M_METE_WEATHER_LIVE*

Running the last one fails on files larger than 32 KB:
4361 bus error (core dumped) iconv -c -f gbk -t utf8 -o $file $file

Attempted fix: http://myotragusbalearicus.wordpress.com/2010/03/10/batch-convert-files-to-utf-8/
Still failing: http://www.path8.net/tn/archives/3448
Using //IGNORE finally works:
iconv -f gbk//IGNORE -t utf8//IGNORE $file -o $file.tmp

Note the source file must actually match -f: converting a file that is already utf8 to utf8 again also fails.

GBK2UTF8.sh

#!/bin/bash
if [ "$#" != "2" ]; then
echo "Usage: `basename $0` dir filter"
echo "sample: ./GBK2UTF8.sh /home/midd/ftp/fz12345/back/2015-03 fz12345_*.txt"
exit
fi
dir=$1
filter=$2
tmp='T'
echo $1

for file in `find $dir -name "$2"`; do
echo "$file"
#iconv -f gbk -t utf8 -o $file $file
#Note: the source file must not already be in utf8 format, or you'll get an error
iconv -f gbk//IGNORE -t utf8//IGNORE $file -o $tmp$file
done

### Word counts

Count a file's lines: wc -l file.txt
For a word count add -w; -L prints the length of the longest line (the whole line; it cannot compute a per-column maximum).

Help: $ wc --help. When unsure about a command, --help is the first thing to check:

Usage: wc [OPTION]... [FILE]...
  or:  wc [OPTION]... --files0-from=F
-c, --bytes            print the byte counts
-m, --chars            print the character counts
-l, --lines            print the newline counts
-L, --max-line-length  print the length of the longest line
-w, --words            print the word counts
    --help     display this help and exit
    --version  output version information and exit

### Searching with grep

ps -ef | grep ETL                               # find processes
cat fz12345_original.txt | grep FZ15032700599   # find a string in a file

Searching file contents under a directory:
cat folder/*.* | grep XXX
find /etc/ -name "*" | xargs grep XXX

find ./ -name "*" | xargs grep 8080

Note: the cat command cannot recurse into subdirectories; in the find command, /etc must end with /, and -name takes *.

grep 'your-search-word' . -rn

### Jumping to a line in a large file
sed -n '111,112p' file.txt
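An equivalent with awk, selecting the same records by line number:

```
awk 'NR==111,NR==112' file.txt
```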

Slicing a file by line range:
2016-05-13T00:00   406465
2016-05-14T00:00 1348308

wrong: head -1348308 gc.log.0 | tail -406465 > tongdun_cassandra_20160513.log
right: sed -n '406465,1348308p' gc.log.0 > tongdun_cassandra_20160513.log

### Finding files by name

Find by name under a directory: find ~/repo -name "*tmp*"

With a pipe, xargs applies a command to each result: rm for files, rm -rf for directories.
Recursively delete .svn directories: find SVNFOLDER -name .svn | xargs rm -rf
Recursively delete files: find ~/repo -name "*tmp*" | xargs rm

find -name '0456' -print
cat ... | grep XXX

### Replacing file contents

Replace \n with a comma (in vi): :%s/\n/,/

Converting Oracle column types to Hive types:

%s/STRING(.*)/STRING/
%s/VARCHAR2(.*)/STRING/
%s/CHAR(.*)/STRING/
%s/DATE,/STRING,/
%s/date,/STRING,/
%s/INTEGER/INT/
%s/NUMBER(..)/BIGINT/
%s/NUMBER(.)/INT/
%s/NUMBER(.*)/DOUBLE/
%s/.not null//

find . -name "*" | xargs sed -i -e 's%cn.fraudmetrix.pontus%com.spark.connectors%g'

# recursively search all directories for a string (grep -r)
grep "mqtutorial-instance1" -r mqtutorial-instance2

# replace (note grep -rl):
sed -i "" "s/mqtutorial-instance1/mqtutorial-instance2/g" `grep "mqtutorial-instance1" -rl mqtutorial-instance2`

### Batch-renaming files with rename
rename .XLS .xlsx *.XLS      # replace .XLS with .xlsx in the names of *.XLS files
rename \_linux '' *.txt      # strip _linux from the names; the underscore must be escaped
rename -Unlicensed- '' *.xlsx
rename data md5 *            # rename <old-part> <new-part> <files>

Renaming recursively:
find . -name "mqtutorial*" |xargs rename 's/mqtutorial-all-in-one/mqtutorial-el/g'

### Downloading a directory over FTP

wget ftp://172.17.227.236/ctos_analyze/data/tmp/ --ftp-user=ftpd --ftp-password=ftpd123 -r    # the trailing -r (recursive) is required for a directory

wget -r -l 1 http://www.baidu.com/dir/

Resume a partial download: wget -c xxx.file

Download all files of a site: wget -r -l 1 http://atlarge.ewi.tudelft.nl/graphalytics/

### Splitting a file by line count
$ wc -l dispatch2012.csv 
231272 dispatch2012.csv

$ split -l 60000 dispatch2012.csv dispatch2012_new.csv
$ ll
-rw-r--r-- 1 hadoop hadoop 27649615 9月 27 2014 dispatch2012.csv
-rw-rw-r-- 1 hadoop hadoop 7115577 4月 14 19:08 dispatch2012_new.csvaa
-rw-rw-r-- 1 hadoop hadoop 7188497 4月 14 19:08 dispatch2012_new.csvab
-rw-rw-r-- 1 hadoop hadoop 7208496 4月 14 19:08 dispatch2012_new.csvac
-rw-rw-r-- 1 hadoop hadoop 6137045 4月 14 19:08 dispatch2012_new.csvad
$ wc -l dispatch2012_new.csv*
60000 dispatch2012_new.csvaa
60000 dispatch2012_new.csvab
60000 dispatch2012_new.csvac
51272 dispatch2012_new.csvad
231272 total
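The pieces concatenate back into the original in glob order, which doubles as an integrity check; the output name here is arbitrary:

```
cat dispatch2012_new.csv* > dispatch2012_restore.csv
wc -l dispatch2012_restore.csv   # should report 231272 again
```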

### nc
# with scp: scp influxdb-0.13.0_linux_amd64.tar.gz qihuang.zheng@192.168.6.52:~/

With nc, first open a port on the receiving end, then send the data to that port from the sender:

remote: nc -l 1234 > influxdb-0.13.0_linux_amd64.tar.gz
local:  nc -w 1 192.168.6.52 1234 < influxdb-0.13.0_linux_amd64.tar.gz

Copying a directory:
with scp: scp -r influxdb-0.13.0-1 qihuang.zheng@192.168.6.52:~/

(do not put spaces around the | ; the remote directory name defaults to the local one)
remote: $ nc -l 1234|tar zxvf -
influxdb-0.13.0-1/
influxdb-0.13.0-1/etc/
influxdb-0.13.0-1/usr/
...
influxdb-0.13.0-1/etc/influxdb/influxdb.conf

local: $ tar -cvzf - influxdb-0.13.0-1|nc 192.168.6.52 1234
a influxdb-0.13.0-1
a influxdb-0.13.0-1/etc
a influxdb-0.13.0-1/usr
...
a influxdb-0.13.0-1/etc/influxdb/influxdb.conf

tar -cvzf - android_device_session|nc 192.168.50.20 1234
nc -l 1234|tar xvf -
tar -cvf - android_device_session|nc 192.168.50.20 1234

nohup combined with nc fails:
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now

[1]+ Exit 2 nohup nc -l 1234 | tar zxvf -

screen:

screen -S fp_android_device_session_159 nc -l 1234|tar xvf -
screen -S fp_android_device_session_159 tar -cvf - android_device_session|nc 192.168.50.20 1234

disown:

nc -l 1234|tar xvf - &
jobs
disown -h %1
jobs
logout
[admin@spark015010 ~]$ nc -d -l 1234|tar xvf - &
[2] 13672
[admin@spark015010 ~]$ jobs
[1]+ Stopped nc -l 1234 | tar xvf -
[2]- Running nc -l 1234 | tar xvf - &
[admin@spark015010 ~]$ disown -h %1
[admin@spark015010 ~]$ ps -ef |grep 1234
admin 13671 7993 0 13:38 pts/0 00:00:00 nc -l 1234
admin 13713 7993 0 13:38 pts/0 00:00:00 grep 1234
[admin@spark015010 ~]$ logout
There are stopped jobs.
[admin@spark015010 ~]$ jobs
[1]+ Stopped nc -l 1234 | tar xvf -
[2]- Running nc -l 1234 | tar xvf - &
[admin@spark015010 ~]$ jobs -p
11590
13671

## 8. VI

Show line numbers: :set number / :set nu
Paste mode: :set paste
Open a file at its last line: vi + file.txt, or press G
Go to the first line: :0 then Enter
Delete from a given line to the end: use :set number to work out the distance to the last line, say 100, then type 100dd
Empty a file: jump to the last line with G, then :1,.d
Repeat a search forward/backward: n/N

## 9. Awk/sed

### Column editing
replace(replace(DISPATCHMEMO,chr(10),''),chr(9),'')

awk '{print "<" $0 "> "}' O_FZ12345_CALLINFO>O_FZ12345_CALLINFO2
awk '{print "select count(*) from " $0 " union all \n"}' cl>cl2

awk '{print "replace(replace(" $0 ",chr(10),''),chr(9),'')"}' O_FZ12345_CALLINFO>O_FZ12345_CALLINFO2

awk '{print "replace(replace(" $0 ",chr(10),''),chr(9),'')"}' c3>c4

%s/,)/,'')/

alter table owner to etl;
awk '{print "alter table " $0 " owner to etl;"}' test>test2

### Maximum column length

Why do the two statements below produce different results?? Use double quotes when printing:
awk '{if (length($NF)>maxlength) maxlength=length($NF)} END {print maxlength" "$1" "$2" "$NF}' fz12345_original.txt

awk '{s[$1] += $2}END{ for(i in s){ print i, s[i] } }' file1 > file2

awk '{s[$1" "$2] += $3}END{ for(i in s){print i, s[i] } }' file1 > file2

# find the maximum length of the last column
awk '{if (length($NF)>maxlength) maxlength=length($NF)} END {print maxlength }' fz12345_original.txt

# find the record with that maximum length
awk 'length($NF)==2016 {print $1" "$2" "$NF}' fz12345_original.txt

# blank out the last column
awk '{$NF="" ;print $0}' fz12345_original.txt | head

# truncate the last column:
awk '{print substr($NF, 0, 900)}' fz12345_original.txt

# print the whole line:
awk '{print $0}' fz12345_original.txt

# print the first and last columns, last column truncated, separated by \t
awk '{$NF=substr($NF, 0, 900) ;print $1"\t"$NF}' fz12345_original.txt | head

# truncate the last column and print the whole line; the separator becomes a space,
# so fields that themselves contain spaces can no longer be parsed correctly
awk '{$NF=substr($NF, 0, 900) ;print $0}' fz12345_original.txt | head

# truncate the last column, print the whole line, \t-separated
awk 'BEGIN {OFS="\t"}{$NF=substr($NF, 0, 900) ;print $0}' fz12345_original.txt | head

# rows report different field counts because empty values are not counted as a column!
awk '{print NF}' fz12345.txt | head
awk '{print NF}' fz12345_original.txt | head

# output field separator:
awk 'BEGIN {OFS="\t"}{$NF=substr($NF, 0, 900) ;print $0}' fz12345_original.txt > fz12345.txt
# input and output field separators:
awk 'BEGIN {FS="\t";OFS="\t"}{$NF=substr($NF, 0, 900) ;print $0}' fz12345_original.txt > fz12345.txt
# input/output field and record separators:
awk 'BEGIN {FS="\t";RS="\n";OFS="\t";ORS="\n";}{$NF=substr($NF, 0, 900) ;print $0}' fz12345_original.txt > fz12345.txt

awk FS="\t" '{$NF=substr($NF, 0, 900) ;print $0}' OFS="\t" fz12345_original.txt > fz12345.txt


<http://coolshell.cn/articles/9070.html>
awk -F: '{print $1,$3,$6}' OFS="\t" /etc/passwd
# /etc/passwd is colon-separated; print columns 1, 3 and 6, \t-separated

### Splitting one column into multiple rows
hadoop@hadoop:~$ echo 'A|B|C|aa,bb|DD' | awk -F\| 'BEGIN{OFS="|"}{split($4,a,",");for(i in a){$4=a[i];print $0}}'
A|B|C|aa|DD
A|B|C|bb|DD
hadoop@hadoop:~$ echo 'A|B|C|aa|DD' | awk -F\| 'BEGIN{OFS="|"}{split($4,a,",");for(i in a){$4=a[i];print $0}}'
A|B|C|aa|DD

awk 'BEGIN{FS="\t";OFS="\t"}{split($5,a,",");for(i in a){$5=a[i];print $0}}' \
fz12345_dispatch_2014.txt > fz12345_dispatch_2014_2.txt

Before:
2251562 FZ15032600537 福州市信访局 2015-03-26 16:29:50 福州市城乡建设委员会,福州市交通运输委员会 0 1 10 2015-04-10 16:29:50

After:
2251562 FZ15032600537 福州市信访局 2015-03-26 16:29:50 福州市城乡建设委员会 0 1 10 2015-04-10 16:29:50
2251562 FZ15032600537 福州市信访局 2015-03-26 16:29:50 福州市交通运输委员会 0 1 10 2015-04-10 16:29:50

Strip all double quotes from a file: sed -i 's/"//g' dispatch2012.csv

### Removing ^M
sed -i 's/"//g' dispatch2012.csv                            ×
tr -d '^M' < dispatch2012_2.csv                             ×

alias dos2unix="sed -i -e 's/'\"\$(printf '\015')\"'//g' "  √
dos2unix dispatch2012_2.csv
sed -i -e 's/'\"\$(printf '\015')\"'//g' dispatch2012_2.csv   # fails: bash: syntax error near unexpected token `('

Delete the first and last lines: sed -i '1d;$d' dispatch2012_2.csv

The fifth column is garbled, so just set it to an empty string:
awk 'BEGIN {FS=",";RS="\n";OFS="\t";ORS="\n";}{$5="";print $0}' dispatch2012.csv > dispatch2012_2.csv

### Inserting before/after a matching line

i inserts before the line matching Regex, a inserts after it:
sed -i 'N;/Regex/i\text to insert' file
sed -i 'N;/Regex/a\text to insert' file

For example:
JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"

# JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=<public name>"

To insert the RMI line after the preferIPv4Stack line:
sed -i 'N;/preferIPv4Stack/a\JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.168.6.52"' apache-cassandra-2.2.6/conf/cassandra-env.sh

Result:
JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"

JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.168.6.52"
# JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=<public name>"

### Extracting a time range from a log
end=`date +"%Y-%m-%d %H"`
start=`date -d "-1 hour" +"%Y-%m-%d %H"`
gcCount=`sed -n "/$start:[0-9][0-9]:[0-9][0-9]/,/$end:[0-9][0-9]:[0-9][0-9]/p" /usr/install/cassandra/logs/system.log | grep 'GC in [0-9][0-9][0-9][0-9][0-9]' | wc -l`
echo $start
echo $end
echo $gcCount
if [ $gcCount -gt 0 ]; then
    : # send an alert here
fi

Example:
sed -n '/2016-06-30 18:[0-9][0-9]:[0-9][0-9]/,/2016-06-30 19:[0-9][0-9]:[0-9][0-9]/p' /usr/install/cassandra/logs/system.log | grep 'GC in [0-9][0-9][0-9][0-9][0-9]' | wc -l

### Computing maxima
2  url1
8 url3
2 url2
3 url1
4 url3

awk '{max[$2]=max[$2]>$1?max[$2]:$1;number[$2]++;sum[$2]+=$1}
END{for (i in max) print max[i], sum[i]/number[i],number[i],i}' OFS="\t" url.txt

awk '{max[$2]=max[$2]>$1?max[$2]:$1;}END{for (i in max) print max[i],i}' OFS="\t" url.txt
8 url3
3 url1
2 url2

awk '{max[$1]=max[$1]>$2?max[$1]:$2;}END{for (i in max) print i,max[i]}' OFS="\t" url.txt
#$1 is the key, max[$1] holds the value; e.g. when $1 is url1, $2 is 2
url1 2
url3 8
url2 2
url1 3
url3 4

awk 'BEGIN {max = 0} {if ($1+0 > max+0) max=$1} END {print "Max=", max}' data

awk -F ',' 'BEGIN{sum=0}{sum+=$1;} END {print "sum="sum}'

awk 'BEGIN{sum=0}{sum+=$1;} END {print "sum="sum}'

### Substitution

Blog images stopped displaying, so a prefix is added to every link by substitution. Note: on Mac, -i must be followed by "".
Also note: the URL contains //, so the s/http:// form cannot be used; use # as the delimiter:
URL="http://img.blog.csdn.net"
URL2="https://images.weserv.nl/?url=http://img.blog.csdn.net"
sed -i "" "s#${URL}#$URL2#g" *

## 10. Scripts

### A program launcher script:
arg=$1
cmd=''

if [ $arg == 'idea' ]; then
cmd='/home/hadoop/tool/idea-IU-139.225.3/bin/idea.sh'
fi

echo "Launch command: $cmd"
nohup $cmd &

### Passwordless login: expect

http://wgkgood.blog.51cto.com/1192594/1271543
http://blog.51yip.com/linux/1462.html
http://bbs.chinaunix.net/thread-915007-1-1.html
http://os.51cto.com/art/200912/167898.htm

The general form is ssh [-p port] user@host, which normally prompts for a password. With expect the interaction can be scripted so login happens straight from the command line:
#!/usr/bin/expect

set timeout 30
spawn ssh [lindex $argv 0]@[lindex $argv 1]
expect {
"(yes/no)?"
{send "yes\n";exp_continue}
"password:"
{send "[lindex $argv 2]\n"}
}
interact

Usage: pass the remote username, IP address, password and optionally the port: ./login.exp USERNAME HOST PASS [PORT]
#expect
#!/usr/bin/expect -f

spawn ssh log@xxxx
expect "*assword:*"

send "111111\r"
expect "*$*"

interact

### Passwordless login: ssh

Run ssh-keygen on the local machine and append the public key to ~/.ssh/authorized_keys of the target user on the target machine.
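ssh-copy-id, shipped with OpenSSH on most distributions, automates that copy; a minimal sketch (TARGET_HOST is a placeholder):

```
ssh-keygen -t rsa
ssh-copy-id qihuang.zheng@TARGET_HOST
```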

➜  ~  ssh qihuang.zheng@JUMP_HOST

Typing user@host every time gets old; it can be stored in a file:
➜ ~ cat jump
qihuang.zheng@JUMP_HOST
➜ ~ ssh `cat jump`

Or put ssh qihuang.zheng@JUMP_HOST in a script on the PATH, so running the script is all it takes: sshjump
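An ~/.ssh/config entry achieves the same with standard OpenSSH client configuration, no wrapper script needed; the alias name is arbitrary:

```
# ~/.ssh/config
Host jump
    HostName JUMP_HOST
    User qihuang.zheng
```

After which ssh jump suffices.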

### Reaching remote machines through a jump host: port forwarding

👉 Map the remote ssh port 22 to a chosen local port: ssh -f qihuang.zheng@JUMP_HOST -L 127.0.0.1:2207:192.168.47.207:22 -N
Breaking the command down:
JUMP_HOST         👉 the jump host
192.168.47.207:22 👉 the remote machine's default ssh port
127.0.0.1:2207    👉 the local port it is mapped to

👉 The remote server's ssh port is now mapped to local port 2207, so: ssh -p 2207 qihuang.zheng@localhost
👉 Or log in automatically via the expect script: login.exp qihuang.zheng localhost $pass $port

Port-forwarding script:
#!/usr/bin/expect
set timeout 30
spawn ssh -f qihuang.zheng@JUMP_HOST -L 127.0.0.1:[lindex $argv 1]:192.168.47.[lindex $argv 0]:[lindex $argv 1] -N
expect {
"password:"
{send "YOURPASSWORD@\n"}
}
interact

Usage: map.exp 222 8888 (forwards local port 8888 to 192.168.47.222:8888)

### Converting time formats on the command line (Mac)
# current timestamp
➜ ~ date +%s
1448811002

# format the current time
➜ date "+%Y%m%d%H%M%S"
20150717085930
➜ date "+%Y-%m-%d %H:%M:%S"
2015-07-17 09:01:32

# timestamp of a given time
➜ ~ date -j -f "%Y-%m-%d %H:%M:%S" "2015-10-21 18:03:00" "+%s"
1445421780

# convert a timestamp to human-readable form
➜ ~ date -r 1445421780
2015年10月21日 星期三 18时03分00秒 CST

The above is interactive shell usage; in a bash script, capture the output with backticks:
➜ echo `date "+%Y-%m-%d %H:%M:%S"`
2015-07-17 09:04:23
➜ currentDate=`date "+%Y-%m-%d %H:%M:%S"`
➜ echo $currentDate
2015-07-17 09:05:38

### Regular expressions

Matching the start of a log line:
(\d{4})-(0\d{1}|1[0-2])-(\d{2}) (\d{2}):(\d{2}):(\d{2}):(\d{3}) (\[main\])
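To apply this pattern on the command line, GNU grep's -P enables the Perl-compatible \d classes it uses; app.log and the ^ anchor are illustrative additions:

```
grep -P '^(\d{4})-(0\d{1}|1[0-2])-(\d{2}) (\d{2}):(\d{2}):(\d{2}):(\d{3}) (\[main\])' app.log
```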

## 11. Shell

### Basics

if conditions (a sketch below):
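A minimal sketch of the common test forms (string comparison, file test, numeric comparison); the branch bodies are placeholders:

```
if [ "$1" == "start" ]; then
    echo "starting"
elif [ -f /etc/redhat-release ] && [ "$(date +%H)" -gt 12 ]; then
    echo "afternoon on a RHEL-family box"
else
    echo "other"
fi
```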

Loops:
for i in `seq 0 10`; do echo $i; nodetool cfstats md5s.md5_id_$i | grep memory ;done

nums=('0' '1' '2' '3' '4' '5' '6' '7' '8' '9' 'a' 'b' 'c' 'd' 'e' 'f')
for n1 in ${nums[@]};
do
echo $n1
done
echo "end."

### Dates and times

Mac:
date +%s
date -r 1471190400
date -j -f "%Y-%m-%d %H:%M:%S" "2017-6-30 12:00:00" "+%s"

Linux:
date +%s
1471254010

date -d '@1471254010'
Mon Aug 15 17:40:10 CST 2016

date -d '2016-08-15' +%s
1471190400

date -d '@1471190400'
Mon Aug 15 00:00:00 CST 2016

date -d '2016-08-15 13:20:24' +%s
date -d '@1471238424'
Mon Aug 15 13:20:24 CST 2016

date -d 'yesterday'
Sun Aug 14 13:20:24 CST 2016

date -d 'yesterday' +"%Y-%-m-%-d"
beg=`date -d '2017-05-25 00:00:00' +%s`
end=`date -d '2017-05-25 01:10:00' +%s`
# pass the shell variables in with -v; they are not visible inside single-quoted awk programs
cassandra/bin/nodetool compactionhistory |grep model_result| sort -rk 4 |awk -v beg=$beg -v end=$end '{if($4/1000>=beg && $4/1000<=end) print $0}'

cat compaction_history.log|grep model_result| sort -rk 4 |awk -v beg=$beg -v end=$end '{if($4/1000>=beg && $4/1000<=end) print $0}'

cat compaction_history.log|head -50 |tail -30| awk '{\
$4=strftime("%Y-%m-%d %H:%M:%S", substr($4,0,10));\
$5=$5/1000/1000; $6=$6/1000/1000;\
}1' | cut -d" " -f4-

http://blog.csdn.net/jk110333/article/details/8590746

#sh range.sh 20160401 20160405

#datebeg="20160401"
#dateend="20160405"
datebeg=$1
dateend=$2
beg_s=`date -d "$datebeg" +%s`
end_s=`date -d "$dateend" +%s`

excludes="20150101 20150102"

while [ "$beg_s" -le "$end_s" ];do
day=`date -d @$beg_s +"%Y%m%d"`;
beg_s=$((beg_s+86400));
flag=false
for item in $excludes
do
if [ "$day" == "$item" ]; then
echo "$day In the list, skip"
flag=true
fi
done

if [ $flag == false ]; then
echo $day

#the job itself only processes a single day; the script drives it across multiple days
/usr/install/spark-1.6.1-bin-hadoop2.4/bin/spark-submit \
--conf spark.mesos.role=production --conf spark.cores.max=30 --conf spark.executor.memory=10g --conf spark.ui.port=4888 \
--conf spark.cassandra.connection.host=192.168.48.163 \
--class cn.fraudmetrix.vulcan.velocity.VelocityRuleApp \
spark-cassandra-1.0.1-SNAPSHOT-jar-with-dependencies.jar cass $day > VelocityRuleApp_$day.log
fi
done

# Developer

Check whether a jar contains a file: jar -tvf abc.jar | grep FileName

Run a jar: java -cp xxx-dependency.jar MainClass args
This works when it is a fat jar.

If it is not a fat jar and depends on third-party jars: java -cp third.jar:Run.jar MainClass

CLASSPATH=$(echo /xxx/libs/*.jar | tr ' ' ':')

jmap -dump:live,format=b,file=.hprof

jstat -gc

Remote debugging:

  1. Add to the launch script: JAVA_OPTS="$JAVA_OPTS -Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=8000,suspend=n"
  2. Start the remote server
  3. In IDEA create a Remote run configuration with the server's host and port
  4. Set breakpoints in IDEA
  5. Open the page in a browser
  6. The breakpoint is hit

mat on the command line: http://www.techpaste.com/2015/07/how-to-analyse-large-heap-dumps/


MySQL privileges:
CREATE DATABASE `pontus` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
create user 'pontus'@'%' identified by 'pontus';
grant all on pontus.* to 'pontus'@'%' identified by 'pontus';
grant all on pontus.* to 'pontus'@'dp0653' identified by 'pontus';
flush privileges;

delete from user where user='pontus' and host='dp0653'
select user,password,host from user where user='pontus';

## Env
export JAVA_HOME=$(/usr/libexec/java_home)
export PATH=$JAVA_HOME/bin:$PATH
export CLASS_PATH=$JAVA_HOME/lib

Installing multiple Java versions on a Mac:

brew tap caskroom/versions
brew cask install java6
/usr/libexec/java_home -V

## Build

### maven: ~/.m2/settings.xml
<mirrors>
<mirror>
<id>alimaven</id>
<name>aliyun maven</name>
<url>http://maven.aliyun.com/nexus/content/groups/public/</url>
<mirrorOf>central</mirrorOf>
</mirror>
</mirrors>

### sbt: ~/.sbt/repositories
[repositories]
local
local-maven: file:///Users/zhengqh/.m2/repository/
maven-releases: http://maven.fraudmetrix.cn/nexus/content/groups/public/
ali-maven: http://maven.aliyun.com/nexus/content/groups/public/
#repox-maven: http://192.168.6.53:8078/
#repox-ivy: http://192.168.6.53:8078/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
#osc: http://maven.oschina.net/content/groups/public/
#oschina-ivy: http://maven.oschina.net/content/groups/public/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
typesafe: http://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext], bootOnly
sonatype-oss-releases
maven-central
sonatype-oss-snapshots

### gradle project: build.gradle
buildscript {
repositories {
mavenLocal()
maven { url 'http://maven.aliyun.com/nexus/content/groups/public/' }
jcenter()
mavenCentral()
}
dependencies {
}
}

allprojects {
repositories {
mavenLocal()
maven { url 'http://maven.aliyun.com/nexus/content/groups/public/' }
jcenter()
mavenCentral()
}
}

https://my.oschina.net/abcfy2/blog/783743
https://yrom.net/blog/2015/02/07/change-gradle-maven-repo-url/

### gradle global: ~/.gradle/init.gradle
allprojects{
repositories {
def REPOSITORY_URL = 'http://maven.oschina.net/content/groups/public'
all { ArtifactRepository repo ->
if(repo instanceof MavenArtifactRepository){
def url = repo.url.toString()
if (url.startsWith('https://repo1.maven.org/maven2') || url.startsWith('https://jcenter.bintray.com/')) {
project.logger.lifecycle "Repository ${repo.url} replaced by $REPOSITORY_URL."
remove repo
}
}
}
maven {
url REPOSITORY_URL
}
}
}

## Keyboard shortcuts

### Mac

👉 Keys

Shift - ⬆️      (why up: Shift sits above the Ctrl, Alt and CMD keys)

👉 Trackpad
Right-click: two-finger tap
Three-finger drag: move the current app's window
Four-finger swipe: switch between full-screen apps
Two-finger swipe: page through Launchpad

👉 Arrows
FN+⬅️ start of line (Home) and end of line (End)
FN+⬆️ page up (Up) and page down (Down)
ALT+⬅️ move word by word

Terminal
CMD+⬅️: switch tabs

### IDEA

Go to Definition:       Alt+Click(Mac)/Ctrl+Click(Linux)
Go To Implementation: Alt+CMD+B
Go Back/Forword: Alt+CMD+左右

Type Hierarchy类型树: Ctl+H
Call Hierarchy调用栈: Ctl+Alt+H
全局搜索字符串: Ctrl+Shift+F
CMD+4 Run Window

### Install

#### GO sublime

  • GOROOT: the golang installation directory
  • GOPATH: the workspace for golang applications

After installing go, GOROOT is set automatically, but GOPATH must be added to .bashrc yourself:
export GOPATH=$HOME/go
export GOBIN=$GOPATH/bin
export PATH=$PATH:$GOBIN

go env shows the environment:
GOARCH="amd64"
GOBIN="/Users/zhengqh/go/bin"
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOOS="darwin"
GOPATH="/Users/zhengqh/go"
GORACE=""
GOROOT="/usr/local/go"
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"

go installs to /usr/local/go by default; $GOROOT/bin can also be added to $PATH:
export GOROOT=/usr/local/go
export PATH=$PATH:$GOBIN:$GOROOT/bin

Check the go version:
➜  go version
go version go1.8.3 darwin/amd64
➜ which go
/usr/local/go/bin/go

Install GoSublime in Sublime, then set its environment variables:

Preferences -> package settings -> GoSublime -> Settings - Uesrs

{
"env": {
"GOPATH": "/Users/zhengqh/go",
"GOROOT": "/usr/local/go"
}
}

The default GoSublime build:
{
"target": "gs9o_build",
"selector": "source.go"
}

After CMD+b a terminal pops up at the bottom, where you still have to type go run xxx.go yourself:
[ ~/go/src/test/ ] # type: go run main.go
[ `go run main.go` | done: 902.434403ms ]
GO GO GO!!!
....

But I actually want the build to compile and run immediately.

First attempt

Via Tools > Build System > New Build System:

{
  "cmd": ["go", "run", "$file_name"],
"file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
"working_dir": "$file_path",
"selector": "source.go"
}

After CMD+b no terminal appears at all, and it reports No Build System.

Second attempt, New Build System:
{
"shell_cmd": "go run $file"
}

Now CMD+b compiles and runs:
GO GO GO!!!
....
[Finished in 1.0s]

Go code does not have to live under GOPATH. Below, a go file is created under kafka-book's clients directory.
CMD+B spawns a producer process, but CTRL+C only reports CANCEL and does not actually kill it:
➜  test ps -ef|grep producer
501 24018 14631 0 4:22PM ?? 0:00.31 go run /Users/zhengqh/Github/kafka-book/clients/src/main/go/producer.go
➜ test ps -ef|grep producer
501 24049 1 0 4:22PM ?? 0:00.01 /var/folders/xc/x0b8crk9667ddh1zhfs29_zr0000gn/T/go-build870380723/command-line-arguments/_obj/exe/producer

#### go imports

Without importing the fmt package, the build fails:
# command-line-arguments
./main.go:4: undefined: fmt in fmt.Println
./main.go:5: undefined: fmt in fmt.Println
[Finished in 0.5s with exit code 2]
[shell_cmd: go run /Users/zhengqh/go/src/test/main.go]
[dir: /Users/zhengqh/go/src/test]
[path: /usr/bin:/bin:/usr/sbin:/sbin]

goimports can add missing imports automatically on save:
➜  ~ go get golang.org/x/tools/cmd/goimports
package golang.org/x/tools/cmd/goimports: unrecognized import path "golang.org/x/tools/cmd/goimports"
(https fetch: Get https://golang.org/x/tools/cmd/goimports?go-get=1: dial tcp 220.255.2.153:443: i/o timeout)

### Sublime

http://www.jianshu.com/p/3cb5c6f2421c

| Key | Rating | Description |
| --- | --- | --- |
| Shift+CMD+P, then type ip | | install Package Control |
| Alt+CMD+O | | preview Markdown (OmniMarkupPreview) in the local browser |
| CMD+R | | Markdown heading list, or a source file's function list |
| CMD+P | ⭐️⭐️⭐️⭐️⭐️ | search files across the whole workspace |
| CMD+P, @ | ⭐️⭐️⭐️ | Markdown outline |
| Alt+CMD+⬇ | | go to definition |
| CMD+Shift+T | | reopen the last closed file |

GoBuild: CMD+b

Customizing Sublime settings (Markdown GFM as an example):
Preferences > Package Settings > Markdown Edit > Markdown GFM Settings - User

{
"word_wrap": false,
"wrap_width": 150,
"line_numbers": true,
"highlight_line": true
}

The effect: md previews use the full width instead of a narrow column, and long lines no longer wrap.

Showing multiple folders in one window:
after opening a folder, Project > Add Folder to Project > Save Project As .. > save locally.
Next time, opening the saved project restores all the folders at once.

## Proxies

http version:

export http_proxy=http://192.168.6.32:1080 
export https_proxy=$http_proxy
export ftp_proxy=$http_proxy
export rsync_proxy=$http_proxy
export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com,*.tongdun.cn"

git config --global https.proxy http://127.0.0.1:8787
git config --global https.proxy https://127.0.0.1:8787

git config --global --unset http.proxy
git config --global --unset https.proxy

git config http.proxy

socks5 version:
export ALL_PROXY="socks5://192.168.6.32:1080"

git config --global http.proxy socks5://192.168.6.32:1080
git config --global https.proxy socks5://192.168.6.32:1080

git config --global http.proxy 'socks5://192.168.6.32:1080'
git config --global https.proxy 'socks5://192.168.6.32:1080'

# Git

## QuickStart

| Action | Command |
| --- | --- |
| Create and switch to a dev branch off master | git checkout -b dev |
| Equivalent: create the branch, then switch | git branch dev && git checkout dev |
| Commit everything on the branch as usual | git add . && git commit -m 'commit log' |
| Switch back to master | git checkout master |
| Merge dev into the current branch | git merge dev |
| Delete the dev branch | git branch -d dev |
| Push the dev branch | git push origin dev |
| Clone a specific branch | git clone -b develop git@github.com:user/myproject.git |

## Branches

Create a branch off master, modify files, stage, commit, and push the branch to the remote.

1. Create (-b) and switch (checkout) to a new branch off the main branch.
A base branch name may be appended to say which branch to fork from;
here we are on master, so the new branch is created from master (by default the new branch is based on the current one).
➜  test git:(master) git checkout -b branch001          # create and switch
Switched to a new branch 'branch001'

2. Modifying and committing on a branch works exactly as usual:
➜  test git:(branch001) vi README.md                    # edit
➜ test git:(branch001) ✗ git add .                      # stage
➜ test git:(branch001) ✗ git commit -m 'branch'         # commit
[branch001 b078a04] branch
1 file changed, 1 insertion(+)

3. Push the branch to the remote repository.
On the new branch, git push origin <branch> pushes it to the matching remote branch.
origin is the alias of the remote repository; git push origin master pushes master to the remote master.
Note: do not run git push origin branch001 while on master; that would push master onto the remote branch001.
In other words, what gets pushed is the branch you are currently on, while the name after origin designates the remote branch.
➜  test git:(branch001) git push origin branch001      # push to the remote
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 287 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://github.com/zqhxuyuan/test.git
* [new branch] branch001 -> branch001

4. Files changed on the branch are unchanged back on master, i.e. branch changes are invisible to the main branch:
➜  test git:(branch001) cat README.md                   # contents on the branch
# test
branch...

➜ test git:(branch001) git checkout master              # back to master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.

➜ test git:(master) cat README.md                       # the branch's change is not visible
# test

To fold the branch's files into master, run the merge on master: git merge branch001

## git flow
master>>  git branch develop                            # create a develop branch on master
master>>  git push -u origin develop                    # push develop; it is still identical to master, nothing changed yet

git clone ssh://user@host/path/to/repo.git              # clone; since develop was pushed, it is fetched too
master>>  git checkout -b develop origin/develop        # track the remote develop branch and switch to it (the branch exists, so -b does not re-create it)
develop>> git checkout -b some-feature develop          # create a feature branch off develop and switch to it
feature>> git status
feature>> git add .
feature>> git commit -m 'feature'
feature>> git pull origin develop                       # pull the latest develop
feature>> git checkout develop                          # switch to develop
develop>> git merge some-feature                        # merge the feature branch into develop
develop>> git push                                      # push local develop to the remote develop; thanks to -u above, origin develop need not be spelled out

develop>> git checkout -b release-0.1 develop           # create a release branch off the latest develop
release>> ....                                          # work on the release branch like any other branch
release>> git checkout master                           # switch to master
master>>  git merge release-0.1                         # merge the release branch into local master
master>>  git push                                      # push local master to the remote master
master>>  git checkout develop                          # switch to develop
develop>> git merge release-0.1                         # merge the release branch into develop as well
develop>> git push                                      # push to the remote develop
develop>> git branch -d release-0.1                     # delete the release branch

master>>  git checkout -b issue-#001 master             # create and switch to a hotfix branch off master
hotfix>>  # Fix the bug                                 # work on the hotfix branch as usual
hotfix>>  git checkout master                           # switch back to master
master>>  git merge issue-#001                          # merge the hotfix into local master
master>>  git push                                      # push to the remote master

master>>  git checkout develop                          # the hotfix must land on both master and develop
develop>> git merge issue-#001
develop>> git push
develop>> git branch -d issue-#001

-u makes the local branch track the corresponding remote branch; once set, git push works without naming the branch.

## Resolving conflicts

Scenario: the local pom.xml was modified and diverges from the remote pom.xml.

1. Pulling points out local changes in pom.xml; pushing directly is rejected:

➜  td-offline git:(master) ✗ git pull
remote: Counting objects: 14, done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 14 (delta 5), reused 0 (delta 0)
Unpacking objects: 100% (14/14), done.
From gitlab.fraudmetrix.cn:dp/td-offline
9061c47..ce0b695 master -> origin/master
Updating 9061c47..ce0b695
error: Your local changes to the following files would be overwritten by merge:
pom.xml
Please, commit your changes or stash them before you can merge.
Aborting

➜ td-offline git:(master) ✗ git add .
➜ td-offline git:(master) ✗ git commit -m 'split module out'

➜ td-offline git:(master) git push origin master
To git@gitlab.fraudmetrix.cn:dp/td-offline.git
! [rejected] master -> master (non-fast-forward)
error: failed to push some refs to 'git@gitlab.fraudmetrix.cn:dp/td-offline.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

2. Pull again: git tries to merge the remote pom.xml with the locally modified one:
➜  td-offline git:(master) git pull
Auto-merging pom.xml
CONFLICT (content): Merge conflict in pom.xml
Automatic merge failed; fix conflicts and then commit the result.

The merge failed; the conflicts have to be fixed by hand, then committed.

3. Edit the conflicting file (usually removing the auto-inserted conflict markers), then commit:
➜  td-offline git:(master) ✗ vi pom.xml
➜ td-offline git:(master) ✗ git add .
➜ td-offline git:(master) ✗ git commit -m 'split module out'
[master 6f86dcd] split module out

4. Finally, the push to the remote branch succeeds:
➜  td-offline git:(master) git push origin master
Counting objects: 67, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (35/35), done.
Writing objects: 100% (67/67), 47.34 KiB | 0 bytes/s, done.
Total 67 (delta 9), reused 0 (delta 0)
To git@gitlab.fraudmetrix.cn:dp/td-offline.git
ce0b695..6f86dcd master -> master

5. Pulling again shows the code is now up to date:
➜  td-offline git:(master) git pull
Already up-to-date.

## Ignoring files

Files that were already committed can be removed from the index:
➜  riemann-cassandra git:(master) ✗ git commit -m 'update rieman and cassandra version'
[master 460d78c] update rieman and cassandra version
4 files changed, 118 insertions(+), 22 deletions(-)
create mode 100644 .gitignore
create mode 100644 riemann-cassandra.iml

➜ riemann-cassandra git:(master) git rm --cached riemann-cassandra.iml
rm 'riemann-cassandra.iml'

➜ riemann-cassandra git:(master) ✗ git status
On branch master
Your branch is ahead of 'origin/master' by 1 commit.
(use "git push" to publish your local commits)
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)

deleted: riemann-cassandra.iml

➜ riemann-cassandra git:(master) ✗ git commit -m 'update rieman and cassandra version'
[master b56cd65] update rieman and cassandra version
1 file changed, 58 deletions(-)
delete mode 100644 riemann-cassandra.iml

Check out a tag: git checkout tag_name

## Moving changes onto a new branch

Use stash to save the work in progress, then restore it on a new branch:
git status
git add .
git stash
git log
git status
git checkout -b tt
git status
git stash pop
git status
git add .
git status
git commit -m "add ijk jar"
git push
git push origin tt

After changes on the feature/redis (master) branch that should not go to the remote feature/redis (master), create a new branch feature/login to hold the code:
git add .
git commit -m ''
git push origin master:feature/batchDelete
git checkout -b feature/batchDelete origin/feature/batchDelete
git log
git reset --hard commit-id
Then roll back with git reset: resetting --hard to commit-id discards every commit made after it.

git reset --hard fb906a9f58b420221b5014a0745fb689c59faae1

## Rolling back a branch

git reset --hard xxx
git reset --soft origin/master
git commit -am "revert to master"

## Other git commands:

git pull over a whole tree of repositories: find . -name .git -print -execdir git pull \;

Keeping a local GitHub fork in sync with the upstream clone:

git fork
git remote add kafka https://github.com/apache/kafka.git
git pull kafka trunk        # pull the latest upstream code
git push origin trunk       # push it to your own fork

example: http://blog.xiayf.cn/2016/01/18/github-fork-pull-request/
Add a new remote named beego to the local repo: git remote add beego https://github.com/astaxie/beego.git
On the develop branch, run git pull beego develop: this fetches the latest state of astaxie/beego develop and merges it into the local develop branch
Push the local develop branch to youngsterxyf/beego: git push origin develop

## Push rejected because the branch is behind

https://www.cnblogs.com/daemon369/p/3204646.html

➜  msgconsole git:(msgconsole-2.4.3/msgconsole-2.4.3) git push origin msgconsole-2.4.3/msgconsole-2.4.3
To http://gitlab.alipay-inc.com/antcloud-middleware/msgconsole.git
! [rejected] msgconsole-2.4.3/msgconsole-2.4.3 -> msgconsole-2.4.3/msgconsole-2.4.3 (non-fast-forward)
error: failed to push some refs to 'http://gitlab.alipay-inc.com/antcloud-middleware/msgconsole.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
➜ msgconsole git:(msgconsole-2.4.3/msgconsole-2.4.3) git pull
There is no tracking information for the current branch.
Merge branch 'msgconsole-2.4.3/msgconsole-2.4.3' of http://gitlab.alipay-inc.com/antcloud-middleware/msgconsole into msgconsole-2.4.3/msgconsole-2.4.3
Please specify which branch you want to merge with.
See git-pull(1) for details.

git pull <remote> <branch>

If you wish to set tracking information for this branch you can do so with:

git branch --set-upstream-to=origin/<branch> msgconsole-2.4.3/msgconsole-2.4.3

Fix:

msgconsole git:(msgconsole-2.4.3/msgconsole-2.4.3) git pull origin msgconsole-2.4.3/msgconsole-2.4.3
From http://gitlab.alipay-inc.com/antcloud-middleware/msgconsole
* branch msgconsole-2.4.3/msgconsole-2.4.3 -> FETCH_HEAD
Merge made by the 'recursive' strategy.
frontend/config/config.js | 1 +
frontend/src/index.less | 9 +++++++++
+--------------------
10 files changed, 275 insertions(+), 64 deletions(-)

➜ msgconsole git:(msgconsole-2.4.3/msgconsole-2.4.3) git push origin msgconsole-2.4.3/msgconsole-2.4.3
Enumerating objects: 4, done.
Counting objects: 100% (4/4), done.
Delta compression using up to 8 threads
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 356 bytes | 356.00 KiB/s, done.
Total 2 (delta 1), reused 0 (delta 0)
To http://gitlab.alipay-inc.com/antcloud-middleware/msgconsole.git
f14ff250..0a5bb185 msgconsole-2.4.3/msgconsole-2.4.3 -> msgconsole-2.4.3/msgconsole-2.4.3

# Docker

Batch-clean docker images: docker images | grep weeks | grep acs | awk '{print $3}' | xargs docker rmi

docker mysql5:

docker run -it --name mysql5 -p 3305:3306 -v /Users/zqh/docker/mysql5/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root -d mysql:5.7
docker exec -it mysql5 sh
mysql -uroot -proot
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'root';
flush privileges;
exit
mysql -h 127.0.0.1 -P 3305 -u root -p    # access from the host machine

# import tables
mysql -h 127.0.0.1 -p confdb -P 3305 -u root -proot < ~/code/msgconsole/init/dmsconsole.sql

mysql8:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'root' PASSWORD EXPIRE NEVER;
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'root';
flush privileges;
create user root@'%' identified WITH mysql_native_password BY 'root';
grant all privileges on *.* to root@'%' with grant option;
flush privileges;

docker mysql8:

#docker run -it --rm --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=debezium -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw debezium/example-mysql:0.9
docker run -it --name mysql8 -p 3306:3306 -v /Users/zqh/docker/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root -d mysql:latest

docker exec -it mysql8 sh
mysql -uroot -proot
grant all PRIVILEGES on *.* to root@'%' WITH GRANT OPTION;
GRANT ALL ON *.* TO 'root'@'%';
flush privileges;
ALTER user 'root'@'%' IDENTIFIED BY 'root' PASSWORD EXPIRE NEVER;
ALTER user 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'root';
FLUSH PRIVILEGES;

docker mysql client:

docker run -it --rm --name mysqlterm --link mysql --rm mysql:5.7 sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'

multi mysql:

mysql-5.7.25-macos10.14-x86_64/scripts/mysql_install_db --basedir=/Users/zqh/soft/mysql/mysql-5.7.25-macos10.14-x86_64 --datadir=/Users/zqh/soft/mysql/data/5.7.25
mysql-8.0.15-macos10.14-x86_64/scripts/mysql_install_db --basedir=/Users/zqh/soft/mysql/mysql-8.0.15-macos10.14-x86_64 --datadir=/Users/zqh/soft/mysql/data/8.0.15

/etc/mysqld_multi.cnf
[mysqld_multi]
mysqld = /Users/zqh/soft/mysql/mysql-5.7.25-macos10.14-x86_64/bin/mysqld_safe
mysqladmin = /Users/zqh/soft/mysql/mysql-5.7.25-macos10.14-x86_64/bin/mysqladmin
user = root
password = root

[mysqld1]
socket = /Users/zqh/soft/mysql/mysql-5.7.25-macos10.14-x86_64/mysql1.sock
port = 3306
pid-file = /Users/zqh/soft/mysql/mysql-5.7.25-macos10.14-x86_64/mysql1.pid
datadir = /Users/zqh/soft/mysql/mysql-5.7.25-macos10.14-x86_64

export PATH="/Users/zqh/soft/mysql/mysql-5.7.25-macos10.14-x86_64/bin:$PATH"
alias sta-5710="sudo mysqld_multi start 5710 && sleep 2 && ps -ef|grep mysql"
alias sto-5710="ps -ef|grep mysql_5710|grep -v grep|awk '{print \$2}'|xargs sudo kill -9"
