![](https://pic1.zhimg.com/80/v2-2581c01223c5f9680b7bd6fbe4f597a8_720w.webp)
Many thanks to all the TiDBers who shared their commonly used scripts in the earlier 【TiDBer Chit-Chat Tea Party 48】 informal TiDB SQL script collection contest! ( [https://asktug.com/t/topic/996635](https://link.zhihu.com/?target=https%3A//asktug.com/t/topic/996635) )
In this article we have collected the TiDB-related SQL scripts contributed by community members, in the hope that they provide some help and reference while you work with TiDB. The scripts cover SQL operations for common scenarios, and all TiDBers are welcome to keep adding to and updating them.
Going forward, we will put together more practical TiDB guides to help everyone understand and use TiDB better. Stay tuned!
## **1 Cached Tables**
Contributor: @ShawnYan
```text
alter table xxx cache|nocache;
```
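Cached tables are meant for small, rarely-updated tables that are read very frequently. A minimal usage sketch, with a hypothetical small dictionary table named config_dict:
```text
-- Load the table into TiDB's in-memory cache (suitable only for small, mostly-read tables)
ALTER TABLE config_dict CACHE;

-- Turn the cache off again before running DDL or heavy writes on the table
ALTER TABLE config_dict NOCACHE;
```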
## **2 TSO Timestamp Conversion**
Contributor: @我是咖啡哥
**●** Method 1: use the TIDB_PARSE_TSO function
```text
SELECT TIDB_PARSE_TSO(437447897305317376);
+------------------------------------+
| TIDB_PARSE_TSO(437447897305317376) |
+------------------------------------+
| 2022-11-18 08:28:17.704000 |
+------------------------------------+
1 row in set (0.25 sec)
```
**●** Method 2: use pd-ctl
```text
~$ tiup ctl:v6.4.0 pd -i -u http://pdip:2379
Starting component `ctl`: /Users/xxx/.tiup/components/ctl/v6.4.0/ctl pd -i -u http://pdip:2379
» tso 437447897305317376
system: 2022-11-18 08:28:17.704 +0800 CST
logic: 0
```
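The conversion can also be done by hand in the other direction: a TSO packs the physical time (milliseconds since the Unix epoch) into its high bits and an 18-bit logical counter into the low bits. A sketch of the reverse conversion, assuming the session time zone is the +08:00 shown above and the logical part is 0:
```text
-- Rebuild the TSO from a wall-clock time: milliseconds since epoch, shifted left by 18 bits
SELECT CAST(UNIX_TIMESTAMP('2022-11-18 08:28:17.704') * 1000 AS UNSIGNED) << 18;
-- returns 437447897305317376 when the session time zone is '+08:00'
```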
## **3 Reading Historical Data**
Contributor: @我是咖啡哥
**●** Use the AS OF TIMESTAMP syntax to read historical data. It can be used in the following three ways:
SELECT … FROM … AS OF TIMESTAMP
START TRANSACTION READ ONLY AS OF TIMESTAMP
SET TRANSACTION READ ONLY AS OF TIMESTAMP
```text
select * from t as of timestamp '2021-05-26 16:45:26';
start transaction read only as of timestamp '2021-05-26 16:45:26';
set transaction read only as of timestamp '2021-05-26 16:45:26';
```
**●** Read historical data via the system variable tidb_read_staleness
Pick as fresh a timestamp as possible within the range from 5 seconds ago up to now
```text
set @@tidb_read_staleness="-5";
```
**●** Read historical data via the system variable tidb_snapshot
Set this special session-scope variable; it means "read the latest version of the data as of that point in time"
```text
set @@tidb_snapshot="2016-10-08 16:45:26";
```
Clear the variable to read the latest data again
```text
set @@tidb_snapshot="";
```
## **4 Query the Default Values of tikv_gc_life_time and tikv_gc_safe_point**
Contributor: @TiDBer_m6V1BalM
```text
select VARIABLE_NAME, VARIABLE_VALUE from mysql.tidb where VARIABLE_NAME like "tikv_gc%";
```
## **5 Find the TopN Slow Queries of a Given User**
Contributor: @fanruinet
```text
select query_time,query,user
from information_schema.slow_query
where is_internal=false -- exclude TiDB's internal slow query SQL
and user = "user1" -- the user name to search for
order by query_time desc
limit 2;
```
## **6 Aggregate Data in 5-Minute Intervals**
Contributor: @forever
```text
SELECT concat(date_format(create_time,'%Y-%m-%d %H:'),floor(date_format(create_time,'%i')/5)),count(*)
FROM jcxx
GROUP BY 1;
```
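The expression above labels each 5-minute group, but the label is not itself a time value. If you prefer each bucket to come back as an aligned timestamp, an equivalent variant (assuming the same jcxx table and create_time column as above) is:
```text
-- Round create_time down to the nearest 300 seconds (5 minutes) and count rows per bucket
SELECT FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(create_time) / 300) * 300) AS bucket_5min,
       COUNT(*)
FROM jcxx
GROUP BY bucket_5min;
```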
## **7 Decode a digest Back into SQL Text**
Contributor: @hey-hoho
```text
select tidb_decode_sql_digests('["xxxxx"]');
```
## **8 View Table Usage When Partitioned Tables Are Not Involved**
Contributor: @xfworld
```text
select TABLE_SCHEMA,TABLE_NAME,TABLE_ROWS,
(DATA_LENGTH+INDEX_LENGTH)/1024/1024/1024 as table_size
from information_schema.tables order by table_size desc limit 20;
```
## **9 The PARTITIONS Table Shows Resource Usage for Both Partitioned and Non-Partitioned Tables**
Contributor: @xfworld
```text
select TABLE_SCHEMA,TABLE_NAME,PARTITION_NAME,TABLE_ROWS,
(DATA_LENGTH+INDEX_LENGTH)/1024/1024/1024 as table_size from
information_schema.PARTITIONS order by table_size desc limit 20;
```
## **10 View Configuration File Parameters from the SQL Console**
Contributor: @Kongdom
```text
show config
```
The SHOW CONFIG statement shows the configuration currently applied by each TiDB component. Note that configuration and system variables operate on different dimensions; do not confuse them. If you want system variable information, use the SHOW VARIABLES ( [https://docs.pingcap.com/zh/tidb/stable/sql-statement-show-variables](https://link.zhihu.com/?target=https%3A//docs.pingcap.com/zh/tidb/stable/sql-statement-show-variables) ) syntax.
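As a quick illustration of the difference (the filter values below are only examples):
```text
-- Configuration items currently applied by a component, e.g. TiDB's log-related settings
SHOW CONFIG WHERE type = 'tidb' AND name LIKE '%log%';

-- System variables live in a separate namespace and are read with SHOW VARIABLES
SHOW VARIABLES LIKE 'tidb_mem_quota%';
```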
## **11 Find the Top 10 Hot Regions by Read Traffic**
Contributor: @BraveChen
```text
SELECT DISTINCT region_id
FROM INFORMATION_SCHEMA.tikv_region_status
WHERE READ_BYTES > ?
ORDER BY READ_BYTES DESC
LIMIT 10
```
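Since INFORMATION_SCHEMA.TIKV_REGION_STATUS also records which database, table, and index each Region belongs to, a variant that drops the threshold placeholder and shows where the hot reads land can be useful:
```text
-- Top 10 Regions by read bytes, together with the owning database/table/index
SELECT region_id, db_name, table_name, index_name, read_bytes
FROM INFORMATION_SCHEMA.tikv_region_status
ORDER BY read_bytes DESC
LIMIT 10;
```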
## **12 Script for Viewing Parameters and Variables**
Contributor: @buddyyuan
```text
#!/bin/bash
case $1 in
-pd)
mysql -uroot -h127.0.0.1 -P4000 -p"" -e "SHOW CONFIG WHERE type ='pd' and name like '%$2%'"
;;
-tidb)
mysql -uroot -h127.0.0.1 -P4000 -p"" -e "SHOW CONFIG WHERE type ='tidb' and name like '%$2%'"
;;
-tikv)
mysql -uroot -h127.0.0.1 -P4000 -p"" -e "SHOW CONFIG WHERE type ='tikv' and name like '%$2%'"
;;
-tiflash)
mysql -uroot -h127.0.0.1 -P4000 -p"" -e "SHOW CONFIG WHERE type ='tiflash' and name like '%$2%'"
;;
-var)
mysql -uroot -h127.0.0.1 -P4000 -p"" -e "show variables like '%$2%';"
;;
-h)
echo "-pd show pd parameters"
echo "-tidb show tidb parameters"
echo "-tikv show tikv parameters"
echo "-tiflash show tiflash parameters"
echo "-var show tidb variables"
;;
;;
esac
```
You can also filter the output one more time with grep
```text
[root@vm172-16-201-125 ~]# sh showparammeter.sh -tikv memory-pool-quota | grep -i "210:29160"
tikv 192.16.201.210:29160 server.grpc-memory-pool-quota 9223372036854775807B
```
## **13 Find Duplicate Records**
Contributor: @ealam_小羽
```text
select *
from the_table
where dup_column in
(
select dup_column
from the_table
group by dup_column
having count(*)>1
)
```
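TiDB also supports window functions, so the same check can be written without the IN subquery. A sketch, assuming a hypothetical table t whose column c may contain duplicate values:
```text
-- Number the rows within each value of c; every row with rn > 1 is an extra copy
SELECT *
FROM (
  SELECT t.*, ROW_NUMBER() OVER (PARTITION BY c ORDER BY c) AS rn
  FROM t
) x
WHERE x.rn > 1;
```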
## **14 Find the Slow SQL with the Highest Query Time**
Contributor: @caiyfc
```text
select query sql_text,
sum_query_time,
mnt as executions,
avg_query_time,
avg_proc_time,
avg_wait_time,
max_query_time,
avg_backoff_time,
Cop_proc_addr,
digest,
(case
when avg_proc_time = 0 then
'point_get or commit'
when (avg_proc_time > avg_wait_time and
avg_proc_time > avg_backoff_time) then
'coprocessor_process'
when (avg_backoff_time > avg_wait_time and
avg_proc_time < avg_backoff_time) then
'backoff'
else
'coprocessor_wait'
end) as type
from (select substr(query, 1, 100) query,
count(*) mnt,
avg(query_time) avg_query_time,
avg(process_time) avg_proc_time,
avg(wait_time) avg_wait_time,
max(query_time) max_query_time,
sum(query_time) sum_query_time,
digest,
Cop_proc_addr,
avg(backoff_time) avg_backoff_time
from information_schema.cluster_slow_query
where time >= '2022-07-14 17:00:00'
and time <= '2022-07-15 17:10:00'
and DB = 'web'
group by substr(query, 1, 100)) t
order by max_query_time desc limit 20;
```
## **15 The SQL Used Most Often in Daily Maintenance**
Contributor: @tracy0984
```text
select * from information_schema.cluster_processlist;
-- kill id;
```
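A common follow-up is to pick out the long-running sessions first and then kill one of them by its ID. A sketch; the 60-second threshold and the connection ID are placeholders:
```text
-- Sessions that have been running a statement for more than 60 seconds
SELECT instance, id, user, db, time, LEFT(info, 100) AS sql_head
FROM information_schema.cluster_processlist
WHERE info IS NOT NULL AND time > 60;

-- Kill one of them by its connection ID (older versions need the KILL TIDB form)
KILL TIDB 123456789;
```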
## **16 Recover Data (Applicable to DROP and TRUNCATE)**
Contributor: @凌云 Cloud
```text
FLASHBACK TABLE target_table_name [TO new_table_name]
```
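FLASHBACK TABLE only works while the dropped or truncated data is still within the GC lifetime (see tikv_gc_life_time in section 4). A usage sketch with hypothetical table names:
```text
-- Bring a dropped table back under its original name
FLASHBACK TABLE orders;

-- Or restore it under a new name if an object with the old name already exists
FLASHBACK TABLE orders TO orders_restored;
```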
## **17 Batch-Rename Databases**
Contributor: @TiDBer_dog
```text
./bat_rename.sh lihongbao/ dev2_kelun dev2_sinodemo   # path: ./leo_backup
```
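The bat_rename.sh script itself is not included in the post. The usual idea in TiDB is to create the target database and move each table across with RENAME TABLE, which supports cross-database renames. A minimal SQL sketch using the database names from the command above; the table name t1 is a placeholder:
```text
-- Create the target database, then move the tables over one by one (repeat per table)
CREATE DATABASE IF NOT EXISTS dev2_sinodemo;
RENAME TABLE dev2_kelun.t1 TO dev2_sinodemo.t1;
```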
## **18 Capture SQL in High-Concurrency Scenarios**
Contributor: @jiawei
```text
select * from information_schema.processlist where info is not null
```
## **19 List the Tables in a Schema**
Contributor: @Ming
```text
show tables in schema;
```
## **20 View a Table's Leader Distribution**
Contributor: @TiDBer_wTKU9jv6
```text
select count(1), tss.ADDRESS
from INFORMATION_SCHEMA.TIKV_REGION_PEERS trp,
     INFORMATION_SCHEMA.TIKV_REGION_STATUS trs,
     INFORMATION_SCHEMA.TIKV_STORE_STATUS tss
where trp.STORE_ID=tss.STORE_ID
  and trp.REGION_ID=trs.REGION_ID
  and trs.DB_NAME='test'
  and trs.TABLE_NAME='test'
  and trp.IS_LEADER=1
group by tss.ADDRESS
order by tss.ADDRESS;
```
## **21 A Playful Set of Shell Alias Shortcuts**
Contributor: @gcworkerishungry
```text
alias ctidb="mysql -u root -ptidb -Dcktest -h S001 -P4000"
alias dtidb="tiup cluster display tidb-test"
alias etidb="tiup cluster edit-config tidb-test"
alias ptidb="tiup cluster prune tidb-test"
alias rtidb="tiup cluster restart tidb-test"
```
## **22 Restore Data to a New Database**
Contributor: @TiDBer_徐川
```text
./loader -h 192.168.180.3 -u root -p q1w2 -P 4000 -t 32 -d leo_backup/
```
## **23 Enable TiFlash**
Contributor: @TiDBer_pFFcXLgY
```text
alter table xxx set tiflash replica 1
```
## **24 Table Region Distribution Statement**
Contributor: @秋枫之舞
```text
select
trs.db_name,
trs.table_name,
trs.index_name,
trp.store_id,
count(*),
sum(approximate_keys)
from
information_schema.tikv_region_status trs,
information_schema.tikv_store_status tss,
information_schema.tikv_region_peers trp
where
trs.db_name = 'prd01'
and trs.table_name = 'tab_name'
and trp.is_leader = 1
and trp.store_id = tss.store_id
and trs.region_id = trp.region_id
group by
trs.db_name,
trs.table_name,
trs.index_name,
trp.store_id
order by
trs.index_name;
```
## **25 View Column Metadata**
Contributor: @张雨齐0720
```text
show stats_histograms where db_name like 'test' and table_name like 'test1';
```
## **26 Table Storage Locations (store and peer Information)**
Contributor: @bert
```text
SELECT distinct a.TIDB_TABLE_ID, b.DB_NAME, b.TABLE_NAME, b.REGION_ID, b.APPROXIMATE_SIZE
, c.PEER_ID, c.STORE_ID, c.IS_LEADER, c.STATUS, d.ADDRESS
, d.STORE_STATE_NAME, d.VERSION, d.CAPACITY, d.AVAILABLE, d.LABEL
FROM INFORMATION_SCHEMA.TABLES a
INNER JOIN INFORMATION_SCHEMA.TIKV_REGION_STATUS b
INNER JOIN INFORMATION_SCHEMA.TIKV_REGION_PEERS c
INNER JOIN INFORMATION_SCHEMA.TIKV_STORE_STATUS d
WHERE a.TIDB_TABLE_ID = b.TABLE_ID
AND b.REGION_ID = c.REGION_ID
AND c.STORE_ID = d.STORE_ID
AND a.TABLE_SCHEMA = 'test'
AND a.TABLE_NAME = 't';
```
## **27 Upgrade the Cluster to a Specified Version (Online Upgrade)**
Contributor: @TiDBer_杨龟干外公
```text
tiup cluster upgrade
```
For example, to upgrade to v4.0.0:
```text
tiup cluster upgrade tidb-test v4.0.0
```
## **28 Query Table Sizes**
Contributor: @我是咖啡哥
```text
SELECT
t.TABLE_NAME,
t.TABLE_ROWS,
t.TABLE_TYPE,
round(t.DATA_LENGTH/1024/1024/1024,2) data_GB,
round(t.INDEX_LENGTH/1024/1024/1024,2) index_GB,
t.CREATE_OPTIONS,
t.TABLE_COMMENT
FROM
INFORMATION_SCHEMA.`TABLES` t
WHERE
table_schema = 'test'
and t.table_type='BASE TABLE'
order by t.TABLE_ROWS desc;

SELECT CONCAT(table_schema,'.',table_name) AS 'Table Name',
       table_rows AS 'Number of Rows',
       CONCAT(ROUND(data_length/(1024*1024*1024),4),'G') AS 'Data Size',
       CONCAT(ROUND(index_length/(1024*1024*1024),4),'G') AS 'Index Size',
       CONCAT(ROUND((data_length+index_length)/(1024*1024*1024),4),'G') AS 'Total'
FROM information_schema.TABLES
WHERE table_schema LIKE 'test';
```
## **29 Statistics**
Contributor: @我是咖啡哥
**●** View a table's metadata
```text
show stats_meta where db_name like '%sbtest%';
```
**●** View table health
```text
show stats_healthy;
```
Check the Healthy column: tables with a value of 60 or lower generally need an ANALYZE
```text
show stats_healthy where table_name ='xxx';
show stats_healthy where db_name='' and table_name='orders';
show stats_healthy where db_name like 'sbtest' and table_name like 'sbtest1';
```
**●** View histogram information
```text
show stats_buckets where db_name='' and table_name='';
```
**●** View ANALYZE status
```text
show analyze status;
```
**●** Analyze a table or partition
```text
analyze table sbtest1;
ANALYZE TABLE xxx PARTITION P202204;
```
## **30 Execution Plans**
Contributor: @我是咖啡哥
**Binding execution plans**
**●** The default scope is session level
```text
create binding for select * from t using select * from t use index();
create binding for SELECT * FROM t1 INNER JOIN t2 ON t1.id = t2.t1_id WHERE t1.int_col = ? using SELECT /*+ INL_JOIN(t1, t2) */ * FROM t1 INNER JOIN t2 ON t1.id = t2.t1_id WHERE t1.int_col = ?;
explain SELECT * FROM t1 INNER JOIN t2 ON t1.id = t2.t1_id WHERE t1.int_col = 1;
show bindings for SELECT * FROM t1 INNER JOIN t2 ON t1.id = t2.t1_id WHERE t1.int_col = 1;
show global bindings;
show session bindings;
SELECT @@SESSION.last_plan_from_binding;
```
**●** Use the explain format = 'verbose' statement to view a SQL statement's execution plan
```text
explain format = 'verbose' select * from t;
drop binding for select * from t;
```
## **31 View Regions**
Contributor: @我是咖啡哥
```text
SHOW TABLE t_its_unload_priority_intermediate_info regions;
SHOW TABLE t_its_unload_priority_intermediate_info INDEX IDX_UPII_GROUP_BY_COMPOSITE regions;
```
## **32 Identify Read/Write Hotspot Tables**
Contributor: @我是咖啡哥
```text
use INFORMATION_SCHEMA;
SELECT
db_name,
table_name,
index_name,
type,
sum( flow_bytes ),
count( 1 ),
group_concat( h.region_id ),
count( DISTINCT p.store_id ),
group_concat( p.store_id )
FROM
INFORMATION_SCHEMA.tidb_hot_regions h
JOIN INFORMATION_SCHEMA.tikv_region_peers p ON h.region_id = p.region_id
AND p.is_leader = 1
GROUP BY
db_name,
table_name,
index_name,
type;
SELECT
p.store_id,
sum( flow_bytes ),
count(1)
FROM
INFORMATION_SCHEMA.tidb_hot_regions h
JOIN INFORMATION_SCHEMA.tikv_region_peers p ON h.region_id = p.region_id
AND p.is_leader = 1
GROUP BY
p.store_id
ORDER BY
2 DESC;
select tidb_decode_plan();
```
## **33 TiFlash**
贡献者:@我是咖啡哥
```text
ALTER TABLE t_test_time_type SET TIFLASH REPLICA 1;
SELECT * FROM information_schema.tiflash_replica;
select * from information_schema.CLUSTER_HARDWARE where type='tiflash' and DEVICE_TYPE='disk' and name='path';
```
## **34 admin Commands**
Contributor: @我是咖啡哥
```text
admin show ddl jobs;
ADMIN CHECK TABLE t_test;
admin show slow recent 10;  -- or: admin show slow top 10
ADMIN SHOW TELEMETRY;
```
## **35 Modify Isolation Read Parameters**
Contributor: @我是咖啡哥
**●** Modify at session level
Engine isolation, default: ["tikv", "tidb", "tiflash"]. Because components such as TiDB Dashboard need to read some system tables stored in TiDB's memory table area, it is recommended to always include the "tidb" engine in the instance-level engine configuration.
```text
set session tidb_isolation_read_engines = 'tiflash,tidb';
set @@session.tidb_isolation_read_engines = "tiflash,tidb";
```
**●** Manual hints
```text
select /*+ read_from_storage(tiflash[table_name]) */ ... from table_name;
select /*+ read_from_storage(tiflash[alias_a,alias_b]) */ ... from table_name_1 as alias_a, table_name_2 as alias_b where alias_a.column_1 = alias_b.column_2;
set @@tidb_allow_mpp=1;
show config where name like '%oom%' and type='tidb';
admin show ddl;
```
## **36 Troubleshooting: Viewing Logs**
Contributor: @我是咖啡哥
```text
SELECT * FROM INFORMATION_SCHEMA.CLUSTER_LOG t
WHERE time > '2022-08-09 00:00:00' AND time < '2022-08-10 00:00:00'
AND TYPE in ('tikv')
AND `LEVEL` = 'ERROR'
ORDER BY time desc;
```
## **37 Query the Current OS CPU Usage of All Nodes**
Contributor: @人如其名
```text
SELECT
b.time,
a.hostname,
a.ip,
a.types,
b.cpu_used_percent
FROM
(
SELECT
GROUP_CONCAT(TYPE) AS TYPES,
SUBSTRING_INDEX(instance, ':', 1) AS ip,
value AS hostname
FROM
information_schema.cluster_systeminfo
WHERE
name = 'kernel.hostname'
GROUP BY
ip,
hostname
) a,
(
SELECT
time,
SUBSTRING_INDEX(instance, ':', 1) AS ip,
(100 - value) AS cpu_used_percent
FROM
metrics_schema.node_cpu_usage
WHERE
MODE = 'idle'
AND time = NOW()
) b
WHERE
a.ip = b.ip
```
Sample output:
```text
+----------------------------+-----------------------+----------------+----------------------+--------------------+
| time | hostname | ip | types | cpu_used_percent |
+----------------------------+-----------------------+----------------+----------------------+--------------------+
| 2023-01-10 22:40:15.000000 | localhost.localdomain | 192.168.31.201 | tidb,pd,tikv,tiflash | 11.438079153798114 |
+----------------------------+-----------------------+----------------+----------------------+--------------------+
1 row in set (0.04 sec)
```
Note: in my setup only one instance of each component type was created and they all run on the same OS, so only one row is shown.
## **38 Script for Cleaning Up Large Amounts of TiDB Data: Delete Millions of Rows or More Without Affecting Normal TiDB Usage**
Contributor: @xingzhenxiang
```text
# Date 7 days ago; rows older than this will be deleted
date1=`date --date "7 days ago" +"%Y-%m-%d"`
delete_db_sql="delete from mysql_table where create_date_time<'$date1' limit 10000"
i=0
while ((++i)); do
    # Run one batched delete and read the affected-row count from the "Query OK" line
    a=`/bin/mysql -uroot -p123456 -A mysql_database -h127.0.0.1 --comments -e "${delete_db_sql}" -vvv|grep "Query OK" |awk '{print $3}'`
    # Stop once a batch deletes no rows
    if (($a<1)); then
        break 1
    fi
    sleep 1
    printf "%-4d" $((i))
done
```
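On newer TiDB versions (v6.1 and later), the same batched deletion can be expressed directly in SQL as a non-transactional DELETE, which splits the statement into batches for you. A sketch that reuses the table and column names from the script above and assumes create_date_time is indexed:
```text
-- Delete rows older than 7 days in 10000-row batches split on create_date_time;
-- a DRY RUN form is available to preview how the statement would be split
BATCH ON create_date_time LIMIT 10000
DELETE FROM mysql_table WHERE create_date_time < DATE_SUB(NOW(), INTERVAL 7 DAY);
```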
Thanks to all the TiDBers above for contributing their SQL scripts! Remember to like and bookmark this post so you can come back to it any time from your favorites.
Going forward, we will keep publishing more high-quality content from the community and hope it brings everyone useful information and experience. If there is any topic you are interested in, leave a comment below and we will put together the relevant material to share with you!