Quick single-node offline deployment of TiDB 6.1 for testing
1. Environment preparation
OS installation is omitted here.
For system parameter configuration, refer to the official documentation.
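As a reminder of what that configuration typically covers, a commonly used fragment is sketched below. The values follow the usual TiDB environment checklist; verify them against the official docs for your version before applying (run as root).

```shell
# Reduce swapping preference; the TiDB docs recommend vm.swappiness = 0
echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p

# Raise open-file and stack limits for the deploy user (tidb)
cat >> /etc/security/limits.conf <<'EOF'
tidb  soft  nofile  1000000
tidb  hard  nofile  1000000
tidb  soft  stack   32768
tidb  hard  stack   32768
EOF
```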
1.1 Mount the CD-ROM and install ntp and numactl
-- mount the CD-ROM
mount /dev/cdrom /mnt
df -h
-- install ntp and numactl
yum install -y ntp
yum install -y numactl
2. Add the tidb user
useradd -m -d /home/tidb tidb
passwd tidb
visudo
[root@tidb80 ~]# chown -R tidb:tidb /tidb
[root@tidb80 ~]# chmod -R 775 /tidb
[root@tidb80 ~]# su - tidb
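The visudo step above is typically used to give tidb passwordless sudo, and the /tidb directory is assumed to already exist before the chown. A sketch of that prep as root (lab-only settings; tighten for anything beyond testing):

```shell
# As root, create the staging directory the later steps assume:
mkdir -p /tidb/soft

# In visudo, append a passwordless-sudo entry for the tidb user:
# tidb ALL=(ALL) NOPASSWD: ALL
```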
3. Extract the software package
cd /tidb/soft
tar xzvf tidb-community-server-v6.1.0-linux-amd64.tar.gz
cd tidb-community-server-v6.1.0-linux-amd64
ls -lsa
4. Install TiUP from the offline package
[tidb@tidb80 tidb-community-server-v6.1.0-linux-amd64]$ sh local_install.sh
Disable telemetry success
Successfully set mirror to /tidb/soft/tidb-community-server-v6.1.0-linux-amd64
Detected shell: bash
Shell profile: /home/tidb/.bash_profile
/home/tidb/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /home/tidb/.bash_profile to use it
Installed path: /home/tidb/.tiup/bin/tiup
1.source /home/tidb/.bash_profile
2.Have a try: tiup playground
===============================================
[tidb@tidb80 tidb-community-server-v6.1.0-linux-amd64]$ source /home/tidb/.bash_profile
[tidb@tidb80 tidb-community-server-v6.1.0-linux-amd64]$ vi /tidb/config.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115
server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"
pd_servers:
  - host: 192.168.80.80
tidb_servers:
  - host: 192.168.80.80
tikv_servers:
  - host: 192.168.80.80
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "tidb-1" }
  - host: 192.168.80.80
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "tidb-2" }
  - host: 192.168.80.80
    port: 20162
    status_port: 20182
    config:
      server.labels: { host: "tidb-3" }
tiflash_servers:
  - host: 192.168.80.80
monitoring_servers:
  - host: 192.168.80.80
grafana_servers:
  - host: 192.168.80.80
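Since every component lives on one host here, it is worth confirming there are no port collisions before deploying (the deploy step's "Attention" notice asks for exactly this). A quick sketch that scans the ports claimed by this topology for duplicates; the port list is hard-coded from the YAML above, including the exporter ports:

```shell
# Ports claimed by the single-host topology above; edit this list if you change the YAML
ports="2379 2380 20160 20180 20161 20181 20162 20182 4000 10080 9000 8123 3930 20170 20292 8234 9090 12020 3000 9100 9115"
# Split one port per line, then uniq -d reports any value that appears twice
dups=$(printf '%s\n' $ports | sort -n | uniq -d)
if [ -z "$dups" ]; then
  echo "no port conflicts"
else
  echo "conflict on: $dups"
fi
```

TiUP itself also offers a fuller preflight: `tiup cluster check /tidb/config.yaml --user root -p`.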
5. Deploy the cluster with tiup cluster deploy
[tidb@tidb80 tidb-community-server-v6.1.0-linux-amd64]$ tiup cluster deploy TiDBcluster v6.1.0 /tidb/config.yaml --user root -p
tiup is checking updates for component cluster ...
Starting component cluster: /home/tidb/.tiup/components/cluster/v1.10.2/tiup-cluster deploy TiDBcluster v6.1.0 /tidb/config.yaml --user root -p
Input SSH password:
- Detect CPU Arch Name
- Detecting node 192.168.80.80 Arch info ... Done
- Detect CPU OS Name
- Detecting node 192.168.80.80 OS info ... Done
Please confirm your topology:
Cluster type: tidb
Cluster name: TiDBcluster
Cluster version: v6.1.0
Role Host Ports OS/Arch Directories
--------------------------------------------------------------------------------
pd 192.168.80.80 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv 192.168.80.80 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv 192.168.80.80 20161/20181 linux/x86_64 /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv 192.168.80.80 20162/20182 linux/x86_64 /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tidb 192.168.80.80 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
tiflash 192.168.80.80 9000/8123/3930/20170/20292/8234 linux/x86_64 /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus 192.168.80.80 9090/12020 linux/x86_64 /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana 192.168.80.80 3000 linux/x86_64 /tidb-deploy/grafana-3000
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
- Generate SSH keys ... Done
- Download TiDB components
- Download pd:v6.1.0 (linux/amd64) ... Done
- Download tikv:v6.1.0 (linux/amd64) ... Done
- Download tidb:v6.1.0 (linux/amd64) ... Done
- Download tiflash:v6.1.0 (linux/amd64) ... Done
- Download prometheus:v6.1.0 (linux/amd64) ... Done
- Download grafana:v6.1.0 (linux/amd64) ... Done
- Download node_exporter: (linux/amd64) ... Done
- Download blackbox_exporter: (linux/amd64) ... Done
- Initialize target host environments
- Prepare 192.168.80.80:22 ... Done
- Deploy TiDB instance
- Copy pd -> 192.168.80.80 ... Done
- Copy tikv -> 192.168.80.80 ... Done
- Copy tikv -> 192.168.80.80 ... Done
- Copy tikv -> 192.168.80.80 ... Done
- Copy tidb -> 192.168.80.80 ... Done
- Copy tiflash -> 192.168.80.80 ... Done
- Copy prometheus -> 192.168.80.80 ... Done
- Copy grafana -> 192.168.80.80 ... Done
- Deploy node_exporter -> 192.168.80.80 ... Done
- Deploy blackbox_exporter -> 192.168.80.80 ... Done
- Copy certificate to remote host
- Init instance configs
- Generate config pd -> 192.168.80.80:2379 ... Done
- Generate config tikv -> 192.168.80.80:20160 ... Done
- Generate config tikv -> 192.168.80.80:20161 ... Done
- Generate config tikv -> 192.168.80.80:20162 ... Done
- Generate config tidb -> 192.168.80.80:4000 ... Done
- Generate config tiflash -> 192.168.80.80:9000 ... Done
- Generate config prometheus -> 192.168.80.80:9090 ... Done
- Generate config grafana -> 192.168.80.80:3000 ... Done
- Init monitor configs
- Generate config node_exporter -> 192.168.80.80 ... Done
- Generate config blackbox_exporter -> 192.168.80.80 ... Done
Enabling component pd
Enabling instance 192.168.80.80:2379
Enable instance 192.168.80.80:2379 success
Enabling component tikv
Enabling instance 192.168.80.80:20162
Enabling instance 192.168.80.80:20160
Enabling instance 192.168.80.80:20161
Enable instance 192.168.80.80:20161 success
Enable instance 192.168.80.80:20160 success
Enable instance 192.168.80.80:20162 success
Enabling component tidb
Enabling instance 192.168.80.80:4000
Enable instance 192.168.80.80:4000 success
Enabling component tiflash
Enabling instance 192.168.80.80:9000
Enable instance 192.168.80.80:9000 success
Enabling component prometheus
Enabling instance 192.168.80.80:9090
Enable instance 192.168.80.80:9090 success
Enabling component grafana
Enabling instance 192.168.80.80:3000
Enable instance 192.168.80.80:3000 success
Enabling component node_exporter
Enabling instance 192.168.80.80
Enable 192.168.80.80 success
Enabling component blackbox_exporter
Enabling instance 192.168.80.80
Enable 192.168.80.80 success
Cluster `TiDBcluster` deployed successfully, you can start it with command:
tiup cluster start TiDBcluster --init
6. tiup cluster list
[tidb@tidb80 tidb-community-server-v6.1.0-linux-amd64]$ tiup cluster list
tiup is checking updates for component cluster ...
Starting component cluster: /home/tidb/.tiup/components/cluster/v1.10.2/tiup-cluster list
Name User Version Path PrivateKey
--------------------------------------------------------------------------------
TiDBcluster tidb v6.1.0 /home/tidb/.tiup/storage/cluster/clusters/TiDBcluster /home/tidb/.tiup/storage/cluster/clusters/TiDBcluster/ssh/id_rsa
7. tiup cluster display TiDBcluster
[tidb@tidb80 tidb-community-server-v6.1.0-linux-amd64]$ tiup cluster display TiDBcluster
tiup is checking updates for component cluster ...
Starting component cluster: /home/tidb/.tiup/components/cluster/v1.10.2/tiup-cluster display TiDBcluster
Cluster type: tidb
Cluster name: TiDBcluster
Cluster version: v6.1.0
Deploy user: tidb
SSH type: builtin
Grafana URL: http://192.168.80.80:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
--------------------------------------------------------------------------------
192.168.80.80:3000 grafana 192.168.80.80 3000 linux/x86_64 Down - /tidb-deploy/grafana-3000
192.168.80.80:2379 pd 192.168.80.80 2379/2380 linux/x86_64 Down /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.80.80:9090 prometheus 192.168.80.80 9090/12020 linux/x86_64 Down /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
192.168.80.80:4000 tidb 192.168.80.80 4000/10080 linux/x86_64 Down - /tidb-deploy/tidb-4000
192.168.80.80:9000 tiflash 192.168.80.80 9000/8123/3930/20170/20292/8234 linux/x86_64 N/A /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
192.168.80.80:20160 tikv 192.168.80.80 20160/20180 linux/x86_64 N/A /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.80.80:20161 tikv 192.168.80.80 20161/20181 linux/x86_64 N/A /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
192.168.80.80:20162 tikv 192.168.80.80 20162/20182 linux/x86_64 N/A /tidb-data/tikv-20162 /tidb-deploy/tikv-20162
Total nodes: 8
8. tiup cluster start TiDBcluster
tiup cluster start is a TiUP (TiDB Universal Package Manager) command that starts the specified TiDB cluster.
[tidb@tidb80 tidb-community-server-v6.1.0-linux-amd64]$ tiup cluster start TiDBcluster --init
tiup is checking updates for component cluster ...
Starting component cluster: /home/tidb/.tiup/components/cluster/v1.10.2/tiup-cluster start TiDBcluster
Starting cluster TiDBcluster...
- [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/TiDBcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/TiDBcluster/ssh/id_rsa.pub
- [Parallel] - UserSSH: user=tidb, host=192.168.80.80
- [Parallel] - UserSSH: user=tidb, host=192.168.80.80
- [Parallel] - UserSSH: user=tidb, host=192.168.80.80
- [Parallel] - UserSSH: user=tidb, host=192.168.80.80
- [Parallel] - UserSSH: user=tidb, host=192.168.80.80
- [Parallel] - UserSSH: user=tidb, host=192.168.80.80
- [Parallel] - UserSSH: user=tidb, host=192.168.80.80
- [Parallel] - UserSSH: user=tidb, host=192.168.80.80
- [ Serial ] - StartCluster
Starting component pd
Starting instance 192.168.80.80:2379
Start instance 192.168.80.80:2379 success
Starting component tikv
Starting instance 192.168.80.80:20162
Starting instance 192.168.80.80:20161
Starting instance 192.168.80.80:20160
Start instance 192.168.80.80:20161 success
Start instance 192.168.80.80:20162 success
Start instance 192.168.80.80:20160 success
Starting component tidb
Starting instance 192.168.80.80:4000
Start instance 192.168.80.80:4000 success
Starting component tiflash
Starting instance 192.168.80.80:9000
Start instance 192.168.80.80:9000 success
Starting component prometheus
Starting instance 192.168.80.80:9090
Start instance 192.168.80.80:9090 success
Starting component grafana
Starting instance 192.168.80.80:3000
Start instance 192.168.80.80:3000 success
Starting component node_exporter
Starting instance 192.168.80.80
Start 192.168.80.80 success
Starting component blackbox_exporter
Starting instance 192.168.80.80
Start 192.168.80.80 success
- [ Serial ] - UpdateTopology: cluster=TiDBcluster
Started cluster TiDBcluster successfully
The root password of TiDB database has been changed.
The new password is: '8$1jcxSt_=Kav63*74'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
-- root password is '8$1jcxSt_=Kav63*74'
Notes:
- If you forget the cluster name, run tiup cluster list to view the cluster list.
- On startup, services are brought up in a fixed order (PD -> TiKV -> Pump -> TiDB, and so on).
- If you run into any problems, check the startup logs for more information.
9. tiup cluster display TiDBcluster: show the cluster status
tiup cluster display is a TiUP (TiDB Universal Package Manager) command that shows status information for a TiDB cluster.
[tidb@tidb80 tidb-community-server-v6.1.0-linux-amd64]$ tiup cluster display TiDBcluster
tiup is checking updates for component cluster ...
Starting component cluster: /home/tidb/.tiup/components/cluster/v1.10.2/tiup-cluster display TiDBcluster
Cluster type: tidb
Cluster name: TiDBcluster
Cluster version: v6.1.0
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.80.80:2379/dashboard
Grafana URL: http://192.168.80.80:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
--------------------------------------------------------------------------------
192.168.80.80:3000 grafana 192.168.80.80 3000 linux/x86_64 Up - /tidb-deploy/grafana-3000
192.168.80.80:2379 pd 192.168.80.80 2379/2380 linux/x86_64 Up|L|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.80.80:9090 prometheus 192.168.80.80 9090/12020 linux/x86_64 Up /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
192.168.80.80:4000 tidb 192.168.80.80 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.80.80:9000 tiflash 192.168.80.80 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
192.168.80.80:20160 tikv 192.168.80.80 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.80.80:20161 tikv 192.168.80.80 20161/20181 linux/x86_64 Up /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
192.168.80.80:20162 tikv 192.168.80.80 20162/20182 linux/x86_64 Up /tidb-data/tikv-20162 /tidb-deploy/tikv-20162
Total nodes: 8
- Prometheus: http://192.168.80.80:9090.
- TiDB Dashboard: http://192.168.80.80:2379/dashboard. Log in as root with the TiDB root password; since this cluster was started with --init, that is the generated password (it is only empty if the cluster was started without --init).
- Grafana: http://192.168.80.80:3000. The default username and password are both admin.
10. Install the MySQL client
su - root
yum -y install mysql
su - tidb
vi ~/.bash_profile
export MYSQL_PS1="(\u@\h:\p)[\d]>"
source ~/.bash_profile
[tidb@tidb80 ~]$ mysql -u root -h 192.168.80.80 -P 4000 -p
Enter password: (enter the password generated by tiup cluster start TiDBcluster --init)
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 407
Server version: 5.7.25-TiDB-v6.1.0 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
(root@192.168.80.80:4000)[(none)]>show databases;
+--------------------+
| Database |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql |
| test |
+--------------------+
5 rows in set (0.00 sec)
(root@192.168.80.80:4000)[(none)]>use test;
Database changed
(root@192.168.80.80:4000)[test]> create user super identified by 'Xxxx@go2024';
Query OK, 0 rows affected (0.08 sec)
(root@192.168.80.80:4000)[test]> grant all privileges on *.* to super;
Query OK, 0 rows affected (0.04 sec)
(root@192.168.80.80:4000)[test]> flush privileges;
Query OK, 0 rows affected (0.01 sec)
(root@192.168.80.80:4000)[test]> exit
Bye
11. Log in to the TiDB cluster with the MySQL client
Log in with the MySQL client, create a database superdb, and create test tables in it.
[tidb@tidb80 ~]$ mysql -u super -h 192.168.80.80 -P 4000 -p
Enter password:Xxxx@go2024
-- or specify the database directly
[tidb@tidb80 ~]$ mysql -u super -h 192.168.80.80 -P 4000 -p superdb
-- OR
[tidb@tidb80 ~]$ mysql -u super -h 192.168.80.80 -P 4000 -p -D superdb
Enter password:Xxxx@go2024
(super@192.168.80.80:4000)[(none)]>create database superdb;
Query OK, 0 rows affected (0.09 sec)
(super@192.168.80.80:4000)[test]>use superdb
Database changed
(super@192.168.80.80:4000)[superdb]>create table t_superdept
-> ( deptno int unsigned auto_increment primary key COMMENT '部门编号',
-> dname varchar(15) COMMENT '部门名称' ,
-> loc varchar(50) COMMENT '部门所在位置'
-> )engine = InnoDB DEFAULT charset=utf8mb4 COMMENT='员工部门表';
Query OK, 0 rows affected (0.10 sec)
(super@192.168.80.80:4000)[superdb]>create table t_superemp(
-> empno int unsigned auto_increment primary key COMMENT '雇员编号',
-> ename varchar(15) COMMENT '雇员姓名' ,
-> job varchar(10) COMMENT '雇员职位' ,
-> mgr int unsigned COMMENT '雇员对应的领导的编号',
-> hiredate date COMMENT '雇员的雇佣日期' ,
-> sal decimal(7,2) COMMENT '雇员的基本工资' ,
-> comm decimal(7,2) COMMENT '奖金' ,
-> deptno int unsigned COMMENT '所在部门' ,
-> constraint fk_t_superemp_deptno foreign key(deptno) references t_superdept(deptno)
-> )engine = innodb default charset =utf8mb4 comment='雇员信息表';
Query OK, 0 rows affected (0.11 sec)
(super@192.168.80.80:4000)[superdb]>create table t_supersalgrade
-> (
-> grade int COMMENT '工资等级',
-> losal int COMMENT '此等级的最低工资',
-> hisal int COMMENT '此等级的最高工资'
-> )engine=innodb DEFAULT charset=utf8mb4 COMMENT='工资等级表';
Query OK, 0 rows affected (0.12 sec)
(super@192.168.80.80:4000)[superdb]>create table t_superbonus
-> ( ename varchar(10) COMMENT '雇员姓名',
-> job varchar(9) COMMENT '雇员职位',
-> sal decimal(7,2) COMMENT '雇员工资',
-> comm decimal(7,2) COMMENT '雇员奖金'
-> )engine=innodb DEFAULT charset=utf8mb4 COMMENT='雇员奖金表' ;
Query OK, 0 rows affected (0.12 sec)
(super@192.168.80.80:4000)[superdb]>show tables;
+-------------------+
| Tables_in_superdb |
+-------------------+
| t_superbonus |
| t_superdept |
| t_superemp |
| t_supersalgrade |
+-------------------+
4 rows in set (0.00 sec)
Insert data into the tables
-- insert t_superdept
INSERT INTO t_superdept VALUES (10,'ACCOUNTING','NEW YORK');
INSERT INTO t_superdept VALUES (20,'RESEARCH','DALLAS');
INSERT INTO t_superdept VALUES (30,'SALES','CHICAGO');
INSERT INTO t_superdept VALUES (40,'OPERATIONS','BOSTON');
-- insert t_superemp
INSERT INTO t_superemp VALUES (7369,'SMITH','CLERK',7902,'1980-12-17',800,NULL,20);
INSERT INTO t_superemp VALUES (7499,'ALLEN','SALESMAN',7698,'1981-2-20',1600,300,30);
INSERT INTO t_superemp VALUES (7521,'WARD','SALESMAN',7698,'1981-2-22',1250,500,30);
INSERT INTO t_superemp VALUES (7566,'JONES','MANAGER',7839,'1981-4-2',2975,NULL,20);
INSERT INTO t_superemp VALUES (7654,'MARTIN','SALESMAN',7698,'1981-9-28',1250,1400,30);
INSERT INTO t_superemp VALUES (7698,'BLAKE','MANAGER',7839,'1981-5-1',2850,NULL,30);
INSERT INTO t_superemp VALUES (7782,'CLARK','MANAGER',7839,'1981-6-9',2450,NULL,10);
INSERT INTO t_superemp VALUES (7788,'SCOTT','ANALYST',7566,'87-7-13',3000,NULL,20);
INSERT INTO t_superemp VALUES (7839,'KING','PRESIDENT',NULL,'1981-11-17',5000,NULL,10);
INSERT INTO t_superemp VALUES (7844,'TURNER','SALESMAN',7698,'1981-9-8',1500,0,30);
INSERT INTO t_superemp VALUES (7876,'ADAMS','CLERK',7788,'87-7-13',1100,NULL,20);
INSERT INTO t_superemp VALUES (7900,'JAMES','CLERK',7698,'1981-12-3',950,NULL,30);
INSERT INTO t_superemp VALUES (7902,'FORD','ANALYST',7566,'1981-12-3',3000,NULL,20);
INSERT INTO t_superemp VALUES (7934,'MILLER','CLERK',7782,'1982-1-23',1300,NULL,10);
-- insert t_supersalgrade
INSERT INTO t_supersalgrade VALUES (1,700,1200);
INSERT INTO t_supersalgrade VALUES (2,1201,1400);
INSERT INTO t_supersalgrade VALUES (3,1401,2000);
INSERT INTO t_supersalgrade VALUES (4,2001,3000);
INSERT INTO t_supersalgrade VALUES (5,3001,9999);
commit;
show tables;
(super@192.168.80.80:4000)[superdb]>SELECT * from t_superemp ;
+-------+--------+-----------+------+------------+---------+---------+--------+
| empno | ename | job | mgr | hiredate | sal | comm | deptno |
+-------+--------+-----------+------+------------+---------+---------+--------+
| 7369 | SMITH | CLERK | 7902 | 1980-12-17 | 800.00 | NULL | 20 |
| 7499 | ALLEN | SALESMAN | 7698 | 1981-02-20 | 1600.00 | 300.00 | 30 |
| 7521 | WARD | SALESMAN | 7698 | 1981-02-22 | 1250.00 | 500.00 | 30 |
| 7566 | JONES | MANAGER | 7839 | 1981-04-02 | 2975.00 | NULL | 20 |
| 7654 | MARTIN | SALESMAN | 7698 | 1981-09-28 | 1250.00 | 1400.00 | 30 |
| 7698 | BLAKE | MANAGER | 7839 | 1981-05-01 | 2850.00 | NULL | 30 |
| 7782 | CLARK | MANAGER | 7839 | 1981-06-09 | 2450.00 | NULL | 10 |
| 7788 | SCOTT | ANALYST | 7566 | 1987-07-13 | 3000.00 | NULL | 20 |
| 7839 | KING | PRESIDENT | NULL | 1981-11-17 | 5000.00 | NULL | 10 |
| 7844 | TURNER | SALESMAN | 7698 | 1981-09-08 | 1500.00 | 0.00 | 30 |
| 7876 | ADAMS | CLERK | 7788 | 1987-07-13 | 1100.00 | NULL | 20 |
| 7900 | JAMES | CLERK | 7698 | 1981-12-03 | 950.00 | NULL | 30 |
| 7902 | FORD | ANALYST | 7566 | 1981-12-03 | 3000.00 | NULL | 20 |
| 7934 | MILLER | CLERK | 7782 | 1982-01-23 | 1300.00 | NULL | 10 |
+-------+--------+-----------+------+------------+---------+---------+--------+
14 rows in set (0.01 sec)
(super@192.168.80.80:4000)[superdb]>SELECT * FROM t_superdept ;
+--------+------------+----------+
| deptno | dname | loc |
+--------+------------+----------+
| 10 | ACCOUNTING | NEW YORK |
| 20 | RESEARCH | DALLAS |
| 30 | SALES | CHICAGO |
| 40 | OPERATIONS | BOSTON |
+--------+------------+----------+
4 rows in set (0.00 sec)
(super@192.168.80.80:4000)[superdb]>SELECT * FROM t_supersalgrade ;
+-------+-------+-------+
| grade | losal | hisal |
+-------+-------+-------+
| 1 | 700 | 1200 |
| 2 | 1201 | 1400 |
| 3 | 1401 | 2000 |
| 4 | 2001 | 3000 |
| 5 | 3001 | 9999 |
+-------+-------+-------+
5 rows in set (0.00 sec)
(super@192.168.80.80:4000)[superdb]>select VERSION() ;
+--------------------+
| VERSION() |
+--------------------+
| 5.7.25-TiDB-v6.1.0 |
+--------------------+
1 row in set (0.00 sec)
12. Testing MySQL syntax compatibility
12.1 Recursive queries (WITH RECURSIVE)
TiDB is a distributed relational database compatible with the MySQL protocol. Its SQL behavior follows MySQL as closely as possible, including support for recursive queries via the WITH RECURSIVE syntax.
Recursive queries are typically used for tree-shaped or hierarchical data, such as organization structures and filesystem directory trees.
The following basic example shows a recursive query in TiDB:
(super@192.168.80.80:4000)[superdb]>with recursive cte(n)as (
-> select 1
-> union
-> select n+1 from cte where n<10
-> )
-> SELECT * from cte;
+------+
| n |
+------+
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 6 |
| 7 |
| 8 |
| 9 |
| 10 |
+------+
10 rows in set (0.00 sec)
12.2 Find the subordinates of empno=7698 in t_superemp
(super@192.168.80.80:4000)[superdb]>with recursive cte(empno,ename,mgr,mgr_level) as
-> (
-> select empno,ename,mgr,CAST(empno as char(256)) FROM t_superemp where empno=7698
-> union ALL
-> select t1.empno,t1.ename,t1.mgr,CONCAT(t2.mgr_level,',',t1.empno) as mgr_level
-> FROM t_superemp t1 inner join cte t2 on t1.mgr=t2.empno
-> )
-> select * from cte;
+-------+--------+------+-----------+
| empno | ename | mgr | mgr_level |
+-------+--------+------+-----------+
| 7698 | BLAKE | 7839 | 7698 |
| 7499 | ALLEN | 7698 | 7698,7499 |
| 7521 | WARD | 7698 | 7698,7521 |
| 7654 | MARTIN | 7698 | 7698,7654 |
| 7844 | TURNER | 7698 | 7698,7844 |
| 7900 | JAMES | 7698 | 7698,7900 |
+-------+--------+------+-----------+
6 rows in set (0.01 sec)
12.3 Find the managers above empno=7698 in t_superemp
(super@192.168.80.80:4000)[superdb]>with recursive cte as(
-> select empno,ename,mgr,CAST(empno as char(256)) as mgr_level FROM t_superemp where empno=7698
-> union ALL
-> select t1.empno,t1.ename,t1.mgr,CONCAT(t2.mgr_level,',',t1.empno) as mgr_level
-> FROM t_superemp t1 inner join cte t2 on t1.empno=t2.mgr
-> )
-> select * from cte;
+-------+-------+------+-----------+
| empno | ename | mgr | mgr_level |
+-------+-------+------+-----------+
| 7698 | BLAKE | 7839 | 7698 |
| 7839 | KING | NULL | 7698,7839 |
+-------+-------+------+-----------+
2 rows in set (0.00 sec)
12.4 GROUP_CONCAT
GROUP_CONCAT is an aggregate function that concatenates values from multiple rows into a single string; its usage is essentially identical in MySQL and TiDB.
Suppose a table t_city_list has two columns, country and city. To list all city names per country in one delimited string, use GROUP_CONCAT.
create table t_city_list(
id bigint auto_increment,
country varchar(64),
city varchar(64),
constraint pk_t_city_list_id primary key(id)
);
insert into t_city_list(country,city) values ('中国','北京');
insert into t_city_list(country,city) values ('中国','广州');
insert into t_city_list(country,city) values ('中国','深圳');
insert into t_city_list(country,city) values ('中国','香港');
insert into t_city_list(country,city) values ('中国','上海');
insert into t_city_list(country,city) values ('日本','东京');
insert into t_city_list(country,city) values ('日本','大阪');
insert into t_city_list(country,city) values ('中国','中山');
insert into t_city_list(country,city) values ('中国','珠海');
commit;
(super@192.168.80.80:4000)[superdb]>select * from t_city_list;
+----+---------+--------+
| id | country | city |
+----+---------+--------+
| 1 | 中国 | 北京 |
| 2 | 中国 | 广州 |
| 3 | 中国 | 深圳 |
| 4 | 中国 | 香港 |
| 5 | 中国 | 上海 |
| 6 | 日本 | 东京 |
| 7 | 日本 | 大阪 |
| 8 | 中国 | 中山 |
| 9 | 中国 | 珠海 |
+----+---------+--------+
9 rows in set (0.00 sec)
(super@192.168.80.80:4000)[superdb]>select COUNTRY,group_concat(CITY separator '-') PERMS from t_city_list group by COUNTRY;
+---------+--------------------------------------------------+
| COUNTRY | PERMS |
+---------+--------------------------------------------------+
| 中国 | 北京-广州-深圳-香港-上海-中山-珠海 |
| 日本 | 东京-大阪 |
+---------+--------------------------------------------------+
2 rows in set (0.00 sec)
select COUNTRY,group_concat(distinct CITY order by id desc separator '-') PERMS from t_city_list group by COUNTRY;
-- concatenate the values in a specific order
(super@192.168.80.80:4000)[superdb]>select COUNTRY,group_concat(distinct CITY order by id desc separator '-') PERMS from t_city_list group by COUNTRY;
+---------+--------------------------------------------------+
| COUNTRY | PERMS |
+---------+--------------------------------------------------+
| 中国 | 珠海-中山-上海-香港-深圳-广州-北京 |
| 日本 | 大阪-东京 |
+---------+--------------------------------------------------+
2 rows in set (0.00 sec)
In this query:
- GROUP BY COUNTRY groups the result rows by country.
- GROUP_CONCAT(DISTINCT CITY ORDER BY id DESC SEPARATOR '-') joins the city values within each country into one string, ordered by id descending, with '-' as the separator.
GROUP_CONCAT takes several optional modifiers:
- DISTINCT removes duplicate values, e.g. GROUP_CONCAT(DISTINCT city …).
- ORDER BY sorts the values by one or more columns before concatenation.
- SEPARATOR sets the string placed between values; the default is a comma.
Note: GROUP_CONCAT has a length limit that, by default, may be too small for large result sets. In both MySQL and TiDB you can raise it through the group_concat_max_len system variable, for example:
SET SESSION group_concat_max_len = 1000000; -- set the limit for the current session
SET GLOBAL group_concat_max_len = 1000000; -- set the limit globally
Note that raising this value may increase memory usage, so tune it to your actual workload.
12.5 The ROW_NUMBER() OVER clause
In TiDB (and most database systems supporting standard SQL window functions), ROW_NUMBER() assigns a unique sequence number to each row of the result set, according to the sort order given in the OVER() clause.
ROW_NUMBER() is used together with OVER(), which defines the window: how the data is partitioned and ordered.
The following basic example uses the t_city_list table of country/city data and assigns a per-country sequence number to each city:
select id,COUNTRY,CITY,
row_number() over(PARTITION BY COUNTRY order by id desc) as rn
from t_city_list;
(super@192.168.80.80:4000)[superdb]>select id,COUNTRY,CITY,row_number() over( PARTITION BY COUNTRY order by id desc) as rn from t_city_list;
+----+---------+--------+------+
| id | COUNTRY | CITY | rn |
+----+---------+--------+------+
| 9 | 中国 | 珠海 | 1 |
| 8 | 中国 | 中山 | 2 |
| 5 | 中国 | 上海 | 3 |
| 4 | 中国 | 香港 | 4 |
| 3 | 中国 | 深圳 | 5 |
| 2 | 中国 | 广州 | 6 |
| 1 | 中国 | 北京 | 7 |
| 7 | 日本 | 大阪 | 1 |
| 6 | 日本 | 东京 | 2 |
+----+---------+--------+------+
9 rows in set (0.00 sec)
In this query:
- ROW_NUMBER() generates a unique sequence number for each row.
- The OVER() clause defines how those numbers are assigned.
- PARTITION BY COUNTRY partitions the data by COUNTRY and numbers the rows within each partition.
- ORDER BY id DESC assigns the numbers within each partition by id in descending order.
The result contains the columns id, COUNTRY, CITY and rn, where rn is the per-country sequence number assigned to each city.
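The same numbering can be mimicked with a per-group counter in awk (illustrative only; the input is assumed already sorted the way the OVER clause would order it):

```shell
# ++n[$1] keeps one counter per country, numbering rows within each partition
rows=$(printf '%s\n' "CN Beijing" "CN Shanghai" "JP Tokyo" |
  awk '{ print $1, $2, ++n[$1] }')
echo "$rows"
# -> CN Beijing 1
# -> CN Shanghai 2
# -> JP Tokyo 1
```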
Note: TiDB is a distributed database, so when running complex window functions or analytical queries you may need to consider how the data is partitioned and distributed. Make sure you understand how your data and queries are spread across the TiDB cluster, and how that affects query performance and results.