A Note on Moving the Blog

The old blog ran on a server rented from Alibaba Cloud, costing on average almost 0.8 yuan a day. Since large language models arrived, my urge to write has been next to none, so the blog often went a long time without updates, yet paying that money every day felt like a waste. So on the first workday of the Year of the Horse I moved the blog to GitHub Pages. Because the migration was done rather casually: 1. the code embedded in old posts got mangled; fortunately, with LLMs around, nobody needs to copy code from blogs anymore, so I didn't bother cleaning it up; 2. some older posts had images; the image files still exist, but they would have to be replaced one by one, which I also skipped, so the image resources are gone; 3. the comments were lost; there were only a few to begin with and none of much value, so I didn't migrate them, and I may or may not add a comment box later. I self-hosted for so long partly out of laziness and partly because I really like the inove theme. Hence this note.

February 24, 2026 · 1 min · 9 words · Bridge Li

Troubleshooting a Slow System Response

First, some context. There are two systems; call them A and B. 1. We have source code for neither; 2. A is a Spring Boot project, B is a traditional Tomcat deployment; 3. both connect to the same database over internal IP addresses; 4. both systems, database included, were migrated from an old server to a new one a month ago. The symptoms: system A (the Spring Boot one) responds normally; system B (the Tomcat one) is a bit slow but tolerable, and since it is an internal system that still works, nobody paid attention. Then one day a particular endpoint in B became so slow that it timed out, which triggered this investigation. Only that one endpoint was slow, and A has an endpoint with similar functionality that responds normally.

My first thought: a misconfiguration in B. The systems had been migrated a month earlier, and B has three configuration files that each carry database connection settings; perhaps one had been missed, this endpoint happened to use that file's connection address, and the database connection was timing out. On inspection, all three addresses had been updated correctly.

That check also turned up no error logs at all, so my second thought was a slow query. It felt unlikely: the database was migrated wholesale, it had never been slow before, and A's similar endpoint was responding fine. The normal approach is to check the slow query log for related SQL, but it was not being recorded, so I queried the running transactions instead:

```sql
SELECT trx_id              AS transaction_id,
       trx_state           AS state,
       trx_started         AS started_at,
       trx_mysql_thread_id AS thread_id,
       TIME_TO_SEC(TIMEDIFF(NOW(), trx_started)) AS running_seconds,
       trx_query           AS current_sql
FROM information_schema.INNODB_TRX
ORDER BY trx_started\G
```

While calling the endpoint, no long-running SQL showed up, so slow queries were ruled out. The next direction was the JDBC driver version: B was still on a 5.x driver while the database was already 8.x, which looked wrong; the LLM said 8.x changed the authentication method, so I upgraded the JDBC driver and tested. No change whatsoever. ...
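Since the slow query log was not being recorded, one option before falling back to `INNODB_TRX` is to switch it on at runtime. A minimal sketch (threshold and file path are assumptions, and the settings are lost on restart unless also written to my.cnf):

```sql
-- Enable the slow query log dynamically (assumed values; persist in my.cnf for restarts)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;                           -- log statements slower than 1 second
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';

-- Confirm the settings took effect
SHOW VARIABLES LIKE 'slow_query%';
SHOW VARIABLES LIKE 'long_query_time';
```

With this in place, a genuinely slow statement leaves a record even when `INNODB_TRX` happens to be empty at the moment you look.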

February 2, 2026 · 2 min · 222 words · Bridge Li

Using Different JDK Versions for Different Projects in Local Development

At work, few of us are responsible for just one project; we juggle several, and for various reasons they may use different JDK versions. If the versions are incompatible, a local `mvn clean compile` fails, and you have to go edit the JDK environment variable. When several projects are in development at once, you end up switching back and forth constantly, which is maddening. Maven actually solved this problem long ago. All we need to do is:

Edit Maven's toolchains.xml. At the user level it lives in `${user.home}/.m2/toolchains.xml` (overridable with `-t /path/to/toolchains.xml`); at the global level in `${maven.conf}/toolchains.xml` (overridable with `-gt`). Each toolchain has three elements: `type` (usually `jdk`), `provides` (key/value pairs such as `version`, `vendor`, `arch` that the pom is matched against; by default a version like `1.5` matches 1.5 and above), and `configuration` (tool-specific settings such as `jdkHome`). See https://maven.apache.org/guides/mini/guide-using-toolchains.html.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<toolchains xmlns="http://maven.apache.org/TOOLCHAINS/1.1.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://maven.apache.org/TOOLCHAINS/1.1.0 http://maven.apache.org/xsd/toolchains-1.1.0.xsd">
  <toolchain>
    <type>jdk</type>
    <provides>
      <version>17</version>
    </provides>
    <configuration>
      <jdkHome>D:\J2EE\Java\jdk-17</jdkHome>
    </configuration>
  </toolchain>
  <toolchain>
    <type>jdk</type>
    <provides>
      <version>8</version>
    </provides>
    <configuration>
      <jdkHome>D:\J2EE\Java\jdk1.8.0_311</jdkHome>
    </configuration>
  </toolchain>
</toolchains>
```

Then edit the project's pom.xml:

```xml
<profiles>
  <profile>
    <id>dev</id>
    <activation>
      <activeByDefault>false</activeByDefault>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-toolchains-plugin</artifactId>
          <version>3.1.0</version>
          <executions>
            <execution>
              <goals>
                <goal>toolchain</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <toolchains>
              <jdk>
                <version>8</version>
              </jdk>
            </toolchains>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

With that in place, compiling is just:

```
mvn clean compile -Pdev
```

and the build automatically uses JDK 8 rather than whatever the system environment variables point at. There is an even simpler option: change the pom.xml configuration, replacing: ...
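The excerpt cuts off before showing the simpler pom.xml alternative, so the following is only one commonly used per-project setting, not necessarily the one the post goes on to describe: pinning the bytecode target via the compiler plugin's `release` flag, which works when the JDKs involved are compatible enough to cross-compile.

```xml
<!-- Hypothetical sketch: pin the language/bytecode level per project.
     Unlike toolchains, this still compiles with the JDK on PATH,
     so it only helps when that JDK can target the older release. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <release>8</release>
      </configuration>
    </plugin>
  </plugins>
</build>
```

Toolchains remain the right tool when the projects genuinely need different JDK installations rather than just different target levels.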

December 25, 2025 · 3 min · 522 words · Bridge Li

MySQL Backup and Restore Scripts

As developers we all know how important backups are, and the most important backup of all is the database. A while ago a careless operation deleted a database, so I'm sharing my backup and restore scripts here as notes.

MySQL full backup script:

```bash
#!/bin/bash
# ===================================================================
# MySQL per-database full backup script
# (production-grade | auto-selects --source-data / --master-data)
# Features:
#   - Auto-discovers user databases
#   - Separate compressed backup per database
#   - Uses --source-data (8.0+) or --master-data (5.7)
#   - Handles GTID only when it is enabled
#   - Extracts the binlog position into a .info file
#   - Removes backups older than N days
# Author: BridgeLi
# Version: 1.0
# ===================================================================

# ----- Configuration -----
BACKUP_DIR="/project/backup/mysql/dbs"
CNF_FILE="/project/backup/mysql/my.cnf"
RETENTION_DAYS=7
MIN_FREE_SPACE_GB=5
HOSTNAME=$(hostname -s)
DT=$(date +%Y-%m-%d_%H%M%S)
# Exclude system databases
EXCLUDED_DBS="^(mysql|sys|information_schema|performance_schema)$"

# ----- Initialization & dependency checks -----
for cmd in mysql mysqldump gzip gunzip df date awk sed; do
    command -v "$cmd" >/dev/null || { echo "Error: missing command '$cmd'"; exit 1; }
done
mkdir -p "$BACKUP_DIR" || { echo "Error: cannot create directory $BACKUP_DIR"; exit 1; }
LOG_FILE="$BACKUP_DIR/backup.log"
LOCK_FILE="$BACKUP_DIR/.backup.lock"
if [ -f "$LOCK_FILE" ]; then
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Error: lock file exists, another backup may be running."
    exit 1
fi
trap "rm -f '$LOCK_FILE'" EXIT
touch "$LOCK_FILE"
AVAILABLE_GB=$(df -P "$BACKUP_DIR" | tail -1 | awk '{print int($4/1024/1024)}')
if [ "$AVAILABLE_GB" -lt "$MIN_FREE_SPACE_GB" ]; then
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Error: not enough disk space (${AVAILABLE_GB}GB < ${MIN_FREE_SPACE_GB}GB)"
    exit 1
fi
exec > >(tee -a "$LOG_FILE") 2>&1
echo "[$(date +'%Y-%m-%d %H:%M:%S')] [$HOSTNAME] Starting full per-database backup..."

# ----- Detect MySQL version and GTID state -----
MYSQL_CMD="mysql --defaults-extra-file=$CNF_FILE -sN"
# Major version as a number (57, 80, ...)
MYSQL_VERSION=$($MYSQL_CMD -e "SELECT REPLACE(LEFT(VERSION(), 4), '.', '');")
if ! [[ "$MYSQL_VERSION" =~ ^[0-9]+$ ]]; then
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Error: cannot determine MySQL version"
    exit 1
fi
# Choose --source-data / --master-data
if [ "$MYSQL_VERSION" -ge 80 ]; then
    REPLICATION_OPT="--source-data=2"
else
    REPLICATION_OPT="--master-data=2"
fi
# Check whether GTID is enabled
GTID_MODE=$($MYSQL_CMD -e "SELECT @@GLOBAL.gtid_mode;" 2>/dev/null || echo "OFF")
if [[ "$GTID_MODE" =~ ^(ON|ON_PERMISSIVE|OFF_PERMISSIVE)$ ]]; then
    GTID_PURGED_OPT="--set-gtid-purged=ON"
    GTID_ENABLED=true
else
    GTID_PURGED_OPT="--set-gtid-purged=OFF"
    GTID_ENABLED=false
fi
echo "[$(date +'%Y-%m-%d %H:%M:%S')] MySQL version: $MYSQL_VERSION, using: $REPLICATION_OPT, GTID: ${GTID_MODE:-OFF}"

# ----- Collect all non-system databases -----
dbs=()
while IFS= read -r db; do
    [[ -z "$db" ]] && continue
    if [[ ! "$db" =~ $EXCLUDED_DBS ]]; then
        dbs+=("$db")
    fi
done < <($MYSQL_CMD -e "SHOW DATABASES;" 2>>"$LOG_FILE")
if [ ${#dbs[@]} -eq 0 ]; then
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Warning: no databases to back up."
    exit 0
fi
echo "[$(date +'%Y-%m-%d %H:%M:%S')] Found ${#dbs[@]} databases: ${dbs[*]}"

# ----- Run the backups -----
SUCCESS_COUNT=0
FAILURE_COUNT=0
START_TIME=$(date +%s)
for db in "${dbs[@]}"; do
    DUMP_FILE="${db}-${DT}.sql.gz"
    DUMP_PATH="$BACKUP_DIR/$DUMP_FILE"
    INFO_PATH="${DUMP_PATH%.gz}.info"
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Backing up: $db -> $DUMP_PATH"
    mysqldump --defaults-extra-file="$CNF_FILE" \
        --single-transaction \
        --routines \
        --triggers \
        $REPLICATION_OPT \
        $GTID_PURGED_OPT \
        --databases "$db" 2>>"$LOG_FILE" | gzip -c > "$DUMP_PATH"
    if [ $? -eq 0 ] && [ -s "$DUMP_PATH" ] && gunzip -t "$DUMP_PATH" >/dev/null 2>&1; then
        # Extract the binlog position (source/master compatible)
        read master_file master_pos < <(gzip -dc "$DUMP_PATH" | sed -n "/^-- CHANGE MASTER TO / s/.*LOG_FILE='\([^']*\)',.*LOG_POS=\([0-9]*\).*/\1 \2/p" | head -1)
        # Extract the GTID set (only when enabled)
        if [ "$GTID_ENABLED" = true ]; then
            gtid_purged=$(gzip -dc "$DUMP_PATH" | sed -n "s/^SET @@GLOBAL.GTID_PURGED='\([^']*\)';\$/\1/p" | head -1)
            [ -z "$gtid_purged" ] && gtid_purged="NONE"
        else
            gtid_purged="DISABLED"
        fi
        # Write the .info file
        {
            [ -n "$master_file" ] && echo "File: $master_file"
            [ -n "$master_pos" ] && echo "Position: $master_pos"
            echo "GTID: $gtid_purged"
        } > "$INFO_PATH"
        chmod 600 "$DUMP_PATH" "$INFO_PATH" 2>/dev/null || true
        echo "[$(date +'%Y-%m-%d %H:%M:%S')] OK: $db (binlog: $master_file, pos: $master_pos, gtid: $gtid_purged)"
        ((SUCCESS_COUNT++))
    else
        echo "[$(date +'%Y-%m-%d %H:%M:%S')] FAILED: $db"
        rm -f "$DUMP_PATH" "$INFO_PATH"
        ((FAILURE_COUNT++))
    fi
done

# ----- Remove backups older than N days -----
if [ $FAILURE_COUNT -eq 0 ]; then
    cleanup_old() {
        local pattern="$1"
        local now_ts=$(date +%s)
        local files=()
        mapfile -t files < <(find "$BACKUP_DIR" -name "$pattern" -type f 2>/dev/null)
        for file in "${files[@]}"; do
            if [[ "$file" =~ -([0-9]{4})-([0-9]{2})-([0-9]{2})_([0-9]{2})([0-9]{2})([0-9]{2})\. ]]; then
                # Rebuild "YYYY-MM-DD HH:MM:SS" from the filename timestamp
                local datetime="${BASH_REMATCH[1]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]} ${BASH_REMATCH[4]}:${BASH_REMATCH[5]}:${BASH_REMATCH[6]}"
                local file_ts=$(date -d "$datetime" +%s 2>/dev/null) || continue
                local age_days=$(( (now_ts - file_ts) / 86400 ))
                if [ $age_days -ge $RETENTION_DAYS ]; then
                    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Removing expired file: $file (age: $age_days days)"
                    rm -f "$file"
                fi
            fi
        done
    }
    cleanup_old "*.sql.gz"
    cleanup_old "*.info"
    END_TIME=$(date +%s)
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Backup finished: $SUCCESS_COUNT succeeded, $FAILURE_COUNT failed, took $((END_TIME - START_TIME)) seconds"
else
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Warning: $FAILURE_COUNT database backups failed, skipping cleanup."
    exit 1
fi
exit 0
```

MySQL binlog backup script:

```bash
#!/bin/bash
# ===================================================================
# MySQL binlog incremental backup script (tuned version)
# Pairs with the full backup to enable PITR
# Author: BridgeLi
# Version: 1.0
# ===================================================================

# Configuration
BINLOG_DIR="/var/lib/mysql"
BACKUP_DIR="/project/backup/mysql/binlogs"
CNF_FILE="/project/backup/mysql/my.cnf"
LOG_FILE="$BACKUP_DIR/binlog_backup.log"
LOCK_FILE="$BACKUP_DIR/.backup.lock"
LAST_COPIED_FILE="$BACKUP_DIR/.last_binlog"

# Create the backup directory
mkdir -p "$BACKUP_DIR" || { echo "[$(date)] Error: cannot create backup directory $BACKUP_DIR" >&2; exit 1; }

# Use flock to prevent concurrent runs
exec 200>"$LOCK_FILE"
if ! flock -n 200; then
    echo "[$(date)] Error: backup script already running, exiting." | tee -a "$LOG_FILE"
    exit 1
fi

# Redirect all output to the log
exec >> "$LOG_FILE" 2>&1
echo "=================================="
echo "[$(date +'%Y-%m-%d %H:%M:%S')] Starting binlog incremental backup..."

# Current active binlog
CURRENT_LOG=$(mysql --defaults-extra-file="$CNF_FILE" -sN -e "SHOW MASTER STATUS;" 2>/dev/null | awk '{print $1}')
if [ -z "$CURRENT_LOG" ]; then
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] Error: cannot determine current binlog; check MySQL connectivity and privileges."
    exit 1
fi
echo "Current active binlog: $CURRENT_LOG"

# Last binlog backed up
if [[ -f "$LAST_COPIED_FILE" ]]; then
    LAST_LOG=$(cat "$LAST_COPIED_FILE")
    echo "Last backed up through: $LAST_LOG"
else
    LAST_LOG=""
    echo "First run: backing up all historical binlogs (except the active one)."
fi

COPIED=0
cd "$BINLOG_DIR" || { echo "Cannot enter binlog directory: $BINLOG_DIR"; exit 1; }

# All binlog files in version order
mapfile -t LOGS < <(find . -maxdepth 1 -name 'mysql-bin.*' -type f -printf '%f\n' | sort -V)
for log in "${LOGS[@]}"; do
    [[ ! -f "$log" ]] && continue
    # Skip the currently active binlog
    [[ "$log" == "$CURRENT_LOG" ]] && continue
    # Back up only if log > LAST_LOG in version order
    if [[ -n "$LAST_LOG" ]]; then
        greater=$(printf '%s\n%s' "$LAST_LOG" "$log" | sort -V | tail -1)
        if [[ "$greater" != "$log" || "$log" == "$LAST_LOG" ]]; then
            continue
        fi
    fi
    # Compress and copy
    if gzip -c "$log" > "$BACKUP_DIR/${log}.gz"; then
        echo "[$(date +'%Y-%m-%d %H:%M:%S')] Compressed and backed up: $log"
        ((COPIED++))
    else
        echo "[$(date +'%Y-%m-%d %H:%M:%S')] Error: backup failed for $log"
    fi
done

# Atomically record the last backed-up binlog
echo "$CURRENT_LOG" > "${LAST_COPIED_FILE}.tmp" && mv "${LAST_COPIED_FILE}.tmp" "$LAST_COPIED_FILE"
echo "[$(date +'%Y-%m-%d %H:%M:%S')] Binlog incremental backup done, copied $COPIED files."
echo "=================================="
```

MySQL health check script (optional; the backups alone are enough):

```bash
#!/bin/bash
# ===================================================================
# MySQL backup health check script (production version)
# Checks backup freshness and disk usage, logs results,
# sends alerts, and exports Prometheus metrics
# Author: BridgeLi
# Version: 1.0
# ===================================================================

# ----- Configuration -----
BACKUP_DIR="/project/backup/mysql/dbs"
LOG_DIR="/var/log/mysql"
LOG_FILE="$LOG_DIR/health_check.log"
ALERT_EMAIL="admin@example.com"
HOSTNAME=$(hostname -s)
# Prometheus metrics output paths
PROM_FILE="/tmp/backup_health.prom"
PROM_TMP_FILE="/tmp/backup_health.prom.tmp"
# Alert thresholds
MAX_BACKUP_AGE_HOURS=26
DISK_WARN_THRESHOLD=80
DISK_CRIT_THRESHOLD=90

# ----- Create the log directory (so tee does not fail) -----
if [ ! -d "$LOG_DIR" ]; then
    mkdir -p "$LOG_DIR" && chmod 755 "$LOG_DIR"
    [ $? -ne 0 ] && echo "ERROR: Cannot create log directory $LOG_DIR" && exit 1
fi

# ----- Send output to both the log and the terminal -----
exec > >(tee -a "$LOG_FILE") 2>&1

# Check that the most recent backup is fresh enough.
# Returns: 0 = OK, 1 = CRITICAL (too old), 2 = ERROR (no backup)
check_last_backup() {
    local latest_entry=$(find "$BACKUP_DIR" -name "*.sql.gz" -type f -printf '%T@ %p\n' 2>/dev/null | sort -n | tail -1)
    if [ -z "$latest_entry" ]; then
        echo "ERROR no_recent_backup"
        return 2
    fi
    # Integer part of the mtime epoch
    local mtime_epoch=$(echo "$latest_entry" | awk '{split($1,a,"."); print a[1]}')
    local now_epoch=$(date +%s)
    local age_seconds=$((now_epoch - mtime_epoch))
    local age_hours=$((age_seconds / 3600))
    if [ $age_hours -gt $MAX_BACKUP_AGE_HOURS ]; then
        echo "CRITICAL backup_too_old $age_hours hours"
        return 1
    else
        echo "OK last_backup $age_hours hours ago"
        return 0
    fi
}

# Check disk usage of the backup partition.
# Returns: 0 = OK (<80%), 1 = CRITICAL (>90%), 2 = WARN (80%-90%)
check_disk_usage() {
    if [ ! -d "$BACKUP_DIR" ]; then
        echo "ERROR backup_dir_not_found: $BACKUP_DIR"
        return 1
    fi
    local df_out=$(df -P "$BACKUP_DIR" 2>/dev/null)
    if [ -z "$df_out" ]; then
        echo "ERROR disk_check_failed"
        return 1
    fi
    local used_percent=$(echo "$df_out" | tail -1 | awk '{print $5}' | tr -d '%')
    if ! [[ "$used_percent" =~ ^[0-9]+$ ]]; then
        echo "ERROR disk_usage_invalid: $used_percent"
        return 1
    fi
    if [ $used_percent -gt $DISK_CRIT_THRESHOLD ]; then
        echo "CRITICAL disk_usage ${used_percent}%"
        return 1
    elif [ $used_percent -gt $DISK_WARN_THRESHOLD ]; then
        echo "WARN disk_usage ${used_percent}%"
        return 2
    else
        echo "OK disk_usage ${used_percent}%"
        return 0
    fi
}

# Send an alert mail (falls back to syslog)
send_alert() {
    local subject="$1"
    local body="$2"
    if command -v mail >/dev/null 2>&1; then
        echo -e "$body" | mail -s "$subject" "$ALERT_EMAIL"
        echo "Alert sent to $ALERT_EMAIL"
    else
        echo "WARNING: 'mail' command not available. Skipping alert."
        logger "MySQL Backup Alert: $subject | $body"
    fi
}

# ----- Main -----
echo "=== Backup Health Check ($(date)) on $HOSTNAME ==="

# Run each check exactly once, capturing output and return code
output1=$(check_last_backup)
res1=$?
echo "$output1"
output2=$(check_disk_usage)
res2=$?
echo "$output2"

# Decide whether to alert
alert_needed=false
if [ $res1 -eq 1 ] || [ $res1 -eq 2 ] || [ $res2 -eq 1 ]; then
    alert_needed=true
fi
if [ "$alert_needed" = true ]; then
    subject="MySQL backup problem - $HOSTNAME"
    body="[Backup] $output1\n[Disk] $output2\n\nPlease check the backup directory: $BACKUP_DIR"
    send_alert "$subject" "$body"
else
    echo "All checks OK."
fi

# ----- Write Prometheus metrics (atomic rename) -----
(
    echo "# HELP mysql_backup_last_success_age_hours Age of last successful backup in hours, -1 if none"
    echo "# TYPE mysql_backup_last_success_age_hours gauge"
    latest_entry=$(find "$BACKUP_DIR" -name "*.sql.gz" -type f -printf '%T@ %p\n' 2>/dev/null | sort -n | tail -1)
    if [ -z "$latest_entry" ]; then
        echo "mysql_backup_last_success_age_hours -1"
    else
        mtime_epoch=$(echo "$latest_entry" | awk '{split($1,a,"."); print a[1]}')
        now=$(date +%s)
        age_hours=$(( (now - mtime_epoch) / 3600 ))
        echo "mysql_backup_last_success_age_hours $age_hours"
    fi
    echo ""
    echo "# HELP mysql_backup_disk_usage_percent Disk usage of the backup partition (%)"
    echo "# TYPE mysql_backup_disk_usage_percent gauge"
    df -P "$BACKUP_DIR" 2>/dev/null | tail -1 | awk '{gsub(/%/,"",$5); print "mysql_backup_disk_usage_percent", $5}'
) > "$PROM_TMP_FILE" && mv "$PROM_TMP_FILE" "$PROM_FILE"
if [ $? -eq 0 ]; then
    echo "Prometheus metrics written to $PROM_FILE"
else
    echo "ERROR: Failed to write Prometheus metrics"
fi
echo "=== Check completed ==="
```

MySQL restore script:

```bash
#!/bin/bash
# ===================================================================
# MySQL Point-in-Time Recovery (PITR) script
# Restores from a full backup plus binlogs to a target time
# Author: BridgeLi
# Version: 1.0
#
# Usage:
#   ./mysql-pitr-restore.sh "2025-09-24 10:00:00" [db_name]
#   ./mysql-pitr-restore.sh --dry-run "2025-09-24 10:00:00" [db_name]
#   ./mysql-pitr-restore.sh --help
#
# Requires: mysql, mysqlbinlog, gzip, find, sort
# ===================================================================
set -euo pipefail  # strict mode: exit on error

# ----- Defaults -----
BACKUP_DIR="/project/backup/mysql/dbs"
BINLOG_DIR="/project/backup/mysql/binlogs"
RESTORE_DIR="/tmp/mysql_restore_$$"  # PID suffix avoids collisions
CNF_FILE="/project/backup/mysql/my.cnf"
LOG_FILE="$BACKUP_DIR/restore.log"
DRY_RUN=0
DEBUG=0
TARGET_TIME=""
TARGET_DB=""

# ----- Functions -----
usage() {
    cat << 'EOF'
Usage: ./mysql-pitr-restore.sh [options] <target time 'YYYY-MM-DD HH:MM:SS'> [db_name]

Options:
  --dry-run   Simulate without actually restoring
  --debug     Enable debug output
  --help      Show this help

Examples:
  ./mysql-pitr-restore.sh "2025-09-24 10:00:00"
  ./mysql-pitr-restore.sh --dry-run "2025-09-24 10:00:00" mydb
  ./mysql-pitr-restore.sh --debug "2025-09-24 12:30:00"

Notes:
  - The full backup must contain a CHANGE MASTER TO statement (binlog position)
  - Binlogs must be .gz files named like mysql-bin.000001.gz
  - Make sure nothing is writing to the database during the restore!
EOF
}

log() {
    local level="${1}"
    shift
    echo "[$(date '+%F %T')] [$level] $*" | tee -a "$LOG_FILE"
}

debug() {
    [[ $DEBUG -eq 1 ]] && log "DEBUG" "$@"
    return 0   # avoid tripping set -e when DEBUG=0
}

cleanup() {
    if [[ -d "$RESTORE_DIR" ]]; then
        debug "Cleaning temp directory: $RESTORE_DIR"
        rm -rf "$RESTORE_DIR"
    fi
}
trap cleanup EXIT

confirm_proceed() {
    log "WARN" "About to restore to point in time: $TARGET_TIME"
    if [[ -n "$TARGET_DB" ]]; then
        log "INFO" "Restoring only database: $TARGET_DB"
    fi
    log "WARN" "Make sure MySQL has no concurrent writes, or the data may end up inconsistent!"
    read -p "Continue? [y/N]: " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        log "INFO" "Cancelled by user"
        exit 1
    fi
}

find_latest_full_backup() {
    local latest_file=""
    local latest_time=""
    local file time_str backup_time
    for file in "$BACKUP_DIR"/*.sql{,.gz}; do
        [[ -f "$file" ]] || continue
        # Extract the timestamp: matches -YYYY-MM-DD_HHMMSS.sql or .sql.gz
        if [[ "$file" =~ -([0-9]{4}-[0-9]{2}-[0-9]{2}_[0-9]{6})\.sql(\.gz)?$ ]]; then
            time_str="${BASH_REMATCH[1]}"
            backup_time="${time_str:0:10} ${time_str:11:2}:${time_str:13:2}:${time_str:15:2}"
            debug "Found backup: $file -> time: $backup_time"
            if [[ "$backup_time" < "$TARGET_TIME" ]]; then
                if [[ -z "$latest_time" || "$backup_time" > "$latest_time" ]]; then
                    latest_file="$file"
                    latest_time="$backup_time"
                fi
            fi
        else
            debug "Skipping non-matching file: $file"
        fi
    done
    if [[ -z "$latest_file" ]]; then
        log "ERROR" "No full backup earlier than $TARGET_TIME found"
        return 1
    fi
    echo "$latest_file|$latest_time"
}

extract_binlog_position() {
    local backup_file="$1"
    local content_cmd="gzip -dc"  # default: .gz
    [[ "$backup_file" == *.sql ]] && content_cmd="cat"
    local line
    line=$(eval "$content_cmd" "$backup_file" | sed -n "s/.*CHANGE MASTER TO MASTER_LOG_FILE='\([^']*\)',.*MASTER_LOG_POS=\([0-9]*\).*/\1 \2/p" | head -1)
    if [[ -z "$line" ]]; then
        log "ERROR" "Cannot extract the binlog position from the backup; was it taken with --master-data=2?"
        return 1
    fi
    echo "$line"
}

# ----- Argument parsing -----
while [[ $# -gt 0 ]]; do
    case $1 in
        --dry-run) DRY_RUN=1; shift ;;
        --debug)   DEBUG=1; shift ;;
        --help)    usage; exit 0 ;;
        -*)        log "ERROR" "Unknown option: $1"; usage; exit 1 ;;
        *)         break ;;
    esac
done
if [[ $# -lt 1 ]]; then
    log "ERROR" "Missing target time argument"
    usage
    exit 1
fi
TARGET_TIME="$1"
if [[ ! "$TARGET_TIME" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}\ [0-9]{2}:[0-9]{2}:[0-9]{2}$ ]]; then
    log "ERROR" "Invalid time format, expected 'YYYY-MM-DD HH:MM:SS'"
    exit 1
fi
TARGET_DB="${2:-}"

# Log everything
exec > >(tee -a "$LOG_FILE") 2>&1

# ----- Main flow -----
log "INFO" "Starting PITR restore, target time: $TARGET_TIME"
if (( DRY_RUN )); then
    log "DRYRUN" "Running in simulation mode (--dry-run); nothing will actually be restored"
fi

# 1. Find the newest full backup earlier than the target time
log "INFO" "Looking for the newest full backup earlier than $TARGET_TIME..."
result=$(find_latest_full_backup) || exit 1
FULL_BACKUP=$(echo "$result" | cut -d'|' -f1)
FULL_TIME=$(echo "$result" | cut -d'|' -f2)
log "INFO" "Using full backup: $FULL_BACKUP (time: $FULL_TIME)"

# 2. User confirmation
if (( !DRY_RUN )); then
    confirm_proceed
fi

# 3. Restore the full backup
log "INFO" "Importing full backup..."
if (( DRY_RUN )); then
    log "DRYRUN" "Would import: $FULL_BACKUP"
else
    mkdir -p "$RESTORE_DIR"
    if [[ "$FULL_BACKUP" == *.gz ]]; then
        if gzip -dc "$FULL_BACKUP" | mysql --defaults-extra-file="$CNF_FILE"; then
            log "INFO" "Full restore succeeded"
        else
            log "ERROR" "Full import failed"
            exit 1
        fi
    else
        if mysql --defaults-extra-file="$CNF_FILE" < "$FULL_BACKUP"; then
            log "INFO" "Full restore succeeded"
        else
            log "ERROR" "Full import failed"
            exit 1
        fi
    fi
fi

# 4. Extract the binlog start position
log "INFO" "Extracting binlog start position..."
position_line=$(extract_binlog_position "$FULL_BACKUP") || exit 1
read -r START_FILE START_POS <<< "$position_line"
log "INFO" "Applying binlogs from: $START_FILE, position: $START_POS"

# 5. Apply binlogs up to the target time
log "INFO" "Applying incremental binlogs..."
applied=0
for binlog_gz in $(find "$BINLOG_DIR" -name "*.gz" | sort); do
    binlog_base=$(basename "$binlog_gz" .gz)
    # Skip logs older than the start file
    if [[ "$binlog_base" < "$START_FILE" ]]; then
        continue
    fi
    log "INFO" "Processing binlog: $binlog_base"
    # Decompress into the temp directory
    mkdir -p "$RESTORE_DIR/binlogs"
    tmp_binlog="$RESTORE_DIR/binlogs/$binlog_base"
    if (( DRY_RUN )); then
        log "DRYRUN" "Would decompress and apply: $binlog_gz -> $tmp_binlog"
        applied=1
        continue
    fi
    gzip -dc "$binlog_gz" > "$tmp_binlog"
    # Build the mysqlbinlog command
    mysqlbinlog_cmd=(
        mysqlbinlog
        --start-position="$START_POS"
        --stop-datetime="$TARGET_TIME"
        ${TARGET_DB:+--database="$TARGET_DB"}
        "$tmp_binlog"
    )
    mysql_cmd=(mysql --defaults-extra-file="$CNF_FILE")
    debug "Running: ${mysqlbinlog_cmd[*]} | ${mysql_cmd[*]}"
    if "${mysqlbinlog_cmd[@]}" | "${mysql_cmd[@]}"; then
        log "INFO" "Applied binlog: $binlog_base"
        applied=1
    else
        ret=$?
        log "INFO" "Binlog apply finished (target time reached or interrupted), exit code: $ret"
        applied=1
        break  # important: do not process later binlogs
    fi
    rm -f "$tmp_binlog"
    START_POS=4  # subsequent files start right after the event header
done

# 6. Report
if (( applied == 0 )); then
    log "WARN" "No binlogs were applied; check that binlogs exist and the time range makes sense"
fi
if (( DRY_RUN )); then
    log "DRYRUN" "Simulation finished. Remove --dry-run for a real restore."
else
    log "INFO" "Restore complete: recovered to $TARGET_TIME"
    log "INFO" "Verify data consistency and key business flows immediately."
fi
exit 0
```

my.cnf file (mode 600):

```ini
[client]
user=your_username
password=your_password
host=localhost
port=3306
```

crontab entries:

```
# Full backup every day at 2:00
0 2 * * * /project/backup/mysql/backup_per_db.sh
# Incremental binlog backup at the top of every hour
0 * * * * /project/backup/mysql/backup_binlog.sh
# Health check every day at 3:10
10 3 * * * /project/backup/mysql/check_backup_health.sh
```

If you get `bash: ./backup_per_db.sh: /bin/bash^M: bad interpreter: No such file or directory`, the script has Windows line endings. Fix it with vim: open the file (`vim backup_per_db.sh`) and in command mode run:

```
:set fileformat=unix
:wq
```
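The retention logic in the full backup script hinges on parsing the `-YYYY-MM-DD_HHMMSS.` timestamp out of each filename. That parsing can be exercised on its own before trusting it with `rm -f`; here is a small standalone sketch (hypothetical filenames, same regex as the script):

```shell
#!/usr/bin/env bash
# Standalone sketch of the retention check used by cleanup_old:
# parse "-YYYY-MM-DD_HHMMSS." out of a backup filename and print its age in days.
age_days_of() {
    local file="$1" now_ts
    now_ts=$(date +%s)
    if [[ "$file" =~ -([0-9]{4})-([0-9]{2})-([0-9]{2})_([0-9]{2})([0-9]{2})([0-9]{2})\. ]]; then
        local dt="${BASH_REMATCH[1]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]} ${BASH_REMATCH[4]}:${BASH_REMATCH[5]}:${BASH_REMATCH[6]}"
        local file_ts
        file_ts=$(date -d "$dt" +%s) || return 1
        echo $(( (now_ts - file_ts) / 86400 ))
    else
        return 1   # filename does not carry a backup timestamp
    fi
}

# A backup stamped 10 days ago should be past a 7-day retention window.
old_name="mydb-$(date -d '10 days ago' +%Y-%m-%d_%H%M%S).sql.gz"
echo "$old_name is $(age_days_of "$old_name") days old"
```

Checking the regex in isolation like this also confirms that unrelated files (without the timestamp pattern) are left alone by the cleanup pass.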
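The binlog script's "is this log newer than the last one backed up" decision relies on `sort -V` version ordering, which survives the rollover from mysql-bin.000099 to mysql-bin.000100 where plain lexicographic comparison of longer suffixes could mislead. The comparison can be sketched in isolation:

```shell
#!/usr/bin/env bash
# Sketch of the version-order comparison the binlog backup script uses
# to decide whether a log file is strictly newer than the last one copied.
is_newer() {
    local last="$1" candidate="$2" greater
    greater=$(printf '%s\n%s' "$last" "$candidate" | sort -V | tail -1)
    [[ "$greater" == "$candidate" && "$candidate" != "$last" ]]
}

is_newer "mysql-bin.000009" "mysql-bin.000010" && echo "000010 is newer than 000009"
is_newer "mysql-bin.000010" "mysql-bin.000009" || echo "000009 is not newer than 000010"
```

The same helper shape makes the skip conditions in the main loop easy to reason about: equal names and older names both fall through to `continue`.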

October 23, 2025 · 11 min · 2332 words · Bridge Li

A One-Click Docker Deployment Script for ELK (with Chinese Word Segmentation)

A few days ago the company needed full-text search, so I wrote a script that deploys an ELK stack with Docker in one shot:

```bash
#!/bin/bash
set -e
echo "=================================================="
echo "🚀 Deploying ELK + MySQL sync (Chinese analyzer + fixed index)"
echo "=================================================="

# ==================== Configuration (edit these) ====================
MYSQL_HOST="192.168.124.6"        # ✏️ your MySQL IP
MYSQL_USER="root"                 # user with read access
MYSQL_PASSWORD="123456"
MYSQL_DB="ams"                    # database name
ELASTIC_PASSWORD="i*B4j6eD+g0e"   # ES password (8+ chars, mixed case + digits)
ES_VERSION="8.11.3"               # must match the IK plugin version
LOGSTASH_VERSION="8.11.3"
# ====================================================================

# Project layout
ROOT_DIR="/project/elastic-sync"
mkdir -p "$ROOT_DIR"
cd "$ROOT_DIR"

echo "📁 Creating project directory structure"
mkdir -p config/mysql config data/es data/kibana logs/logstash plugins/ik

# ------- Download the IK analyzer plugin -------
IK_URL="https://release.infinilabs.com/analysis-ik/stable/elasticsearch-analysis-ik-${ES_VERSION}.zip"
IK_DIR="$ROOT_DIR/plugins/ik"
if [ ! -f "$IK_DIR/plugin-descriptor.properties" ]; then
    echo "📥 Downloading IK analyzer plugin v${ES_VERSION}..."
    wget -q "$IK_URL" -O /tmp/ik.zip
    unzip -q /tmp/ik.zip -d "$IK_DIR"
    rm /tmp/ik.zip
    chown -R 1000:1000 "$IK_DIR"
    echo "✅ IK plugin installed"
else
    echo "ℹ️ IK plugin already present, skipping"
fi

# ------- Generate elasticsearch.yml -------
cat > config/elasticsearch.yml << EOF
cluster.name: production-cluster
node.name: node-1
node.roles: [ data, master, ingest ]
path:
  data: /usr/share/elasticsearch/data
  logs: /usr/share/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.type: single-node
xpack.security.enabled: true
xpack.security.http.ssl.enabled: false
xpack.monitoring.collection.enabled: true
EOF

# ------- Generate logstash.yml -------
cat > config/logstash.yml << EOF
http.host: "0.0.0.0"
xpack.monitoring.enabled: false
config.reload.automatic: false
EOF

# ------- Generate logstash.conf -------
cat > config/logstash.conf << EOF
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://$MYSQL_HOST:3306/$MYSQL_DB?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai"
    jdbc_user => "$MYSQL_USER"
    jdbc_password => "$MYSQL_PASSWORD"
    jdbc_driver_library => "/usr/share/logstash/mysql/mysql-connector-java-8.0.30.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_default_timezone => "Asia/Shanghai"
    statement => "
      SELECT * FROM article
      WHERE updated_at >= :sql_last_value
      ORDER BY updated_at ASC
    "
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
    last_run_metadata_path => "/usr/share/logstash/.logstash_jdbc_last_run"
    schedule => "*/2 * * * *"
  }
}
filter {
  # Strip HTML tags from the content field
  if [content] {
    mutate {
      gsub => [ "content", "<[^>]*>", "" ]   # remove all HTML tags: <p>, <div>, <span>, ...
    }
    # Optional: collapse extra whitespace
    mutate {
      gsub => [ "content", "\s+", " " ]      # merge runs of spaces/newlines/tabs into one space
    }
    mutate {
      strip => ["content"]                   # trim leading/trailing spaces
    }
  }
  # If del_flag is 1, tag the record for deletion
  if [del_flag] == 1 {
    mutate { add_tag => ["delete_document"] }
  }
}
output {
  if "delete_document" in [tags] {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      user => "elastic"
      password => "$ELASTIC_PASSWORD"
      action => "delete"
      document_id => "%{id}"
      index => "articles"
    }
  } else {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      user => "elastic"
      password => "$ELASTIC_PASSWORD"
      index => "articles"          # ✅ fixed index
      document_id => "%{id}"       # supports varchar ids
      doc_as_upsert => true        # updates overwrite
    }
  }
  stdout { codec => rubydebug }
}
EOF

# ------- Generate docker-compose.yml -------
cat > docker-compose.yml << EOF
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:$ES_VERSION
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms2g -Xmx2g
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false
      - ELASTIC_PASSWORD=$ELASTIC_PASSWORD
    ports:
      - "9200:9200"
    volumes:
      - ./data/es:/usr/share/elasticsearch/data
      - ./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./plugins/ik:/usr/share/elasticsearch/plugins/ik
    networks:
      - elastic
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
  kibana:
    image: docker.elastic.co/kibana/kibana:$LOGSTASH_VERSION
    container_name: kibana
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
      - ELASTICSEARCH_HOSTS=["http://elasticsearch:9200"]
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=$ELASTIC_PASSWORD
      - SERVER_NAME=kibana.example.com
      - I18N_LOCALE=zh-CN
    ports:
      - "5601:5601"
    volumes:
      - ./data/kibana:/usr/share/kibana/data
    networks:
      - elastic
    restart: unless-stopped
  logstash:
    image: docker.elastic.co/logstash/logstash:$LOGSTASH_VERSION
    container_name: logstash
    depends_on:
      - elasticsearch
    volumes:
      - ./config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - ./config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logs/logstash:/var/log/logstash
      - ./config/mysql:/usr/share/logstash/mysql
    networks:
      - elastic
    restart: unless-stopped
networks:
  elastic:
    driver: bridge
EOF

# ------- Download the MySQL JDBC driver -------
JDBC_JAR="config/mysql/mysql-connector-java-8.0.30.jar"
if [ ! -f "$JDBC_JAR" ]; then
    echo "📥 Downloading MySQL JDBC driver..."
    mkdir -p config/mysql
    wget -q https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.30/mysql-connector-java-8.0.30.jar -O "$JDBC_JAR"
    echo "✅ JDBC driver downloaded"
fi

# ------- Set permissions -------
echo "🔐 Setting directory permissions"
chown -R 1000:1000 data/es plugins/ik
chmod -R 755 config logs
chmod -R 777 data/kibana

# ------- Generate the index creation script -------
cat > create-index.sh << 'EOF'
#!/bin/bash
echo "🔄 Creating index 'articles' with the IK analyzer..."
curl -X PUT "http://localhost:9200/articles" \
  -u elastic:$ELASTIC_PASSWORD \
  -H "Content-Type: application/json" \
  -d '
{
  "settings": {
    "index": { "number_of_shards": 1, "number_of_replicas": 1 },
    "analysis": {
      "analyzer": {
        "ik_analyzer": {
          "type": "custom",
          "tokenizer": "ik_max_word",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "id": { "type": "keyword" },
      "title": { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" },
      "content": { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" },
      "author": { "type": "keyword" },
      "created_at": { "type": "date" },
      "updated_at": {
```
...
"type": "date" } } } }&#8217; && echo "✅ 索引 &#8216;articles&#8217; 创建成功!" EOF chmod +x create-index.sh \# &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;- 完成 &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;- echo "==================================================" echo "🎉 部署准备完成!" echo "==================================================" echo "" echo "📌 下一步操作:" echo "1. 检查配置:nano setup-elastic-sync.sh (修改 MySQL 地址、用户、密码)" echo "2. 增加权限:chmod +x setup-elastic-sync.sh" echo "3. 执行脚本:sudo ./setup-elastic-sync.sh" echo "4. 启动服务:sudo docker compose up -d" echo "5. 创建索引:bash ./create-index.sh" echo "6. 访问 Kibana:http://你的服务器IP:5601" echo " &#8211; 用户:elastic" echo " &#8211; 密码:$ELASTIC_PASSWORD" echo "" echo "💡 首次运行会全量同步 articles 表,之后每 2 分钟增量同步" echo "🔍 在 Kibana 中搜索中文(如“阿里巴巴”),应能命中结果" echo "" echo "⚠️ 注意:必须先运行 create-index.sh 再让 Logstash 写入,否则分词无效!" 如果 kibana 用 elastic 用户连不上,通过 Elasticsearch 的 Security API 为内置用户 kibana_system 修改密码: ...
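The two gsub rules in the Logstash filter above, stripping `<[^>]*>` and collapsing `\s+`, are ordinary regex replacements, so they can be checked in isolation before wiring them into the pipeline. A minimal Java sketch of the same cleanup chain (the class and method names here are mine, not from the script):

```java
// Reproduces the Logstash filter's cleanup of the `content` field:
// strip HTML tags, collapse whitespace runs, then trim.
public class HtmlCleanDemo {

    // Mirrors gsub "<[^>]*>" -> "" and gsub "\s+" -> " " plus strip
    static String clean(String html) {
        return html
                .replaceAll("<[^>]*>", "")  // remove every HTML tag
                .replaceAll("\\s+", " ")    // merge whitespace runs into one space
                .trim();                    // strip leading/trailing spaces
    }

    public static void main(String[] args) {
        System.out.println(clean("<p>Hello <b>Elastic</b></p>\n<div> world </div>"));
        // prints: Hello Elastic world
    }
}
```

Note that `<[^>]*>` removes tags but keeps their inner text, which is exactly what you want for full-text indexing of the article body.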

August 28, 2025 · 5 min · 891 words · Bridge Li

A Problem with Redis incr

A while back I had a requirement that needed a counter, and I naturally reached for Redis's incr. The code looked roughly like this:

@Scheduled(cron = "0 0/10 * * * ?")
public void test() {
    long yellowInterval = 5L;
    boolean isReachable = false; // TODO
    long delta = isReachable ? -1L : 1L;
    ValueOperations<String, Long> valueOperations = redisTemplate.opsForValue();
    String key = RFID_NETWORK_STATUS_PREFIX + rfDevice.getId();
    Long increment = valueOperations.increment(key, delta);
    if (increment == null || increment <= 0L) {
    } else if (increment >= yellowInterval) {
        if (Constants.RFID_NETWORK_STATUS_GREEN.equals(rfDevice.getNetworkStatus())) {
        }
        if (increment >= yellowInterval * 3) {
            valueOperations.set(key, yellowInterval * 3);
        }
    }
}

The idea is simply to count after a certain operation, adding or subtracting one, and once the counter reaches a certain value, pin it to a fixed value. Let's ignore concurrency here. When it ran, I hit a problem, with the following error: ...
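Setting Redis and concurrency aside, the counting rule above (plus one while unreachable, minus one while reachable, capped at three times the threshold) can be sketched in plain Java; the class name and the in-memory field are illustrative stand-ins for the Redis key:

```java
// In-memory sketch of the counter logic: +1 while the device is unreachable,
// -1 while reachable, clamped so the value never exceeds yellowInterval * 3.
public class ClampCounterDemo {

    static final long YELLOW_INTERVAL = 5L;
    private long count = 0L;

    long record(boolean isReachable) {
        count += isReachable ? -1L : 1L;
        if (count >= YELLOW_INTERVAL * 3) {
            count = YELLOW_INTERVAL * 3;  // the set() call in the post caps the value here
        }
        return count;
    }

    public static void main(String[] args) {
        ClampCounterDemo c = new ClampCounterDemo();
        long last = 0;
        for (int i = 0; i < 20; i++) {
            last = c.record(false);  // device stays unreachable
        }
        System.out.println(last);  // prints 15, not 20: the cap kicked in
    }
}
```

The cap matters because without it the counter would grow without bound, and a device that recovers after a long outage would take just as long to count back down.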

July 30, 2025 · 2 min · 274 words · Bridge Li

Two Small Issues When Packaging a Gradle Project

Packaging failed with an error about a duplicate jar. The details:

* What went wrong:
Execution failed for task ':web-admin:bootJar'.
> Entry BOOT-INF/lib/jaxb-core-4.0.3.jar is a duplicate but no duplicate handling strategy has been set.
  Please refer to https://docs.gradle.org/7.6.3/dsl/org.gradle.api.tasks.Copy.html#org.gradle.api.tasks.Copy:duplicatesStrategy for details.

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.

When packaging the Spring Boot application, the file BOOT-INF/lib/jaxb-core-4.0.3.jar appeared twice, and the build script had no strategy configured for handling duplicate files. By default Gradle does not allow duplicates, so the build failed. To fix it, only the build configuration needs to change:

bootJar {
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE
}

This tells Gradle to exclude duplicate files when it finds them. Depending on your needs you can also choose other strategies such as DuplicatesStrategy.INCLUDE or DuplicatesStrategy.WARN. Then clean and rebuild the project. ...

May 17, 2025 · 1 min · 144 words · Bridge Li

nginx Proxying an SSE Endpoint Fails with: (failed) net::ERR_HTTP2_PROTOCOL_ERROR

A while ago I wrote a short article about implementing message push with SSE in Spring MVC. When the system later went live, I hit a small problem: in the browser's Network tab the endpoint reported (failed) net::ERR_HTTP2_PROTOCOL_ERROR. This is usually caused by an incompatibility between the HTTP/2 protocol and certain characteristics of SSE. SSE is a server-push technique built on HTTP, and it requires the connection to stay open so the server can keep streaming updates to the client. The nginx version we use is nginx/1.26.1, and the following configuration solves it:

server {
    listen 80;
    server_name bridgeli.com;

    access_log /var/log/nginx/bridgeli_access.log;
    error_log /var/log/nginx/bridgeli_error.log warn;

    location ^~ /admin-api/ {
        proxy_pass http://192.168.124.34:8080/;
        # Make sure HTTP/1.1 is used, which SSE needs
        proxy_http_version 1.1;
        # Clear the proxied "Connection" header to avoid potential problems
        proxy_set_header Connection "";
        # Raise the timeouts so long-lived connections are not closed
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
        # To disable HTTP/2 entirely (optional):
        # note this setting belongs in the server block, not the location block
        # listen 80 http2 off;  particularly useful for HTTP/2 protocol errors
    }

    location / {
        root /project/www/bridgeli/admin/;
        try_files $uri $uri/ /index.html;
    }
}

April 3, 2025 · 1 min · 75 words · Bridge Li

Two Small Issues with Druid Monitoring

A while back, for a small requirement, I needed to expose the Druid monitoring page. As we all know the monitoring URL is http://ip:port/druid/index.html, but at the time it kept returning 404. After some digging, the jar had been brought in as:

<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.2.24</version>
</dependency>

Presumably because it was not pulled in through the starter package, it has to be wired up manually:

package cn.bridgeli.demo;

import com.alibaba.druid.support.jakarta.StatViewServlet;
import com.alibaba.druid.support.jakarta.WebStatFilter;
import com.alibaba.druid.util.Utils;
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.io.IOException;

@Configuration
public class DruidConfig {

    @Bean
    public ServletRegistrationBean<StatViewServlet> statViewServlet() {
        ServletRegistrationBean<StatViewServlet> servletRegistrationBean = new ServletRegistrationBean<>(new StatViewServlet(), "/druid/*");
        // Set the login username and password
        servletRegistrationBean.addInitParameter("loginUsername", "BridgeLi");
        servletRegistrationBean.addInitParameter("loginPassword", "BridgeLi");
        return servletRegistrationBean;
    }

    @Bean
    public FilterRegistrationBean<WebStatFilter> webStatFilter() {
        FilterRegistrationBean<WebStatFilter> filterRegistrationBean = new FilterRegistrationBean<>(new WebStatFilter());
        filterRegistrationBean.addUrlPatterns("/*");
        filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        return filterRegistrationBean;
    }
}

The other small issue: after wiring it up this way, the bottom of the monitoring page shows an Alibaba ad, which we normally want to remove. The ad markup lives in support/http/resources/js/common.js inside the jar. Some people online suggest unpacking the jar, deleting it and repacking, but then you are stuck using that repacked jar. A quick search shows removal is actually simple: add one more configuration to the class above to filter it out: ...

March 12, 2025 · 2 min · 310 words · Bridge Li

Implementing Message Push with SSE in Spring MVC

It has been a long while since I last wrote anything; since large models appeared I find it harder and harder to get interested in writing articles, because any question can go straight to the model. A few days ago the company had a requirement they wanted to implement with SSE. I had never written one before, so I let the model write it directly, and the implementation turned out to be very simple:

Write an SSE service that creates connections and sends messages:

package cn.bridgeli.demo;

import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections4.CollectionUtils;
import org.springframework.stereotype.Service;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

@Slf4j
@Getter
@Service
public class SseService {

    private final Map<String, SseEmitter> emitters = new ConcurrentHashMap<>();

    public SseEmitter stream(String usrId) {
        SseEmitter emitter = emitters.computeIfAbsent(usrId, k -> new SseEmitter(Long.MAX_VALUE));
        emitter.onCompletion(() -> {
            log.info("SSE emitter completed");
            emitters.remove(usrId);
        });
        emitter.onError((throwable) -> {
            log.error("Error occurred in SSE emitter", throwable);
            emitter.complete();
            emitters.remove(usrId);
        });
        emitter.onTimeout(() -> {
            log.warn("SSE emitter timed out");
            emitter.complete();
            emitters.remove(usrId);
        });
        // Optional: send an initial event to the client once connected
        try {
            emitter.send(SseEmitter.event().name("connect").data("connected"));
        } catch (IOException e) {
            log.error("Error occurred while sending initial event", e);
            emitter.completeWithError(e);
        }
        return emitter;
    }

    public void send(List<String> userIds, String name, Object object) {
        if (!emitters.isEmpty()) {
            // Iterate over the users' SseEmitters and push the data
            if (CollectionUtils.isEmpty(userIds)) {
                emitters.forEach((userId, emitter) -> {
                    try {
                        emitter.send(SseEmitter.event().name(name).data(object));
                    } catch (IOException e) {
                        // On send failure, remove that user's emitter
                        log.error("Error occurred while sending event to user {}", userId, e);
                        emitter.completeWithError(e);
                        emitters.remove(userId);
                    }
                });
            } else {
                userIds.forEach(userId -> {
                    SseEmitter emitter = emitters.get(userId);
                    if (emitter != null) {
                        try {
                            emitter.send(SseEmitter.event().name(name).data(object));
                        } catch (IOException e) {
                            // On send failure, remove that user's emitter
                            log.error("Error occurred while sending event to user {}", userId, e);
                            emitter.completeWithError(e);
                            emitters.remove(userId);
                        }
                    }
                });
            }
        }
    }
}

Write the corresponding Controller to expose the endpoint to the frontend:

package cn.bridgeli.demo;

import cn.bridgeli.BaseAuthController;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@Slf4j
@RestController
@Tag(name = "SSE push service")
@RequestMapping("/auth/common/sse")
public class SseController extends BaseAuthController {

    @Resource
    private SseService sseService;

    @GetMapping(value = "/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public SseEmitter stream() {
        return sseService.stream(getLoginUsr().getUsrId());
    }
}

The actual message push:

package cn.bridgeli.demo;

import cn.bridgeli.common.SseService;
import cn.bridgeli.monitor.MonitorService;
import cn.bridgeli.vo.CpuInfoVo;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

import java.util.Map;

@Component
@Slf4j
public class ScheduledTask {

    @Resource
    private MonitorService monitorService;
    @Resource
    private SseService sseService;

    /**
     * Runs once a minute
     */
    @Scheduled(cron = "0 0/1 * * * ?")
    public void updateOrderStatus() {
        log.info("=============scheduled task=============");
        Map<String, SseEmitter> emitters = sseService.getEmitters();
        if (null == emitters || emitters.isEmpty()) {
            log.info("sse emitters is empty");
            return;
        }
        CpuInfoVo cpuData = monitorService.getCpuData();
        sseService.send(null, "cpu", cpuData);
    }
}

In essence, the frontend connects and a connection is created and stored; whenever some other part of the system produces a message, it is pushed over that connection. My example uses oshi to read CPU usage, giving real-time monitoring of the CPU.
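As a side note on what SseEmitter actually writes to the response: each event travels in the plain-text text/event-stream format, `event:` and `data:` lines terminated by a blank line (the SSE spec allows optional whitespace after the colon). A hand-rolled sketch of that framing, not Spring's own serializer:

```java
// Builds the text/event-stream frame that an event like
// SseEmitter.event().name("cpu").data(json) corresponds to on the wire.
public class SseFrameDemo {

    static String frame(String event, String data) {
        return "event: " + event + "\n"
             + "data: " + data + "\n\n";  // the blank line terminates the event
    }

    public static void main(String[] args) {
        System.out.print(frame("cpu", "{\"usage\":42}"));
    }
}
```

Knowing this format makes debugging easier: you can watch the raw frames with a plain curl request against the /stream endpoint instead of going through the browser.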

February 27, 2025 · 2 min · 338 words · Bridge Li