How mvn install and mvn install:install-file differ with respect to transitive dependencies

One-sentence summary: Maven's transitive dependencies are not carried by the JAR file itself, but by the metadata in the `.pom` file. If you install a JAR with `mvn install:install-file` without supplying a POM, it becomes a "broken bridge": other projects can depend on you, but they cannot reach the libraries you depend on.

A common scenario in day-to-day development: a third-party vendor hands you a closed-source SDK as a bare `.jar`, and we install it with:

```bash
mvn install:install-file -Dfile=your-jar.jar -DgroupId=com.yourcompany -DartifactId=your-jar -Dversion=1.0.0 -Dpackaging=jar
```

Everything looks fine and the project compiles, but at runtime it fails with:

```
java.lang.NoClassDefFoundError: com/yourcompany/yourjar/YourClass
```

Strange: your JAR really does depend on that library, so why didn't Maven download it automatically? The answer lies in how transitive dependencies work.

What are transitive dependencies? Suppose your project A depends on library B, and B in turn depends on C and D. In Maven you don't need to declare C and D explicitly; Maven adds them to the classpath automatically. That is a transitive dependency.

```xml
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>32.1.3-jre</version>
</dependency>
```

Guava's POM declares dependencies on jsr305, checker-qual, and other libraries, so when you pull in Guava, those dependencies come along automatically. But there is a precondition: the transitive-dependency information must exist in the artifact's `.pom` file.

mvn install: ships with its own "navigation map". When you run `mvn install` in a standard Maven project, Maven compiles the code, runs the tests, and packages `target/my-app-1.0.jar`; at the same time it installs the processed `pom.xml` as `my-app-1.0.pom` into the local repository (e.g. `~/.m2/repository/com/example/my-app/1.0/`). That `.pom` file keeps the `<dependencies>` block intact, so when another project references `com.example:my-app:1.0`, Maven reads the `.pom` and automatically resolves and downloads all transitive dependencies. ...
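The premise above points to the practical fix: install the vendor JAR together with a POM that declares its dependencies, so the metadata travels with the artifact. A sketch of the command (the coordinates and the `your-jar.pom` path are illustrative):

```bash
# Without -DpomFile, install-file generates a minimal POM with an
# empty <dependencies> section, which is exactly the "broken bridge".
mvn install:install-file \
  -Dfile=your-jar.jar \
  -DpomFile=your-jar.pom \
  -DgroupId=com.yourcompany \
  -DartifactId=your-jar \
  -Dversion=1.0.0 \
  -Dpackaging=jar
```

If the vendor supplies no POM at all, you can write a small one yourself listing the SDK's runtime dependencies and pass it via `-DpomFile`.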

March 13, 2026 · 1 min · 182 words · Bridge Li

Using different JDK versions for different projects in local development

At work, few of us are responsible for just one project, and for various reasons different projects may use different JDK versions. When those versions are incompatible, a local `mvn clean compile` fails, and you have to go change the environment variables; if you develop several projects in parallel, you end up switching back and forth constantly, which is a real nuisance. Maven solved this problem for us long ago. All we need to do is:

Edit Maven's `toolchains.xml` file (user level: `${user.home}/.m2/toolchains.xml`, overridable with `-t /path/to/toolchains.xml`; global level: `${maven.conf}/toolchains.xml`, overridable with `-gt`). The stock license header and sample comments are left out below; the two `<toolchain>` entries are what matters:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<toolchains xmlns="http://maven.apache.org/TOOLCHAINS/1.1.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://maven.apache.org/TOOLCHAINS/1.1.0 http://maven.apache.org/xsd/toolchains-1.1.0.xsd">
  <toolchain>
    <type>jdk</type>
    <provides>
      <version>17</version>
    </provides>
    <configuration>
      <jdkHome>D:\J2EE\Java\jdk-17</jdkHome>
    </configuration>
  </toolchain>
  <toolchain>
    <type>jdk</type>
    <provides>
      <version>8</version>
    </provides>
    <configuration>
      <jdkHome>D:\J2EE\Java\jdk1.8.0_311</jdkHome>
    </configuration>
  </toolchain>
</toolchains>
```

Then edit the project's `pom.xml`:

```xml
<profiles>
  <profile>
    <id>dev</id>
    <activation>
      <activeByDefault>false</activeByDefault>
    </activation>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-toolchains-plugin</artifactId>
          <version>3.1.0</version>
          <executions>
            <execution>
              <goals>
                <goal>toolchain</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <toolchains>
              <jdk>
                <version>8</version>
              </jdk>
            </toolchains>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```

With that in place, compiling is just:

```bash
mvn clean compile -Pdev
```

and the build automatically uses JDK 8 rather than whatever the system environment variables point to. There is an even simpler option: change the configuration in `pom.xml` itself, where all we have to do is change: ...
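One detail worth knowing: with toolchains, Maven itself still runs on the JDK from `JAVA_HOME`; only toolchain-aware plugins (compiler, javadoc, surefire, and so on) use the JDK configured above. A quick way to confirm which JDK actually compiled the output is to inspect the class-file version of a compiled class (the class path below is illustrative):

```bash
# major version 52 corresponds to Java 8, 61 to Java 17
javap -verbose target/classes/cn/bridgeli/Demo.class | grep "major version"
```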

December 25, 2025 · 3 min · 522 words · Bridge Li

A script for one-command deployment of an ELK stack (with Chinese word segmentation) using Docker

A few days ago we needed full-text search at work, so I wrote a script that deploys an ELK stack with Docker in one shot. Here it is:

```bash
#!/bin/bash
set -e

echo "=================================================="
echo "🚀 Deploying ELK + MySQL sync (Chinese analysis + fixed index)"
echo "=================================================="

# ==================== Configuration (edit me) ====================
MYSQL_HOST="192.168.124.6"        # ✏️ change to your MySQL IP
MYSQL_USER="root"                 # user with read access
MYSQL_PASSWORD="123456"           # that user's password
MYSQL_DB="ams"                    # database name
ELASTIC_PASSWORD="i*B4j6eD+g0e"   # ES password (>= 8 chars, upper/lower case + digits)
ES_VERSION="8.11.3"               # must match the IK plugin version
LOGSTASH_VERSION="8.11.3"
# ================================================================

# Project layout
ROOT_DIR="/project/elastic-sync"
mkdir -p "$ROOT_DIR"
cd "$ROOT_DIR"
echo "📁 Creating project directory structure"
mkdir -p config/mysql config data/es data/kibana logs/logstash plugins/ik

# ---------- Download the IK analysis plugin ----------
IK_URL="https://release.infinilabs.com/analysis-ik/stable/elasticsearch-analysis-ik-${ES_VERSION}.zip"
IK_DIR="$ROOT_DIR/plugins/ik"
if [ ! -f "$IK_DIR/plugin-descriptor.properties" ]; then
    echo "📥 Downloading IK analysis plugin v${ES_VERSION}…"
    wget -q "$IK_URL" -O /tmp/ik.zip
    unzip -q /tmp/ik.zip -d "$IK_DIR"
    rm /tmp/ik.zip
    chown -R 1000:1000 "$IK_DIR"
    echo "✅ IK plugin installed"
else
    echo "ℹ️ IK plugin already present, skipping"
fi

# ---------- Generate elasticsearch.yml ----------
cat > config/elasticsearch.yml << EOF
cluster.name: production-cluster
node.name: node-1
node.roles: [ data, master, ingest ]
path:
  data: /usr/share/elasticsearch/data
  logs: /usr/share/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.type: single-node
xpack.security.enabled: true
xpack.security.http.ssl.enabled: false
xpack.monitoring.collection.enabled: true
EOF

# ---------- Generate logstash.yml ----------
cat > config/logstash.yml << EOF
http.host: "0.0.0.0"
xpack.monitoring.enabled: false
config.reload.automatic: false
EOF

# ---------- Generate logstash.conf ----------
cat > config/logstash.conf << EOF
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://$MYSQL_HOST:3306/$MYSQL_DB?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=Asia/Shanghai"
    jdbc_user => "$MYSQL_USER"
    jdbc_password => "$MYSQL_PASSWORD"
    jdbc_driver_library => "/usr/share/logstash/mysql/mysql-connector-java-8.0.30.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_default_timezone => "Asia/Shanghai"
    statement => "
      SELECT * FROM article
      WHERE updated_at >= :sql_last_value
      ORDER BY updated_at ASC
    "
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
    last_run_metadata_path => "/usr/share/logstash/.logstash_jdbc_last_run"
    schedule => "*/2 * * * *"
  }
}
filter {
  # Clean HTML tags out of the content field
  if [content] {
    mutate {
      gsub => [ "content", "<[^>]*>", "" ]   # strip all HTML tags: <p>, <div>, <span>, etc.
    }
    # Optional: clean up surplus whitespace
    mutate {
      gsub => [ "content", "\s+", " " ]      # collapse runs of whitespace (spaces, newlines, tabs) into one space
    }
    mutate {
      strip => ["content"]                   # trim leading/trailing spaces
    }
  }
  # If del_flag is 1, tag the record for deletion
  if [del_flag] == 1 {
    mutate { add_tag => ["delete_document"] }
  }
}
output {
  if "delete_document" in [tags] {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      user => "elastic"
      password => "$ELASTIC_PASSWORD"
      action => "delete"
      document_id => "%{id}"
      index => "articles"
    }
  } else {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      user => "elastic"
      password => "$ELASTIC_PASSWORD"
      index => "articles"        # ✅ fixed index
      document_id => "%{id}"     # supports varchar ids
      doc_as_upsert => true      # update in place
    }
  }
  stdout { codec => rubydebug }
}
EOF

# ---------- Generate docker-compose.yml ----------
cat > docker-compose.yml << EOF
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:$ES_VERSION
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms2g -Xmx2g
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false
      - ELASTIC_PASSWORD=$ELASTIC_PASSWORD
    ports:
      - "9200:9200"
    volumes:
      - ./data/es:/usr/share/elasticsearch/data
      - ./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./plugins/ik:/usr/share/elasticsearch/plugins/ik
    networks:
      - elastic
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
  kibana:
    image: docker.elastic.co/kibana/kibana:$LOGSTASH_VERSION
    container_name: kibana
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
      - ELASTICSEARCH_HOSTS=["http://elasticsearch:9200"]
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=$ELASTIC_PASSWORD
      - SERVER_NAME=kibana.example.com
      - I18N_LOCALE=zh-CN
    ports:
      - "5601:5601"
    volumes:
      - ./data/kibana:/usr/share/kibana/data
    networks:
      - elastic
    restart: unless-stopped
  logstash:
    image: docker.elastic.co/logstash/logstash:$LOGSTASH_VERSION
    container_name: logstash
    depends_on:
      - elasticsearch
    volumes:
      - ./config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - ./config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logs/logstash:/var/log/logstash
      - ./config/mysql:/usr/share/logstash/mysql
    networks:
      - elastic
    restart: unless-stopped
networks:
  elastic:
    driver: bridge
EOF

# ---------- Download the MySQL JDBC driver ----------
JDBC_JAR="config/mysql/mysql-connector-java-8.0.30.jar"
if [ ! -f "$JDBC_JAR" ]; then
    echo "📥 Downloading MySQL JDBC driver…"
    mkdir -p config/mysql
    wget -q https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.30/mysql-connector-java-8.0.30.jar -O "$JDBC_JAR"
    echo "✅ JDBC driver downloaded"
fi

# ---------- Set permissions ----------
echo "🔐 Setting directory permissions"
chown -R 1000:1000 data/es plugins/ik
chmod -R 755 config logs
chmod -R 777 data/kibana

# ---------- Generate the index-creation script ----------
cat > create-index.sh << 'EOF'
#!/bin/bash
echo "🔄 Creating index 'articles' with the IK analyzer…"
curl -X PUT "http://localhost:9200/articles" \
  -u elastic:$ELASTIC_PASSWORD \
  -H "Content-Type: application/json" \
  -d '
{
  "settings": {
    "index": {
      "number_of_shards": 1,
      "number_of_replicas": 1
    },
    "analysis": {
      "analyzer": {
        "ik_analyzer": {
          "type": "custom",
          "tokenizer": "ik_max_word",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "id": { "type": "keyword" },
      "title": { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" },
      "content": { "type": "text", "analyzer": "ik_max_word", "search_analyzer": "ik_smart" },
      "author": { "type": "keyword" },
      "created_at": { "type": "date" },
      "updated_at": { "type": "date" }
    }
  }
}' && echo "✅ Index 'articles' created!"
EOF
chmod +x create-index.sh

# ---------- Done ----------
echo "=================================================="
echo "🎉 Deployment prepared!"
echo "=================================================="
echo ""
echo "📌 Next steps:"
echo "1. Review the config:   nano setup-elastic-sync.sh (MySQL host, user, password)"
echo "2. Make it executable:  chmod +x setup-elastic-sync.sh"
echo "3. Run the script:      sudo ./setup-elastic-sync.sh"
echo "4. Start the stack:     sudo docker compose up -d"
echo "5. Create the index:    bash ./create-index.sh"
echo "6. Open Kibana:         http://<your-server-ip>:5601"
echo "   - user:     elastic"
echo "   - password: $ELASTIC_PASSWORD"
echo ""
echo "💡 The first run does a full sync of the articles table; after that it syncs incrementally every 2 minutes"
echo "🔍 Searching Chinese text in Kibana (e.g. “阿里巴巴”) should return hits"
echo ""
echo "⚠️ Note: run create-index.sh BEFORE Logstash writes, otherwise the analyzers will not take effect!"
```

If Kibana cannot connect as the `elastic` user, change the password of the built-in `kibana_system` user through Elasticsearch's Security API: ...
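After the stack is up and the index created, a quick sanity check that the IK plugin is actually in effect is the `_analyze` API (credentials as configured in the script above; the sample text is arbitrary). A healthy install splits the sentence into word tokens rather than single characters:

```bash
curl -s -u elastic:'i*B4j6eD+g0e' \
  -H 'Content-Type: application/json' \
  -X POST 'http://localhost:9200/articles/_analyze' \
  -d '{"analyzer": "ik_max_word", "text": "阿里巴巴"}'
```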

August 28, 2025 · 5 min · 891 words · Bridge Li

A small problem with Redis incr

A while back I had a requirement that needed a counter, and I naturally reached for Redis's incr. The code looked roughly like this (elided bodies marked with `// ...`):

```java
@Scheduled(cron = "0 0/10 * * * ?")
public void test() {
    long yellowInterval = 5L;
    boolean isReachable = false; // TODO
    long delta = isReachable ? -1L : 1L;
    ValueOperations<String, Long> valueOperations = redisTemplate.opsForValue();
    String key = RFID_NETWORK_STATUS_PREFIX + rfDevice.getId();
    Long increment = valueOperations.increment(key, delta);
    if (increment == null || increment <= 0L) {
        // ...
    } else if (increment >= yellowInterval) {
        if (Constants.RFID_NETWORK_STATUS_GREEN.equals(rfDevice.getNetworkStatus())) {
            // ...
        }
        if (increment >= yellowInterval * 3) {
            valueOperations.set(key, yellowInterval * 3);
        }
    }
}
```

The idea: after a certain operation, bump the counter up or down by one, and once it reaches a certain value, pin it to a cap. Let's ignore concurrency here. When it ran, it failed with the following error: ...
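The error message itself is elided above, but independent of it, one classic trap with incr is worth recalling: at the Redis level, INCR/INCRBY only work when the stored value is a plain integer string, so the serializer configured on the RedisTemplate matters. A redis-cli sketch (key name illustrative):

```
SET counter:demo 5
INCRBY counter:demo 1      # 6
SET counter:demo "\"5\""   # e.g. what a JSON value serializer might store
INCRBY counter:demo 1      # (error) ERR value is not an integer or out of range
```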

July 30, 2025 · 2 min · 274 words · Bridge Li

Two small issues when packaging a Gradle project

Packaging failed with a duplicate-jar error:

```
* What went wrong:
Execution failed for task ':web-admin:bootJar'.
> Entry BOOT-INF/lib/jaxb-core-4.0.3.jar is a duplicate but no duplicate handling strategy has been set.
  Please refer to https://docs.gradle.org/7.6.3/dsl/org.gradle.api.tasks.Copy.html#org.gradle.api.tasks.Copy:duplicatesStrategy for details.

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
```

While packaging the Spring Boot application, the file BOOT-INF/lib/jaxb-core-4.0.3.jar appeared twice, and the build script had no strategy for handling duplicate files. Gradle does not allow duplicates by default, so the build fails. To fix it, only the build configuration needs changing:

```groovy
bootJar {
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE
}
```

This tells Gradle to exclude duplicate files when it finds them; depending on your needs you can also choose other strategies such as DuplicatesStrategy.INCLUDE or DuplicatesStrategy.WARN. Then clean and rebuild the project. ...

May 17, 2025 · 1 min · 144 words · Bridge Li

nginx proxying an SSE endpoint fails with: (failed) net::ERR_HTTP2_PROTOCOL_ERROR

A while ago I wrote a short article about implementing message push with SSE in Spring MVC. When the system went live, we ran into a small problem: in the browser's network tab, the endpoint failed with (failed) net::ERR_HTTP2_PROTOCOL_ERROR. This is usually caused by an incompatibility between the HTTP/2 protocol and certain characteristics of SSE. SSE is a server-push technique built on top of HTTP; it requires the connection to stay open so the server can keep streaming updates to the client. The nginx version we use is nginx/1.26.1, and the following configuration was all it took to fix it:

```nginx
server {
    listen 80;
    server_name bridgeli.com;

    access_log /var/log/nginx/bridgeli_access.log;
    error_log  /var/log/nginx/bridgeli_error.log warn;

    location ^~ /admin-api/ {
        proxy_pass http://192.168.124.34:8080/;
        # Make sure HTTP/1.1 is used so SSE works through the proxy
        proxy_http_version 1.1;
        # Clear the proxied "Connection" header to avoid potential problems
        proxy_set_header Connection "";
        # Raise the timeouts so long-lived connections are not closed
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
        # To disable HTTP/2 (optional)
        # Note: that setting belongs in the server block, not the location block
        # listen 80 http2 off;  (particularly useful for HTTP/2 protocol errors)
    }

    location / {
        root /project/www/bridgeli/admin/;
        try_files $uri $uri/ /index.html;
    }
}
```
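To verify the fix outside the browser, you can open the stream with curl through nginx; with the configuration above, events should keep arriving instead of the request erroring out. The endpoint path below is illustrative:

```bash
# -N disables output buffering so events print as they arrive
curl -N -H 'Accept: text/event-stream' \
  'http://bridgeli.com/admin-api/auth/common/sse/stream'
```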

April 3, 2025 · 1 min · 75 words · Bridge Li

Two small issues with the Druid monitoring page

A while back a small feature required exposing Druid's monitoring page. As we all know, the monitor lives at http://ip:port/druid/index.html, but at the time it kept returning 404. After some research, it came down to how the jar was brought in:

```xml
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.2.24</version>
</dependency>
```

Because it was not pulled in through the starter package, the servlet and filter have to be registered manually:

```java
package cn.bridgeli.demo;

import com.alibaba.druid.support.jakarta.StatViewServlet;
import com.alibaba.druid.support.jakarta.WebStatFilter;
import com.alibaba.druid.util.Utils;
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.io.IOException;

@Configuration
public class DruidConfig {

    @Bean
    public ServletRegistrationBean<StatViewServlet> statViewServlet() {
        ServletRegistrationBean<StatViewServlet> servletRegistrationBean = new ServletRegistrationBean<>(new StatViewServlet(), "/druid/*");
        // set the login username and password
        servletRegistrationBean.addInitParameter("loginUsername", "BridgeLi");
        servletRegistrationBean.addInitParameter("loginPassword", "BridgeLi");
        return servletRegistrationBean;
    }

    @Bean
    public FilterRegistrationBean<WebStatFilter> webStatFilter() {
        FilterRegistrationBean<WebStatFilter> filterRegistrationBean = new FilterRegistrationBean<>(new WebStatFilter());
        filterRegistrationBean.addUrlPatterns("/*");
        filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        return filterRegistrationBean;
    }
}
```

The other small issue: with this setup, the bottom of the monitoring page shows an Alibaba advertisement, which we usually want to remove. The ad code sits in support/http/resources/js/common.js inside the jar. Some people online suggest unzipping the jar, deleting it, and repacking, but then everyone has to use that repacked jar. A little more research shows removal is actually much simpler: just add one more Filter registration to the class above that filters it out: ...

March 12, 2025 · 2 min · 310 words · Bridge Li

Implementing message push with SSE in Spring MVC

It has been a long while since I wrote anything; since large language models came along, my enthusiasm for writing articles keeps fading: when there is a problem, just ask the model. A few days ago work had a requirement we wanted to implement with SSE. I had never written it before, so I had the model write it directly, and the implementation turned out to be super simple:

Write an SSE service that creates connections and sends messages:

```java
package cn.bridgeli.demo;

import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections4.CollectionUtils;
import org.springframework.stereotype.Service;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

@Slf4j
@Getter
@Service
public class SseService {

    private final Map<String, SseEmitter> emitters = new ConcurrentHashMap<>();

    public SseEmitter stream(String usrId) {
        SseEmitter emitter = emitters.computeIfAbsent(usrId, k -> new SseEmitter(Long.MAX_VALUE));
        emitter.onCompletion(() -> {
            log.info("SSE emitter completed");
            emitters.remove(usrId);
        });
        emitter.onError((throwable) -> {
            log.error("Error occurred in SSE emitter", throwable);
            emitter.complete();
            emitters.remove(usrId);
        });
        emitter.onTimeout(() -> {
            log.warn("SSE emitter timed out");
            emitter.complete();
            emitters.remove(usrId);
        });
        // Optional: send an initial event to the client once connected
        try {
            emitter.send(SseEmitter.event().name("connect").data("连接成功"));
        } catch (IOException e) {
            log.error("Error occurred while sending initial event", e);
            emitter.completeWithError(e);
        }
        return emitter;
    }

    public void send(List<String> userIds, String name, Object object) {
        if (!emitters.isEmpty()) {
            // Iterate over the users' SseEmitters and push the data
            if (CollectionUtils.isEmpty(userIds)) {
                emitters.forEach((userId, emitter) -> {
                    try {
                        emitter.send(SseEmitter.event().name(name).data(object));
                    } catch (IOException e) {
                        // On send failure, remove this user's emitter
                        log.error("Error occurred while sending event to user {}", userId, e);
                        emitter.completeWithError(e);
                        emitters.remove(userId);
                    }
                });
            } else {
                userIds.forEach(userId -> {
                    SseEmitter emitter = emitters.get(userId);
                    if (emitter != null) {
                        try {
                            emitter.send(SseEmitter.event().name(name).data(object));
                        } catch (IOException e) {
                            // On send failure, remove this user's emitter
                            log.error("Error occurred while sending event to user {}", userId, e);
                            emitter.completeWithError(e);
                            emitters.remove(userId);
                        }
                    }
                });
            }
        }
    }
}
```

Write the corresponding Controller exposing the endpoint to the front end:

```java
package cn.bridgeli.demo;

import cn.bridgeli.BaseAuthController;
import io.swagger.v3.oas.annotations.tags.Tag;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@Slf4j
@RestController
@Tag(name = "SSE push service")
@RequestMapping("/auth/common/sse")
public class SseController extends BaseAuthController {

    @Resource
    private SseService sseService;

    @GetMapping(value = "/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public SseEmitter stream() {
        return sseService.stream(getLoginUsr().getUsrId());
    }
}
```

And the actual message push:

```java
package cn.bridgeli.demo;

import cn.bridgeli.common.SseService;
import cn.bridgeli.monitor.MonitorService;
import cn.bridgeli.vo.CpuInfoVo;
import jakarta.annotation.Resource;
import lombok.extern.slf4j.Slf4j;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

import java.util.Map;

@Component
@Slf4j
public class ScheduledTask {

    @Resource
    private MonitorService monitorService;
    @Resource
    private SseService sseService;

    /**
     * Runs once per minute
     */
    @Scheduled(cron = "0 0/1 * * * ?")
    public void updateOrderStatus() {
        log.info("=============scheduled task=============");
        Map<String, SseEmitter> emitters = sseService.getEmitters();
        if (null == emitters || emitters.isEmpty()) {
            log.info("sse emitters is empty");
            return;
        }
        CpuInfoVo cpuData = monitorService.getCpuData();
        sseService.send(null, "cpu", cpuData);
    }
}
```

In essence: when the front end connects, we create and store the connection; elsewhere, whenever a message is produced, we push it. My example uses oshi to read the CPU usage, giving real-time CPU monitoring.

February 27, 2025 · 2 min · 338 words · Bridge Li

National SME Financing Comprehensive Credit Service Platform - Provincial Node Data Interface Specification - Henan Provincial Center for Business Environment and Social Credit Construction

Before we start, a digression. Years ago I watched a video arguing that programmers are an anti-traditional group: other groups treat a mastered technique as an internal trade secret, while programmers love open source, especially GPL-style open source: not only do they publish their own work without reservation, they require software that uses theirs to be open as well, and it is exactly this openness that fueled the flourishing of the internet. My current employer is a finance-related company, and for certain reasons the state requires us to report data to a provincial platform. That platform's technology is webservice-based, quite different from the HTTP interfaces we are used to today, so a while ago I took quite a few wrong turns writing this, and there was no reference material online. I therefore decided to publish the core code for anyone who needs it. One caveat: these are the interfaces of our Henan provincial system, and I don't know whether other provinces are identical. The interface document the platform provided is named 全国中小企业融资综合信用服务平台省级节点数据接口规范V5.3.pdf, and its cover reads: 全国中小企业融资综合信用服务平台省级节点数据接口规范, 国家公共信用信息中心, 河南省营商环境和社会信用建设中心, July 2024.

Dependencies:

```xml
<dependency>
    <groupId>cn.hutool</groupId>
    <artifactId>hutool-all</artifactId>
    <version>5.8.10</version>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk15on</artifactId>
    <version>1.70</version>
</dependency>
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk15on</artifactId>
    <version>1.70</version>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-frontend-jaxws</artifactId>
    <version>4.0.5</version>
</dependency>
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-transports-http</artifactId>
    <version>4.0.5</version>
</dependency>
```

The reporting utility class, in which queryData performs queries and sendData uploads data:

```java
package cn.bridgeli.demo;

import cn.hutool.core.util.RandomUtil;
import com.alibaba.fastjson.JSONObject;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.codec.CharEncoding;
import org.apache.commons.lang3.StringUtils;
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.jaxws.endpoint.dynamic.JaxWsDynamicClientFactory;
import org.bouncycastle.jcajce.provider.asymmetric.ec.BCECPrivateKey;
import org.bouncycastle.util.encoders.Base64;
import org.bouncycastle.util.encoders.Hex;

import javax.xml.namespace.QName;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.math.BigInteger;
import java.security.PublicKey;

@Slf4j
public class XyhnUtil {

    private static String pubKey;
    private static String queryPubKey;
    private static PublicKey reportPublicKey;
    // private static XyhnConfig xyhnConfig = SpringUtils.getBean(XyhnConfig.class);
    private static XyhnConfig xyhnConfig = null;

    /**
     * Authorized query against the provincial platform
     *
     * @param method name of the method to invoke
     * @param object parameters of the query method (excluding publicKey and appKey)
     * @return the decrypted data returned by the provincial platform
     */
    public static String queryData(String method, JSONObject object) {
        // 1. Assemble the message
        if (StringUtils.isEmpty(queryPubKey)) {
            queryPubKey = convertFileToBase64(xyhnConfig.getQueryPubKeyPath());
        }
        object.put("publicKey", queryPubKey);
        object.put("appKey", xyhnConfig.getQueryAppKey());
        // 2. Send the message
        String jsonRes = callInterface("query", method, object.toJSONString());
        // 3. Decrypt the response
        JSONObject json = JSONObject.parseObject(jsonRes);
        Boolean success = json.getBoolean("success");
        if ("uploadLicense".equals(method)) {
            if (null == success || !success) {
                return jsonRes;
            }
        } else if ("cancelLicense".equals(method)) {
            if (null != success && success) {
                return jsonRes;
            }
        }
        BigInteger d = new BigInteger(xyhnConfig.getQueryPriKey(), 16);
        BCECPrivateKey bcecPrivateKey = GMUtil.getPrivatekeyFromD(d);
        String key0 = json.getString("key");
        String data = json.getString("data");
        String signatureData = json.getString("signatureData");
        byte[] decode = Hex.decode(key0);
        // SM2 decryption
        byte[] bytes1 = GMUtil.sm2Decrypt(decode, bcecPrivateKey);
        // SM4 decryption
        String content = GMUtil.sm4Decrypt(new String(bytes1), data);
        log.info("content: " + content);
        // 4. Verify the signature
        File file = base64ToFileEx(json.getString("pubKey"));
        PublicKey publicKey = GMUtil.getPublickeyFromX509File(file);
        byte[] signatureData1 = Hex.decode(signatureData);
        boolean verifyRes = GMUtil.verifySm3WithSm2(content.getBytes(), xyhnConfig.getUserId().getBytes(), signatureData1, publicKey);
        log.info("method signature verification result: " + verifyRes);
        if (!verifyRes) {
            return null;
        }
        return content;
    }

    private static PublicKey getPublicKey() {
        // 1. Assemble the message
        if (StringUtils.isEmpty(pubKey)) {
            pubKey = convertFileToBase64(xyhnConfig.getPubKeyPath());
        }
        JSONObject jsonObject = new JSONObject();
        jsonObject.put("key", xyhnConfig.getAppKey());
        JSONObject object = new JSONObject();
        object.put("requestData", jsonObject);
        object.put("publicKey", pubKey);
        // 2. Send the message
        String jsonRes = callInterface("report", "getPublicKey", object.toJSONString());
        // 3. Decrypt the response
        JSONObject json = JSONObject.parseObject(jsonRes);
        BigInteger d = new BigInteger(xyhnConfig.getPriKey(), 16);
        BCECPrivateKey bcecPrivateKey = GMUtil.getPrivatekeyFromD(d);
        String key0 = json.getString("key");
        String data = json.getString("data");
        String signatureData = json.getString("signatureData");
        byte[] decode = Hex.decode(key0);
        // SM2 decryption
        byte[] bytes1 = GMUtil.sm2Decrypt(decode, bcecPrivateKey);
        // SM4 decryption
        String returnData = GMUtil.sm4Decrypt(new String(bytes1), data);
        log.info("returnData:" + returnData);
        // 4. Verify the signature
        File file = base64ToFileEx(returnData);
        PublicKey publicKey = GMUtil.getPublickeyFromX509File(file);
        byte[] signatureData1 = Hex.decode(signatureData);
        boolean verifyRes = GMUtil.verifySm3WithSm2(returnData.getBytes(), xyhnConfig.getUserId().getBytes(), signatureData1, publicKey);
        log.info("getPublicKey signature verification result: " + verifyRes);
        if (!verifyRes) {
            return null;
        }
        return publicKey;
    }

    public static String sendData(String jsonStr, String method) {
        log.info("Calling provincial platform method: " + method + ", params: " + jsonStr);
        try {
            if (StringUtils.isEmpty(pubKey)) {
                // load our public key
                pubKey = convertFileToBase64(xyhnConfig.getPubKeyPath());
            }
            if (reportPublicKey == null) {
                // call the interface to fetch the platform's public key
                reportPublicKey = getPublicKey();
            }
            // 1. Digest signature, SM2-with-SM3
            byte[] msg = jsonStr.getBytes(CharEncoding.ISO_8859_1);
            byte[] userIdBytes = xyhnConfig.getUserId().getBytes();
            BigInteger d = new BigInteger(xyhnConfig.getPriKey(), 16);
            BCECPrivateKey bcecPrivateKey = GMUtil.getPrivatekeyFromD(d);
            byte[] sig = GMUtil.signSm3WithSm2(msg, userIdBytes, bcecPrivateKey);
            String signature = Hex.toHexString(sig);
            // 2. SM4-encrypt the data payload
            String key1 = RandomUtil.randomString(16);
            String jsonobj = GMUtil.sm4Encrypt(key1, jsonStr);
            // 3. SM2-encrypt the 16-character random key
            byte[] datamsg = GMUtil.sm2Encrypt(key1.getBytes(), reportPublicKey);
            String s = Hex.toHexString(datamsg);
            // 4. Assemble and send the message
            JSONObject jsonObject2 = new JSONObject();
            jsonObject2.put("requestData", jsonobj);           // SM4-encrypted payload
            jsonObject2.put("key", s);                         // SM2-encrypted 16-char random key
            jsonObject2.put("signatureData", signature);       // signature
            jsonObject2.put("publicKey", pubKey);              // our public key
            jsonObject2.put("appKey", xyhnConfig.getAppKey()); // data-authorization key
            String setFPRes = callInterface("report", method, jsonObject2.toJSONString());
            return setFPRes;
        } catch (Exception ex) {
            log.error("Error calling the provincial platform interface", ex);
            return null;
        }
    }

    /**
     * Invoke the webservice
     *
     * @param type "report" to upload data, "query" to query
     */
    private static String callInterface(String type, String method, String json) {
        log.info("Calling provincial platform, type: " + type + ", interface: " + method + ", params: " + json);
        Client client = null;
        QName name = null;
        JaxWsDynamicClientFactory dcf = JaxWsDynamicClientFactory.newInstance();
        String result = null;
        try {
            if ("report".equals(type)) {
                client = dcf.createClient(xyhnConfig.getWsAddr());
                name = new QName(xyhnConfig.getNamespaceURI(), method);
            } else if ("query".equals(type)) {
                client = dcf.createClient(xyhnConfig.getQueryWsAddr());
                name = new QName(xyhnConfig.getQueryNamespaceURI(), method);
            }
            Object[] objects = client.invoke(name, json);
            result = objects[0].toString();
        } catch (Exception e) {
            log.error("Error calling the provincial platform interface", e);
        } finally {
            if (client != null) {
                try {
                    client.close();
                } catch (Exception e) {
                    log.error("Exception while closing the Client", e);
                }
            }
        }
        log.info("Calling provincial platform, type: " + type + ", interface: " + method + ", result: " + result);
        return result;
    }

    private static String convertFileToBase64(String imgPath) {
        byte[] data = null;
        // read the file into a byte array
        try (InputStream in = new FileInputStream(imgPath)) {
            data = new byte[in.available()];
            in.read(data);
        } catch (IOException e) {
            log.error("IOException", e);
        }
        // Base64-encode the byte array
        String base64Str = java.util.Base64.getEncoder().encodeToString(data);
        return base64Str;
    }

    private static File base64ToFileEx(String base64) {
        if (StringUtils.isBlank(base64)) {
            return null;
        }
        byte[] buff = Base64.decode(base64);
        File file = null;
        FileOutputStream fout = null;
        try {
            file = File.createTempFile("tmp", null);
            fout = new FileOutputStream(file);
            fout.write(buff);
            file.deleteOnExit();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (fout != null) {
                try {
                    fout.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
        return file;
    }
}
```

The helper class it uses (the excerpt is cut off mid-method here):

```java
package cn.bridgeli.demo;

import lombok.extern.slf4j.Slf4j;
import org.bouncycastle.asn1.ASN1EncodableVector;
import org.bouncycastle.asn1.ASN1Integer;
import org.bouncycastle.asn1.ASN1Sequence;
import org.bouncycastle.asn1.DERSequence;
import org.bouncycastle.asn1.gm.GMNamedCurves;
import org.bouncycastle.asn1.x9.X9ECParameters;
import org.bouncycastle.crypto.InvalidCipherTextException;
import org.bouncycastle.crypto.engines.SM2Engine;
import org.bouncycastle.crypto.params.ECDomainParameters;
import org.bouncycastle.crypto.params.ECPrivateKeyParameters;
import org.bouncycastle.crypto.params.ECPublicKeyParameters;
import org.bouncycastle.crypto.params.ParametersWithRandom;
import org.bouncycastle.jcajce.provider.asymmetric.ec.BCECPrivateKey;
import org.bouncycastle.jcajce.provider.asymmetric.ec.BCECPublicKey;
import org.bouncycastle.jcajce.spec.SM2ParameterSpec;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jce.spec.ECParameterSpec;
import org.bouncycastle.jce.spec.ECPrivateKeySpec;
import org.bouncycastle.util.encoders.Hex;

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.Key;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.security.Security;
import java.security.Signature;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Arrays;

@Slf4j
public class GMUtil {

    private static X9ECParameters x9ECParameters = GMNamedCurves.getByName("sm2p256v1");
    private static ECDomainParameters ecDomainParameters;
    private static ECParameterSpec ecParameterSpec;

    public static byte[] signSm3WithSm2(byte[] msg, byte[] userId, PrivateKey privateKey) {
        return rsAsn1ToPlainByteArray(signSm3WithSm2Asn1Rs(msg, userId, privateKey));
    }

    public static byte[] signSm3WithSm2Asn1Rs(byte[] msg, byte[] userId, PrivateKey privateKey) {
        try {
            SM2ParameterSpec parameterSpec = new SM2ParameterSpec(userId);
            Signature signer = Signature.getInstance("SM3withSM2", "BC");
            signer.setParameter(parameterSpec);
            signer.initSign(privateKey, new SecureRandom());
            signer.update(msg, 0, msg.length);
            byte[] sig = signer.sign();
            return sig;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static boolean verifySm3WithSm2(byte[] msg, byte[] userId, byte[] rs, PublicKey publicKey) {
        return verifySm3WithSm2Asn1Rs(msg, userId, rsPlainByteArrayToAsn1(rs), publicKey);
    }

    public static boolean verifySm3WithSm2Asn1Rs(byte[] msg, byte[] userId, byte[] rs, PublicKey publicKey) {
        try {
            SM2ParameterSpec parameterSpec = new SM2ParameterSpec(userId);
            Signature verifier = Signature.getInstance("SM3withSM2", "BC");
            verifier.setParameter(parameterSpec);
            verifier.initVerify(publicKey);
            verifier.update(msg, 0, msg.length);
            return verifier.verify(rs);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static byte[] changeC1C2C3ToC1C3C2(byte[] c1c2c3) {
        int c1Len = (x9ECParameters.getCurve().getFieldSize() + 7) / 8 * 2 + 1;
        byte[] result = new byte[c1c2c3.length];
        System.arraycopy(c1c2c3, 0, result, 0, c1Len);
        System.arraycopy(c1c2c3, c1c2c3.length - 32, result, c1Len, 32);
        System.arraycopy(c1c2c3, c1Len, result, c1Len + 32, c1c2c3.length - c1Len - 32);
        return result;
    }

    public static byte[] changeC1C3C2ToC1C2C3(byte[] c1c3c2) {
        int c1Len = (x9ECParameters.getCurve().getFieldSize() + 7) / 8 * 2 + 1;
        byte[] result = new byte[c1c3c2.length];
        System.arraycopy(c1c3c2, 0, result, 0, c1Len);
        System.arraycopy(c1c3c2, c1Len + 32, result, c1Len, c1c3c2.length - c1Len - 32);
        System.arraycopy(c1c3c2, c1Len, result, c1c3c2.length - 32, 32);
        return result;
    }

    public static byte[] sm2Decrypt(byte[] data, PrivateKey key) {
        return sm2DecryptOld(changeC1C3C2ToC1C2C3(data), key);
    }

    public static byte[] sm2Encrypt(byte[] data, PublicKey key) {
        return changeC1C2C3ToC1C3C2(sm2EncryptOld(data, key));
    }

    public static byte[] sm2EncryptOld(byte[] data, PublicKey key) {
        BCECPublicKey localECPublicKey = (BCECPublicKey) key;
        ECPublicKeyParameters ecPublicKeyParameters = new ECPublicKeyParameters(localECPublicKey.getQ(), ecDomainParameters);
        SM2Engine sm2Engine = new SM2Engine();
        sm2Engine.init(true, new ParametersWithRandom(ecPublicKeyParameters, new SecureRandom()));
        try {
            return sm2Engine.processBlock(data, 0, data.length);
        } catch (InvalidCipherTextException e) {
            throw new RuntimeException(e);
        }
    }

    public static byte[] sm2DecryptOld(byte[] data, PrivateKey key) {
        BCECPrivateKey localECPrivateKey = (BCECPrivateKey) key;
        ECPrivateKeyParameters ecPrivateKeyParameters = new ECPrivateKeyParameters(localECPrivateKey.getD(), ecDomainParameters);
        SM2Engine sm2Engine = new SM2Engine();
        sm2Engine.init(false, ecPrivateKeyParameters);
        try {
            return sm2Engine.processBlock(data, 0, data.length);
        } catch (InvalidCipherTextException e) {
            throw new RuntimeException(e);
        }
    }

    public static byte[] sm4Encrypt(byte[] keyBytes, byte[] plain) {
        byte[] keyBytes0;
        if (keyBytes.length != 16) {
            keyBytes0 = new byte[16];
            for (int i = 0; i < keyBytes0.length; ++i) {
                if (keyBytes.length > i) {
                    keyBytes0[i] = keyBytes[i];
                }
            }
            keyBytes = keyBytes0;
        }
        if (plain.length % 16 != 0) {
            keyBytes0 = new byte[16 * (plain.length / 16 + 1)];
            System.arraycopy(plain, 0, keyBytes0, 0, plain.length);
            plain = keyBytes0;
        }
        try {
            Key key = new SecretKeySpec(keyBytes, "SM4");
            Cipher out = Cipher.getInstance("SM4/ECB/NoPadding", "BC");
            out.init(1, key);
            return
```
out.doFinal(plain); } catch (Exception e) { throw new RuntimeException(e); } } public static byte[] sm4Decrypt(byte[] keyBytes, byte[] cipher) { byte[] keyBytes0; if (keyBytes.length != 16) { keyBytes0 = new byte[16]; for (int i = 0; i < keyBytes0.length; ++i) { if (keyBytes.length > i) { keyBytes0[i] = keyBytes[i]; } } keyBytes = keyBytes0; } if (cipher.length % 16 != 0) { keyBytes0 = new byte[16 * (cipher.length / 16 + 1)]; System.arraycopy(cipher, 0, keyBytes0, 0, cipher.length); cipher = keyBytes0; } try { Key key = new SecretKeySpec(keyBytes, "SM4"); Cipher in = Cipher.getInstance("SM4/ECB/NoPadding", "BC"); in.init(2, key); byte[] bytes = in.doFinal(cipher); for (int i = bytes.length &#8211; 1; i >= 0; &#8211;i) { if (bytes[i] != 0) { byte[] bytes1 = new byte[i + 1]; System.arraycopy(bytes, 0, bytes1, 0, i + 1); bytes = bytes1; i = -1; } } return bytes; } catch (Exception e) { throw new RuntimeException(e); } } public static String sm4Encrypt(String key, String plan) { byte[] keyBytes = new byte[16]; byte[] keyBytes0 = key.getBytes(StandardCharsets.UTF_8); for (int i = 0; i < keyBytes.length; ++i) { if (keyBytes0.length > i) { keyBytes[i] = keyBytes0[i]; } } byte[] cipher = plan.getBytes(StandardCharsets.UTF_8); byte[] bytes = sm4Encrypt(keyBytes, cipher); return Hex.toHexString(bytes).toUpperCase(); } public static String sm4Decrypt(String key, String cipher) { byte[] keyBytes = new byte[16]; byte[] keyBytes0 = key.getBytes(StandardCharsets.UTF_8); for (int i = 0; i < keyBytes.length; ++i) { if (keyBytes0.length > i) { keyBytes[i] = keyBytes0[i]; } } byte[] cipherbytes = Hex.decode(cipher); byte[] bytes = sm4Decrypt(keyBytes, cipherbytes); return new String(bytes, StandardCharsets.UTF_8); } private static byte[] bigIntToFixexLengthBytes(BigInteger rOrS) { byte[] rs = rOrS.toByteArray(); if (rs.length == 32) { return rs; } else if (rs.length == 33 && rs[0] == 0) { return Arrays.copyOfRange(rs, 1, 33); } else if (rs.length < 32) { byte[] result = new byte[32]; 
Arrays.fill(result, (byte) 0); System.arraycopy(rs, 0, result, 32 &#8211; rs.length, rs.length); return result; } else { throw new RuntimeException("err rs: " + Hex.toHexString(rs)); } } private static byte[] rsAsn1ToPlainByteArray(byte[] rsDer) { ASN1Sequence seq = ASN1Sequence.getInstance(rsDer); byte[] r = bigIntToFixexLengthBytes(ASN1Integer.getInstance(seq.getObjectAt(0)).getValue()); byte[] s = bigIntToFixexLengthBytes(ASN1Integer.getInstance(seq.getObjectAt(1)).getValue()); byte[] result = new byte[64]; System.arraycopy(r, 0, result, 0, r.length); System.arraycopy(s, 0, result, 32, s.length); return result; } private static byte[] rsPlainByteArrayToAsn1(byte[] sign) { if (sign.length != 64) { throw new RuntimeException("err rs. "); } else { BigInteger r = new BigInteger(1, Arrays.copyOfRange(sign, 0, 32)); BigInteger s = new BigInteger(1, Arrays.copyOfRange(sign, 32, 64)); ASN1EncodableVector v = new ASN1EncodableVector(); v.add(new ASN1Integer(r)); v.add(new ASN1Integer(s)); try { return (new DERSequence(v)).getEncoded("DER"); } catch (IOException var5) { throw new RuntimeException(var5); } } } public static BCECPrivateKey getPrivatekeyFromD(BigInteger d) { ECPrivateKeySpec ecPrivateKeySpec = new ECPrivateKeySpec(d, ecParameterSpec); return new BCECPrivateKey("EC", ecPrivateKeySpec, BouncyCastleProvider.CONFIGURATION); } public static PublicKey getPublickeyFromX509File(File file) { try { CertificateFactory cf = CertificateFactory.getInstance("X.509", "BC"); FileInputStream in = new FileInputStream(file); X509Certificate x509 = (X509Certificate) cf.generateCertificate(in); return x509.getPublicKey(); } catch (Exception var4) { throw new RuntimeException(var4); } } static { ecDomainParameters = new ECDomainParameters(x9ECParameters.getCurve(), x9ECParameters.getG(), x9ECParameters.getN()); ecParameterSpec = new ECParameterSpec(x9ECParameters.getCurve(), x9ECParameters.getG(), x9ECParameters.getN()); if (Security.getProvider("BC") == null) { 
Security.addProvider(new BouncyCastleProvider()); } } } 相关配置 package cn.bridgeli.demo; import lombok.Data; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.stereotype.Component; @Data @Component @ConfigurationProperties(prefix = "xyhn") //@RefreshScope public class XyhnConfig { private String userId; private String platformId; private String appKey; private String priKey; private String wsAddr; private String namespaceURI; private String pubKeyPath; private String queryAppKey; private String queryPriKey; private String queryWsAddr; private String queryNamespaceURI; private String queryPubKeyPath; } 以上是上报和查询数据的核心方法,下面是查询具体数据的封装 ...
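The SM4 helpers above zero-pad the key to 16 bytes and the plaintext to a multiple of the 16-byte block size, then strip trailing zeros after decryption. A minimal stand-alone sketch of just that padding logic (class and method names are my own, not part of the original code) — note the caveat that zero-padding silently truncates payloads whose real data ends in 0x00, which is why PKCS#5/PKCS#7 padding is usually preferred when you control both sides:

```java
import java.util.Arrays;

public class Sm4PaddingSketch {

    // Pad input with trailing zeros up to the next multiple of blockSize,
    // mirroring how sm4Encrypt prepares the plaintext above.
    static byte[] zeroPadToBlock(byte[] input, int blockSize) {
        if (input.length > 0 && input.length % blockSize == 0) {
            return input;
        }
        int paddedLen = blockSize * (input.length / blockSize + 1);
        return Arrays.copyOf(input, paddedLen); // copyOf zero-fills the tail
    }

    // Strip trailing zero bytes after decryption, as sm4Decrypt does.
    static byte[] stripTrailingZeros(byte[] bytes) {
        int end = bytes.length;
        while (end > 0 && bytes[end - 1] == 0) {
            end--;
        }
        return Arrays.copyOf(bytes, end);
    }

    public static void main(String[] args) {
        byte[] plain = {1, 2, 3};
        byte[] padded = zeroPadToBlock(plain, 16);
        System.out.println(padded.length);                                    // 16
        System.out.println(Arrays.equals(stripTrailingZeros(padded), plain)); // true
    }
}
```

The round trip works here only because the payload does not end in a zero byte; binary data ending in 0x00 would lose those bytes on the way back.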

September 24, 2024 · 13 min · 2571 words · Bridge Li

Enhancing Swagger Documentation with knife4j

Anyone who develops in Java will be familiar with Swagger, but I have never been fond of it: the UI is plain ugly and awkward to use. That is why knife4j was created in China as an enhancement on top of Swagger. As many projects now migrate from Java 8 to Java 17 and from Spring Boot 2 to Spring Boot 3, knife4j has kept pace with support and has become even more convenient. Here is a quick walkthrough. Add the dependency:

<dependency>
    <groupId>com.github.xiaoymin</groupId>
    <artifactId>knife4j-openapi3-jakarta-spring-boot-starter</artifactId>
    <version>4.5.0</version>
</dependency>

Note that the artifactId has changed completely. Also, Spring Boot 3 only supports the OpenAPI3 specification. The knife4j starter already pulls in the springdoc-openapi jars, so watch out for jar conflicts. Once the dependency is in place, the remaining configuration follows the springdoc-openapi project documentation; knife4j only provides the enhancement layer. To enable the knife4j enhancements, turn them on in the configuration file — although in my own testing, even with no configuration at all the docs are already available at http://ip:port/doc.html:

knife4j:
  enable: true
  basic:
    enable: true
    username: BridgeLi
    password: BridgeLi
springdoc:
  default-flat-param-object: true

Finally, annotate each Spring REST interface with the standard OpenAPI3 annotations. ...
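To illustrate that last step, here is a sketch of a controller documented with the standard OpenAPI3 annotations that springdoc (and therefore knife4j) picks up. The controller name, path, and descriptions are invented for illustration, and the snippet assumes spring-web plus the knife4j starter above on the classpath:

```java
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// @Tag groups the endpoints under one section in doc.html
@Tag(name = "Demo", description = "Demo endpoints for the knife4j docs")
@RestController
public class DemoController {

    // @Operation / @Parameter supply the summary and parameter docs rendered by knife4j
    @Operation(summary = "Greet a user", description = "Returns a greeting for the given name")
    @GetMapping("/hello")
    public String hello(@Parameter(description = "name to greet") @RequestParam String name) {
        return "Hello, " + name;
    }
}
```

With the starter on the classpath, this is all that is needed for the endpoint to show up at http://ip:port/doc.html; no knife4j-specific annotations are involved.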

June 10, 2024 · 1 min · 205 words · Bridge Li