IoT小程序在展示中央空调采集数据和实时运行状态上的应用

利用前端语言实现跨平台应用开发似乎是大势所趋。跨平台并不是一个新的概念,"一次编译、到处运行"是老牌服务端跨平台语言Java的一个基本特性。随着时代的发展,无论是后端开发语言还是前端开发语言,一切都在朝着减少工作量、降低工作成本的方向发展。

与后端开发语言不同,利用前端语言实现跨平台有先天的优势:后端语言Java跨平台需要将源代码编译为class字节码文件后,再放进Java虚拟机运行;而前端语言JavaScript是直接将源代码放进JavaScript解释器运行。这就使得以JavaScript为跨平台语言开发的应用可移植性非常强。

目前跨平台技术按照解决方案分类,主要分为Web跨平台、容器跨平台、小程序跨平台。这里,我们主要以小程序跨端为例,测试对比IoT小程序和其他小程序在开发和应用上的优缺点。说到小程序,大家肯定先想到微信小程序,实际上各大互联网公司:支付宝、百度、头条等等都有自己的小程序。小程序跨平台和Web跨平台十分类似,都是基于前端语言实现;小程序跨平台的优势在于可以调用系统底层能力,例如蓝牙、相机等,性能方面也优于Web跨平台。

IoT小程序和大多数小程序一样,是一套跨平台应用显示框架,它利用JS语言低门槛和API标准化大幅度降低了IoT应用的研发难度。其官方框架介绍如下:IoT小程序在前端框架能力、应用框架能力、图形框架能力上都进行了适配和优化。那么接下来,我们按照其官方步骤搭建开发环境,然后结合中央空调数据采集和状态显示的实际应用场景开发物联网小程序应用。

一、IoT小程序开发环境搭建

IoT小程序开发环境搭建一共分为四步。对于前端开发来说,安装NodeJS、配置cnpm、安装VSCode都是轻车熟路,不需要细讲;唯一不同的是按照官方说明安装IoT小程序的模拟器和VSCode开发插件HaaS UI。前期开发环境准备完毕,运行Demo查看一下效果,然后就可以进行IoT小程序应用开发了。搭建开发环境、安装HaaS UI插件并运行新建项目,出现以下界面说明开发环境搭建成功,就可以进行IoT小程序开发了:

二、开发展示中央空调采集数据和运行状态的IoT小程序应用

应用场景

中央空调的维保单位会对中央空调进行定期维护保养,定期的维护保养可排除故障隐患,减少事故发生,降低运行费用,延长设备的使用寿命,同时保障正常的工作秩序。除了定期的维护保养外,还需要实时监测中央空调的运行参数(温度、累计排污量、不锈钢腐蚀率等)和运行状态,及时发现中央空调运行过程中某些参数低于或高于报警值的问题,以便及时定位诊断中央空调存在的问题,然后进行相应的维护保养操作。

架构实现

中央空调的数据采集和展示是典型的物联网应用架构:在中央空调端部署采集终端,通过Modbus通信协议采集中央空调设备参数,然后由采集终端通过MQTT消息发送到我们的云端服务器;云端服务器接收到MQTT消息后转发到消息队列Kafka中,由云服务器上的自定义服务应用订阅Kafka主题,再存储到时序数据库中。下图展示了物联网应用的整体架构和IoT小程序在物联网架构中的位置:

IoT小程序框架作为跨平台应用显示框架,顾名思义,其在物联网应用中主要作为显示框架使用。在传统应用中,我们使用微信小程序实现采集数据和运行状态的展示;而IoT小程序支持部署在AliOS Things、Ubuntu、Linux、MacOS、Windows等系统中,这就使得我们可以灵活地将IoT小程序部署到多种设备终端中运行。

下面将以阿里云ASP-80智显面板为例,把展示中央空调采集数据和运行状态的IoT小程序部署在阿里云ASP-80智显面板中。

IoT小程序开发

我们将从IoT小程序提供的前端框架能力、应用框架能力、图形框架能力来规划相应的功能开发。

IoT小程序采用Vue.js(v2.6.12)开源框架,实现了W3C标准的标签和样式子集;定义了四个应用生命周期,分别是:onLaunch、onShow、onHide、onDestroy;定义了十四个前端基础组件;除了基础的CSS样式支持外,还提供了对Less的支持;Net网络请求通过框架内置的JSAPI实现。
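上文"架构实现"中提到,采集终端通过Modbus协议读取中央空调的运行参数。为直观说明"原始寄存器值如何变成界面上展示的工程值和报警状态",下面给出一个极简的JavaScript示意(其中的点表 pointTable、参数名 temperature/blowdownTotal/corrosionRate、缩放系数与报警上下限均为本文假设,并非真实设备点表):

```javascript
// 假设的点表:每个参数对应一个保持寄存器下标、缩放系数、单位和报警上下限
const pointTable = {
  temperature:   { register: 0, scale: 0.1,  unit: '℃',   low: 5, high: 45 },     // 温度
  blowdownTotal: { register: 1, scale: 1,    unit: 'm³',   low: 0, high: 100000 }, // 累计排污量
  corrosionRate: { register: 2, scale: 0.01, unit: 'mm/a', low: 0, high: 0.5 }     // 不锈钢腐蚀率
};

// 将采集终端上报的原始寄存器数组换算为工程值,并标记是否越限报警
function decodePoints(registers, table) {
  const result = {};
  for (const [name, def] of Object.entries(table)) {
    const raw = registers[def.register];
    const value = +(raw * def.scale).toFixed(2); // 按缩放系数换算并保留两位小数
    result[name] = {
      value,
      unit: def.unit,
      alarm: value < def.low || value > def.high // 低于下限或高于上限即报警
    };
  }
  return result;
}

const decoded = decodePoints([262, 1234, 12], pointTable);
console.log(decoded.temperature); // { value: 26.2, unit: '℃', alarm: false }
```

界面层(如后文的采集数据显示列表)即可直接根据换算结果渲染"当前值"和报警状态。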
为了快速熟悉IoT小程序框架的开发方式,我们将在VSCode中导入官方公版案例,并以公版案例为基础框架开发我们想要的功能,简单实现通过网络请求获取中央空调采集数据并展示:

1、在VSCode编辑器中导入从IoT小程序官网下载的公版案例,下载地址。

2、因为IoT小程序前端框架使用的是Vue.js框架,所以在新增页面时也是按照Vue.js框架的模式,将页面添加到pages目录。我们是空调项目的IoT小程序,所以这里在pages目录下新增air-conditioning目录,用于存放空调IoT小程序相关前端代码。

3、在app.json中配置新增的页面,修改pages项,增加"air-conditioning": "pages/air-conditioning/index.vue"。

```json
{
  "pages": {
    ......
    "air-conditioning": "pages/air-conditioning/index.vue",
    ......
  },
  "options": {
    "style": {
      "theme": "theme-dark"
    }
  }
}
```

4、在air-conditioning目录下新增index.vue前端页面代码,用于展示空调的采集数据是否正常及历史曲线图。设计需要开发的界面如下,页面的元素有栅格布局、Tabs标签页、Radio单选框、日期选择框、曲线图表等。

5、首先实现Tabs标签页。IoT小程序没有Tabs组件,只能自己放置多个text组件、自定义样式并添加click事件来实现。

```html
<div class="tab-list">
  <fl-icon name="back" class="nav-back" @click="onBack" />
  <text
    v-for="(item, index) in scenes"
    :key="index"
    :class="'tab-item' + (index === selectedIndex ? ' tab-item-selected' : '')"
    @click="tabSelected(index)"
  >{{ item }}</text>
</div>
......
data() {
  return {
    scenes: ["设备概览", "实时数据", "数据统计", "状态统计"],
    selectedIndex: 0
  }
}
......
```

6、添加采集数据显示列表。在其他小程序框架中,尤其是以Vue.js为基础框架的小程序框架,这里有成熟的组件可用,而IoT小程序需要自己来实现。

```html
<template>
  <div class="scene-wrapper" v-if="current">
    <div class="label-temperature-wrapper top-title">
      <div class="label-temperature-wrapper left-text">
        <text class="label-temperature">设备编码:</text>
        <text class="label-temperature-unit">{{deviceNo}}</text>
      </div>
      <div class="label-temperature-wrapper right-text">
        <text class="label-temperature">数据日期:</text>
        <text class="label-temperature-unit">{{collectTime}}</text>
      </div>
    </div>
    <div class="main-wrapper">
      <div class="section">
        <div class="demo-block icon-block">
          <div class="icons-item" v-for="(value, key, index) in IconTypes" :key="index">
            <div class="label-title-wrapper">
              <text class="label-title left-text">{{paramName}}</text>
              <text class="label-title-unit right-text" style="padding-right: 5px;">{{paramWarn}}</text>
            </div>
            <div class="label-zhibiao-wrapper">
              <text class="label-zhibiao">当前值:</text>
              <text class="label-zhibiao-unit">{{value}}</text>
            </div>
            <div class="label-zhibiao-wrapper" style="margin-bottom: 10px;">
              <text class="label-zhibiao">目标值:</text>
              <text class="label-zhibiao-unit">{{targetValue}}</text>
            </div>
          </div>
        </div>
      </div>
    </div>
  </div>
</template>
```

在开发过程中发现,IoT小程序对样式的支持不是很全面。本来想将组件放置在同一行,一般情况下只需要使用标准CSS样式display: inline即可实现,但这里没有效果,只能通过Flexbox布局实现同一行排列。在设置字体方面,本来想把采集数据显示的描述字段加粗以突出显示,但是使用CSS样式font-weight无效,无论是设置数值还是bold,都没有加粗效果。

7、界面实现之后,需要发送数据请求,查询采集数据并显示在界面上。IoT小程序通过框架内置JSAPI的Net网络提供网络请求工具。目前从官方文档和代码来看,官方框架只提供了http请求,没有提供物联网中常用的WebSocket和MQTT工具,估计需要自定义扩展系统JSAPI实现其他网络请求。

```javascript
created() {
  const http = $falcon.jsapi.http
  http.request({
    url: 'http://服务域名/device/iot/query/data/point',
    data: {
      'deviceNo': '97306000000000005',
      'rangeType': 'mo',
      'lastPoint': '1',
      'beginDateTime': '2023-02-10+16:09:42',
      'endDateTime': '2023-03-12+16:09:42'
    },
    header: {
      'Accept': 'application/json;charset=UTF-8',
      'Accept-Encoding': 'gzip, deflate, br',
      'Content-Type': 'application/json;charset=UTF-8',
      'Authorization': '有效token'
    }
  }, (response) => {
    console.log(response)
    var obj = JSON.parse(response.result)
    console.log(obj.success)
    console.log(JSON.parse(obj.data))
  })
}
```

按照官方要求编写http请求,发现默认未开启https请求:Protocol "https" not supported or disabled in libcurl。切换为http请求后,返回数据为乱码,设置Accept-Encoding和Accept为application/json;charset=UTF-8仍然无效;且返回数据为JSON字符串,需要自己手动使用JSON.parse()进行转换,对于习惯于应用成熟框架的人来说十分不友好。想了解更多关于$falcon.jsapi.http的相关配置和实现,但是官方文档只有寥寥几句,没有详细说明如何使用和配置,以及http请求中遇到一些常见问题的解决方式。

8、IoT小程序框架提供画布组件,原则上可以实现常用的曲线图表功能,但如果使用其基础能力从零开始开发一套图表系统,耗时又耗力,所以这里尝试引入常用的图表组件库ECharts,使用ECharts在IoT小程序上显示曲线图表。

执行以下命令安装echarts组件:

```shell
cnpm install echarts --save
```

新建echarts配置文件,按需引入:

```javascript
// 加载echarts,注意引入文件的路径
import echarts from 'echarts/lib/echarts'
// 再引入你需要使用的图表类型、标题、提示信息等
import 'echarts/lib/chart/bar'
import 'echarts/lib/chart/pie'
import 'echarts/lib/component/legend'
import 'echarts/lib/component/title'
import 'echarts/lib/component/tooltip'
export default echarts
```

新增echarts组件ChartDemo.vue:

```html
<template>
  <div ref="chartDemo" style="height:200px;"></div>
</template>
<script>
import echarts from '@/utils/echarts-config.js'

const ChartDemo = {
  name: 'ChartDemo',
  data() {
    return {
      chart: null
    }
  },
  watch: {
    option: {
      handler(newValue, oldValue) {
        this.chart.setOption(newValue)
      },
      deep: true
    }
  },
  mounted() {
    this.chart = echarts.init(this.$refs.chartDemo)
  },
  methods: {
    setOption(option) {
      this.chart && this.chart.setOption(option)
    },
    throttle(func, wait, options) {
      let time, context, args
      let previous = 0
      if (!options) options = {}
      const later = function() {
        previous = options.leading === false ? 0 : new Date().getTime()
        time = null
        func.apply(context, args)
        if (!time) context = args = null
      }
      const throttled = function() {
        const now = new Date().getTime()
        if (!previous && options.leading === false) previous = now
        const remaining = wait - (now - previous)
        context = this
        args = arguments
        if (remaining <= 0 || remaining > wait) {
          if (time) {
            clearTimeout(time)
            time = null
          }
          previous = now
          func.apply(context, args)
          if (!time) context = args = null
        } else if (!time && options.trailing !== false) {
          time = setTimeout(later, remaining)
        }
      }
      return throttled
    }
  }
}
export default ChartDemo
</script>
```

在base-page.js中注册全局组件:

```javascript
......
import ChartDemo from './components/ChartDemo.vue';

export class BasePage extends $falcon.Page {
  constructor() {
    super()
  }

  beforeVueInstantiate(Vue) {
    ......
    Vue.component('ChartDemo', ChartDemo);
  }
}
```

新建空调采集数据展示页history-charts.vue,用于展示ECharts图表:

```html
<template>
  <div class="scene-wrapper" v-if="current">
    <div class="brightness-wrap">
      <ChartBlock ref="chart2"></ChartBlock>
    </div>
  </div>
</template>
<script>
let option2 = {
  title: {
    text: '中央空调状态图',
    subtext: '运行状态占比',
    left: 'center'
  },
  tooltip: {
    trigger: 'item',
    formatter: '{a} <br/>{b} : {c} ({d}%)'
  },
  legend: {
    orient: 'vertical',
    left: 'left',
    data: ['开机', '关机', '报警', '故障', '空闲']
  },
  series: [
    {
      name: '运行状态',
      type: 'pie',
      radius: '55%',
      center: ['50%', '60%'],
      data: [
        { value: 335, name: '开机' },
        { value: 310, name: '关机' },
        { value: 234, name: '报警' },
        { value: 135, name: '故障' },
        { value: 1548, name: '空闲' }
      ],
      emphasis: {
        itemStyle: {
          shadowBlur: 10,
          shadowOffsetX: 0,
          shadowColor: 'rgba(0, 0, 0, 0.5)'
        }
      }
    }
  ]
}
export default {
  props: {
    current: {
      type: Boolean,
      default: false
    }
  },
  data() {
    return {}
  },
  mounted: function() {
    this.$refs.chart2.setOption(option2)
  }
}
</script>
```

执行HaaS UI: Build-Debug,显示打包成功;执行HaaS UI: Simulator,显示"当前HaaS UI: Simulator任务正在执行,请稍后再试"。

本来想在模拟器上看一下ECharts显示效果,但是执行HaaS UI: Simulator时一直显示任务正在执行。起初以为是系统进程占用,但重启、关闭进程等一系列操作下来,仍然显示此提示;最后将ECharts代码删除,恢复到没有ECharts的状态,又可以执行了。这里不清楚是否是IoT小程序不支持引入第三方图表组件,从官方文档中没有找到答案。后来又使用ECharts的封装组件v-charts进行了尝试,结果依然不能展示。

如果不能使用第三方组件,那么只能使用IoT小程序官方提供的画布组件来自己实现图表功能,可参考官方提供的画布曲线图示例。

9、通过IoT小程序提供的组件分别实现显示中央空调采集数据的实时数据、数据统计、状态统计图表。

- 实现实时数据折线图

```html
<template>
  <div class="scene-wrapper" v-show="current">
    <div class="main-wrapper">
      <div class="label-temperature-wrapper top-title">
        <div class="label-temperature-wrapper left-text">
          <text class="label-temperature">设备编码:</text>
          <text class="label-temperature-unit">{{deviceNo}}</text>
        </div>
        <div class="label-temperature-wrapper right-text">
          <text class="label-temperature">数据日期:</text>
          <text class="label-temperature-unit">{{collectTime}}</text>
        </div>
      </div>
      <canvas ref="c2" class="canvas" width="650" height="300"></canvas>
    </div>
  </div>
</template>
<script>
export default {
  name: "canvas",
  props: {},
  data() {
    return {
      deviceNo: '97306000000000005',
      collectTime: '2023-03-11 23:59:59'
    }
  },
  mounted() {
    this.c2();
  },
  methods: {
    c2() {
      let ctx = typeof createCanvasContext === "function"
        ? createCanvasContext(this.$refs.c2)
        : this.$refs.c2.getContext("2d");
      // Demo测试数据
      let arr = [
        {key:'01:00',value:61.68},{key:'02:00',value:83.68},{key:'03:00',value:56.68},
        {key:'04:00',value:86.68},{key:'05:00',value:53.68},{key:'06:00',value:41.68},
        {key:'07:00',value:33.68}
      ];
      this.drawStat(ctx, arr);
    },
    // 该函数用来绘制折线图
    drawStat(ctx, arr) {
      // 画布的宽高
      var cw = 700;
      var ch = 300;
      // 内间距padding
      var padding = 35;
      // origin:原点,bottomRight:X轴终点,topLeft:Y轴终点
      var origin = {x: padding, y: ch - padding};
      var bottomRight = {x: cw - padding, y: ch - padding};
      var topLeft = {x: padding, y: padding};
      ctx.strokeStyle = '#FF9500';
      ctx.fillStyle = '#FF9500';
      // 绘制X轴
      ctx.beginPath();
      ctx.moveTo(origin.x, origin.y);
      ctx.lineTo(bottomRight.x, bottomRight.y);
      // 绘制X轴箭头
      ctx.lineTo(bottomRight.x - 10, bottomRight.y - 5);
      ctx.moveTo(bottomRight.x, bottomRight.y);
      ctx.lineTo(bottomRight.x - 10, bottomRight.y + 5);
      // 绘制Y轴
      ctx.moveTo(origin.x, origin.y);
      ctx.lineTo(topLeft.x, topLeft.y);
      // 绘制Y轴箭头
      ctx.lineTo(topLeft.x - 5, topLeft.y + 10);
      ctx.moveTo(topLeft.x, topLeft.y);
      ctx.lineTo(topLeft.x + 5, topLeft.y + 10);
      // 设置字号和字体
      var color = '#FF9500';
      ctx.fillStyle = color;
      ctx.font = "13px sans-serif";
      // 绘制X方向刻度,先计算刻度可使用的总宽度
      var avgWidth = (cw - 2 * padding - 50) / (arr.length - 1);
      for (var i = 0; i < arr.length; i++) {
        // 循环绘制所有刻度线
        if (i > 0) {
          // 移动刻度起点,绘制到刻度终点
          ctx.moveTo(origin.x + i * avgWidth, origin.y);
          ctx.lineTo(origin.x + i * avgWidth, origin.y - 10);
        }
        // X轴说明文字:01:00,02:00...
        var txtWidth = 35;
        ctx.fillText(arr[i].key, origin.x + i * avgWidth - txtWidth / 2 + 10, origin.y + 20);
      }
      // 绘制Y方向刻度,先求最大刻度max
      var max = 0;
      for (var i = 0; i < arr.length; i++) {
        if (arr[i].value > max) {
          max = arr[i].value;
        }
      }
      var avgValue = Math.floor(max / 5);
      var avgHeight = (ch - padding * 2 - 50) / 5;
      for (var i = 1; i < arr.length; i++) {
        // 绘制Y轴刻度
        ctx.moveTo(origin.x, origin.y - i * avgHeight);
        ctx.lineTo(origin.x + 10, origin.y - i * avgHeight);
        // 绘制Y轴文字
        var txtWidth = 40;
        ctx.fillText(avgValue * i, origin.x - txtWidth - 5, origin.y - i * avgHeight + 6);
      }
      // 绘制折线
      for (var i = 0; i < arr.length; i++) {
        var posY = origin.y - Math.floor(arr[i].value / max * (ch - 2 * padding - 50));
        if (i == 0) {
          ctx.moveTo(origin.x + i * avgWidth, posY);
        } else {
          ctx.lineTo(origin.x + i * avgWidth, posY);
        }
        // 具体数值文字
        ctx.fillText(arr[i].value, origin.x + i * avgWidth, posY - 10);
      }
      ctx.stroke();
      // 绘制折线上的小圆点
      ctx.beginPath();
      for (var i = 0; i < arr.length; i++) {
        var posY = origin.y - Math.floor(arr[i].value / max * (ch - 2 * padding - 50));
        ctx.arc(origin.x + i * avgWidth, posY, 4, 0, Math.PI * 2); // 圆心,半径,画圆
      }
      ctx.closePath();
      ctx.fill();
    }
  }
}
</script>
```

- 数据统计图表

```html
<template>
  <div class="scene-wrapper" v-show="current">
    <div class="main-wrapper">
      <div class="label-temperature-wrapper top-title">
        <div class="label-temperature-wrapper left-text">
          <text class="label-temperature">设备编码:</text>
          <text class="label-temperature-unit">{{deviceNo}}</text>
        </div>
        <div class="label-temperature-wrapper right-text">
          <text class="label-temperature">数据日期:</text>
          <text class="label-temperature-unit">{{collectTime}}</text>
        </div>
      </div>
      <canvas ref="c1" class="canvas" width="650" height="300"></canvas>
    </div>
  </div>
</template>
<script>
export default {
  name: "canvas",
  props: {},
  data() {
    return {
      deviceNo: '97306000000000005',
      collectTime: '2023-03-13 20:23:36'
    }
  },
  mounted() {
    this.c1();
  },
  methods: {
    c1() {
      let ctx = typeof createCanvasContext === "function"
        ? createCanvasContext(this.$refs.c1)
        : this.$refs.c1.getContext("2d");
      this.draw(ctx);
    },
    draw(ctx) {
      var x0 = 30,  // x轴0处坐标
          y0 = 280, // y轴0处坐标
          x1 = 700, // x轴顶处坐标
          y1 = 30,  // y轴顶处坐标
          dis = 30;
      // 先绘制X和Y轴
      ctx.beginPath();
      ctx.lineWidth = 1;
      ctx.strokeStyle = '#FF9500';
      ctx.fillStyle = '#FF9500';
      ctx.moveTo(x0, y1); // 笔移动到Y轴的顶部
      ctx.lineTo(x0, y0); // 绘制Y轴
      ctx.lineTo(x1, y0); // 绘制X轴
      ctx.stroke();
      // 绘制虚线和Y轴值
      var yDis = y0 - y1;
      ctx.fillText(0, x0 - 20, y0); // x,y轴原点显示0
      while (yDis > dis) {
        ctx.beginPath();
        // 每隔30画一条虚线
        ctx.setLineDash([2, 2]); // 实线和空白的比例
        ctx.moveTo(x1, y0 - dis);
        ctx.lineTo(x0, y0 - dis);
        ctx.fillText(dis, x0 - 20, y0 - dis);
        dis += 30;
        ctx.stroke();
      }
      var xDis = 30,  // 设定柱子之间的间距
          width = 40; // 设定每个柱子的宽度
      // 绘制柱状和在顶部显示值
      for (var i = 0; i < 12; i++) { // 假设有12个月
        ctx.beginPath();
        var color = '#' + Math.random().toString(16).substr(2, 6).toUpperCase(); // 随机颜色
        ctx.fillStyle = color;
        ctx.font = "13px sans-serif"; // 设置字体
        var height = Math.round(Math.random() * 220 + 20); // 在一定范围内随机高度
        var rectX = x0 + (width + xDis) * i, // 柱子的x位置
            rectY = height;                  // 柱子的高度
        ctx.fillText((i + 1) + '月份', rectX, y0 + 15);  // 绘制最下面的月份文字
        ctx.fillRect(rectX, y0, width, -height);         // 绘制一个柱状
        ctx.fillText(rectY, rectX + 10, 280 - rectY - 5); // 显示柱子的值
      }
    }
  }
}
</script>
```

- 状态统计图表

```html
<template>
  <div class="scene-wrapper" v-show="current">
    <div class="main-wrapper">
      <div class="label-temperature-wrapper top-title">
        <div class="label-temperature-wrapper left-text">
          <text class="label-temperature">设备编码:</text>
          <text class="label-temperature-unit">{{deviceNo}}</text>
        </div>
        <div class="label-temperature-wrapper right-text">
          <text class="label-temperature">数据日期:</text>
          <text class="label-temperature-unit">{{collectTime}}</text>
        </div>
      </div>
      <canvas ref="c3" class="canvas" width="600" height="300"></canvas>
    </div>
  </div>
</template>
<script>
export default {
  name: "canvas",
  props: {},
  data() {
    return {
      deviceNo: '97306000000000005',
      collectTime: '2023-03-13 20:29:36'
    }
  },
  mounted() {
    this.c3();
  },
  methods: {
    c3() {
      let ctx = typeof createCanvasContext === "function"
        ? createCanvasContext(this.$refs.c3)
        : this.$refs.c3.getContext("2d");
      this.drawPie(ctx);
    },
    drawPie(pen) {
      // Demo测试数据
      var deg = Math.PI / 180
      var arr = [
        { name: "开机", time: 8000, color: '#7CFF00' },
        { name: "关机", time: 1580, color: '#737F9C' },
        { name: "空闲", time: 5790, color: '#0ECC9B' },
        { name: "故障", time: 4090, color: '#893FCD' },
        { name: "报警", time: 2439, color: '#EF4141' }
      ]
      pen.translate(30, -120);
      arr.total = 0;
      for (let i = 0; i < arr.length; i++) {
        arr.total = arr.total + arr[i].time
      }
      var startDeg = 0
      arr.forEach(el => {
        pen.beginPath()
        var r1 = 115
        pen.fillStyle = el.color
        pen.strokeStyle = '#209AAD';
        pen.font = "15px sans-serif";
        // 求出每个time的占比
        var angle = (el.time / arr.total) * 360
        // 利用占比来画圆弧
        pen.arc(300, 300, r1, startDeg * deg, (startDeg + angle) * deg)
        // 将圆弧与圆心相连接,形成扇形
        pen.lineTo(300, 300)
        var r2 = r1 + 10;
        if (el.name === '关机' || el.name === '空闲') r2 = r1 + 30
        // 给每个扇形添加数据的name
        var y1 = 300 + Math.sin((startDeg + angle) * deg - angle * deg / 2) * r2
        var x1 = 300 + Math.cos((startDeg + angle) * deg - angle * deg / 2) * r2
        pen.fillText(`${el.name}`, x1, y1)
        startDeg = startDeg + angle
        pen.fill()
        pen.stroke()
      })
    }
  }
}
</script>
```

三、将IoT小程序更新到ASP-80智显面板查看运行效果

将IoT小程序更新到ASP-80智显面板,在硬件设备上查看IoT应用运行效果。如果是使用PC端初次连接,那么需要安装相关驱动和配置,否则无法使用VSCode直接更新IoT小程序到ASP-80智显面板。

1、如果使用Win10将IoT小程序包更新到ASP-80智显面板上,必须用到CH340串口驱动。第一次通过Type-C数据线连接设备时,PC端设备管理器的端口处不显示端口,这时需要下载Windows版本的CH340串口驱动,下载链接。

2、将下载的驱动文件CH341SER.ZIP解压并安装之后,再次查看PC端设备管理器,端口处就有了USB Serial CH340端口。

3、使用SecureCRT连接ASP-80智显面板,按照官方文档说明修改配置文件,连接好WiFi无线网,下一步通过VSCode直接更新IoT小程序到ASP-80智显面板上查看测试。

4、所有准备工作就绪后,点击VSCode的上传按钮HaaS UI: Device,将应用打包并上传至ASP-80智显面板。在选择ip地址框的时候,输入我们上一步获取到的ip地址192.168.1.112,其他参数保持默认即可。上传成功后,VSCode控制台提示安装app成功。

5、IoT小程序安装成功之后,就可以在ASP-80智显面板上查看运行效果了。

综上所述,IoT小程序框架在跨系统平台(AliOS Things、Ubuntu、Linux、MacOS、Windows等)方面提供了非常优秀的基础能力,应用的更新升级提供了多种方式,在实际业务开发过程中可以灵活选择。IoT小程序框架通过JSAPI提供了调用系统底层应用的能力,同时提供了自定义JSAPI扩展封装的方法,足以让业务开发通过自定义的方式满足特殊的业务需求。

虽然多家互联网公司都提供了小程序框架,但在128M这样的低资源设备里输出,IoT小程序是比较领先的,它不需要另外下载APP作为小程序的容器,降低了资源的消耗,这一点是其他小程序框架所不能比拟的。
但是在前端框架方面,实用组件太少。其他小程序已发展多年,基于基础组件封装并开源的前端组件应用场景非常丰富;对于习惯于使用成熟开源组件的中小企业来说,使用IoT小程序开发物联网应用可能需要耗费一定的人力物力。既然是基于Vue.js的框架,却没有提供引入其他优秀组件的文档说明和示例,不利于物联网应用的快速开发。希望官方能够完善文档,详细说明IoT小程序开发框架的配置项,将来能够提供更多的实用组件。

SpringCloud微服务实战——搭建企业级开发框架(四十三):多租户可配置的电子邮件发送系统设计与实现

在日常生活中,邮件已经被聊天软件、短信等更便捷的信息传送方式代替;但在日常工作中,重要的信息通知等非常有必要归档追溯,邮件就是不可或缺的信息传送渠道。我们工作中经常用到的系统,基本也都集成了邮件发送功能。

SpringBoot提供了基于JavaMail的starter,我们只要按照官方说明配置邮件服务器信息,即可使系统拥有发送电子邮件的功能。但是,在我们GitEgg开发框架的实际业务开发过程中,有两个问题需要解决:一是SpringBoot邮箱服务器的配置写在配置文件中,不支持灵活的界面配置;二是我们的开发框架需要支持多租户,此时需要对SpringBoot提供的邮件发送功能进行扩展,以满足我们的需求。基于以上需求和问题,我们对GitEgg框架进行扩展,增加以下功能:

1.扩展系统配置:将邮箱服务器的配置信息持久化到数据库、Redis缓存,和配置文件一起使用,制定读取优先级。
2.扩展多租户配置:如果系统开启了多租户功能,那么在邮件发送时,首先读取租户的当前配置,如果没有配置,那么再读取系统配置。
3.自由选择服务器:用户可在系统界面上选择指定的邮箱服务器进行邮件发送。
4.提供邮件发送模板:用户可选择预先制定的邮件模板发送特定邮件。
5.增加发送数量、频率限制:增加配置,限制模板邮件的发送数量和频率。
6.保存邮件发送记录:不一定把所有附件都保存,只需保存邮件发送关键信息,如果需要保存所有附件等则需要自己扩展。

同一个租户可以配置多个电子邮件服务器,但只可以设置一个服务器为启用状态。默认情况下,系统通知类的功能只使用启用状态的服务器进行邮件发送。在有定制化需求的情况下,比如从页面直接指定某个服务器进行邮件发送,那么提供可以选择的接口,指定某个服务器进行邮件发送。

一、集成spring-boot-starter-mail扩展基础邮件发送功能

1、在基础框架gitegg-platform中新建gitegg-platform-mail子项目,引入邮件必需的相关依赖包。

```xml
<dependencies>
    <!-- gitegg Spring Boot自定义及扩展 -->
    <dependency>
        <groupId>com.gitegg.platform</groupId>
        <artifactId>gitegg-platform-boot</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-configuration-processor</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-mail</artifactId>
        <!-- 去除springboot默认的logback配置-->
        <exclusions>
            <exclusion>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-logging</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>
```

2、扩展邮件服务器配置类,增加租户等信息,方便从缓存读取到信息之后进行配置转换。

```java
@Data
@JsonIgnoreProperties(ignoreUnknown = true)
public class GitEggMailProperties extends MailProperties {

    /** 配置id */
    private Long id;

    /** 租户id */
    private Long tenantId;

    /** 渠道id */
    private String channelCode;

    /** 渠道状态 */
    private Integer channelStatus;

    /** 配置的md5值 */
    private String md5;
}
```

3、扩展邮件发送实现类JavaMailSenderImpl,添加多租户和邮箱服务器编码,便于多租户和渠道选择。

```java
@Data
public class GitEggJavaMailSenderImpl extends JavaMailSenderImpl {

    /** 配置id */
    private Long id;

    /** 租户id */
    private Long tenantId;

    /** 渠道编码 */
    private String channelCode;

    /** 配置的md5值 */
    private String md5;
}
```

4、新建邮件发送实例工厂类JavaMailSenderFactory,在邮件发送时,根据需求生产需要的邮件发送实例。

```java
@Slf4j
public class JavaMailSenderFactory {

    private RedisTemplate redisTemplate;

    private JavaMailSenderImpl javaMailSenderImpl;

    /** 是否开启租户模式 */
    private Boolean enable;

    /**
     * JavaMailSender 缓存
     * 尽管存在多个微服务,但是只需要在每个微服务初始化一次即可
     */
    private final static Map<String, GitEggJavaMailSenderImpl> javaMailSenderMap = new ConcurrentHashMap<>();

    public JavaMailSenderFactory(RedisTemplate redisTemplate, JavaMailSenderImpl javaMailSenderImpl, Boolean enable) {
        this.redisTemplate = redisTemplate;
        this.javaMailSenderImpl = javaMailSenderImpl;
        this.enable = enable;
    }

    /**
     * 指定邮件发送渠道
     */
    public JavaMailSenderImpl getMailSender(String... channelCode) {
        if (null == channelCode || channelCode.length == GitEggConstant.COUNT_ZERO
                || null == channelCode[GitEggConstant.Number.ZERO]) {
            return this.getDefaultMailSender();
        }
        // 首先判断是否开启多租户
        String mailConfigKey = JavaMailConstant.MAIL_TENANT_CONFIG_KEY;
        if (enable) {
            mailConfigKey += GitEggAuthUtils.getTenantId();
        } else {
            mailConfigKey = JavaMailConstant.MAIL_CONFIG_KEY;
        }
        // 从缓存获取邮件配置信息
        // 根据channel code获取配置,用channel code时,不区分是否是默认配置
        String propertiesStr = (String) redisTemplate.opsForHash().get(mailConfigKey, channelCode[GitEggConstant.Number.ZERO]);
        if (StringUtils.isEmpty(propertiesStr)) {
            throw new BusinessException("未获取到[" + channelCode[GitEggConstant.Number.ZERO] + "]的邮件配置信息");
        }
        GitEggMailProperties properties = null;
        try {
            properties = JsonUtils.jsonToPojo(propertiesStr, GitEggMailProperties.class);
        } catch (Exception e) {
            log.error("转换邮件配置信息异常:{}", e);
            throw new BusinessException("转换邮件配置信息异常:" + e);
        }
        return this.getMailSender(mailConfigKey, properties);
    }

    /**
     * 不指定邮件发送渠道,取默认配置
     */
    public JavaMailSenderImpl getDefaultMailSender() {
        // 首先判断是否开启多租户
        String mailConfigKey = JavaMailConstant.MAIL_TENANT_CONFIG_KEY;
        if (enable) {
            mailConfigKey += GitEggAuthUtils.getTenantId();
        } else {
            mailConfigKey = JavaMailConstant.MAIL_CONFIG_KEY;
        }
        // 获取所有邮件配置列表
        Map<Object, Object> propertiesMap = redisTemplate.opsForHash().entries(mailConfigKey);
        Iterator<Map.Entry<Object, Object>> entries = propertiesMap.entrySet().iterator();
        // 如果没有设置取哪个配置,那么获取默认启用的配置
        GitEggMailProperties properties = null;
        try {
            while (entries.hasNext()) {
                Map.Entry<Object, Object> entry = entries.next();
                // 转为系统配置对象
                GitEggMailProperties propertiesEnable = JsonUtils.jsonToPojo((String) entry.getValue(), GitEggMailProperties.class);
                if (propertiesEnable.getChannelStatus().intValue() == GitEggConstant.ENABLE) {
                    properties = propertiesEnable;
                    break;
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return this.getMailSender(mailConfigKey, properties);
    }

    private JavaMailSenderImpl getMailSender(String mailConfigKey, GitEggMailProperties properties) {
        // 根据最新配置信息判断是否从本地获取mailSender:在配置保存时计算实体配置的md5值,
        // 然后进行比较,不要在每次对比的时候进行md5计算
        if (null != properties && !StringUtils.isEmpty(properties.getMd5())) {
            GitEggJavaMailSenderImpl javaMailSender = javaMailSenderMap.get(mailConfigKey);
            if (null == javaMailSender || !properties.getMd5().equals(javaMailSender.getMd5())) {
                javaMailSender = new GitEggJavaMailSenderImpl();
                this.applyProperties(properties, javaMailSender);
                javaMailSender.setMd5(properties.getMd5());
                javaMailSender.setId(properties.getId());
                // 将MailSender放入缓存
                javaMailSenderMap.put(mailConfigKey, javaMailSender);
            }
            return javaMailSender;
        }
        // 如果没有配置信息,那么直接返回系统默认配置的mailSender
        return this.javaMailSenderImpl;
    }

    private void applyProperties(MailProperties properties, JavaMailSenderImpl sender) {
        sender.setHost(properties.getHost());
        if (properties.getPort() != null) {
            sender.setPort(properties.getPort());
        }
        sender.setUsername(properties.getUsername());
        sender.setPassword(properties.getPassword());
        sender.setProtocol(properties.getProtocol());
        if (properties.getDefaultEncoding() != null) {
            sender.setDefaultEncoding(properties.getDefaultEncoding().name());
        }
        if (!properties.getProperties().isEmpty()) {
            sender.setJavaMailProperties(this.asProperties(properties.getProperties()));
        }
    }

    private Properties asProperties(Map<String, String> source) {
        Properties properties = new Properties();
        properties.putAll(source);
        return properties;
    }
}
```

5、配置异步邮件发送的线程池。这里需注意异步线程池上下文变量共享问题,有两种方式解决:一种是使用装饰器TaskDecorator将父子线程变量进行复制,另一种是使用transmittable-thread-local来共享线程上下文。这里不展开描述,后续会专门针对如何在微服务异步线程池中共享上下文进行说明。

```java
@Configuration
public class MailThreadPoolConfig {

    @Value("${spring.mail-task.execution.pool.core-size}")
    private int corePoolSize;

    @Value("${spring.mail-task.execution.pool.max-size}")
    private int maxPoolSize;

    @Value("${spring.mail-task.execution.pool.queue-capacity}")
    private int queueCapacity;

    @Value("${spring.mail-task.execution.thread-name-prefix}")
    private String namePrefix;

    @Value("${spring.mail-task.execution.pool.keep-alive}")
    private int keepAliveSeconds;

    /**
     * 邮件发送的线程池
     */
    @Bean("mailTaskExecutor")
    public Executor mailTaskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        // 最大线程数
        executor.setMaxPoolSize(maxPoolSize);
        // 核心线程数
        executor.setCorePoolSize(corePoolSize);
        // 任务队列的大小
        executor.setQueueCapacity(queueCapacity);
        // 线程前缀名
        executor.setThreadNamePrefix(namePrefix);
        // 线程存活时间
        executor.setKeepAliveSeconds(keepAliveSeconds);
        // 设置装饰器,父子线程共享request header变量
        executor.setTaskDecorator(new RequestHeaderTaskDecorator());
        /*
         * 拒绝处理策略
         * CallerRunsPolicy():交由调用方线程运行,比如 main 线程。
         * AbortPolicy():直接抛出异常。
         * DiscardPolicy():直接丢弃。
         * DiscardOldestPolicy():丢弃队列中最老的任务。
         */
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        // 线程初始化
        executor.initialize();
        return executor;
    }
}
```

6、增加邮件发送结果的枚举类MailResultCodeEnum。

```java
public enum MailResultCodeEnum {

    SUCCESS("success", "邮件发送成功"),

    /** 自定义 */
    ERROR("error", "邮件发送失败");

    public String code;

    public String message;

    MailResultCodeEnum(String code, String message) {
        this.code = code;
        this.message = message;
    }

    public String getCode() {
        return code;
    }

    public void setCode(String code) {
        this.code = code;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}
```

7、增加邮箱服务器相关默认配置的常量类JavaMailConstant.java。

```java
public class JavaMailConstant {

    /** Redis JavaMail配置config key */
    public static final String MAIL_CONFIG_KEY = "mail:config";

    /** 当开启多租户模式时,Redis JavaMail配置config key */
    public static final String MAIL_TENANT_CONFIG_KEY = "mail:tenant:config:";
}
```

8、增加GitEggJavaMail自动装配类,根据Nacos或者系统配置进行装配。

```java
@Slf4j
@Configuration
@RequiredArgsConstructor(onConstructor_ = @Autowired)
public class GitEggJavaMailConfiguration {

    private final JavaMailSenderImpl javaMailSenderImpl;

    private final RedisTemplate redisTemplate;

    /** 是否开启租户模式 */
    @Value("${tenant.enable}")
    private Boolean enable;

    @Bean
    public JavaMailSenderFactory gitEggAuthRequestFactory() {
        return new JavaMailSenderFactory(redisTemplate, javaMailSenderImpl, enable);
    }
}
```

二、增加邮箱服务器配置界面

邮箱服务器的配置,实际就是不同邮箱渠道的配置。这里我们将表和字段设计好,然后使用GitEgg自带的代码生成器,生成业务的CRUD代码即可。

1、邮箱渠道配置表设计

```sql
CREATE TABLE `t_sys_mail_channel` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `channel_code` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '渠道编码',
  `channel_name` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '渠道名称',
  `host` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'SMTP服务器地址',
  `port` int(11) NULL DEFAULT NULL COMMENT 'SMTP服务器端口',
  `username` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '账户名',
  `password` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '密码',
  `protocol` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL DEFAULT 'smtp' COMMENT '协议',
  `default_encoding` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '默认编码',
  `jndi_name` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '会话JNDI名称',
  `properties` varchar(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'JavaMail 配置',
  `channel_status` tinyint(2) NOT NULL DEFAULT 0 COMMENT '渠道状态 1有效 0禁用',
  `md5` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'MD5',
  `comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '描述',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者',
  `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间',
  `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者',
  `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '是否删除',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '邮件渠道' ROW_FORMAT = DYNAMIC;

SET FOREIGN_KEY_CHECKS = 1;
```

2、根据表设计,配置代码生成界面,生成前后端代码。

3、生成代码后,进行相关权限配置,前端界面展示:

三、以同样的方式增加邮箱模板配置界面和邮件发送日志记录

1、邮箱模板和邮件发送日志数据库表设计

邮件模板数据库表设计:

```sql
CREATE TABLE `t_sys_mail_template` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `template_code` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '模板编码',
  `template_name` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '模板名称',
  `sign_name` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '模板签名',
  `template_status` tinyint(2) NOT NULL DEFAULT 1 COMMENT '模板状态',
  `template_type` tinyint(2) NULL DEFAULT NULL COMMENT '模板类型',
  `template_content` text CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT '模板内容',
  `cache_code_key` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '缓存key',
  `cache_time_out` bigint(20) NULL DEFAULT 0 COMMENT '缓存有效期 值',
  `cache_time_out_unit` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '缓存有效期 单位',
  `send_times_limit` bigint(20) NULL DEFAULT 0 COMMENT '发送次数限制',
  `send_times_limit_period` bigint(20) NULL DEFAULT 0 COMMENT '限制时间间隔',
  `send_times_limit_period_unit` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '限制时间间隔 单位',
  `comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '描述',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者',
  `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间',
  `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者',
  `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '是否删除',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '邮件模板' ROW_FORMAT = DYNAMIC;

SET FOREIGN_KEY_CHECKS = 1;
```

邮件日志数据库表设计:

```sql
CREATE TABLE `t_sys_mail_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `channel_id` bigint(20) NULL DEFAULT NULL COMMENT 'mail渠道id',
  `template_id` bigint(20) NULL DEFAULT NULL COMMENT 'mail模板id',
  `mail_subject` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '邮件主题',
  `mail_from` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '发送人',
  `mail_to` text CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT '收件人',
  `mail_cc` text CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT '抄送',
  `mail_bcc` text CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT '密抄送',
  `mail_content` text CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT '邮件内容',
  `attachment_name` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '0' COMMENT '附件名称',
  `attachment_size` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '0' COMMENT '附件大小',
  `send_time` datetime(0) NULL DEFAULT NULL COMMENT '发送时间',
  `send_result_code` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '1' COMMENT '发送结果码',
  `send_result_msg` varchar(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '发送结果消息',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建日期',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者',
  `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新日期',
  `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者',
  `del_flag` tinyint(2) NOT NULL DEFAULT 0 COMMENT '是否删除 1:删除 0:不删除',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '邮件记录' ROW_FORMAT = DYNAMIC;

SET FOREIGN_KEY_CHECKS = 1;
```

2、邮件模板和邮件发送日志界面

四、QQ邮箱配置和阿里云企业邮箱配置测试

上面的基本功能开发完成之后,就需要进行测试。这里选择两种类型的邮箱进行测试:一种是QQ邮箱,另一种是阿里云企业邮箱。

1、QQ邮箱配置

QQ邮箱在配置时不能使用QQ的登录密码,需要单独设置QQ邮箱的授权码,下面是操作步骤:
- 开通QQ邮箱的SMTP功能,经过一系列的验证之后,会获取到一个授权码
- 在系统中配置QQ邮箱相关信息

2、阿里云企业邮箱配置

阿里云企业邮箱的配置相对简单一些,配置的密码就是企业邮箱的登录密码:
- 在账户设置中开启POP3/SMTP和IMAP/SMTP服务
- 在系统中配置阿里云企业邮箱相关信息

3、Nacos中配置默认邮件服务器,同时增加邮件异步线程池配置

```yaml
spring:
  mail:
    username: XXXXXXXXXXX
    password: XXXXXXXXXX
    default-encoding: UTF-8
    host: smtp.mxhichina.com
    port: 25
    protocol: smtp
    properties:
      mail:
        smtp:
          auth: true
          enable: false
  # 异步发送邮件,核心线程池数配置
  mail-task:
    execution:
      pool:
        core-size: 5
        max-size: 10
        queue-capacity: 5
        keep-alive: 60
      thread-name-prefix: mail-send-task-
```

4、在邮件渠道配置界面进行邮件发送测试。有两种测试方式:一种是选择指定渠道进行发送,另一种是选择系统默认渠道进行邮件发送。发送完成后查看邮件日志模块,检查是否有邮件发送成功的记录:
- 选择需要测试的邮箱服务器
- 填写测试邮箱发送内容
- 查看邮箱发送日志

源码地址:
Gitee: https://gitee.com/wmz1930/GitEgg
GitHub: https://github.com/wmz1930/GitEgg
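前文功能列表中提到的"增加发送数量、频率限制",其核心是对同一模板在时间窗口内的发送次数做滑动窗口限制。下面用JavaScript给出一个最小示意(类名 TemplateSendLimiter 以及用 Map 在内存中模拟计数均为本文假设;GitEgg 的实际实现中,计数应存放在 Redis,并结合 t_sys_mail_template 表的 send_times_limit、send_times_limit_period 字段配置):

```javascript
// 模板邮件发送频率限制的滑动窗口示意:
// 超过窗口内允许的发送次数则拒绝本次发送
class TemplateSendLimiter {
  constructor(sendTimesLimit, periodMillis) {
    this.sendTimesLimit = sendTimesLimit; // 窗口内允许的最大发送次数
    this.periodMillis = periodMillis;     // 限制时间间隔(毫秒)
    this.records = new Map();             // templateCode -> 窗口内的发送时间戳数组
  }

  // 尝试记录一次发送;允许发送返回 true,被限流返回 false
  tryAcquire(templateCode, now = Date.now()) {
    // 淘汰已滑出时间窗口的历史记录
    const history = (this.records.get(templateCode) || [])
      .filter(t => now - t < this.periodMillis);
    if (history.length >= this.sendTimesLimit) {
      this.records.set(templateCode, history);
      return false;
    }
    history.push(now);
    this.records.set(templateCode, history);
    return true;
  }
}

// 每60秒同一模板最多发送2封
const limiter = new TemplateSendLimiter(2, 60 * 1000);
console.log(limiter.tryAcquire('REGISTER_CODE', 0));     // true
console.log(limiter.tryAcquire('REGISTER_CODE', 1000));  // true
console.log(limiter.tryAcquire('REGISTER_CODE', 2000));  // false,窗口内已达上限
console.log(limiter.tryAcquire('REGISTER_CODE', 61500)); // true,窗口已滑过
```

实际落地时,可在邮件发送服务调用 JavaMailSenderFactory 之前先执行这类检查,被限流的请求直接记入邮件日志表并返回失败结果码。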

SpringCloud Microservices in Action: Building an Enterprise Development Framework (42): Integrating the Distributed Task Scheduling Platform XXL-JOB to Implement Scheduled Tasks

Scheduled tasks are all but indispensable in business systems: computing due dates and expiry times, and triggering operations at fixed moments. In a monolithic application, Spring's scheduling annotations are usually enough; in a microservice cluster, that approach needs a distributed lock to stop multiple instances from running the same scheduled task at once and executing it twice. Besides annotations, there is another option: a distributed task platform. All microservices register with the platform, which schedules tasks centrally, so no task is executed more than once. Here we choose XXL-JOB, whose core design goals are rapid development, easy learning, light weight, and easy extension. Beyond preventing duplicate execution, a distributed scheduling platform is also simple to use, supports manual triggering, and keeps detailed dispatch logs showing exactly how each task ran.

XXL-JOB official architecture diagram:

Below we walk through integrating XXL-JOB into our microservice platform to implement the scheduled-task features we need.

I. Integrating xxl-job-admin into the microservice framework

1. Download the source from the XXL-JOB releases page: https://github.com/xuxueli/xxl-job/releases . The download contains:

- xxl-job-admin: the scheduling center
- xxl-job-core: shared dependency
- xxl-job-executor-samples: executor samples (pick the version that fits; use them directly, or as a reference when turning an existing project into an executor)
  - xxl-job-executor-sample-springboot: Spring Boot version, managing the executor through Spring Boot; recommended
  - xxl-job-executor-sample-frameless: frameworkless version

As the names suggest, xxl-job-admin is the server and management console of the platform, and it is the component we deploy; we can fold the whole project into our microservices and package and deploy everything together. xxl-job-core is the shared dependency that every microservice implementing a task executor must import. xxl-job-executor-samples contains executor sample code.

2. In the base platform project gitegg-platform, import xxl-job-core in gitegg-platform-bom for unified version management:

```xml
......
<!-- XXL-JOB distributed task scheduling: core package -->
<xxl-job.version>2.3.1</xxl-job.version>
......
<!-- XXL-JOB distributed task scheduling: core package -->
<dependency>
    <groupId>com.xuxueli</groupId>
    <artifactId>xxl-job-core</artifactId>
    <version>${xxl-job.version}</version>
</dependency>
```

3. Integrate xxl-job-admin into the microservice project for unified packaging and deployment

Per our architecture, gitegg-plugin is the system's plugin project and holds the plugin services we need. Some plugins are mandatory, others may go unused, so each deployment can pick business plugins as required.

To integrate xxl-job-admin tightly with our microservices, rather than running it as a standalone, decoupled service, its configuration needs some adjustments. First, modify pom.xml: keep dependency versions consistent, change the parent tag so it references the GitEgg base jars and the microservice configuration/registration capability, and exclude logback so that log4j2 handles logging:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>gitegg-plugin</artifactId>
        <groupId>com.gitegg.cloud</groupId>
        <version>1.0.1.RELEASE</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>gitegg-job</artifactId>
    <name>${project.artifactId}</name>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <maven.compiler.encoding>UTF-8</maven.compiler.encoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <maven.test.skip>true</maven.test.skip>
        <netty-all.version>4.1.63.Final</netty-all.version>
        <gson.version>2.9.0</gson.version>
        <spring.version>5.3.20</spring.version>
        <spring-boot.version>2.6.7</spring-boot.version>
        <mybatis-spring-boot-starter.version>2.2.2</mybatis-spring-boot-starter.version>
        <mysql-connector-java.version>8.0.29</mysql-connector-java.version>
        <slf4j-api.version>1.7.36</slf4j-api.version>
        <junit-jupiter.version>5.8.2</junit-jupiter.version>
        <javax.annotation-api.version>1.3.2</javax.annotation-api.version>
        <groovy.version>3.0.10</groovy.version>
        <maven-source-plugin.version>3.2.1</maven-source-plugin.version>
        <maven-javadoc-plugin.version>3.4.0</maven-javadoc-plugin.version>
        <maven-gpg-plugin.version>3.0.1</maven-gpg-plugin.version>
    </properties>

    <dependencies>
        <!-- gitegg Spring Boot customizations and extensions -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-boot</artifactId>
        </dependency>
        <!-- gitegg Spring Cloud customizations and extensions -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-cloud</artifactId>
        </dependency>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>${mybatis-spring-boot-starter.version}</version>
            <!-- Drop Spring Boot's default logback configuration -->
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <!-- Drop Spring Boot's default logback configuration -->
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- freemarker-starter -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-freemarker</artifactId>
            <!-- Drop Spring Boot's default logback configuration -->
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- mail-starter -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-mail</artifactId>
            <!-- Drop Spring Boot's default logback configuration -->
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- starter-actuator -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
            <!-- Drop Spring Boot's default logback configuration -->
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- mysql -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>${mysql-connector-java.version}</version>
        </dependency>
        <!-- XXL-JOB distributed task scheduling: core package -->
        <dependency>
            <groupId>com.xuxueli</groupId>
            <artifactId>xxl-job-core</artifactId>
            <!-- Remove the conflicting slf4j dependency -->
            <exclusions>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-api</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>com.google.cloud.tools</groupId>
                <artifactId>jib-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
```

Next, modify application.properties. Following our conventions, add bootstrap.yml, bootstrap-dev.yml, bootstrap-prod.yml, and bootstrap-test.yml, and move part of application.properties into bootstrap.yml. Because xxl-job-admin uses its own database and the Hikari connection pool by default, we leave that untouched and keep its original database configuration, moving the configurable parts to the Nacos configuration center; in bootstrap.yml we add a multi-yaml-file configuration. (Note: locally the files end in .yml, while on Nacos they end in .yaml; the two are identical apart from the extension.)

bootstrap.yml:

```yaml
server:
  port: 8007
spring:
  profiles:
    active: '@spring.profiles.active@'
  application:
    name: '@artifactId@'
  cloud:
    inetutils:
      ignored-interfaces: docker0
    nacos:
      discovery:
        server-addr: ${spring.nacos.addr}
      config:
        server-addr: ${spring.nacos.addr}
        file-extension: yaml
        extension-configs:
          # The Data Id must include the file extension; file-extension does not
          # apply to custom extension configs
          - data-id: ${spring.nacos.config.prefix}.yaml
            group: ${spring.nacos.config.group}
            refresh: true
          - data-id: ${spring.nacos.config.prefix}-xxl-job.yaml
            group: ${spring.nacos.config.group}
            refresh: true
  ### xxl-job-admin config
  servlet:
    load-on-startup: 0
    static-path-pattern: /static/**
  resources:
    static-locations: classpath:/static/
  ### freemarker
  freemarker:
    templateLoaderPath: classpath:/templates/
    suffix: .ftl
    charset: UTF-8
    request-context-attribute: request
    settings.number_format: 0.##########
### actuator
management:
  server:
    servlet:
      context-path: /actuator
  health:
    mail:
      enabled: false
### mybatis
mybatis:
  mapper-locations: classpath:/mybatis-mapper/*Mapper.xml
```

The gitegg-cloud-config-xxl-job.yaml configuration on Nacos (note the xxl-job keys are nested under `xxl.job` to match the `xxl.job.*` properties that xxl-job-admin reads):

```yaml
server:
  servlet:
    context-path: /xxl-job-admin
spring:
  datasource:
    url: jdbc:mysql://127.0.0.1/xxl_job?useSSL=false&zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&serverTimezone=GMT%2B8
    username: root
    password: root
    driver-class-name: com.mysql.cj.jdbc.Driver
    ### datasource-pool
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      minimum-idle: 10
      maximum-pool-size: 30
      auto-commit: true
      idle-timeout: 30000
      pool-name: HikariCP
      max-lifetime: 900000
      connection-timeout: 10000
      connection-test-query: SELECT 1
      validation-timeout: 1000
  ### email
  mail:
    host: smtp.qq.com
    port: 25
    username: xxx@qq.com
    from: xxx@qq.com
    password: xxx
    properties:
      mail:
        smtp:
          auth: true
          starttls:
            enable: true
            required: true
          socketFactory:
            class: javax.net.ssl.SSLSocketFactory
xxl:
  job:
    ### xxl-job, access token
    accessToken: default_token
    ### xxl-job, i18n (default is zh_CN, and you can choose "zh_CN", "zh_TC" and "en")
    i18n: zh_CN
    ## xxl-job, triggerpool max size
    triggerpool:
      fast:
        max: 200
      slow:
        max: 100
    ### xxl-job, log retention days
    logretentiondays: 30
```

4. Initialize the database required by xxl-job-admin

The initialization script ships in the downloaded package at \xxl-job-2.3.1\doc\db\tables_xxl_job.sql and creates eight tables in total. We keep the xxl-job-admin database separate from the business database, configure distinct data sources, and give xxl-job-admin its own configuration file in Nacos.

- Create the xxl_job database
- Open the database and run the table-creation statements

5. Add static-file filtering in the GitEgg parent pom.xml

xxl-job-admin is a SpringMVC project whose front end consists of ftl templates and static files. By default, when Maven's per-environment resource filtering is enabled, it substitutes @ placeholders in files under the resources directory, which corrupts the font files under static. So, just as with jks files, they must be excluded from filtering:

```xml
<resources>
    <!-- Per-environment filtered resources -->
    <resource>
        <directory>src/main/resources</directory>
        <filtering>true</filtering>
        <excludes>
            <exclude>**/*.jks</exclude>
            <exclude>static/**</exclude>
        </excludes>
    </resource>
    <!-- Copy jks and static files without filtering -->
    <resource>
        <directory>src/main/resources</directory>
        <filtering>false</filtering>
        <includes>
            <include>**/*.jks</include>
            <include>static/**</include>
        </includes>
    </resource>
    <resource>
        <directory>src/main/java</directory>
        <includes>
            <include>**/*.xml</include>
        </includes>
    </resource>
</resources>
```

6. Add xxl-job-admin route forwarding in the Gateway

Two routes are needed: one for the gitegg-job service that xxl-job-admin registers with Nacos, and one for the static files requested by xxl-job-admin's front-end pages. The first keeps it consistent with the rest of our microservices; the second is needed because xxl-job-admin's ftl pages request static files from the /xxl-job-admin root path. The new Gateway routes:

```yaml
- id: gitegg-job
  uri: lb://gitegg-job
  predicates:
    - Path=/gitegg-job/**
  filters:
    - StripPrefix=1
- id: xxl-job-admin
  uri: lb://gitegg-job
  predicates:
    - Path=/xxl-job-admin/**
  filters:
    - StripPrefix=0
```

7. Add xxl-job-admin to the gateway whitelist

xxl-job-admin has its own access control, so we do not authenticate it at the gateway; add whitelist entries in the Nacos configuration:

```yaml
# Gateway bypass settings:
# 1. whiteUrls: public URLs that skip authentication (the whitelist)
# 2. authUrls: public URLs that still require authentication
oauth-list:
  ......
  whiteUrls:
    ......
    - "/gitegg-job/**"
    - "/xxl-job-admin/**"
    ......
```

8. Start the xxl-job-admin microservice and verify it comes up. Default credentials: admin/123456.

II. Testing XXL-JOB scheduled tasks

In step I we integrated and started xxl-job-admin, which can be regarded as the registry and management console for distributed tasks. To actually run scheduled work, we still need concrete executors for xxl-job-admin to invoke.

XXL-JOB supports several ways of defining scheduled tasks; executors can live in business code or be maintained on the xxl-job-admin side:

- BEAN mode (class): each task corresponds to a Java class.
- BEAN mode (method): each task corresponds to a method.
- GLUE mode (Java/Shell/Python/NodeJS/PHP/PowerShell): task source code is maintained in the scheduling center, can be edited online through the Web IDE, and compiles and takes effect in real time, so no JobHandler needs to be specified.

1. Add the shared xxl-job configuration

Create the gitegg-platform-xxl-job project and add the shared configuration class XxlJobConfig.java, so that any microservice needing scheduled tasks only imports it once instead of repeating the configuration.

XxlJobConfig.java:

```java
package com.gitegg.platform.xxl.job.config;

import com.xxl.job.core.executor.impl.XxlJobSpringExecutor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * xxl-job config
 *
 * @author xuxueli 2017-04-28
 */
@Slf4j
@Configuration
public class XxlJobConfig {

    @Value("${xxl.job.admin.addresses}")
    private String adminAddresses;

    @Value("${xxl.job.accessToken}")
    private String accessToken;

    @Value("${xxl.job.executor.appname}")
    private String appname;

    @Value("${xxl.job.executor.address}")
    private String address;

    @Value("${xxl.job.executor.ip}")
    private String ip;

    @Value("${xxl.job.executor.port}")
    private int port;

    @Value("${xxl.job.executor.logpath}")
    private String logPath;

    @Value("${xxl.job.executor.logretentiondays}")
    private int logRetentionDays;

    @Bean
    public XxlJobSpringExecutor xxlJobExecutor() {
        log.info(">>>>>>>>>>> xxl-job config init.");
        XxlJobSpringExecutor xxlJobSpringExecutor = new XxlJobSpringExecutor();
        xxlJobSpringExecutor.setAdminAddresses(adminAddresses);
        xxlJobSpringExecutor.setAppname(appname);
        xxlJobSpringExecutor.setAddress(address);
        xxlJobSpringExecutor.setIp(ip);
        xxlJobSpringExecutor.setPort(port);
        xxlJobSpringExecutor.setAccessToken(accessToken);
        xxlJobSpringExecutor.setLogPath(logPath);
        xxlJobSpringExecutor.setLogRetentionDays(logRetentionDays);
        return xxlJobSpringExecutor;
    }

    /*
     * For multi-NIC or in-container deployments, the registration IP can be
     * customized with the "InetUtils" component from "spring-cloud-commons":
     * 1. Add the dependency:
     *    <dependency>
     *        <groupId>org.springframework.cloud</groupId>
     *        <artifactId>spring-cloud-commons</artifactId>
     *        <version>${version}</version>
     *    </dependency>
     * 2. In the config file, or via container startup variables:
     *    spring.cloud.inetutils.preferred-networks: 'xxx.xxx.xxx.'
     * 3. Obtain the IP:
     *    String ip_ = inetUtils.findFirstNonLoopbackHostInfo().getIpAddress();
     */
}
```

Nacos configuration center (again nested under `xxl.job` to match the `@Value("${xxl.job.*}")` placeholders above):

```yaml
xxl:
  job:
    admin:
      addresses: http://127.0.0.1/xxl-job-admin
    accessToken: 'default_token'
    executor:
      appname: ${spring.application.name}
      address:
      port: 9999
      logpath: D:\\log4j2_nacos\\xxl-job\\jobhandler
      logretentiondays: 30
```

2. Implement the scheduled-task test code

We test a task executor in gitegg-service-system: first add the gitegg-platform-xxl-job dependency to pom.xml, then add the test class SystemJobHandler.java.

SystemJobHandler.java:

```java
package com.gitegg.service.system.jobhandler;

import com.xxl.job.core.biz.model.ReturnT;
import com.xxl.job.core.context.XxlJobHelper;
import com.xxl.job.core.handler.annotation.XxlJob;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;

import java.util.concurrent.TimeUnit;

/**
 * Scheduled-task examples; for more, see
 * https://www.xuxueli.com/xxl-job
 *
 * @author GitEgg
 */
@Slf4j
@Component
public class SystemJobHandler {

    /**
     * 1. Simple task example (Bean mode), no return value
     */
    @XxlJob("systemJobHandler")
    public void systemJobHandler() throws Exception {
        XxlJobHelper.log("不带返回值:XXL-JOB, Hello World.");
        for (int i = 0; i < 5; i++) {
            XxlJobHelper.log("beat at:" + i);
            TimeUnit.SECONDS.sleep(2);
        }
    }

    /**
     * 2. Simple task example (Bean mode), returning success or failure
     */
    @XxlJob("userJobHandler")
    public ReturnT<String> userJobHandler() throws Exception {
        XxlJobHelper.log("带返回值:XXL-JOB, Hello World.");
        for (int i = 0; i < 5; i++) {
            XxlJobHelper.log("beat at:" + i);
            TimeUnit.SECONDS.sleep(2);
        }
        return ReturnT.SUCCESS;
    }
}
```

3. Register the executor in xxl-job-admin

- When creating the executor
- After gitegg-service-system starts, it registers automatically

4. Add a task in xxl-job-admin

An executor can be thought of as a group of microservices, and a task as the concrete method the microservice executes. A newly added task is in the STOP state and must be started manually; once the list shows RUNNING, the task is active and will execute on its configured schedule.

5. Verify that the executor runs

There are several ways to watch a task run in local development; direct debugging works. In production we can consult the xxl-job logs: everything logged in the test code can be inspected in detail in the xxl-job-admin console.

With the steps above, xxl-job and xxl-job-admin are integrated into our microservice architecture; any microservice with task-scheduling needs only has to implement an executor.

Source code: Gitee: https://gitee.com/wmz1930/GitEgg GitHub: https://github.com/wmz1930/GitEgg
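A note on running the same task on many instances at once: with XXL-JOB's "sharding broadcast" routing strategy, every executor instance is triggered with a shard index and shard total (exposed in 2.3.x as `XxlJobHelper.getShardIndex()` and `XxlJobHelper.getShardTotal()`), so each instance can claim a disjoint slice of the data instead of competing for a lock. The plain-Java sketch below illustrates the usual modulo split; the `ShardSplit` helper itself is made up for illustration and is not part of XXL-JOB.

```java
import java.util.ArrayList;
import java.util.List;

public class ShardSplit {

    // True when the record with this id belongs to shard `shardIndex`
    // out of `shardTotal` executor instances (simple modulo split).
    public static boolean belongsToShard(long id, int shardIndex, int shardTotal) {
        return shardTotal > 0 && id % shardTotal == shardIndex;
    }

    // Filter a batch of ids down to the slice owned by one shard.
    public static List<Long> slice(List<Long> ids, int shardIndex, int shardTotal) {
        List<Long> mine = new ArrayList<>();
        for (long id : ids) {
            if (belongsToShard(id, shardIndex, shardTotal)) {
                mine.add(id);
            }
        }
        return mine;
    }
}
```

Inside a handler one would call `int i = XxlJobHelper.getShardIndex(); int n = XxlJobHelper.getShardTotal();` and then process only `slice(allIds, i, n)`: the slices of different instances never overlap, so the same task can run concurrently on every node without duplicate work.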

SpringCloud Microservices in Action: Building an Enterprise Development Framework (41): Extending JustAuth + SpringSecurity + Vue for Multi-Tenant Third-Party Login via WeChat QR Code, DingTalk QR Code, and More

前面我们详细介绍了SSO、OAuth2的定义和实现原理,也举例说明了如何在微服务框架中使用spring-security-oauth2实现单点登录授权服务器和单点登录客户端。目前很多平台都提供了单点登录授权服务器功能,比如我们经常用到的QQ登录、微信登录、新浪微博登录、支付宝登录等等。  如果我们自己的系统需要调用第三方登录,那么我们就需要实现单点登录客户端,然后跟需要对接的平台调试登录SDK。JustAuth是第三方授权登录的工具类库,对接了国外内数十家第三方登录的SDK,我们在需要实现第三方登录时,只需要集成JustAuth工具包,然后配置即可实现第三方登录,省去了需要对接不同SDK的麻烦。  JustAuth官方提供了多种入门指南,集成使用非常方便。但是如果要贴合我们自有开发框架的业务需求,还是需要进行整合优化。下面根据我们的系统需求,从两方面进行整合:一是支持多租户功能,二是和自有系统的用户进行匹配。一、JustAuth多租户系统配置GitEgg多租户功能实现介绍  GitEgg框架支持多租户功能,从多租户的实现来讲,目前大多数平台都是在登录界面输入租户的标识来确定属于哪个租户,这种方式简单有效,但是对于用户来讲体验不是很好。我们更希望的多租户功能是能够让用户无感知,且每个租户有自己不同的界面展示。  GitEgg在实现多租户功能时,考虑到同一域名可以设置多个子域名,每个子域名可对应不同的租户。所以,对于多租户的识别方式,首先是根据浏览器当前访问的域名或IP地址和系统配置的多租户域名或IP地址信息进行自动识别,如果是域名或IP地址存在多个,或者未找到相关配置时,才会由用户自己选择属于哪个租户。自定义JustAuth配置文件信息到数据库和缓存  在JustAuth的官方Demo中,SpringBoot集成JustAuth是将第三方授权信息配置在yml配置文件中的,对于单租户系统来说,可以这样配置。但是,对于多租户系统,我们需要考虑多种情况:一种是整个多租户系统使用同一套第三方授权,授权之后再由用户选择绑定到具体的租户;另外一种是每个租户配置自己的第三方授权,更具差异化。  出于功能完整性的考虑,我们两种情况都实现,当租户不配置自有的第三方登录参数时,使用的是系统默认自带的第三方登录参数。当租户配置了自有的第三方登录参数时,就是使用租户自己的第三方授权服务器。我们将JustAuth原本配置在yml配置文件中的第三方授权服务器信息配置在数据库中,并增加多租户标识,这样在不同租户调用第三方登录时就是相互隔离的。JustAuth配置信息表字段设计  首先我们通过JustAuth官方Demo justauth-spring-boot-starter-demo 了解到JustAuth主要的配置参数为:JustAuth功能启用开关自定义第三方登录的配置信息内置默认第三方登录的配置信息Http请求代理的配置信息缓存的配置信息justauth: # JustAuth功能启用开关 enabled: true # 自定义第三方登录的配置信息 extend: enum-class: com.xkcoding.justauthspringbootstarterdemo.extend.ExtendSource config: TEST: request-class: com.xkcoding.justauthspringbootstarterdemo.extend.ExtendTestRequest client-id: xxxxxx client-secret: xxxxxxxx redirect-uri: http://oauth.xkcoding.com/demo/oauth/test/callback MYGITLAB: request-class: com.xkcoding.justauthspringbootstarterdemo.extend.ExtendMyGitlabRequest client-id: xxxxxx client-secret: xxxxxxxx redirect-uri: http://localhost:8443/oauth/mygitlab/callback # 内置默认第三方登录的配置信息 type: GOOGLE: client-id: xxxxxx client-secret: xxxxxxxx redirect-uri: http://localhost:8443/oauth/google/callback ignore-check-state: false scopes: - profile - email - openid # Http请求代理的配置信息 http-config: 
timeout: 30000 proxy: GOOGLE: type: HTTP hostname: 127.0.0.1 port: 10080 MYGITLAB: type: HTTP hostname: 127.0.0.1 port: 10080 # 缓存的配置信息 cache: type: default prefix: 'demo::' timeout: 1h  在对配置文件存储格式进行设计时,结合对多租户系统的需求分析,我们需要选择哪些配置是系统公共配置,哪些是租户自己的配置。比如自定义第三方登录的enum-class这个是需要由系统开发的,是整个多租户系统的功能,这种可以看做是通用配置,但是在这里,考虑到后续JustAuth系统升级,我们不打算破坏原先配置文件的结构,所以我们仍选择各租户隔离配置。  我们将JustAuth配置信息拆分为两张表存储,一张是配置JustAuth开关、自定义第三方登录配置类、缓存配置、Http超时配置等信息的表(t_just_auth_config),这些配置信息的同一特点是与第三方登录系统无关,不因第三方登录系统的改变而改变;还有一张表是配置第三方登录相关的参数、Http代理请求表(t_just_auth_source)。租户和t_just_auth_config为一对一关系,和t_just_auth_source为一对多关系。t_just_auth_config(租户第三方登录功能配置表)表定义:SET NAMES utf8mb4; SET FOREIGN_KEY_CHECKS = 0; -- ---------------------------- -- Table structure for t_just_auth_config -- ---------------------------- DROP TABLE IF EXISTS `t_just_auth_config`; CREATE TABLE `t_just_auth_config` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `enabled` tinyint(1) NULL DEFAULT NULL COMMENT 'JustAuth开关', `enum_class` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '自定义扩展第三方登录的配置类', `http_timeout` bigint(20) NULL DEFAULT NULL COMMENT 'Http请求的超时时间', `cache_type` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '缓存类型', `cache_prefix` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '缓存前缀', `cache_timeout` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '缓存超时时间', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '是否删除', PRIMARY KEY (`id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '租户第三方登录功能配置表' ROW_FORMAT = 
DYNAMIC;SET FOREIGN_KEY_CHECKS = 1; t_just_auth_sourc(租户第三方登录信息配置表)表定义: SET NAMES utf8mb4; SET FOREIGN_KEY_CHECKS = 0; -- ---------------------------- -- Table structure for t_just_auth_source -- ---------------------------- DROP TABLE IF EXISTS `t_just_auth_source`; CREATE TABLE `t_just_auth_source` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `source_name` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '第三方登录的名称', `source_type` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '第三方登录类型:默认default 自定义custom', `request_class` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '自定义第三方登录的请求Class', `client_id` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '客户端id:对应各平台的appKey', `client_secret` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '客户端Secret:对应各平台的appSecret', `redirect_uri` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '登录成功后的回调地址', `alipay_public_key` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '支付宝公钥:当选择支付宝登录时,该值可用', `union_id` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '是否需要申请unionid,目前只针对qq登录', `stack_overflow_key` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Stack Overflow Key', `agent_id` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '企业微信,授权方的网页应用ID', `user_type` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '企业微信第三方授权用户类型,member|admin', `domain_prefix` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '域名前缀 使用 Coding 登录和 Okta 登录时,需要传该值。', `ignore_check_state` tinyint(1) NOT NULL DEFAULT 0 COMMENT 
'忽略校验code state}参数,默认不开启。', `scopes` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '支持自定义授权平台的 scope 内容', `device_id` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '设备ID, 设备唯一标识ID', `client_os_type` int(11) NULL DEFAULT NULL COMMENT '喜马拉雅:客户端操作系统类型,1-iOS系统,2-Android系统,3-Web', `pack_id` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '喜马拉雅:客户端包名', `pkce` tinyint(1) NULL DEFAULT NULL COMMENT ' 是否开启 PKCE 模式,该配置仅用于支持 PKCE 模式的平台,针对无服务应用,不推荐使用隐式授权,推荐使用 PKCE 模式', `auth_server_id` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Okta 授权服务器的 ID, 默认为 default。', `ignore_check_redirect_uri` tinyint(1) NOT NULL DEFAULT 0 COMMENT '忽略校验 {@code redirectUri} 参数,默认不开启。', `proxy_type` varchar(10) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Http代理类型', `proxy_host_name` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Http代理Host', `proxy_port` int(11) NULL DEFAULT NULL COMMENT 'Http代理Port', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '是否删除', PRIMARY KEY (`id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '租户第三方登录信息配置表' ROW_FORMAT = DYNAMIC; SET FOREIGN_KEY_CHECKS = 1;使用GitEgg代码生成工具生成JustAuth配置信息的CRUD代码  我们将JustAuth配置信息管理的相关代码和JustAuth实现业务逻辑的代码分开,配置信息我们在系统启动时加载到Redis缓存,JustAuth在调用时,直接调用Redis缓存中的配置。  前面讲过如何通过数据库表设计生成CRUD的前后端代码,这里不再赘述,生成好的后台代码我们放在gitegg-service-extension工程下,和短信、文件存储等的配置放到同一工程下,作为框架的扩展功能。基础配置:第三方列表:代码生成之后,需要做初始化缓存处理,即在第三方配置服务启动的时候,将多租户的配置信息初始化到Redis缓存中。初始化的CommandLineRunner类 InitExtensionCacheRunner.java/** * 容器启动完成加载资源权限数据到缓存 * @author GitEgg @Slf4j 
@RequiredArgsConstructor(onConstructor_ = @Autowired) @Component public class InitExtensionCacheRunner implements CommandLineRunner { private final IJustAuthConfigService justAuthConfigService; private final IJustAuthSourceService justAuthSourceService; @Override public void run(String... args) { log.info("InitExtensionCacheRunner running"); // 初始化第三方登录主配置 justAuthConfigService.initJustAuthConfigList(); // 初始化第三方登录 第三方配置 justAuthSourceService.initJustAuthSourceList(); 第三方登录主配置初始化方法 * 初始化配置表列表 * @return @Override public void initJustAuthConfigList() { QueryJustAuthConfigDTO queryJustAuthConfigDTO = new QueryJustAuthConfigDTO(); queryJustAuthConfigDTO.setStatus(GitEggConstant.ENABLE); List<JustAuthConfigDTO> justAuthSourceInfoList = justAuthConfigMapper.initJustAuthConfigList(queryJustAuthConfigDTO); // 判断是否开启了租户模式,如果开启了,那么角色权限需要按租户进行分类存储 if (enable) { Map<Long, List<JustAuthConfigDTO>> authSourceListMap = justAuthSourceInfoList.stream().collect(Collectors.groupingBy(JustAuthConfigDTO::getTenantId)); authSourceListMap.forEach((key, value) -> { String redisKey = AuthConstant.SOCIAL_TENANT_CONFIG_KEY + key; redisTemplate.delete(redisKey); addJustAuthConfig(redisKey, value); } else { redisTemplate.delete(AuthConstant.SOCIAL_CONFIG_KEY); addJustAuthConfig(AuthConstant.SOCIAL_CONFIG_KEY, justAuthSourceInfoList); private void addJustAuthConfig(String key, List<JustAuthConfigDTO> configList) { Map<String, String> authConfigMap = new TreeMap<>(); Optional.ofNullable(configList).orElse(new ArrayList<>()).forEach(config -> { try { authConfigMap.put(config.getTenantId().toString(), JsonUtils.objToJson(config)); redisTemplate.opsForHash().putAll(key, authConfigMap); } catch (Exception e) { log.error("初始化第三方登录失败:{}" , e); 第三方登录参数配置初始化方法 * 初始化配置表列表 * @return @Override public void initJustAuthSourceList() { QueryJustAuthSourceDTO queryJustAuthSourceDTO = new QueryJustAuthSourceDTO(); queryJustAuthSourceDTO.setStatus(GitEggConstant.ENABLE); List<JustAuthSourceDTO> 
justAuthSourceInfoList = justAuthSourceMapper.initJustAuthSourceList(queryJustAuthSourceDTO); // 判断是否开启了租户模式,如果开启了,那么角色权限需要按租户进行分类存储 if (enable) { Map<Long, List<JustAuthSourceDTO>> authSourceListMap = justAuthSourceInfoList.stream().collect(Collectors.groupingBy(JustAuthSourceDTO::getTenantId)); authSourceListMap.forEach((key, value) -> { String redisKey = AuthConstant.SOCIAL_TENANT_SOURCE_KEY + key; redisTemplate.delete(redisKey); addJustAuthSource(redisKey, value); } else { redisTemplate.delete(AuthConstant.SOCIAL_SOURCE_KEY); addJustAuthSource(AuthConstant.SOCIAL_SOURCE_KEY, justAuthSourceInfoList); private void addJustAuthSource(String key, List<JustAuthSourceDTO> sourceList) { Map<String, String> authConfigMap = new TreeMap<>(); Optional.ofNullable(sourceList).orElse(new ArrayList<>()).forEach(source -> { try { authConfigMap.put(source.getSourceName(), JsonUtils.objToJson(source)); redisTemplate.opsForHash().putAll(key, authConfigMap); } catch (Exception e) { log.error("初始化第三方登录失败:{}" , e); 引入JustAuth相关依赖jar包在gitegg-platform-bom工程中引入JustAuth包和版本,JustAuth提供了SpringBoot集成版本justAuth-spring-security-starter,如果简单使用,可以直接引用SpringBoot集成版本,我们这里因为需要做相应的定制修改,所以引入JustAuth基础工具包。······ <!-- JustAuth第三方登录 --> <just.auth.version>1.16.5</just.auth.version> <!-- JustAuth SpringBoot集成 --> <just.auth.spring.version>1.4.0</just.auth.spring.version> ······ <!--JustAuth第三方登录--> <dependency> <groupId>me.zhyd.oauth</groupId> <artifactId>JustAuth</artifactId> <version>${just.auth.version}</version> </dependency> <!--JustAuth SpringBoot集成--> <dependency> <groupId>com.xkcoding.justauth</groupId> <artifactId>justauth-spring-boot-starter</artifactId> <version>${just.auth.spring.version}</version> </dependency> ······ 新建gitegg-platform-justauth工程,用于实现公共自定义代码,并在pom.xml中引入需要的jar包。 <dependencies> <!-- gitegg Spring Boot自定义及扩展 --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-boot</artifactId> </dependency> <!--JustAuth第三方登录--> <dependency> 
<groupId>me.zhyd.oauth</groupId> <artifactId>JustAuth</artifactId> </dependency> <!--JustAuth SpringBoot集成--> <dependency> <groupId>com.xkcoding.justauth</groupId> <artifactId>justauth-spring-boot-starter</artifactId> <!-- 不使用JustAuth默认版本--> <exclusions> <exclusion> <groupId>me.zhyd.oauth</groupId> <artifactId>JustAuth</artifactId> </exclusion> <exclusion> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-redis</artifactId> </exclusion> <exclusion> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-autoconfigure</artifactId> </exclusion> <exclusion> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-configuration-processor</artifactId> </exclusion> </exclusions> </dependency> </dependencies>自定义实现获取和实例化多租户第三方登录配置的AuthRequest工厂类GitEggAuthRequestFactory.java/** * GitEggAuthRequestFactory工厂类 * @author GitEgg @Slf4j @RequiredArgsConstructor public class GitEggAuthRequestFactory { private final RedisTemplate redisTemplate; private final AuthRequestFactory authRequestFactory; private final JustAuthProperties justAuthProperties; * 是否开启租户模式 @Value("${tenant.enable}") private Boolean enable; public GitEggAuthRequestFactory(AuthRequestFactory authRequestFactory, RedisTemplate redisTemplate, JustAuthProperties justAuthProperties) { this.authRequestFactory = authRequestFactory; this.redisTemplate = redisTemplate; this.justAuthProperties = justAuthProperties; * 返回当前Oauth列表 * @return Oauth列表 public List<String> oauthList() { // 合并 return authRequestFactory.oauthList(); * 返回AuthRequest对象 * @param source {@link AuthSource} * @return {@link AuthRequest} public AuthRequest get(String source) { if (StrUtil.isBlank(source)) { throw new AuthException(AuthResponseStatus.NO_AUTH_SOURCE); // 组装多租户的缓存配置key String authConfigKey = AuthConstant.SOCIAL_TENANT_CONFIG_KEY; if (enable) { authConfigKey += GitEggAuthUtils.getTenantId(); } else { authConfigKey = AuthConstant.SOCIAL_CONFIG_KEY; // 获取主配置,每个租户只有一个主配置 String sourceConfigStr 
= (String) redisTemplate.opsForHash().get(authConfigKey, GitEggAuthUtils.getTenantId()); AuthConfig authConfig = null; JustAuthSource justAuthSource = null; AuthRequest tenantIdAuthRequest = null; if (!StringUtils.isEmpty(sourceConfigStr)) try { // 转为系统配置对象 JustAuthConfig justAuthConfig = JsonUtils.jsonToPojo(sourceConfigStr, JustAuthConfig.class); // 判断该配置是否开启了第三方登录 if (justAuthConfig.getEnabled()) // 根据配置生成StateCache CacheProperties cacheProperties = new CacheProperties(); if (!StringUtils.isEmpty(justAuthConfig.getCacheType()) && !StringUtils.isEmpty(justAuthConfig.getCachePrefix()) && null != justAuthConfig.getCacheTimeout()) cacheProperties.setType(CacheProperties.CacheType.valueOf(justAuthConfig.getCacheType().toUpperCase())); cacheProperties.setPrefix(justAuthConfig.getCachePrefix()); cacheProperties.setTimeout(Duration.ofMinutes(justAuthConfig.getCacheTimeout())); cacheProperties = justAuthProperties.getCache(); GitEggRedisStateCache gitEggRedisStateCache = new GitEggRedisStateCache(redisTemplate, cacheProperties, enable); // 组装多租户的第三方配置信息key String authSourceKey = AuthConstant.SOCIAL_TENANT_SOURCE_KEY; if (enable) { authSourceKey += GitEggAuthUtils.getTenantId(); } else { authSourceKey = AuthConstant.SOCIAL_SOURCE_KEY; // 获取具体的第三方配置信息 String sourceAuthStr = (String)redisTemplate.opsForHash().get(authSourceKey, source.toUpperCase()); if (!StringUtils.isEmpty(sourceAuthStr)) // 转为系统配置对象 justAuthSource = JsonUtils.jsonToPojo(sourceAuthStr, JustAuthSource.class); authConfig = BeanCopierUtils.copyByClass(justAuthSource, AuthConfig.class); // 组装scopes,因为系统配置的是逗号分割的字符串 if (!StringUtils.isEmpty(justAuthSource.getScopes())) String[] scopes = justAuthSource.getScopes().split(StrUtil.COMMA); authConfig.setScopes(Arrays.asList(scopes)); // 设置proxy if (StrUtil.isAllNotEmpty(justAuthSource.getProxyType(), justAuthSource.getProxyHostName()) && null != justAuthSource.getProxyPort()) JustAuthProperties.JustAuthProxyConfig proxyConfig = new 
JustAuthProperties.JustAuthProxyConfig(); proxyConfig.setType(justAuthSource.getProxyType()); proxyConfig.setHostname(justAuthSource.getProxyHostName()); proxyConfig.setPort(justAuthSource.getProxyPort()); if (null != proxyConfig) { HttpConfig httpConfig = HttpConfig.builder().timeout(justAuthSource.getProxyPort()).proxy(new Proxy(Proxy.Type.valueOf(proxyConfig.getType()), new InetSocketAddress(proxyConfig.getHostname(), proxyConfig.getPort()))).build(); if (null != justAuthConfig.getHttpTimeout()) httpConfig.setTimeout(justAuthConfig.getHttpTimeout()); authConfig.setHttpConfig(httpConfig); // 组装好配置后,从配置生成request,判断是默认的第三方登录还是自定义第三方登录 if (SourceTypeEnum.DEFAULT.key.equals(justAuthSource.getSourceType())) tenantIdAuthRequest = this.getDefaultRequest(source, authConfig, gitEggRedisStateCache); else if (!StringUtils.isEmpty(justAuthConfig.getEnumClass()) && SourceTypeEnum.CUSTOM.key.equals(justAuthSource.getSourceType())) try { Class enumConfigClass = Class.forName(justAuthConfig.getEnumClass()); tenantIdAuthRequest = this.getExtendRequest(enumConfigClass, source, (ExtendProperties.ExtendRequestConfig) authConfig, gitEggRedisStateCache); } catch (ClassNotFoundException e) { log.error("初始化自定义第三方登录时发生异常:{}", e); } catch (Exception e) { log.error("获取第三方登录时发生异常:{}", e); if (null == tenantIdAuthRequest) tenantIdAuthRequest = authRequestFactory.get(source); return tenantIdAuthRequest; * 获取单个的request * @param source * @return private AuthRequest getDefaultRequest(String source, AuthConfig authConfig, GitEggRedisStateCache gitEggRedisStateCache) { AuthDefaultSource authDefaultSource; try { authDefaultSource = EnumUtil.fromString(AuthDefaultSource.class, source.toUpperCase()); } catch (IllegalArgumentException var4) { return null; // 从缓存获取租户单独配置 switch(authDefaultSource) { case GITHUB: return new AuthGithubRequest(authConfig, gitEggRedisStateCache); case WEIBO: return new AuthWeiboRequest(authConfig, gitEggRedisStateCache); case GITEE: return new AuthGiteeRequest(authConfig, 
gitEggRedisStateCache); case DINGTALK: return new AuthDingTalkRequest(authConfig, gitEggRedisStateCache); case DINGTALK_ACCOUNT: return new AuthDingTalkAccountRequest(authConfig, gitEggRedisStateCache); case BAIDU: return new AuthBaiduRequest(authConfig, gitEggRedisStateCache); case CSDN: return new AuthCsdnRequest(authConfig, gitEggRedisStateCache); case CODING: return new AuthCodingRequest(authConfig, gitEggRedisStateCache); case OSCHINA: return new AuthOschinaRequest(authConfig, gitEggRedisStateCache); case ALIPAY: return new AuthAlipayRequest(authConfig, gitEggRedisStateCache); case QQ: return new AuthQqRequest(authConfig, gitEggRedisStateCache); case WECHAT_OPEN: return new AuthWeChatOpenRequest(authConfig, gitEggRedisStateCache); case WECHAT_MP: return new AuthWeChatMpRequest(authConfig, gitEggRedisStateCache); case WECHAT_ENTERPRISE: return new AuthWeChatEnterpriseQrcodeRequest(authConfig, gitEggRedisStateCache); case WECHAT_ENTERPRISE_WEB: return new AuthWeChatEnterpriseWebRequest(authConfig, gitEggRedisStateCache); case TAOBAO: return new AuthTaobaoRequest(authConfig, gitEggRedisStateCache); case GOOGLE: return new AuthGoogleRequest(authConfig, gitEggRedisStateCache); case FACEBOOK: return new AuthFacebookRequest(authConfig, gitEggRedisStateCache); case DOUYIN: return new AuthDouyinRequest(authConfig, gitEggRedisStateCache); case LINKEDIN: return new AuthLinkedinRequest(authConfig, gitEggRedisStateCache); case MICROSOFT: return new AuthMicrosoftRequest(authConfig, gitEggRedisStateCache); case MI: return new AuthMiRequest(authConfig, gitEggRedisStateCache); case TOUTIAO: return new AuthToutiaoRequest(authConfig, gitEggRedisStateCache); case TEAMBITION: return new AuthTeambitionRequest(authConfig, gitEggRedisStateCache); case RENREN: return new AuthRenrenRequest(authConfig, gitEggRedisStateCache); case PINTEREST: return new AuthPinterestRequest(authConfig, gitEggRedisStateCache); case STACK_OVERFLOW: return new AuthStackOverflowRequest(authConfig, 
gitEggRedisStateCache);
        case HUAWEI: return new AuthHuaweiRequest(authConfig, gitEggRedisStateCache);
        case GITLAB: return new AuthGitlabRequest(authConfig, gitEggRedisStateCache);
        case KUJIALE: return new AuthKujialeRequest(authConfig, gitEggRedisStateCache);
        case ELEME: return new AuthElemeRequest(authConfig, gitEggRedisStateCache);
        case MEITUAN: return new AuthMeituanRequest(authConfig, gitEggRedisStateCache);
        case TWITTER: return new AuthTwitterRequest(authConfig, gitEggRedisStateCache);
        case FEISHU: return new AuthFeishuRequest(authConfig, gitEggRedisStateCache);
        case JD: return new AuthJdRequest(authConfig, gitEggRedisStateCache);
        case ALIYUN: return new AuthAliyunRequest(authConfig, gitEggRedisStateCache);
        case XMLY: return new AuthXmlyRequest(authConfig, gitEggRedisStateCache);
        case AMAZON: return new AuthAmazonRequest(authConfig, gitEggRedisStateCache);
        case SLACK: return new AuthSlackRequest(authConfig, gitEggRedisStateCache);
        case LINE: return new AuthLineRequest(authConfig, gitEggRedisStateCache);
        case OKTA: return new AuthOktaRequest(authConfig, gitEggRedisStateCache);
        default: return null;
    }
}

private AuthRequest getExtendRequest(Class clazz, String source, ExtendProperties.ExtendRequestConfig extendRequestConfig, GitEggRedisStateCache gitEggRedisStateCache) {
    String upperSource = source.toUpperCase();
    try {
        EnumUtil.fromString(clazz, upperSource);
    } catch (IllegalArgumentException var8) {
        return null;
    }
    if (extendRequestConfig != null) {
        Class<?
extends AuthRequest> requestClass = extendRequestConfig.getRequestClass();
        if (requestClass != null) {
            return (AuthRequest) ReflectUtil.newInstance(requestClass,
                    new Object[]{extendRequestConfig, gitEggRedisStateCache});
        }
    }
    return null;
}

Registering or binding a user after login

With third-party login working, our own system still needs to match the third-party account to a local user. As the OAuth2 protocol shows, once single sign-on succeeds we can obtain the user's information from the third-party system; exactly which fields are returned is decided by that system. That is why most platforms show a registration or binding page right after a successful third-party login, linking the third-party account to a local account. On the next third-party login the local user is matched automatically, and that user's local permissions, menus and so on can then be loaded. JustAuth's official account-integration flow chart:

Integrating JustAuth with an existing user system

Our usual third-party login flow is: the user clicks login; once third-party authorization is obtained, we check whether a matching user exists in our own database. If one exists, the user is logged straight into the back end; if not, the user is redirected to an account-binding or registration page. We implement this flow in the gitegg-oauth microservice with a new SocialController class:

/**
 * Third-party login
 * @author GitEgg
 */
@Slf4j
@RestController
@RequestMapping("/social")
@RequiredArgsConstructor(onConstructor_ = @Autowired)
public class SocialController {

    private final GitEggAuthRequestFactory factory;
    private final IJustAuthFeign justAuthFeign;
    private final IUserFeign userFeign;
    private final ISmsFeign smsFeign;

    @Value("${system.secret-key}")
    private String secretKey;

    @Value("${system.secret-key-salt}")
    private String secretKeySalt;

    private final RedisTemplate redisTemplate;

    /**
     * Maximum number of password attempts
     */
    @Value("${system.maxTryTimes}")
    private int maxTryTimes;

    /**
     * Lock time, in seconds
     * (the original reused ${system.maxTryTimes} here, which looks like a copy-paste slip)
     */
    @Value("${system.maxLockTime}")
    private long maxLockTime;

    /**
     * Third-party login cache time, in seconds
     */
    @Value("${system.socialLoginExpiration}")
    private long socialLoginExpiration;

    @GetMapping
    public List<String> list() {
        return factory.oauthList();
    }

    /**
     * Get the login url for the given type
     * @param type
     * @return
     */
    @GetMapping("/login/{type}")
    public Result login(@PathVariable String type) {
        AuthRequest authRequest = factory.get(type);
        return Result.data(authRequest.authorize(AuthStateUtils.createState()));
    }

    /**
     * Save or update the user data, and decide whether to register or bind
     * @param type
     * @param callback
     * @return
     */
    @RequestMapping("/{type}/callback")
    public Result login(@PathVariable String type, AuthCallback callback) {
        AuthRequest authRequest = factory.get(type);
        AuthResponse response = authRequest.login(callback);
        if (response.ok()) {
            AuthUser authUser = (AuthUser)
response.getData();
            JustAuthSocialInfoDTO justAuthSocialInfoDTO = BeanCopierUtils.copyByClass(authUser, JustAuthSocialInfoDTO.class);
            BeanCopierUtils.copyByObject(authUser.getToken(), justAuthSocialInfoDTO);
            // After obtaining the third-party user info, save or update it first
            Result<Object> createResult = justAuthFeign.userCreateOrUpdate(justAuthSocialInfoDTO);
            if (createResult.isSuccess() && null != createResult.getData()) {
                Long socialId = Long.parseLong((String) createResult.getData());
                // Check whether this third-party user is already bound to a system user
                Result<Object> bindResult = justAuthFeign.userBindQuery(socialId);
                // The front end uses this response to decide whether the account is already bound
                // Encrypt the socialId before returning it
                DES des = new DES(Mode.CTS, Padding.PKCS5Padding, secretKey.getBytes(), secretKeySalt.getBytes());
                // source + uuid, DES-encrypted, is returned to the front end as the key
                String socialKey = authUser.getSource() + StrPool.UNDERLINE + authUser.getUuid();
                // Cache the socialKey; it is valid for 2 hours by default. If verification is not
                // completed within that time, the operation expires and must be restarted.
                // Configured via system:socialLoginExpiration.
                redisTemplate.opsForValue().set(AuthConstant.SOCIAL_VALIDATION_PREFIX + socialKey,
                        createResult.getData(), socialLoginExpiration, TimeUnit.SECONDS);
                String desSocialKey = des.encryptHex(socialKey);
                bindResult.setData(desSocialKey);
                // "Success" here means the request succeeded; whether a bound user
                // exists is carried inside the nested result
                return Result.data(bindResult);
            }
            return Result.error("Failed to obtain third-party user binding information");
        }
        throw new BusinessException(response.getMsg());
    }

    /**
     * Bind the user's mobile number.
     * This does not go through the mobile-login flow, because if the number does
     * not exist we can simply create a user and bind it.
     */
    @PostMapping("/bind/mobile")
    @ApiOperation(value = "Bind the user's mobile number")
    public Result<?> bindMobile(@Valid @RequestBody SocialBindMobileDTO socialBind) {
        Result<?> smsResult = smsFeign.checkSmsVerificationCode(socialBind.getSmsCode(),
                socialBind.getPhoneNumber(), socialBind.getCode());
        // Check whether SMS verification succeeded
        if (smsResult.isSuccess() && null != smsResult.getData() && (Boolean) smsResult.getData()) {
            // Decrypt the socialKey sent by the front end
            DES des = new DES(Mode.CTS, Padding.PKCS5Padding, secretKey.getBytes(), secretKeySalt.getBytes());
            String desSocialKey = des.decryptStr(socialBind.getSocialKey());
            // Look up the cached socialId (valid for 2 hours by default,
            // configured via system:socialLoginExpiration)
            String
desSocialId = (String) redisTemplate.opsForValue().get(AuthConstant.SOCIAL_VALIDATION_PREFIX + desSocialKey);
            // Query the third-party user info
            Result<Object> justAuthInfoResult = justAuthFeign.querySocialInfo(Long.valueOf(desSocialId));
            if (null == justAuthInfoResult || !justAuthInfoResult.isSuccess() || null == justAuthInfoResult.getData()) {
                throw new BusinessException("Third-party user info not found, please return to the login page and retry");
            }
            JustAuthSocialInfoDTO justAuthSocialInfoDTO = BeanUtil.copyProperties(justAuthInfoResult.getData(), JustAuthSocialInfoDTO.class);
            // Check whether the user already exists; if so, call the bind interface directly
            Result<Object> result = userFeign.queryUserByPhone(socialBind.getPhoneNumber());
            Long userId;
            if (null != result && result.isSuccess() && null != result.getData()) {
                GitEggUser gitEggUser = BeanUtil.copyProperties(result.getData(), GitEggUser.class);
                userId = gitEggUser.getId();
            } else {
                // If the user does not exist, create one and then bind it
                UserAddDTO userAdd = new UserAddDTO();
                userAdd.setAccount(socialBind.getPhoneNumber());
                userAdd.setMobile(socialBind.getPhoneNumber());
                userAdd.setNickname(justAuthSocialInfoDTO.getNickname());
                userAdd.setPassword(StringUtils.isEmpty(justAuthSocialInfoDTO.getUnionId()) ?
justAuthSocialInfoDTO.getUuid() : justAuthSocialInfoDTO.getUnionId());
                userAdd.setStatus(GitEggConstant.UserStatus.ENABLE);
                userAdd.setAvatar(justAuthSocialInfoDTO.getAvatar());
                userAdd.setEmail(justAuthSocialInfoDTO.getEmail());
                userAdd.setStreet(justAuthSocialInfoDTO.getLocation());
                userAdd.setComments(justAuthSocialInfoDTO.getRemark());
                Result<?> resultUserAdd = userFeign.userAdd(userAdd);
                if (null != resultUserAdd && resultUserAdd.isSuccess() && null != resultUserAdd.getData()) {
                    userId = Long.parseLong((String) resultUserAdd.getData());
                } else {
                    // If creating the user failed, return the failure message
                    return resultUserAdd;
                }
            }
            // Perform the binding
            return justAuthFeign.userBind(Long.valueOf(desSocialId), userId);
        }
        return smsResult;
    }

    /**
     * Bind an existing account.
     * Only binding happens here; no user is created.
     */
    @PostMapping("/bind/account")
    @ApiOperation(value = "Bind a user account")
    public Result<?> bindAccount(@Valid @RequestBody SocialBindAccountDTO socialBind) {
        // Check whether the user exists; if so, call the bind interface directly
        Result<?> result = userFeign.queryUserByAccount(socialBind.getUsername());
        if (null != result && result.isSuccess() && null != result.getData()) {
            GitEggUser gitEggUser = BeanUtil.copyProperties(result.getData(), GitEggUser.class);
            // Attempt counting is required here: just like at login, the account is
            // locked once the maximum number of attempts is exceeded.
            // Read the failed-attempt count from Redis
            Object lockTimes = redisTemplate.boundValueOps(AuthConstant.LOCK_ACCOUNT_PREFIX + gitEggUser.getId()).get();
            // If the password has been entered wrongly too many times, lock the account
            if (null != lockTimes && (int) lockTimes >= maxTryTimes) {
                throw new BusinessException("Too many password attempts, please bind using another method");
            }
            PasswordEncoder passwordEncoder = PasswordEncoderFactories.createDelegatingPasswordEncoder();
            String password = AuthConstant.BCRYPT + gitEggUser.getAccount()
                    + DigestUtils.md5DigestAsHex(socialBind.getPassword().getBytes());
            // Verify the account and password
            if (passwordEncoder.matches(password, gitEggUser.getPassword())) {
                // Decrypt the socialId sent by the front end
                DES des = new DES(Mode.CTS, Padding.PKCS5Padding, secretKey.getBytes(), secretKeySalt.getBytes());
                String desSocialKey = des.decryptStr(socialBind.getSocialKey());
                // Look up the cached socialId (valid for 2 hours by default,
                // configured via system:socialLoginExpiration)
String desSocialId = (String) redisTemplate.opsForValue().get(AuthConstant.SOCIAL_VALIDATION_PREFIX + desSocialKey);
                // Perform the binding
                return justAuthFeign.userBind(Long.valueOf(desSocialId), gitEggUser.getId());
            }
            // Increase the lock counter
            redisTemplate.boundValueOps(AuthConstant.LOCK_ACCOUNT_PREFIX + gitEggUser.getId()).increment(GitEggConstant.Number.ONE);
            redisTemplate.expire(AuthConstant.LOCK_ACCOUNT_PREFIX + gitEggUser.getId(), maxLockTime, TimeUnit.SECONDS);
            throw new BusinessException("Wrong account or password");
        }
        throw new BusinessException("Account does not exist");
    }
}

With all configuration and bind/registration features in place, one key step remains: implementing a custom OAuth2 grant type for third-party login, SocialTokenGranter. After third-party authorization, login is performed through this grant type. Once it is implemented, remember to add the social grant type to the t_oauth_client_details table.

SocialTokenGranter.java

/**
 * Third-party login grant type
 * @author GitEgg
 */
public class SocialTokenGranter extends AbstractTokenGranter {

    private static final String GRANT_TYPE = "social";

    private final AuthenticationManager authenticationManager;
    private UserDetailsService userDetailsService;
    private IJustAuthFeign justAuthFeign;
    private RedisTemplate redisTemplate;
    private String captchaType;
    private String secretKey;
    private String secretKeySalt;

    public SocialTokenGranter(AuthenticationManager authenticationManager, AuthorizationServerTokenServices tokenServices,
                              ClientDetailsService clientDetailsService, OAuth2RequestFactory requestFactory,
                              RedisTemplate redisTemplate, IJustAuthFeign justAuthFeign, UserDetailsService userDetailsService,
                              String captchaType, String secretKey, String secretKeySalt) {
        this(authenticationManager, tokenServices, clientDetailsService, requestFactory, GRANT_TYPE);
        this.redisTemplate = redisTemplate;
        this.captchaType = captchaType;
        this.secretKey = secretKey;
        this.secretKeySalt = secretKeySalt;
        this.justAuthFeign = justAuthFeign;
        this.userDetailsService = userDetailsService;
    }

    protected SocialTokenGranter(AuthenticationManager authenticationManager, AuthorizationServerTokenServices tokenServices,
                                 ClientDetailsService clientDetailsService, OAuth2RequestFactory requestFactory, String grantType) {
        super(tokenServices,
clientDetailsService, requestFactory, grantType);
        this.authenticationManager = authenticationManager;
    }

    @Override
    protected OAuth2Authentication getOAuth2Authentication(ClientDetails client, TokenRequest tokenRequest) {
        Map<String, String> parameters = new LinkedHashMap<>(tokenRequest.getRequestParameters());
        String socialKey = parameters.get(TokenConstant.SOCIAL_KEY);
        // Protect from downstream leaks of password
        parameters.remove(TokenConstant.SOCIAL_KEY);

        // Validate the socialId
        String socialId;
        try {
            // Decrypt the key that was DES-encrypted before being returned
            DES des = new DES(Mode.CTS, Padding.PKCS5Padding, secretKey.getBytes(), secretKeySalt.getBytes());
            String desSocialKey = des.decryptStr(socialKey);
            // Read the cached value
            socialId = (String) redisTemplate.opsForValue().get(AuthConstant.SOCIAL_VALIDATION_PREFIX + desSocialKey);
        } catch (Exception e) {
            throw new InvalidGrantException("Third-party login verification has expired, please return to the login page and retry");
        }
        if (StringUtils.isEmpty(socialId)) {
            throw new InvalidGrantException("Third-party login verification has expired, please return to the login page and retry");
        }

        // Validate the userId
        String userId;
        try {
            Result<Object> socialResult = justAuthFeign.userBindQuery(Long.parseLong(socialId));
            if (null == socialResult || StringUtils.isEmpty(socialResult.getData())) {
                throw new InvalidGrantException("Operation failed, please return to the login page and retry");
            }
            userId = (String) socialResult.getData();
        } catch (Exception e) {
            throw new InvalidGrantException("Operation failed, please return to the login page and retry");
        }
        if (StringUtils.isEmpty(userId)) {
            throw new InvalidGrantException("Operation failed, please return to the login page and retry");
        }

        // Load the user details by user id
        UserDetails userDetails = this.userDetailsService.loadUserByUsername(userId);
        Authentication userAuth = new UsernamePasswordAuthenticationToken(userDetails, null, userDetails.getAuthorities());
        ((AbstractAuthenticationToken) userAuth).setDetails(parameters);
        OAuth2Request storedOAuth2Request = getRequestFactory().createOAuth2Request(client, tokenRequest);
        return new OAuth2Authentication(storedOAuth2Request, userAuth);
    }
}

After the back end is done, the Vue front end also needs to handle the callback

Because this is a front-end/back-end separated project, the third-party callback URL points at a Vue page; the page decides, based on the account information, whether to log in directly or go through binding or registration. Create SocialCallback.vue to handle the front-end callback after third-party authorization.

SocialCallback.vue

<template>
<div>
</div>
</template>
<script>
import { socialLoginCallback } from '@/api/login'
import { mapActions } from 'vuex'
export default {
  name: 'SocialCallback',
  created () {
    this.$loading.show({
      tip: 'Logging in......'
    })
    const query = this.$route.query
    const socialType = this.$route.params.socialType
    this.socialCallback(socialType, query)
  },
  methods: {
    ...mapActions(['Login']),
    getUrlKey: function (name) {
      // eslint-disable-next-line no-sparse-arrays
      return decodeURIComponent((new RegExp('[?|&]' + name + '=' + '([^&;]+?)(&|#|;|$)').exec(window.opener.location.href) || [, ''])[1].replace(/\+/g, '%20')) || null
    },
    socialCallback (socialType, parameter) {
      const that = this
      socialLoginCallback(socialType, parameter).then(res => {
        that.$loading.hide()
        const bindResult = res.data
        if (bindResult && bindResult !== '') {
          if (bindResult.success && bindResult.data) {
            // Already bound after authorization: perform third-party login directly
            this.socialLogin(bindResult.data)
          } else if (bindResult.code === 601) {
            // Not bound after authorization: go to the binding page
            that.$router.push({ name: 'socialBind', query: { redirect: this.getUrlKey('redirect'), key: bindResult.data } })
          } else if (bindResult.code === 602) {
            // This account is bound to multiple accounts: contact the system
            // administrator, or unbind in the personal center
            this.$notification['error']({
              message: 'Error',
              description: ((res.response || {}).data || {}).message || 'This account is bound to multiple accounts, please contact the system administrator or unbind it in the personal center',
              duration: 4
            })
          } else {
            // Third-party login failed
            this.$notification['error']({
              message: 'Error',
              description: 'Third-party login failed, please try again later',
              duration: 4
            })
          }
        } else {
          // Third-party login failed
          this.$notification['error']({
            message: 'Error',
            description: 'Third-party login failed, please try again later',
            duration: 4
          })
        }
      })
    },
    // Callback after third-party login
    socialLogin (key) {
      const { Login } = this
      // Perform the login
      const loginParams = {
        grant_type: 'social',
        social_key: key
      }
      this.$loading.show({
        tip: 'Logging in......'
})
      Login(loginParams)
        .then((res) => this.loginSuccess(res))
        .catch(err => this.loginError(err))
        .finally(() => {
          this.$loading.hide()
          if (this.getUrlKey('redirect')) {
            window.opener.location.href = window.opener.location.origin + this.getUrlKey('redirect')
          } else {
            window.opener.location.reload()
          }
          window.close()
        })
    },
    loginSuccess (res) {
      this.$notification['success']({
        message: 'Notice',
        description: 'Third-party login succeeded',
        duration: 4
      })
    },
    loginError (err) {
      this.$notification['error']({
        message: 'Error',
        description: ((err.response || {}).data || {}).message || 'The request failed, please try again later',
        duration: 4
      })
    }
  }
}
</script>
<style>
</style>

2. Login and binding tests

JustAuth's official site provides a detailed guide for each third-party login: apply on the relevant third-party site as described, then fill in the configuration. Here we only show the GitHub login test.

1. Following the official registration steps, obtain the GitHub client-id and client-secret and configure the callback address.

Nacos configuration:

client-id: 59ced49784f3cebfb208
client-secret: 807f52cc33a1aae07f97521b5501adc6f36375c8
redirect-uri: http://192.168.0.2:8000/social/github/callback
ignore-check-state: false

Or use the multi-tenant system configuration; each tenant is allowed only one primary configuration.

2. Add a GitHub login link to the login page:

<div class="user-login-other">
  <span>{{ $t('user.login.sign-in-with') }}</span>
  <a @click="openSocialLogin('wechat_open')">
    <a-icon class="item-icon" type="wechat"></a-icon>
  </a>
  <a @click="openSocialLogin('qq')">
    <a-icon class="item-icon" type="qq"></a-icon>
  </a>
  <a @click="openSocialLogin('github')">
    <a-icon class="item-icon" type="github"></a-icon>
  </a>
  <a @click="openSocialLogin('dingtalk')">
    <a-icon class="item-icon" type="dingding"></a-icon>
  </a>
  <a class="register" @click="openRegister">{{ $t('user.login.signup') }}</a>
</div>

3. Click login. If this GitHub account has never logged in before, you are redirected to the bind-or-register page.

4. Enter mobile number + verification code, or account + password, to return to the page you came from. With mobile number + verification code, if no account exists a new one is registered and logged in directly.

5. JustAuth supports many other third-party logins; just apply at the corresponding provider. The image below is taken from the JustAuth site.

GitEgg-Cloud is an enterprise-grade microservice application development framework built on Spring Cloud. Project links:
Gitee: https://gitee.com/wmz1930/GitEgg
GitHub: https://github.com/wmz1930/GitEgg
If you find it useful, a Star is appreciated.
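The heart of the flow above is a short-lived, server-side mapping from an encrypted socialKey (source + "_" + uuid) to the stored socialId: the callback writes it with a TTL, and the bind/login steps decrypt the key and look it up; if the TTL has lapsed the user must restart. The sketch below models that contract in plain Python, with a dict-backed cache standing in for Redis and a trivially reversible hex encoding standing in for Hutool's DES; all names here are illustrative, not the project's actual API.

```python
import time

SOCIAL_VALIDATION_PREFIX = "social:validation:"  # stand-in for AuthConstant.SOCIAL_VALIDATION_PREFIX


class SocialKeyCache:
    """Dict-backed stand-in for the Redis cache used by SocialController."""

    def __init__(self, expiration_seconds):
        self.expiration = expiration_seconds
        self._store = {}  # key -> (value, expires_at)

    def put(self, key, value, now=None):
        now = time.time() if now is None else now
        self._store[key] = (value, now + self.expiration)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        item = self._store.get(key)
        if item is None or now >= item[1]:
            return None  # missing or expired, like a lapsed Redis TTL
        return item[0]


def encrypt_hex(plain):
    # Toy reversible encoding standing in for DES.encryptHex()
    return plain.encode().hex()


def decrypt_str(cipher_hex):
    return bytes.fromhex(cipher_hex).decode()


def on_callback(cache, source, uuid, social_id, now=None):
    """Mirror of the callback step: cache socialId under source_uuid, return the encrypted key."""
    social_key = "{}_{}".format(source, uuid)
    cache.put(SOCIAL_VALIDATION_PREFIX + social_key, social_id, now=now)
    return encrypt_hex(social_key)


def on_bind(cache, des_social_key, now=None):
    """Mirror of the bind/login step: decrypt the key and look up the cached socialId."""
    social_key = decrypt_str(des_social_key)
    return cache.get(SOCIAL_VALIDATION_PREFIX + social_key, now=now)
```

Within the TTL the key round-trips to the cached socialId; after expiry the lookup returns nothing, matching the "verification has expired, please return to the login page" branches in the Java code.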

4. Flutter development: importing and upgrading the flutter-go example

Because of Flutter's upgrades, FlutterGo is no longer maintained, and the imported project only builds against an old version. To adapt to the new versions of Flutter and Dart, we create a new project, upgrade flutter-go, and record what we learn along the way.

1. Following the earlier chapters, create a new Flutter project named flutter_go and modify the build.gradle file:

buildscript {
    ext.kotlin_version = '1.3.50'
    repositories {
        // google()
        // jcenter()
        maven { url 'http://maven.aliyun.com/nexus/content/groups/public/' }
        maven { url 'http://maven.aliyun.com/nexus/content/repositories/jcenter' }
        maven { url 'http://maven.aliyun.com/nexus/content/repositories/google' }
        maven { url 'http://maven.aliyun.com/nexus/content/repositories/gradle-plugin' }
        maven { url 'https://storage.googleapis.com/download.flutter.io' }
        maven { url 'https://maven.fabric.io/public' }
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:3.5.0'
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    }
}

allprojects {
    repositories {
        // google()
        // jcenter()
        maven { url 'http://maven.aliyun.com/nexus/content/groups/public/' }
        maven { url 'http://maven.aliyun.com/nexus/content/repositories/jcenter' }
        maven { url 'http://maven.aliyun.com/nexus/content/repositories/google' }
        maven { url 'http://maven.aliyun.com/nexus/content/repositories/gradle-plugin' }
        maven { url 'https://storage.googleapis.com/download.flutter.io' }
        maven { url 'https://maven.fabric.io/public' }
    }
}

rootProject.buildDir = '../build'
subprojects {
    project.buildDir = "${rootProject.buildDir}/${project.name}"
}
subprojects {
    project.evaluationDependsOn(':app')
}

task clean(type: Delete) {
    delete rootProject.buildDir
}

2. Next modify pubspec.yaml: upgrade the dependencies to their latest versions, and add the static assets and widgets to the configuration.

name: flutter_go
description: Flutter GO application.
# The following line prevents the package from being accidentally published to
# pub.dev using `pub publish`. This is preferred for private packages.
publish_to: 'none' # Remove this line if you wish to publish to pub.dev

# The following defines the version and build number for your application.
# A version number is three numbers separated by dots, like 1.2.43
# followed by an optional build number separated by a +.
# Both the version and the builder number may be overridden in flutter
# build by specifying --build-name and --build-number, respectively.
# In Android, build-name is used as versionName while build-number used as versionCode.
# Read more about Android versioning at https://developer.android.com/studio/publish/versioning
# In iOS, build-name is used as CFBundleShortVersionString while build-number used as CFBundleVersion.
# Read more about iOS versioning at
# https://developer.apple.com/library/archive/documentation/General/Reference/InfoPlistKeyReference/Articles/CoreFoundationKeys.html
version: 1.0.0+1

environment:
  sdk: ">=2.7.0 <3.0.0"

dependencies:
  flutter:
    sdk: flutter
  # The following adds the Cupertino Icons font to your application.
  # Use with the CupertinoIcons class for iOS style icons.
  cupertino_icons: ^1.0.0
  bloc: ^2.0.0
  flutter_bloc: ^2.0.0
  dio: ^3.0.10
  dio_cookie_manager: 1.0.0
  html: ^0.14.0+4
  event_bus: ^1.1.0
  sqflite: ^1.3.1+2
  pull_to_refresh: ^1.6.2
  fluro: ^1.7.3
  firebase_analytics: ^6.0.2
  #firebase_auth: ^0.8.3 #auth
  firebase_core: ^0.5.0+1 # add dependency for Firebase Core
  package_info: ^0.4.3
  url_launcher: ^5.7.5
  cookie_jar: ^1.0.1
  path_provider: ^1.6.21
  # local storage and the favourites feature
  shared_preferences: ^0.5.12+2
  share: ^0.6.5+3
  flutter_spinkit: ^4.1.2
  zefyr: ^0.12.0
  fluttertoast: ^7.1.1
  flutter_webview_plugin: ^0.3.11
  city_pickers: ^0.2.0
  image_picker: ^0.5.0
  flutter_jpush: ^0.0.4
  markdown: ^3.0.0

dev_dependencies:
  flutter_test:
    sdk: flutter

# For information on the generic Dart part of this file, see the
# following page: https://dart.dev/tools/pub/pubspec

# The following section is specific to Flutter.
flutter:
  # The following line ensures that the Material Icons font is
  # included with your application, so that you can use the icons in
  # the material Icons class.
  uses-material-design: true
  assets:
    - lib/widgets/elements/Form/Input/TextField/text_field_demo.dart
    - lib/widgets/elements/Form/CheckBox/Checkbox/demo.dart
    - lib/widgets/components/Bar/AppBar/demo.dart
    - lib/widgets/components/Bar/BottomAppBar/demo.dart
    - lib/widgets/components/Bar/ButtonBar/demo.dart
    - lib/widgets/components/Bar/FlexibleSpaceBar/demo.dart
    - lib/widgets/components/Bar/SliverAppBar/demo.dart
    - lib/widgets/components/Bar/SnackBar/demo.dart
    - lib/widgets/components/Bar/SnackBarAction/demo.dart
    - lib/widgets/components/Bar/TabBar/demo.dart
    - lib/widgets/components/Card/Card/demo.dart
    - lib/widgets/components/Chip/Chip/demo.dart
    - lib/widgets/components/Chip/ChipTheme/demo.dart
    - lib/widgets/components/Chip/ChipThemeData/demo.dart
    - lib/widgets/components/Chip/ChoiceChip/demo.dart
    - lib/widgets/components/Chip/FilterChip/demo.dart
    - lib/widgets/components/Chip/InputChip/demo.dart
    - lib/widgets/components/Chip/RawChip/demo.dart
    - lib/widgets/components/Dialog/AboutDialog/demo.dart
    - lib/widgets/components/Dialog/AlertDialog/demo.dart
    - lib/widgets/components/Dialog/Dialog/demo.dart
    - lib/widgets/components/Dialog/SimpleDialog/demo.dart
    - lib/widgets/components/Grid/GridTile/demo.dart
    - lib/widgets/components/Grid/GridTileBar/demo.dart
    - lib/widgets/components/Grid/GridView/demo.dart
    - lib/widgets/components/Grid/GridPaper/demo.dart
    - lib/widgets/components/Grid/SliverGrid/demo.dart
    - lib/widgets/components/List/AnimatedList/demo.dart
    - lib/widgets/components/List/ListBody/demo.dart
    - lib/widgets/components/List/ListView/demo.dart
    - lib/widgets/components/Menu/CheckedPopupMenuItem/demo.dart
    - lib/widgets/components/Menu/DropdownMenuItem/demo.dart
    - lib/widgets/components/Menu/PopupMenuButton/demo.dart
    - lib/widgets/components/Menu/PopupMenuDivider/demo.dart
    - lib/widgets/components/Navigation/BottomNavigationBar/demo.dart
    - lib/widgets/components/Navigation/BottomNavigationBarItem/demo.dart
    - lib/widgets/components/Panel/ExpansionPanel/demo.dart
    -
lib/widgets/components/Panel/ExpansionPanelList/demo.dart
    - lib/widgets/components/Pick/DayPicker/demo.dart
    - lib/widgets/components/Pick/MonthPicker/demo.dart
    - lib/widgets/components/Pick/ShowdatePicker/demo.dart
    - lib/widgets/components/Pick/YearPicker/demo.dart
    - lib/widgets/components/Progress/CircularProgressIndicator/demo.dart
    - lib/widgets/components/Progress/LinearProgressIndicator/demo.dart
    - lib/widgets/components/Progress/RefreshProgressIndicator/demo.dart
    - lib/widgets/components/Scaffold/Scaffold/demo.dart
    - lib/widgets/components/Scaffold/ScaffoldState/demo.dart
    - lib/widgets/components/Scroll/BoxScrollView/demo.dart
    - lib/widgets/components/Scroll/CustomScrollView/demo.dart
    - lib/widgets/components/Scroll/NestedScrollView/demo.dart
    - lib/widgets/components/Scroll/Scrollable/demo.dart
    - lib/widgets/components/Scroll/ScrollbarPainter/demo.dart
    - lib/widgets/components/Scroll/ScrollMetrics/demo.dart
    - lib/widgets/components/Scroll/ScrollPhysics/demo.dart
    - lib/widgets/components/Scroll/ScrollView/demo.dart
    - lib/widgets/components/Tab/Tab/demo.dart
    - lib/widgets/elements/Form/Button/DropdownButton/demo.dart
    - lib/widgets/elements/Form/Button/FlatButton/demo.dart
    - lib/widgets/elements/Form/Button/FloatingActionButton/demo.dart
    - lib/widgets/elements/Form/Button/IconButton/demo.dart
    - lib/widgets/elements/Form/Button/OutlineButton/demo.dart
    - lib/widgets/elements/Form/Button/PopupMenuButton/demo.dart
    - lib/widgets/elements/Form/Button/RaisedButton/demo.dart
    - lib/widgets/elements/Form/Button/RawMaterialButton/demo.dart
    - lib/widgets/elements/Form/CheckBox/Checkbox/demo.dart
    - lib/widgets/elements/Form/CheckBox/CheckboxListTile/demo.dart
    - lib/widgets/elements/Form/Radio/Radio/demo.dart
    - lib/widgets/elements/Form/Radio/RadioListTile/demo.dart
    - lib/widgets/elements/Form/Slider/Slider/demo.dart
    - lib/widgets/elements/Form/Slider/SliderTheme/demo.dart
    - lib/widgets/elements/Form/Slider/SliderThemeData/demo.dart
    -
lib/widgets/elements/Form/Switch/AnimatedSwitcher/demo.dart
    - lib/widgets/elements/Form/Switch/Switch/demo.dart
    - lib/widgets/elements/Form/Switch/SwitchListTile/demo.dart
    - lib/widgets/elements/Frame/Align/Align/demo.dart
    - lib/widgets/elements/Frame/Box/ConstrainedBox/demo.dart
    - lib/widgets/elements/Frame/Box/DecoratedBox/demo.dart
    - lib/widgets/elements/Frame/Box/FittedBox/demo.dart
    - lib/widgets/elements/Frame/Box/LimitedBox/demo.dart
    - lib/widgets/elements/Frame/Box/OverflowBox/demo.dart
    - lib/widgets/elements/Frame/Box/RotatedBox/demo.dart
    - lib/widgets/elements/Frame/Box/SizeBox/demo.dart
    - lib/widgets/elements/Frame/Box/SizedOverflowBox/demo.dart
    - lib/widgets/elements/Form/Text/Text/demo.dart
    - lib/widgets/elements/Form/Text/RichText/index.dart
    - lib/widgets/elements/Frame/Box/UnconstrainedBox/demo.dart
    - lib/widgets/elements/Frame/Expanded/Expanded/expanded_demo.dart
    - lib/widgets/elements/Frame/Layout/Center/demo.dart
    - lib/widgets/elements/Frame/Layout/Column/demo.dart
    - lib/widgets/elements/Frame/Layout/Container/demo.dart
    - lib/widgets/elements/Frame/Layout/Row/demo.dart
    - lib/widgets/elements/Frame/Spacing/AnimatedPadding/animatedPadding_demo.dart
    - lib/widgets/elements/Frame/Spacing/Padding/padding_demo.dart
    - lib/widgets/elements/Frame/Spacing/SliverPadding/sliverpadding_demo.dart
    - lib/widgets/elements/Frame/Stack/IndexedStack/demo.dart
    - lib/widgets/elements/Frame/Stack/Stack/demo.dart
    - lib/widgets/elements/Frame/Table/Table/table_demo.dart
    - lib/widgets/elements/Media/Icon/Icon/demo.dart
    - lib/widgets/elements/Media/Icon/IconData/demo.dart
    - lib/widgets/elements/Media/Icon/IconTheme/demo.dart
    - lib/widgets/elements/Media/Icon/IconThemeData/demo.dart
    - lib/widgets/elements/Media/Icon/ImageIcon/demo.dart
    - lib/widgets/elements/Media/Image/AssetImage/assetImage_demo.dart
    - lib/widgets/elements/Media/Image/DecorationImage/decorationImage_demo.dart
    - lib/widgets/elements/Media/Image/DecorationImagePainter/decoration_image_painter_demo.dart
    -
lib/widgets/elements/Media/Image/ExactAssetImage/exact_asset_image_demo.dart
    - lib/widgets/elements/Media/Image/FadeInImage/fade_in_image_demo.dart
    - lib/widgets/elements/Media/Image/FileImage/file_image_demo.dart
    - lib/widgets/elements/Media/Image/Image/demo.dart
    - lib/widgets/elements/Media/Image/MemoryImage/memory_image_demo.dart
    - lib/widgets/elements/Media/Image/NetworkImage/network_image_demo.dart
    - lib/widgets/elements/Media/Image/paintImage/paint_image_demo.dart
    - lib/widgets/elements/Media/Image/precacheImage/precache_image_demo.dart
    - lib/widgets/elements/Media/Image/RawImage/raw_image_demo.dart
    - lib/widgets/elements/Media/Canvas/Canvas/demo.dart
    - lib/widgets/elements/Media/Canvas/CircleProgressBarPainter/demo.dart
    - lib/widgets/elements/Media/Canvas/PainterPath/demo.dart
    - lib/widgets/elements/Media/Canvas/PainterSketch/demo.dart
    - lib/widgets/themes/Material/MaterialApp/demo.dart
    - lib/widgets/themes/Material/MaterialButton/demo.dart
    - lib/widgets/themes/Material/MaterialColor/demo.dart
    - lib/widgets/themes/Material/MaterialPageRoute/demo.dart
    - lib/widgets/themes/Material/MergeableMaterialItem/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoApp/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoButton/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoColors/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoIcons/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoNavigationBar/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoPageRoute/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoPageScaffold/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoPicker/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoPopupSurface/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoScrollbar/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoSegmentedControl/demo.dart
    - lib/widgets/elements/Form/Switch/Switch/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoSlider/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoSliverNavigationBar/demo.dart
    -
lib/widgets/themes/Cupertino/CupertinoSwitch/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoTabBar/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoTabScaffold/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoTabView/demo.dart
    - lib/widgets/themes/Cupertino/CupertinoTimerPicker/demo.dart
    - lib/page_demo_package/.demo.json
    - lib/standard_pages/.pages.json
    - assets/app.db
    - assets/images/
    - assets/fonts/
  fonts:
    - family: FlamanteRoma
      fonts:
        - asset: assets/fonts/Flamante-Roma-Medium.ttf
        - asset: assets/fonts/Flamante-Roma-MediumItalic.ttf
    - family: LatoBold
      fonts:
        - asset: assets/fonts/Lato-Bold.ttf

  # To add assets to your application, add an assets section, like this:
  # assets:
  #  - images/a_dot_burr.jpeg
  #  - images/a_dot_ham.jpeg
  # An image asset can refer to one or more resolution-specific "variants", see
  # https://flutter.dev/assets-and-images/#resolution-aware.
  # For details regarding adding assets from package dependencies, see
  # https://flutter.dev/assets-and-images/#from-packages
  # To add custom fonts to your application, add a fonts section here,
  # in this "flutter" section. Each entry in this list should have a
  # "family" key with the font family name, and a "fonts" key with a
  # list giving the asset and other descriptors for the font.
  # For example:
  # fonts:
  #   - family: Schyler
  #     fonts:
  #       - asset: fonts/Schyler-Regular.ttf
  #       - asset: fonts/Schyler-Italic.ttf
  #         style: italic
  #   - family: Trajan Pro
  #     fonts:
  #       - asset: fonts/TrajanPro.ttf
  #       - asset: fonts/TrajanPro_Bold.ttf
  #         weight: 700
  # For details regarding fonts from package dependencies,
  # see https://flutter.dev/custom-fonts/#from-packages

3. Copy the main.dart code from the downloaded FlutterGo project into our new main.dart, then click Get Dependencies in the top-right corner of Android Studio to download the dependency packages. Afterwards the code contains many errors; we fix them step by step, first commenting out what we do not need and keeping only the code the home page requires.

4. Copy everything under lib/api, lib/blocs, lib/components, lib/event, lib/model, lib/page_demo_package, lib/resources, lib/routers, lib/standard_pages, lib/utils and lib/widgets from the FlutterGo project into the new project, and copy FlutterGo's assets folder to the root of the new project.

5. In main.dart, application.dart and routers.dart, change Router to FluroRouter; the class was renamed in the latest fluro release.

6. In net_utils.dart, add the import: import 'package:dio_cookie_manager/dio_cookie_manager.dart';

7. In search_input.dart, change isInitialRoute: false, to arguments: {'isInitialRoute': false}

8. Comment out the imports that still fail; for now keep only the home-page related pages under views.

9. In flutter_go\android\app\src\main\AndroidManifest.xml, add:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.MODE_WORLD_READABLE" />
<uses-permission android:name="android.permission.MODE_WORLD_WRITEABLE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

10. In flutter_go\android\app\build.gradle, copy the following into the defaultConfig block:

manifestPlaceholders = [
    JPUSH_PKGNAME: "com.alibaba.fluttergo",
    JPUSH_APPKEY : "62eb07d227d1f11dd7fa6239", // the appkey registered on JPush for this package name
    JPUSH_CHANNEL: "developer-default",
]

11. Create a views directory under lib, copy the login_page and first_page directories and home.dart from FlutterGo into it, then change timeInSecForIos to timeInSecForIosWeb in login_page.dart.

12. In search_page.dart, change suggestion.dispatch to suggestion.add.

13. Copy the welcome_page and fourth_page directories from FlutterGo into the new project. In fourth_page.dart, change import 'package:flutter/material.dart'; to import 'package:flutter/material.dart' hide Page;

14. Copy the widget_page, collection_page, issuse_message_page and standard_demo_page directories under FlutterGo's lib/views into the new project. In issuse_message_page.dart, change timeInSecForIos to timeInSecForIosWeb and ZefyrToolbarTheme to ToolbarTheme.

15. Copy FlutterGo's lib/widgets directory into the new project.

Notes:
1. Change classpath 'io.fabric.tools:gradle:1.26.1' to classpath 'io.fabric.tools:gradle:1.25.4'
2. You may hit "A problem occurred configuring project ':city_pickers'."
3. Change Router to FluroRouter
4. If the welcome page or its icons do not show, re-enable the commented-out welcome page
5. If jar packages fail to download, move maven { url 'https://storage.googleapis.com/download.flutter.io' } above maven { url 'https://maven.fabric.io/public' }
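Note 5 above is about repository resolution order in Gradle: repositories are searched top to bottom, so the Flutter engine repository should come before the Fabric one. A minimal sketch of the corrected allprojects block, using the same repository URLs as in the build.gradle shown earlier (the exact set of Aliyun mirrors you keep is up to you):

```groovy
allprojects {
    repositories {
        maven { url 'http://maven.aliyun.com/nexus/content/groups/public/' }
        // Flutter engine artifacts must resolve before the Fabric repository,
        // otherwise some jars may fail to download (see note 5)
        maven { url 'https://storage.googleapis.com/download.flutter.io' }
        maven { url 'https://maven.fabric.io/public' }
    }
}
```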

13. Linux (CentOS 7): Redis cluster mode and sentinel mode configuration

1. Redis cluster configuration

Create the cluster directories:

mkdir -p /usr/local/redis-cluster
cd /usr/local/redis-cluster
mkdir 6379 6378

Edit the configuration file:

vi redis.conf

daemonize yes
port 6379
dir /usr/local/redis-cluster/6379/
cluster-enabled yes              # enable cluster mode
cluster-config-file nodes-6379.conf
cluster-node-timeout 5000
bind 0.0.0.0
protected-mode no
appendonly yes
# To set a password, add the following:
# (Redis access password)
requirepass 123456
# (password for access between cluster nodes, same as above)
masterauth 123456

Copy the modified configuration file into 6379 and 6378, changing the port number in lines 2, 3 and 5 (port, dir and cluster-config-file); you can batch-replace with %s/source/target/g.

Start redis:

./redis-server /usr/local/redis-cluster/6379/redis.conf
./redis-server /usr/local/redis-cluster/6378/redis.conf

Check that it started:

ps -ef | grep redis

Create the whole redis cluster with redis-cli.

Test environment:
./redis-cli --cluster create 192.168.10.195:6379 192.168.10.195:6378 192.168.10.124:6379 192.168.10.124:6378 192.168.10.100:6379 192.168.10.100:6378 --cluster-replicas 1 -a sbjcptTest

Production environment:
./redis-cli --cluster create 10.1.8.111:6301 10.1.8.111:6302 10.1.8.112:6303 10.1.8.112:6304 10.1.8.113:6305 10.1.8.113:6306 --cluster-replicas 1 -a sbjcptTest

Verify the cluster:
./redis-cli -c -a sbjcptTest -h 192.168.10.195 -p 6379
./redis-cli -c -a xxx -h 10.1.8.111 -p 6301
./redis-cli -c -a sbjcptTest -h 10.1.8.112 -p 6301

Common commands:
# show cluster information
cluster info
# show the node list
cluster nodes

2. Redis sentinel mode configuration (master/replica)

Create the data directories:

mkdir /data
mkdir /data/redis
mkdir /data/redis/redis-log
mkdir /data/redis/data

First configure the Redis master; edit redis.conf as follows:

# make the Redis server reachable across the network
bind 0.0.0.0
dir "/data/redis/data"
daemonize yes
logfile "/data/redis/redis-log/redis.log"
# set the password
requirepass "123456"
# master password. Note: only replicas need the slaveof setting; the master does not
masterauth 123456

Configure the Redis replicas; edit redis.conf:

# make the Redis server reachable across the network
bind 0.0.0.0
dir "/data/redis/data"
daemonize yes
logfile "/data/redis/redis-log/redis.log"
# set the password
requirepass "123456"
# master password; this must be set on every node, otherwise replication will not work
masterauth 123456
# Note: only replicas need the slaveof setting; the master does not
slaveof 192.168.10.195 6379

# disable the firewall:
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service

# start:
./redis-server ../redis.conf

# check that replication is healthy:
redis-cli -h 192.168.10.195 -p 6379 -a 123456
info Replication [root@localhost src]# redis-cli -h 192.168.10.195 -p 6379 -a 123456 info Replication Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.# Replication role:master connected_slaves:2 slave0:ip=192.168.10.100,port=6379,state=online,offset=70,lag=0 slave1:ip=192.168.10.124,port=6379,state=online,offset=70,lag=0 master_replid:808f22bacf3af9192301aba5c63afff7d60f3b41 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:70 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:70 redis-cli -h 192.168.10.195 -p 6379 -a 123456 AUTH 123456 set k1 v1 redis-cli -h 192.168.10.124 -p 6379 -a 123456 AUTH 123456 get k1安装哨兵# 3台Redis服务器都需执行 vi sentinel.confmkdir /data/redis/sentinel-log dataport 26379 protected-mode no daemonize yes pidfile /var/run/redis-sentinel.pid logfile "/data/redis/sentinel-log/sentinel.log" dir /tmp sentinel monitor mymaster 192.168.10.195 6379 2 sentinel down-after-milliseconds mymaster 30000 sentinel parallel-syncs mymaster 1 sentinel failover-timeout mymaster 180000 sentinel deny-scripts-reconfig yes sentinel auth-pass mymaster 123456启动哨兵:./redis-sentinel ../sentinel.conf测试查看哨兵:./redis-cli -h 192.168.10.195 -p 26379 INFO Sentinel ./redis-cli -h 192.168.10.195 -p 6379 -a 123456 info Replication ./redis-cli -h 10.1.8.112 -p 26379 INFO Sentinel ./redis-cli -h 10.1.8.112 -p 6379 -a 123456 info Replication关闭命令:pkill redis-sentinel pkill redis-server
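集群创建完成后,key会按哈希槽(hash slot)分配到各主节点:对key做CRC16/XMODEM校验后对16384取模即得到槽位,带{...}哈希标签的key只对大括号内的内容计算,以保证相关key落在同一节点。下面用Python给出一个计算槽位的示意草图(仅为演示,不依赖Redis服务):

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16/XMODEM:多项式0x1021,初始值0,Redis集群槽位计算使用该算法
    crc = 0
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # 若key含{...}哈希标签且内容非空,则只对大括号内的部分计算槽位
    s = key.find("{")
    if s != -1:
        e = key.find("}", s + 1)
        if e != -1 and e > s + 1:
            key = key[s + 1:e]
    return crc16_xmodem(key.encode()) % 16384
```

借助哈希标签,{user1000}.following和{user1000}.followers会被分配到同一个槽位,因此可以在这两个key上执行多键操作。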

十二、Linux(CentOS7) 时序数据库InfluxDB及Influx-proxy安装配置

一、安装InfluxDB安装wget https://dl.influxdata.com/influxdb/releases/influxdb-1.8.0.x86_64.rpm sudo yum localinstall influxdb-1.8.0.x86_64.rpm启动systemctl enable influxdb systemctl start influxdb配置文件详解vi /etc/influxdb/influxdb.conf全局配置reporting-disabled = false # 该选项用于上报influxdb的使用信息给InfluxData公司,默认值为false bind-address = ":8088" # 备份恢复时使用,默认值为80881、meta相关配置[meta] dir = "/var/lib/influxdb/meta" # meta数据存放目录 retention-autocreate = true # 用于控制默认存储策略,数据库创建时,会自动生成autogen的存储策略,默认值:true logging-enabled = true # 是否开启meta日志,默认值:true2、data相关配置[data] dir = "/var/lib/influxdb/data" # 最终数据(TSM文件)存储目录 wal-dir = "/var/lib/influxdb/wal" # 预写日志存储目录 query-log-enabled = true # 是否开启tsm引擎查询日志,默认值: true cache-max-memory-size = 1048576000 # 用于限定shard最大值,大于该值时会拒绝写入,默认值:1000MB,单位:byte cache-snapshot-memory-size = 26214400 # 用于设置快照大小,大于该值时数据会刷新到tsm文件,默认值:25MB,单位:byte cache-snapshot-write-cold-duration = "10m" # tsm引擎 snapshot写盘延迟,默认值:10Minute compact-full-write-cold-duration = "4h" # tsm文件在压缩前可以存储的最大时间,默认值:4Hour max-series-per-database = 1000000 # 限制数据库的级数,该值为0时取消限制,默认值:1000000 max-values-per-tag = 100000 # 一个tag最大的value数,0取消限制,默认值:1000003、coordinator查询管理的配置选项[coordinator] write-timeout = "10s" # 写操作超时时间,默认值: 10s max-concurrent-queries = 0 # 最大并发查询数,0无限制,默认值: 0 query-timeout = "0s # 查询操作超时时间,0无限制,默认值:0s log-queries-after = "0s" # 慢查询超时时间,0无限制,默认值:0s max-select-point = 0 # SELECT语句可以处理的最大点数(points),0无限制,默认值:0 max-select-series = 0 # SELECT语句可以处理的最大级数(series),0无限制,默认值:0 max-select-buckets = 0 # SELECT语句可以处理的最大"GROUP BY time()"的时间周期,0无限制,默认值:04、retention旧数据的保留策略[retention] enabled = true # 是否启用该模块,默认值 : true check-interval = "30m" # 检查时间间隔,默认值 :"30m" 5、shard-precreation分区预创建 [shard-precreation] enabled = true # 是否启用该模块,默认值 : true check-interval = "10m" # 检查时间间隔,默认值 :"10m" advance-period = "30m" # 预创建分区的最大提前时间,默认值 :"30m"6、monitor 控制InfluxDB自有的监控系统。 默认情况下,InfluxDB把这些数据写入_internal 数据库,如果这个库不存在则自动创建。 _internal 库默认的retention策略是7天,如果你想使用一个自己的retention策略,需要自己创建。[monitor] store-enabled = true # 是否启用该模块,默认值 
:true store-database = "_internal" # 默认数据库:"_internal" store-interval = "10s # 统计间隔,默认值:"10s"7、admin web管理页面[admin] enabled = true # 是否启用该模块,默认值 : false bind-address = ":8083" # 绑定地址,默认值 :":8083" https-enabled = false # 是否开启https ,默认值 :false https-certificate = "/etc/ssl/influxdb.pem" # https证书路径,默认值:"/etc/ssl/influxdb.pem"8、http API[http] enabled = true # 是否启用该模块,默认值 :true bind-address = ":8086" # 绑定地址,默认值:":8086" auth-enabled = false # 是否开启认证,默认值:false realm = "InfluxDB" # 配置JWT realm,默认值: "InfluxDB" log-enabled = true # 是否开启日志,默认值:true write-tracing = false # 是否开启写操作日志,如果置成true,每一次写操作都会打日志,默认值:false pprof-enabled = true # 是否开启pprof,默认值:true https-enabled = false # 是否开启https,默认值:false https-certificate = "/etc/ssl/influxdb.pem" # 设置https证书路径,默认值:"/etc/ssl/influxdb.pem" https-private-key = "" # 设置https私钥,无默认值 shared-secret = "" # 用于JWT签名的共享密钥,无默认值 max-row-limit = 0 # 配置查询返回最大行数,0无限制,默认值:0 max-connection-limit = 0 # 配置最大连接数,0无限制,默认值:0 unix-socket-enabled = false # 是否使用unix-socket,默认值:false bind-socket = "/var/run/influxdb.sock" # unix-socket路径,默认值:"/var/run/influxdb.sock"9、subscriber 控制Kapacitor接受数据的配置[subscriber] enabled = true # 是否启用该模块,默认值 :true http-timeout = "30s" # http超时时间,默认值:"30s" insecure-skip-verify = false # 是否允许不安全的证书 ca-certs = "" # 设置CA证书 write-concurrency = 40 # 设置并发数目,默认值:40 write-buffer-size = 1000 # 设置buffer大小,默认值:1000 10、graphite 相关配置 [[graphite]] enabled = false # 是否启用该模块,默认值 :false database = "graphite" # 数据库名称,默认值:"graphite" retention-policy = "" # 存储策略,无默认值 bind-address = ":2003" # 绑定地址,默认值:":2003" protocol = "tcp" # 协议,默认值:"tcp" consistency-level = "one" # 一致性级别,默认值:"one batch-size = 5000 # 批量size,默认值:5000 batch-pending = 10 # 配置在内存中等待的batch数,默认值:10 batch-timeout = "1s" # 超时时间,默认值:"1s" udp-read-buffer = 0 # udp读取buffer的大小,0表示使用操作系统提供的值,如果超过操作系统的默认配置则会出错。 该配置的默认值:0 separator = "." 
# 多个measurement间的连接符,默认值: "."11、collectd[[collectd]] enabled = false # 是否启用该模块,默认值 :false bind-address = ":25826" # 绑定地址,默认值: ":25826" database = "collectd" # 数据库名称,默认值:"collectd" retention-policy = "" # 存储策略,无默认值 typesdb = "/usr/local/share/collectd" # 路径,默认值:"/usr/share/collectd/types.db" auth-file = "/etc/collectd/auth_file" batch-size = 5000 batch-pending = 10 batch-timeout = "10s" read-buffer = 0 # udp读取buffer的大小,0表示使用操作系统提供的值,如果超过操作系统的默认配置则会出错。默认值:012、opentsdb[[opentsdb]] enabled = false # 是否启用该模块,默认值:false bind-address = ":4242" # 绑定地址,默认值:":4242" database = "opentsdb" # 默认数据库:"opentsdb" retention-policy = "" # 存储策略,无默认值 consistency-level = "one" # 一致性级别,默认值:"one" tls-enabled = false # 是否开启tls,默认值:false certificate= "/etc/ssl/influxdb.pem" # 证书路径,默认值:"/etc/ssl/influxdb.pem" log-point-errors = true # 出错时是否记录日志,默认值:true batch-size = 1000 batch-pending = 5 batch-timeout = "1s"13、udp[[udp]] enabled = false # 是否启用该模块,默认值:false bind-address = ":8089" # 绑定地址,默认值:":8089" database = "udp" # 数据库名称,默认值:"udp" retention-policy = "" # 存储策略,无默认值 batch-size = 5000 batch-pending = 10 batch-timeout = "1s" read-buffer = 0 # udp读取buffer的大小,0表示使用操作系统提供的值,如果超过操作系统的默认配置则会出错。 该配置的默认值:014、continuous_queries[continuous_queries] enabled = true # enabled 是否开启CQs,默认值:true log-enabled = true # 是否开启日志,默认值:true run-interval = "1s" # 时间间隔,默认值:"1s"二、安装Influx-proxy依赖的环境有:Golang >= 1.7 Redis-server Python >= 2.7 ,redis使用已有的集群,这里不再安装。1、下载golang版本https://golang.google.cn/dl/2、解压到指定文件夹,这里我解压到 /usr/local 目录下,这也是官方文档推荐的位置。tar -C /usr/local -xzvf go1.13.8.linux-amd64.tar.gz3、创建工作目录,我把 Go 代码放在自己的用户目录下,根据自己的需要进行创建即可。mkdir -p ~/code/go4、配置环境变量sudo vi /etc/profile# 在尾部写入 export GOROOT=/usr/local/go export PATH=$PATH:$GOROOT/bin export GOPATH=/home/hesunfly/code/go# 保存退出 source /etc/profile5、测试是否生效:go env三、安装python1、安装下载好的所有的rpm包(使用最新的gcc安装包)rpm -Uvh --force --nodeps *rpm 解压zlib tar -zxvf zlib-1.2.11.tar.gz cd zlib-1.2.11 ./configure make install2、执行安装命令./configure --prefix=/usr/local/python make 
install
/home/root/code/go/src/github.com/shell909090/influx-proxy/bin
3、将go目录下的依赖库上传到go的src目录,在influx-proxy下执行:
python config.py
make
启动命令:
nohup ./influx-proxy -redis 192.168.10.195:26379,192.168.10.124:26379,192.168.10.100:26379 -redis-pwd 123456 -redis-db 1 &
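InfluxDB通过HTTP API写入数据时使用Line Protocol文本格式:measurement与tag集合用逗号连接,field集合与时间戳用空格分隔,tag的key/value需转义逗号、空格和等号。下面用Python给出一个极简的行协议组装草图(measurement、tag、field名称均为示例):

```python
def escape(s: str, chars: str) -> str:
    # 对行协议中的特殊字符加反斜杠转义
    for c in chars:
        s = s.replace(c, "\\" + c)
    return s

def to_line(measurement, tags, fields, ts_ns=None):
    # measurement需转义逗号和空格;tag的key/value还需转义等号
    parts = [escape(measurement, ", ")]
    for k in sorted(tags):
        parts.append(",%s=%s" % (escape(k, ",= "), escape(str(tags[k]), ",= ")))
    field_parts = []
    for k in sorted(fields):
        v = fields[k]
        if isinstance(v, bool):           # bool须在int之前判断(bool是int子类)
            fv = "true" if v else "false"
        elif isinstance(v, int):
            fv = "%di" % v                # 整型字段带i后缀
        elif isinstance(v, float):
            fv = repr(v)
        else:
            fv = '"%s"' % str(v).replace('"', '\\"')
        field_parts.append("%s=%s" % (escape(k, ",= "), fv))
    line = "".join(parts) + " " + ",".join(field_parts)
    if ts_ns is not None:
        line += " %d" % ts_ns             # 时间戳默认纳秒精度
    return line
```

例如 to_line("cpu", {"host": "server 01"}, {"usage": 0.5, "core": 4}, 1609459200000000000) 会生成一条可直接POST到/write接口的行协议文本。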

十一、Linux(CentOS7) 搭建Kafka集群

一、环境准备
  首先准备好三台CentOS系统的主机,设置ip为:172.16.20.220、172.16.20.221、172.16.20.222。
  Kafka会使用大量文件和网络socket,Linux默认配置的File descriptors(文件描述符)不能够满足Kafka高吞吐量的要求,所以这里需要调整(更多性能优化,请查看Kafka官方文档):
vi /etc/security/limits.conf
# 在最后加入,修改完成后,重启系统生效。
* soft nofile 131072
* hard nofile 131072
  新建kafka的日志目录和zookeeper数据目录,因为这两项默认放在tmp目录,而tmp目录中内容会随重启而丢失,所以我们自定义以下目录:
mkdir /data/zookeeper
mkdir /data/zookeeper/data
mkdir /data/zookeeper/logs
mkdir /data/kafka
mkdir /data/kafka/data
mkdir /data/kafka/logs
二、zookeeper.properties配置
vi /usr/local/kafka/config/zookeeper.properties
修改如下:
# 修改为自定义的zookeeper数据目录
dataDir=/data/zookeeper/data
# 修改为自定义的zookeeper日志目录
dataLogDir=/data/zookeeper/logs
clientPort=2181
# 注释掉
#maxClientCnxns=0
# 设置连接参数,添加如下配置
# tickTime为zk的基本时间单元,毫秒
tickTime=2000
# Leader-Follower初始通信时限 tickTime*10
initLimit=10
# Leader-Follower同步通信时限 tickTime*5
syncLimit=5
# 设置broker Id的服务地址,本机ip一定要用0.0.0.0代替
server.1=0.0.0.0:2888:3888
server.2=172.16.20.221:2888:3888
server.3=172.16.20.222:2888:3888
三、在各台服务器的zookeeper数据目录/data/zookeeper/data添加myid文件,写入服务broker.id属性值
在data文件夹中新建myid文件,myid文件的内容为1(一句话创建:echo 1 > myid):
cd /data/zookeeper/data
vi myid
# 添加内容:1,其他两台主机分别配置2和3
四、kafka配置,进入config目录下,修改server.properties文件
vi /usr/local/kafka/config/server.properties
# 每台服务器的broker.id都不能相同
broker.id=1
# 是否可以删除topic
delete.topic.enable=true
# topic在当前broker上的分片个数,与broker数量保持一致
num.partitions=3
# 每个主机地址不一样:
listeners=PLAINTEXT://172.16.20.220:9092
advertised.listeners=PLAINTEXT://172.16.20.220:9092
# 日志存放目录
log.dirs=/data/kafka/kafka-logs
# 设置zookeeper集群地址与端口如下:
zookeeper.connect=172.16.20.220:2181,172.16.20.221:2181,172.16.20.222:2181
五、Kafka启动
kafka启动时先启动zookeeper,再启动kafka;关闭时相反,先关闭kafka,再关闭zookeeper。
1、zookeeper启动命令
./zookeeper-server-start.sh ../config/zookeeper.properties &
后台运行启动命令:
nohup ./zookeeper-server-start.sh ../config/zookeeper.properties >/data/zookeeper/logs/zookeeper.log 2>&1 &
或者
./zookeeper-server-start.sh -daemon ../config/zookeeper.properties
查看集群状态:
./zookeeper-server-start.sh status ../config/zookeeper.properties
2、kafka启动命令
./kafka-server-start.sh ../config/server.properties &
后台运行启动命令:
nohup ./kafka-server-start.sh ../config/server.properties >/data/kafka/logs/kafka.log 2>&1 &
或者
./kafka-server-start.sh -daemon ../config/server.properties
3、创建topic,最新版本已经不需要使用zookeeper参数创建。
./kafka-topics.sh --create --replication-factor 2 --partitions 1 --topic test --bootstrap-server 172.16.20.220:9092
参数解释:
--replication-factor 2:复制两份
--partitions 1:创建1个分区
--topic test:topic名称
4、查看已经存在的topic(三台设备都执行时可以看到)
./kafka-topics.sh --list --bootstrap-server 172.16.20.220:9092
5、启动生产者:
./kafka-console-producer.sh --broker-list 172.16.20.220:9092 --topic test
6、启动消费者:
./kafka-console-consumer.sh --bootstrap-server 172.16.20.221:9092 --topic test
./kafka-console-consumer.sh --bootstrap-server 172.16.20.222:9092 --topic test
添加参数 --from-beginning 可以从开始位置消费,而不是只消费最新消息:
./kafka-console-consumer.sh --bootstrap-server 172.16.20.221:9092 --topic test --from-beginning
7、测试:在生产者输入test,可以在消费者的两台服务器上看到同样的字符test,说明Kafka服务器集群已搭建成功。
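上面创建topic时指定了 --replication-factor 和 --partitions,每个分区的副本会被分布到不同broker上,leader与follower错开以保证容错。下面用Python给出一个简化的副本分配示意(真实的Kafka分配算法还包含随机起始位置与机架感知,这里仅演示轮询思路,broker列表为示例):

```python
def assign_replicas(num_partitions, replication_factor, brokers):
    # 简化版副本分配:分区p的leader为brokers[p % n],
    # 其余副本依次取后续broker,保证同一分区的副本落在不同broker上
    n = len(brokers)
    assert replication_factor <= n, "副本数不能超过broker数"
    assignment = {}
    for p in range(num_partitions):
        assignment[p] = [brokers[(p + r) % n] for r in range(replication_factor)]
    return assignment
```

例如三台broker、3个分区、2副本时,分区0的副本在前两台broker上,分区2的副本则绕回到第一台,任意一台broker宕机后每个分区仍有可用副本。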

SpringCloud微服务实战——搭建企业级开发框架(四十):使用Spring Security OAuth2实现单点登录(SSO)系统

一、单点登录SSO介绍
  目前每家企业或者平台都存在不止一套系统,由于历史原因每套系统采购于不同厂商,所以系统间都是相互独立的,都有自己的用户鉴权认证体系。用户登录系统时,不得不记住每套系统的用户名密码;同时,管理员也需要为同一个用户设置多套系统登录账号,这对系统的使用者来说显然是不方便的。我们期望的是如果存在多个系统,只需要登录一次就可以访问多个系统,只需要在其中一个系统执行注销登录操作,则所有的系统都注销登录,无需重复操作,这就是单点登录(Single Sign On,简称SSO)系统实现的功能。
  单点登录是系统功能的定义,而实现单点登录功能,目前开源且流行的有CAS和OAuth2两种方式。过去我们用的最多的是CAS,现在随着SpringCloud的流行,更多人选择使用SpringSecurity提供的OAuth2认证授权服务器实现单点登录功能。
  OAuth2是一种授权协议的标准,任何人都可以基于这个标准开发OAuth2授权服务器,现在百度开放平台、腾讯开放平台等大部分的开放平台都是基于OAuth2协议实现。OAuth2.0定义了四种授权类型,最新版OAuth2.1协议定义了七种授权类型,其中有两种因安全问题已不再建议使用:
【OAuth2.1 建议使用的五种授权类型】
Authorization Code【授权码授权】:用户通过授权服务器重定向URL返回到客户端后,应用程序从URL中获取授权码,并使用授权码请求访问令牌。
PKCE【Proof Key for Code Exchange 授权码交换证明密钥】:授权码类型的扩展,用于防止CSRF和授权码注入攻击。
Client Credentials【客户端凭证授权】:直接由客户端使用客户端ID和客户端密钥向授权服务器请求访问令牌,无需用户授权,通常用于系统和系统之间的授权。
Device Code【设备代码授权】:用于无浏览器或输入受限的设备,使用提前获取好的设备代码获取访问令牌。
Refresh Token【刷新令牌授权】:当访问令牌失效时,可以通过刷新令牌获取访问令牌,不需要用户进行交互。
【OAuth2.1 不建议/禁止使用的两种授权类型】
Implicit Flow【隐式授权】:隐式授权是以前推荐用于本机应用程序和 JavaScript 应用程序的简化 OAuth 流程,其中访问令牌立即返回,无需额外的授权码交换步骤。其通过HTTP重定向直接返回访问令牌,存在很大的风险,不建议使用,有些授权服务器直接禁止使用此授权类型。
Password Grant【密码授权】:客户端通过用户名密码向授权服务器获取访问令牌。因客户端需收集用户名和密码,所以不建议使用,最新的 OAuth 2 安全最佳实践完全不允许密码授权。
【SpringSecurity对OAuth2协议的支持】:
  通过SpringSecurity官网可知,经过对OAuth2的长期支持,以及对实际业务情景的考虑,大多数的系统都不需要授权服务器,所以Spring官方不再推荐使用spring-security-oauth2。SpringSecurity逐渐将spring-security-oauth2中的OAuth2登录、客户端、资源服务器等功能抽取出来,集成在SpringSecurity中,并单独新建spring-authorization-server项目实现授权服务器功能。
  目前我们了解最多的是Spring Security OAuth对OAuth2协议的实现和支持,这里需要区分Spring Security OAuth和Spring Security是两个项目。过去OAuth2相关功能都在Spring Security OAuth项目中实现,但是自SpringSecurity5.X开始,SpringSecurity项目开始逐渐增加Spring Security OAuth中的功能:自SpringSecurity5.2开始,添加了OAuth 2.0登录、客户端、资源服务器的功能。但授权服务器的功能,并不打算集成在SpringSecurity项目中,而是新建了spring-authorization-server项目作为单独的授权服务器(详细介绍)。spring-security实现的是OAuth2.1协议,spring-security-oauth2实现的是OAuth2.0协议。
  Spring未来的计划是将 Spring Security OAuth 中当前的所有功能构建到 Spring Security 5.x 中。在 Spring Security 达到与 Spring Security OAuth 的功能对等之后,他们将继续支持错误和安全修复至少一年。
【GitEgg框架单点登录实现计划】: 
 因spring-authorization-server目前最新发布版本0.2.3,部分功能仍在不断的修复和完善,还不足以应用到实际生产环境中,所以,我们目前使用spring-security-oauth2作为授权服务器,待后续spring-authorization-server发布稳定版本后,再进行迁移升级。
【spring-security-oauth2默认实现的授权类型】:
隐式授权(Implicit Flow)【spring-authorization-server不再支持此类型】
授权码授权(Authorization Code)
密码授权(Password Grant)【spring-authorization-server不再支持此类型】
客户端凭证授权(Client Credentials)
刷新令牌授权(Refresh Token)
  在GitEgg微服务框架中,gitegg-oauth已经引入了spring-security-oauth2,代码中使用了OAuth2的密码授权和刷新令牌授权,并且自定义扩展了【短信验证码授权类型】和【图形验证码授权】,这其实是密码授权的扩展授权类型。
  目前,基本上所有的SpringCloud微服务授权方式都是使用OAuth2密码授权模式获取token。可能你会有疑惑:为什么上面最新的OAuth2协议已经不建议甚至是禁止使用密码授权类型了,而我们GitEgg框架的系统管理界面还要使用密码授权模式来获取token?因为不建议使用密码授权类型的原因是第三方客户端会收集用户名密码,存在安全风险;而在我们这里,客户端是自有系统管理界面,不是第三方客户端,所有的用户名密码都是我们自有系统的用户名密码,只要做好系统安全防护,就可最大限度地避免用户名密码泄露给第三方的风险。
  在使用spring-security-oauth2实现单点登录之前,首先我们一定要搞清楚单点登录SSO、OAuth2、spring-security-oauth2的区别和联系:
单点登录SSO是一种系统登录解决方案的定义,企业内部系统登录以及互联网上第三方QQ、微信、GitHub登录等都是单点登录。
OAuth2是一种系统授权协议,它包含多种授权类型,我们可以使用授权码授权和刷新令牌授权两种授权类型来实现单点登录功能。
spring-security-oauth2是对OAuth2协议中授权类型的具体实现,也是我们实现单点登录功能实际用到的代码。
二、SpringSecurity单点登录服务端和客户端实现流程解析
单点登录业务流程时序图:spring-security-oauth2单点登录.png
A系统(单点登录客户端)首次访问受保护的资源触发单点登录流程说明
1、用户通过浏览器访问A系统被保护的资源链接。
2、A系统判断当前会话是否登录,如果没有登录则跳转到A系统登录地址/login。
3、A系统首次接收到/login请求时没有state和code参数,此时A系统拼接系统配置的单点登录服务器授权url,并重定向至授权链接。
4、单点登录服务器判断此会话是否登录,如果没有登录,那么返回单点登录服务器的登录页面。
5、用户在登录页面填写用户名、密码等信息执行登录操作。
6、单点登录服务器校验用户名、密码并将登录信息设置到上下文会话中。
7、单点登录服务器重定向到A系统的/login链接,此时链接带有code和state参数。
8、A系统再次接收到/login请求,此请求携带state和code参数,系统A通过OAuth2RestTemplate请求单点登录服务端/oauth/token接口获取token。
9、A系统获取到token后,首先会对token进行解析,并使用配置的公钥对token进行校验(非对称加密),如果校验通过,则将token设置到上下文,下次访问请求时直接从上下文中获取。
10、A系统处理完上下文会话之后重定向到登录前请求的受保护资源链接。
B系统(单点登录客户端)访问受保护的资源流程说明
1、用户通过浏览器访问B系统被保护的资源链接。
2、B系统判断当前会话是否登录,如果没有登录则跳转到B系统登录地址/login。
3、B系统首次接收到/login请求时没有state和code参数,此时B系统拼接系统配置的单点登录服务器授权url,并重定向至授权链接。
4、单点登录服务器判断此会话是否登录,因上面访问A系统时登录过,所以此时不会再返回登录界面。
5、单点登录服务器重定向到B系统的/login链接,此时链接带有code和state参数。
6、B系统再次接收到/login请求,此请求携带state和code参数,系统B通过OAuth2RestTemplate请求单点登录服务端/oauth/token接口获取token。
7、B系统获取到token后,首先会对token进行解析,并使用配置的公钥对token进行校验(非对称加密),如果校验通过,则将token设置到上下文,下次访问请求时直接从上下文中获取。
8、B系统处理完上下文会话之后重定向到登录前请求的受保护资源链接。
spring-security-oauth2 单点登录代码实现流程说明:
1、用户通过浏览器访问单点登录被保护的资源链接。
2、SpringSecurity通过上下文判断是否登录(SpringSecurity单点登录服务端和客户端默认都是基于session的),如果没有登录则跳转到单点登录客户端地址/login。
3、单点登录客户端OAuth2ClientAuthenticationProcessingFilter拦截器通过上下文获取token,因第一次访问单点登录客户端/login时,没有code和state参数,所以抛出UserRedirectRequiredException异常。
4、单点登录客户端捕获UserRedirectRequiredException异常,并根据配置文件中的配置,组装并跳转到单点登录服务端的授权链接/oauth/authorize,链接及请求中会带相关配置参数。
5、单点登录服务端收到授权请求,根据session判断此会话是否登录,如果没有登录则跳转到单点登录服务器的统一登录界面(单点登录服务端也是根据session判断是否登录的,在这里为了解决微服务的session集群共享问题,引入了spring-session-data-redis)。
6、用户完成登录操作后,单点登录服务端重定向到单点登录客户端的/login链接,此时链接带有code和state参数。
7、再次用到第三步的OAuth2ClientAuthenticationProcessingFilter拦截器通过上下文获取token,此时上下文中肯定没有token,所以会通过OAuth2RestTemplate请求单点登录服务端/oauth/token接口,使用重定向获得的code和state换取token。
8、单点登录客户端获取到token后,首先会对token进行解析,并使用配置的公钥对token进行校验(非对称加密),如果校验通过,则将token设置到上下文,下次访问请求时直接从上下文中获取。
9、单点登录客户端处理完上下文会话之后重定向到登录前请求的受保护资源链接。
三、使用【授权码授权】和【刷新令牌授权】来实现单点登录服务器
1、自定义单点登录服务器页面
  当我们的gitegg-oauth作为授权服务器使用时,我们希望定制自己的登录页等信息,下面我们自定义登录、主页、错误提示页、找回密码页。其他需要的页面可以自己定义,比如授权确认页,我们此处业务不需要用户二次确认,所以这里没有自定义此页面。
在gitegg-oauth工程的pom.xml中添加Thymeleaf依赖,作为Spring官方推荐的模板引擎,我们使用Thymeleaf来实现前端页面的渲染展示。
<!--thymeleaf 模板引擎 渲染单点登录服务器页面-->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
在GitEggOAuthController中新增页面跳转路径:
/**
 * 单点登录-登录页
 */
@GetMapping("/login")
public String login() {
    return "login";
}

/**
 * 单点登录-首页:当直接访问单点登录系统成功后进入的页面。从客户端系统进入的,直接返回到客户端页面
 */
@GetMapping("/index")
public String index() {
    return "index";
}

/**
 * 单点登录-错误页
 */
@GetMapping("/error")
public String error() {
    return "error";
}

/**
 * 单点登录-找回密码页
 */
@GetMapping("/find/pwd")
public String findPwd() {
    return "findpwd";
}
在resources目录下新建static(静态资源)目录和templates(页面代码)目录,新增favicon.ico文件。
单点登录页面目录
自定义登录页login.html代码:
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
<meta charset="UTF-8">
<meta 
http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> <meta name="description" content="统一身份认证平台"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>统一身份认证平台</title> <link rel="shortcut icon" th:href="@{/gitegg-oauth/favicon.ico}"/> <link rel="bookmark" th:href="@{/gitegg-oauth/favicon.ico}"/> <link type="text/css" rel="stylesheet" th:href="@{/gitegg-oauth/assets/bootstrap-4.3.1-dist/css/bootstrap.min.css}"> <link type="text/css" rel="stylesheet" th:href="@{/gitegg-oauth/assets/bootstrap-validator-0.5.3/css/bootstrapValidator.css}"> <link type="text/css" rel="stylesheet" th:href="@{/gitegg-oauth/assets/css/font-awesome.min.css}"> <link type="text/css" rel="stylesheet" th:href="@{/gitegg-oauth/assets/css/login.css}"> <!--[if IE]> <script type="text/javascript" th:src="@{/gitegg-oauth/assets/js/html5shiv.min.js}"></script> <![endif]--> </head> <body> <div class="htmleaf-container"> <div class="form-bg"> <div class="container"> <div class="row login_wrap"> <div class="login_left"> <span class="circle"> <!-- <span></span> <span></span> --> <img th:src="@{/gitegg-oauth/assets/images/logo.svg}" class="logo" alt="logo"> </span> <span class="star"> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> </span> <span class="fly_star"> <span></span> <span></span> </span> <p id="title"> GitEgg Cloud 统一身份认证平台 </p> </div> <div class="login_right"> <div class="title cf"> <ul class="title-list fr cf "> <li class="on">账号密码登录</li> <li>验证码登录</li> <p></p> </ul> </div> <div class="login-form-container account-login"> <form class="form-horizontal account-form" th:action="@{/gitegg-oauth/login}" method="post"> <input type="hidden" class="form-control" name="client_id" value="gitegg-admin"> <input id="user_type" type="hidden" class="form-control" name="type" value="user"> <input id="user_mobileType" type="hidden" class="form-control" name="mobile" value="0"> <div class="input-wrapper 
input-account-wrapper form-group"> <div class="input-icon-wrapper"> <i class="input-icon"> <svg t="1646301169630" class="icon" viewBox="64 64 896 896" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="8796" width="1.2em" height="1.2em" fill="currentColor"><path d="M858.5 763.6c-18.9-44.8-46.1-85-80.6-119.5-34.5-34.5-74.7-61.6-119.5-80.6-0.4-0.2-0.8-0.3-1.2-0.5C719.5 518 760 444.7 760 362c0-137-111-248-248-248S264 225 264 362c0 82.7 40.5 156 102.8 201.1-0.4 0.2-0.8 0.3-1.2 0.5-44.8 18.9-85 46-119.5 80.6-34.5 34.5-61.6 74.7-80.6 119.5C146.9 807.5 137 854 136 901.8c-0.1 4.5 3.5 8.2 8 8.2h60c4.4 0 7.9-3.5 8-7.8 2-77.2 33-149.5 87.8-204.3 56.7-56.7 132-87.9 212.2-87.9s155.5 31.2 212.2 87.9C779 752.7 810 825 812 902.2c0.1 4.4 3.6 7.8 8 7.8h60c4.5 0 8.1-3.7 8-8.2-1-47.8-10.9-94.3-29.5-138.2zM512 534c-45.9 0-89.1-17.9-121.6-50.4S340 407.9 340 362c0-45.9 17.9-89.1 50.4-121.6S466.1 190 512 190s89.1 17.9 121.6 50.4S684 316.1 684 362c0 45.9-17.9 89.1-50.4 121.6S557.9 534 512 534z" p-id="8797"></path></svg> </i> </div> <input type="text" class="input" name="username" placeholder="请输入您的账号"> </div> <div class="input-wrapper input-psw-wrapper form-group"> <div class="input-icon-wrapper"> <i class="input-icon"> <svg t="1646302713220" class="icon" viewBox="64 64 896 896" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="8931" width="1.2em" height="1.2em" fill="currentColor"><path d="M832 464h-68V240c0-70.7-57.3-128-128-128H388c-70.7 0-128 57.3-128 128v224h-68c-17.7 0-32 14.3-32 32v384c0 17.7 14.3 32 32 32h640c17.7 0 32-14.3 32-32V496c0-17.7-14.3-32-32-32zM332 240c0-30.9 25.1-56 56-56h248c30.9 0 56 25.1 56 56v224H332V240z m460 600H232V536h560v304z" p-id="8932"></path><path d="M484 701v53c0 4.4 3.6 8 8 8h40c4.4 0 8-3.6 8-8v-53c12.1-8.7 20-22.9 20-39 0-26.5-21.5-48-48-48s-48 21.5-48 48c0 16.1 7.9 30.3 20 39z" p-id="8933"></path></svg> </i> </div> <input id="password" type="password" class="input" name="password" placeholder="请输入您的密码"> </div> <div id="account-err" 
class="err-msg" style="width: 100%; text-align: center;"></div> <button type="submit" class="login-btn" id="loginSubmit">立即登录</button> <div class="forget" id="forget">忘记密码?</div> </form> </div> <div class="login-form-container mobile-login" style="display: none;"> <form class="form-horizontal mobile-form" th:action="@{/gitegg-oauth/phoneLogin}" method="post"> <input id="tenantId" type="hidden" class="form-control" name="tenant_id" value="0"> <input id="type" type="hidden" class="form-control" name="type" value="phone"> <input id="mobileType" type="hidden" class="form-control" name="mobile" value="0"> <input id="smsId" type="hidden" class="form-control" name="smsId"> <div class="input-wrapper input-account-wrapper form-group input-phone-wrapper"> <div class="input-icon-wrapper"> <i class="input-icon"> <svg t="1646302822533" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="9067" width="1.2em" height="1.2em" fill="currentColor"><path d="M744 62H280c-35.3 0-64 28.7-64 64v768c0 35.3 28.7 64 64 64h464c35.3 0 64-28.7 64-64V126c0-35.3-28.7-64-64-64z m-8 824H288V134h448v752z" p-id="9068"></path><path d="M512 784m-40 0a40 40 0 1 0 80 0 40 40 0 1 0-80 0Z" p-id="9069"></path></svg> </i> </div> <input id="phone" type="text" class="input" name="phone" maxlength="11" placeholder="请输入手机号"> </div> <div class="code-form form-group sms-code-wrapper"> <div class="input-wrapper input-sms-wrapper"> <div class="input-icon-wrapper"> <i class="input-icon"> <svg t="1646302879723" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="9203" width="1.2em" height="1.2em" fill="currentColor"><path d="M928 160H96c-17.7 0-32 14.3-32 32v640c0 17.7 14.3 32 32 32h832c17.7 0 32-14.3 32-32V192c0-17.7-14.3-32-32-32z m-40 110.8V792H136V270.8l-27.6-21.5 39.3-50.5 42.8 33.3h643.1l42.8-33.3 39.3 50.5-27.7 21.5z" p-id="9204"></path><path d="M833.6 232L512 482 190.4 232l-42.8-33.3-39.3 50.5 27.6 21.5 341.6 265.6c20.2 15.7 48.5 
15.7 68.7 0L888 270.8l27.6-21.5-39.3-50.5-42.7 33.2z" p-id="9205"></path></svg> </i> </div> <input id="code" type="text" class="input-code" name="code" maxlength="6" placeholder="请输入验证码"> </div> <div class="input-code-wrapper"> <a id="sendBtn" href="javascript:sendCode();">获取验证码</a> </div> </div> <div id="mobile-err" class="err-msg" style="width: 100%; text-align: center;"></div> <button type="submit" class="login-btn" id="loginSubmitByCode">立即登录</button> </form> </div> </div> </div> </div> </div> <div class="related"> </div> </div> <script type="text/javascript" th:src="@{/gitegg-oauth/assets/js/jquery-2.1.4.min.js}"></script> <script type="text/javascript" th:src="@{/gitegg-oauth/assets/bootstrap-4.3.1-dist/js/bootstrap.min.js}"></script> <script type="text/javascript" th:src="@{/gitegg-oauth/assets/bootstrap-validator-0.5.3/js/bootstrapValidator.js}"></script> <script type="text/javascript" th:src="@{/gitegg-oauth/assets/js/md5.js}"></script> <script type="text/javascript" th:src="@{/gitegg-oauth/assets/js/jquery.form.js}"></script> <script type="text/javascript" th:src="@{/gitegg-oauth/assets/js/login.js}"></script> </body> </html>自定义登录login.js代码var countdown=60; jQuery(function ($) { countdown = 60; $('.account-form').bootstrapValidator({ message: '输入错误', feedbackIcons: { valid: 'glyphicon glyphicon-ok', invalid: 'glyphicon glyphicon-remove', validating: 'glyphicon glyphicon-refresh' fields: { username: { container: '.input-account-wrapper', message: '输入错误', validators: { notEmpty: { message: '用户账号不能为空' stringLength: { min: 2, max: 32, message: '账号长度范围2-32个字符。' regexp: { regexp: /^[a-zA-Z0-9_\.]+$/, message: '用户名只能由字母、数字、点和下划线组成' password: { container: '.input-psw-wrapper', validators: { notEmpty: { message: '密码不能为空' stringLength: { min: 5, max: 32, message: '密码长度范围6-32个字符。' $('.mobile-form').bootstrapValidator({ message: '输入错误', feedbackIcons: { valid: 'glyphicon glyphicon-ok', invalid: 'glyphicon glyphicon-remove', validating: 'glyphicon glyphicon-refresh' 
fields: { phone: { message: '输入错误', container: '.input-phone-wrapper', validators: { notEmpty: { message: '手机号不能为空' regexp: { regexp: /^1\d{10}$/, message: '手机号格式错误' code: { container: '.input-sms-wrapper', validators: { notEmpty: { message: '验证码不能为空' stringLength: { min: 6, max: 6, message: '验证码长度为6位。' var options={ beforeSerialize: beforeFormSerialize, success: formSuccess,//提交成功后执行的回掉函数 error: formError,//提交失败后执行的回掉函数 headers : {"TenantId" : 0}, clearForm: true,//提交成功后是否清空表单中的字段值 restForm: true,//提交成功后是否充值表单中的字段值,即恢复到页面加载是的状态 timeout: 6000//设置请求时间,超过时间后,自动退出请求,单位(毫秒) var mobileOptions={ success: mobileFormSuccess,//提交成功后执行的回掉函数 error: mobileFormError,//提交失败后执行的回掉函数 headers : {"TenantId" : 0}, clearForm: true,//提交成功后是否清空表单中的字段值 restForm: true,//提交成功后是否充值表单中的字段值,即恢复到页面加载是的状态 timeout: 6000//设置请求时间,超过时间后,自动退出请求,单位(毫秒) function beforeFormSerialize(){ $("#account-err").html(""); $("#username").val($.trim($("#username").val())); $("#password").val($.md5($.trim($("#password").val()))); function formSuccess(response){ $(".account-form").data('bootstrapValidator').resetForm(); if (response.success) window.location.href = response.targetUrl; $("#account-err").html(response.message); function formError(response){ $("#account-err").html(response); function mobileFormSuccess(response){ $(".mobile-form").data('bootstrapValidator').resetForm(); if (response.success) window.location.href = response.targetUrl; $("#mobile-err").html(response.message); function mobileFormError(response){ $("#mobile-err").html(response); $(".account-form").ajaxForm(options); $(".mobile-form").ajaxForm(mobileOptions); $(".nav-left a").click(function(e){ $(".account-login").show(); $(".mobile-login").hide(); $(".nav-right a").click(function(e){ $(".account-login").hide(); $(".mobile-login").show(); $("#forget").click(function(e){ window.location.href = "/find/pwd"; $('.title-list li').click(function(){ var liindex = $('.title-list li').index(this); $(this).addClass('on').siblings().removeClass('on'); 
$('.login_right div.login-form-container').eq(liindex).fadeIn(150).siblings('div.login-form-container').hide(); var liWidth = $('.title-list li').width(); if (liindex == 0) $('.login_right .title-list p').css("transform","translate3d(0px, 0px, 0px)"); else { $('.login_right .title-list p').css("transform","translate3d("+liWidth+"px, 0px, 0px)"); function sendCode(){ $(".mobile-form").data('bootstrapValidator').validateField('phone'); if(!$(".mobile-form").data('bootstrapValidator').isValidField("phone")) return; if(countdown != 60) return; sendmsg(); var phone = $.trim($("#phone").val()); var tenantId = $("#tenantId").val(); $.ajax({ //请求方式 type : "POST", //请求的媒体类型 contentType: "application/x-www-form-urlencoded;charset=UTF-8", dataType: 'json', //请求地址 url : "/code/sms/login", //数据,json字符串 data : { tenantId: tenantId, phoneNumber: phone, code: "aliValidateLogin" //请求成功 success : function(result) { $("#smsId").val(result.data); //请求失败,包含具体的错误信息 error : function(e){ console.log(e); function sendmsg(){ if(countdown==0){ $("#sendBtn").css("color","#181818"); $("#sendBtn").html("获取验证码"); countdown=60; return false; else{ $("#sendBtn").css("color","#74777b"); $("#sendBtn").html("重新发送("+countdown+")"); countdown--; setTimeout(function(){ sendmsg(); },1000); }2、授权服务器配置修改web安全配置WebSecurityConfig,将静态文件添加到不需要授权就能访问@Override public void configure(WebSecurity web) throws Exception { web.ignoring().antMatchers("/assets/**", "/css/**", "/images/**"); }修改Nacos配置,将新增页面访问路径添加到访问白名单,使资源服务器配置ResourceServerConfig中的配置不进行鉴权就能够访问,同时增加tokenUrls配置,此配置在网关不进行鉴权,但是需要OAuth2进行Basic鉴权,授权码模式必须要用到此鉴权。# 以下配置为新增 whiteUrls: - "/gitegg-oauth/oauth/login" - "/gitegg-oauth/oauth/find/pwd" - "/gitegg-oauth/oauth/error" authUrls: - "/gitegg-oauth/oauth/index" whiteUrls: - "/*/v2/api-docs" - "/gitegg-oauth/oauth/public_key" - "/gitegg-oauth/oauth/token_key" - "/gitegg-oauth/find/pwd" - "/gitegg-oauth/code/sms/login" - "/gitegg-oauth/change/password" - "/gitegg-oauth/error" - 
    - "/gitegg-oauth/oauth/sms/captcha/send"
  # Newly added OAuth2 endpoints; the gateway lets these through and the auth center authenticates them
  tokenUrls:
    - "/gitegg-oauth/oauth/token"
```

Because the GitEgg framework stores passwords encrypted from username + password, a custom login filter is needed to handle this; the same approach can be used to add SMS-captcha login, QR-code login and other login methods.

```java
package com.gitegg.oauth.filter;

import cn.hutool.core.bean.BeanUtil;
import com.gitegg.oauth.token.PhoneAuthenticationToken;
import com.gitegg.platform.base.constant.AuthConstant;
import com.gitegg.platform.base.domain.GitEggUser;
import com.gitegg.platform.base.result.Result;
import com.gitegg.service.system.client.feign.IUserFeign;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.authentication.AbstractAuthenticationToken;
import org.springframework.security.authentication.AuthenticationServiceException;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter;
import org.springframework.util.StringUtils;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Custom login filter
 * @author GitEgg
 */
public class GitEggLoginAuthenticationFilter extends UsernamePasswordAuthenticationFilter {

    public static final String SPRING_SECURITY_RESTFUL_TYPE_PHONE = "phone";
    public static final String SPRING_SECURITY_RESTFUL_TYPE_QR = "qr";
    public static final String SPRING_SECURITY_RESTFUL_TYPE_DEFAULT = "user";

    // Login type: user = username/password, phone = SMS captcha, qr = QR-code scan
    private static final String SPRING_SECURITY_RESTFUL_TYPE_KEY = "type";
    // Login terminal: 1 = mobile (WeChat official account, mini-program, etc.), 0 = PC back office
    private static final String SPRING_SECURITY_RESTFUL_MOBILE_KEY = "mobile";
    private static final String SPRING_SECURITY_RESTFUL_USERNAME_KEY = "username";
    private static final String SPRING_SECURITY_RESTFUL_PASSWORD_KEY = "password";
    private static final String SPRING_SECURITY_RESTFUL_PHONE_KEY = "phone";
    private static final String SPRING_SECURITY_RESTFUL_VERIFY_CODE_KEY = "code";
    private static final String SPRING_SECURITY_RESTFUL_QR_CODE_KEY = "qrCode";

    @Autowired
    private IUserFeign userFeign;

    private boolean postOnly = true;

    @Override
    public Authentication attemptAuthentication(HttpServletRequest request, HttpServletResponse response)
            throws AuthenticationException {
        if (postOnly && !"POST".equals(request.getMethod())) {
            throw new AuthenticationServiceException(
                    "Authentication method not supported: " + request.getMethod());
        }
        String type = obtainParameter(request, SPRING_SECURITY_RESTFUL_TYPE_KEY);
        String mobile = obtainParameter(request, SPRING_SECURITY_RESTFUL_MOBILE_KEY);

        AbstractAuthenticationToken authRequest;
        String principal;
        String credentials;

        // SMS captcha login
        if (SPRING_SECURITY_RESTFUL_TYPE_PHONE.equals(type)) {
            principal = obtainParameter(request, SPRING_SECURITY_RESTFUL_PHONE_KEY);
            credentials = obtainParameter(request, SPRING_SECURITY_RESTFUL_VERIFY_CODE_KEY);
            principal = principal.trim();
            authRequest = new PhoneAuthenticationToken(principal, credentials);
        }
        // Username/password login
        else {
            principal = obtainParameter(request, SPRING_SECURITY_RESTFUL_USERNAME_KEY);
            credentials = obtainParameter(request, SPRING_SECURITY_RESTFUL_PASSWORD_KEY);
            Result<Object> result = userFeign.queryUserByAccount(principal);
            if (null != result && result.isSuccess()) {
                GitEggUser gitEggUser = new GitEggUser();
                BeanUtil.copyProperties(result.getData(), gitEggUser, false);
                if (!StringUtils.isEmpty(gitEggUser.getAccount())) {
                    principal = gitEggUser.getAccount();
                    credentials = AuthConstant.BCRYPT + gitEggUser.getAccount() + credentials;
                }
            }
            authRequest = new UsernamePasswordAuthenticationToken(principal, credentials);
        }
        // Allow subclasses to set the "details" property
        setDetails(request, authRequest);
        return this.getAuthenticationManager().authenticate(authRequest);
    }

    private void setDetails(HttpServletRequest request, AbstractAuthenticationToken authRequest) {
        authRequest.setDetails(authenticationDetailsSource.buildDetails(request));
    }

    private String obtainParameter(HttpServletRequest request, String parameter) {
        String result = request.getParameter(parameter);
        return result == null ? "" : result;
    }
}
```

IV. Implementing the single sign-on client

spring-security-oauth2 provides not only the OAuth2 authorization server but also a single sign-on client implementation; usually a few annotations are enough to enable SSO.

1. Create the SSO client project and add the OAuth2 client dependencies:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-client</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.security.oauth.boot</groupId>
    <artifactId>spring-security-oauth2-autoconfigure</artifactId>
</dependency>
```

2. Create a WebSecurityConfig class and add the @EnableOAuth2Sso annotation:

```java
@EnableOAuth2Sso
@Configuration
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                .anyRequest().authenticated()
                .and()
                .csrf().disable();
    }
}
```

3. Configure the SSO server details:

```yaml
server:
  port: 8080
  servlet:
    context-path: /ssoclient1
security:
  oauth2:
    client:
      # The client id and secret registered on the authorization server
      client-id: ssoclient
      client-secret: 123456
      # Token endpoint
      access-token-uri: http://127.0.0.1/gitegg-oauth/oauth/token
      # Authorization endpoint on the authorization server
      user-authorization-uri: http://127.0.0.1/gitegg-oauth/oauth/authorize
    resource:
      # Public-key endpoint used for token verification; fetched once at startup,
      # not on every verification
      key-uri: http://127.0.0.1/gitegg-oauth/oauth/token_key
```

Notes:
1. The GitEgg framework customizes the token response format, while Spring Security's /oauth/token returns a ResponseEntity by default, so both in-system login and SSO login need a conversion step.
2. The gateway verifies tokens with the public key from /gitegg-oauth/oauth/public_key, while the SSO client needs the public key from /oauth/token_key; the two endpoints return different formats, so keep them apart.
3. Requests to /oauth/token and /oauth/token_key require HTTP Basic authentication by default, i.e. the client_id and client_secret must be supplied with the request.
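As a small illustration of note 3 above, the sketch below builds the Basic Authorization header and a password-grant token request for the `ssoclient`/`123456` client configured earlier. It is a minimal sketch that only assembles the request and never contacts a server; the `demo` username and password in the form body are made-up placeholders.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthDemo {

    // Build the Authorization header value expected by /oauth/token and /oauth/token_key:
    // "Basic " + base64(client_id + ":" + client_secret)
    public static String basicAuthHeader(String clientId, String clientSecret) {
        String raw = clientId + ":" + clientSecret;
        return "Basic " + Base64.getEncoder().encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    // Assemble (but do not send) a password-grant request against the token endpoint.
    // The username/password values here are hypothetical placeholders.
    public static HttpRequest tokenRequest(String clientId, String clientSecret) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1/gitegg-oauth/oauth/token"))
                .header("Authorization", basicAuthHeader(clientId, clientSecret))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "grant_type=password&username=demo&password=demo"))
                .build();
    }

    public static void main(String[] args) {
        System.out.println(basicAuthHeader("ssoclient", "123456"));
    }
}
```

The same header can of course be produced by any HTTP client (curl's `-u ssoclient:123456` does it implicitly); the point is only that the client credentials travel in the Basic header, not in the form body.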

SpringCloud Microservices in Action - Building an Enterprise Development Framework (39): Preventing Duplicate Requests with a Redis Distributed Lock (Redisson) + Custom Annotation + AOP

Debounce and throttle on the frontend usually stop a request from being submitted twice in a short time. But network glitches, the Nginx retry mechanism, the microservice Feign retry mechanism, or a user deliberately bypassing the frontend protection can all defeat it and even leave duplicate records in the database, so a server-side guard is needed as well.

Given the distributed microservice setting, the guard is implemented with a Redisson distributed lock + a custom annotation + AOP. The idea: annotate the endpoints that need protection and set the dedup parameters; an aspect intercepts the call, builds a distributed-lock key from the annotation configuration and the request arguments, and sets an expiry on it. Every request tries to acquire the lock: if it succeeds the request proceeds; if it fails, the request falls inside the configured dedup interval and a "please do not repeat the request" message is returned.

1. The custom annotation, with the following parameters chosen for the business scenarios:

- interval: the interval within which repeated submissions are rejected.
- timeUnit: the unit of that interval.
- currentSession: whether the sessionId is part of the dedup key (unusable with microservices and a cross-origin frontend/backend split: Chrome and other browsers refuse to send cookies cross-origin, so the sessionId is new on every request).
- currentUser: whether the user id is part of the dedup key.
- keys: the fields used as dedup parameters (Spring Expression expressions, so with multiple arguments you can pick exactly which values to take).
- ignoreKeys: fields to exclude from the dedup key, e.g. a timestamp inside a parameter; mutually exclusive with keys, and ignored once keys is configured.
- conditions: run the dedup check only when conditions on the arguments hold; no configuration needed by default.
- argsIndex: when keys is not configured, all argument values form the lock key; argsIndex selects which arguments (possibly several) to use instead. Mutually exclusive with keys, and ignored once keys is configured.

```java
package com.gitegg.platform.base.annotation.resubmit;

import java.lang.annotation.*;
import java.util.concurrent.TimeUnit;

/**
 * Anti-resubmit annotation
 * 1. When keys is set, the expressions decide which argument values form the dedup key.
 * 2. When keys is not set, argsIndex can pick which arguments form the dedup key.
 * 3. argsIndex and ignoreKeys only take effect when keys is not set, and exclude arguments
 *    that should not be deduplicated on.
 * 4. Because some browsers refuse to send cookies on cross-origin requests (so the sessionId
 *    changes every time), the user id, not the sessionId, is used as part of the key by default.
 * @author GitEgg
 */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface ResubmitLock {

    /**
     * Interval within which repeated submissions are rejected
     */
    long interval() default 5;

    /**
     * Time unit of the interval
     */
    TimeUnit timeUnit() default TimeUnit.SECONDS;

    /**
     * Whether the check is scoped to the current session only
     */
    boolean currentSession() default false;

    /**
     * Whether the current user's information is part of the dedup key
     */
    boolean currentUser() default true;

    /**
     * Mutually exclusive with ignoreKeys.
     * Spring EL expressions, e.g. #{param.name}; their values become part of the dedup key.
     */
    String[] keys() default {};

    /**
     * Mutually exclusive with keys.
     * ignoreKeys is not argument-specific: a field with this name is filtered from every argument.
     */
    String[] ignoreKeys() default {};

    /**
     * Spring EL expressions deciding whether to run the check; multiple conditions are AND-ed.
     * Defaults to always checking.
     */
    String[] conditions() default {"true"};

    /**
     * When keys is not configured, which arguments form the dedup key; defaults to all arguments.
     */
    int[] argsIndex() default {};
}
```

2. The custom AOP aspect that intercepts duplicate requests; see the code comments for the details. The check can be disabled by adding this configuration in Nacos:

```yaml
resubmit-lock:
  enabled: false
```

When absent, the check defaults to enabled. Because this property drives the @ConditionalOnProperty that decides whether ResubmitLockAspect is initialized at all, changing it requires a service restart.

```java
package com.gitegg.platform.boot.aspect;

import com.gitegg.platform.base.annotation.resubmit.ResubmitLock;
import com.gitegg.platform.base.enums.ResultCodeEnum;
import com.gitegg.platform.base.exception.SystemException;
import com.gitegg.platform.base.util.JsonUtils;
import com.gitegg.platform.boot.util.ExpressionUtils;
import com.gitegg.platform.boot.util.GitEggAuthUtils;
import com.gitegg.platform.boot.util.GitEggWebUtils;
import com.gitegg.platform.redis.lock.IDistributedLockService;
import com.google.common.collect.Maps;
import lombok.RequiredArgsConstructor;
import lombok.extern.log4j.Log4j2;
import org.apache.commons.lang3.ArrayUtils;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.lang.NonNull;
import org.springframework.stereotype.Component;
import org.springframework.util.DigestUtils;

import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

/**
 * @author GitEgg
 * @date 2022-4-10
 */
@Log4j2
@Component
@Aspect
@RequiredArgsConstructor(onConstructor_ = @Autowired)
@ConditionalOnProperty(name = "enabled", prefix = "resubmit-lock", havingValue = "true", matchIfMissing = true)
public class ResubmitLockAspect {

    private static final String REDIS_SEPARATOR = ":";

    private static final String RESUBMIT_CHECK_KEY_PREFIX = "resubmit_lock" + REDIS_SEPARATOR;

    private final IDistributedLockService distributedLockService;

    /**
     * Pointcut
     */
    @Pointcut("@annotation(com.gitegg.platform.base.annotation.resubmit.ResubmitLock)")
    public void resubmitLock() {
    }

    /**
     * Advice that rejects duplicate submissions.
     * Note: proceed() is only available on a ProceedingJoinPoint, so this must be
     * around advice, not before advice.
     * @param joinPoint    the join point
     * @param resubmitLock the annotation configuration
     */
    @Around("@annotation(resubmitLock)")
    public Object resubmitCheck(ProceedingJoinPoint joinPoint, ResubmitLock resubmitLock) throws Throwable {
        final Object[] args = joinPoint.getArgs();
        final String[] conditions = resubmitLock.conditions();
        // Decide from the conditions whether the duplicate-submission check should run
        if (!ExpressionUtils.getConditionValue(args, conditions) || ArrayUtils.isEmpty(args)) {
            return joinPoint.proceed();
        }
        doCheck(resubmitLock, args);
        return joinPoint.proceed();
    }

    /**
     * The key is composed as: resubmit_lock:userId:sessionId:uri:method:(parameters joined
     * according to the Spring EL expressions)
     * @param resubmitLock the annotation
     * @param args         the method arguments
     */
    private void doCheck(@NonNull ResubmitLock resubmitLock, Object[] args) {
        final String[] keys = resubmitLock.keys();
        final boolean currentUser = resubmitLock.currentUser();
        final boolean currentSession = resubmitLock.currentSession();
        String method = GitEggWebUtils.getRequest().getMethod();
        String uri = GitEggWebUtils.getRequest().getRequestURI();
        StringBuffer lockKeyBuffer = new StringBuffer(RESUBMIT_CHECK_KEY_PREFIX);
        if (null != GitEggAuthUtils.getTenantId()) {
            lockKeyBuffer.append(GitEggAuthUtils.getTenantId()).append(REDIS_SEPARATOR);
        }
        // Reserved for future no-login scenarios. Some browsers refuse to send cookies on
        // cross-origin requests, so the sessionId changes on every request; by default the
        // user id, not the sessionId, is used as part of the key.
        if (currentSession) {
            lockKeyBuffer.append(GitEggWebUtils.getSessionId()).append(REDIS_SEPARATOR);
        }
        // Append the user id when configured and a user is logged in
        if (currentUser && null != GitEggAuthUtils.getCurrentUser()) {
            lockKeyBuffer.append(GitEggAuthUtils.getCurrentUser().getId()).append(REDIS_SEPARATOR);
        }
        lockKeyBuffer.append(uri).append(REDIS_SEPARATOR).append(method);
        StringBuffer parametersBuffer = new StringBuffer();
        // Prefer the configured keys; since keys is an array its values keep their
        // configured order, so no re-sorting is needed here
        if (ArrayUtils.isNotEmpty(keys)) {
            Object[] argsForKey = ExpressionUtils.getExpressionValue(args, keys);
            for (Object obj : argsForKey) {
                parametersBuffer.append(REDIS_SEPARATOR).append(String.valueOf(obj));
            }
        }
        // When no keys are configured, all fields and their values form the key; field order
        // obtained via reflection is not deterministic, so the fields are sorted after extraction
        else {
            // ignoreKeys and argsIndex only take effect when keys is empty
            final String[] ignoreKeys = resubmitLock.ignoreKeys();
            final int[] argsIndex = resubmitLock.argsIndex();
            if (ArrayUtils.isNotEmpty(argsIndex)) {
                for (int index : argsIndex) {
                    parametersBuffer.append(REDIS_SEPARATOR)
                            .append(getKeyAndValueJsonStr(args[index], ignoreKeys));
                }
            } else {
                for (Object obj : args) {
                    parametersBuffer.append(REDIS_SEPARATOR)
                            .append(getKeyAndValueJsonStr(obj, ignoreKeys));
                }
            }
        }
        // Use the MD5 of the request parameters as part of the key. MD5 can collide in theory,
        // but the key also contains the session or user id, so the probability of the same user
        // producing the same MD5 for different parameters within a very short time is extremely low
        String parametersKey = DigestUtils.md5DigestAsHex(parametersBuffer.toString().getBytes());
        lockKeyBuffer.append(parametersKey);
        try {
            boolean isLock = distributedLockService.tryLock(lockKeyBuffer.toString(), 0,
                    resubmitLock.interval(), resubmitLock.timeUnit());
            if (!isLock) {
                throw new SystemException(ResultCodeEnum.RESUBMIT_LOCK.code, ResultCodeEnum.RESUBMIT_LOCK.msg);
            }
        } catch (InterruptedException e) {
            throw new SystemException(ResultCodeEnum.RESUBMIT_LOCK.code, ResultCodeEnum.RESUBMIT_LOCK.msg);
        }
    }

    /**
     * Convert an object's fields to a JSON string
     */
    public static String getKeyAndValueJsonStr(Object obj, String[] ignoreKeys) {
        Map<String, Object> map = Maps.newHashMap();
        // Get the class and all of its declared fields
        Class objCla = (Class) obj.getClass();
        Field[] fs = objCla.getDeclaredFields();
        for (int i = 0; i < fs.length; i++) {
            Field f = fs[i];
            // Make the field accessible
            f.setAccessible(true);
            try {
                String filedName = f.getName();
                // If the field is on the ignore list, leave it out of the map
                if (null != ignoreKeys && Arrays.asList(ignoreKeys).contains(filedName)) {
                    continue;
                }
                // Read the field value and store the key/value pair
                Object val = f.get(obj);
                map.put(filedName, val);
            } catch (IllegalArgumentException e) {
                log.error("getKeyAndValue IllegalArgumentException", e);
                throw new RuntimeException("您的操作太频繁,请稍后再试");
            } catch (IllegalAccessException e) {
                log.error("getKeyAndValue IllegalAccessException", e);
                throw new RuntimeException("您的操作太频繁,请稍后再试");
            }
        }
        Map<String, Object> sortMap = sortMapByKey(map);
        return JsonUtils.mapToJson(sortMap);
    }

    private static Map<String, Object> sortMapByKey(Map<String, Object> map) {
        if (map == null || map.isEmpty()) {
            return null;
        }
        Map<String, Object> sortMap = new TreeMap<String, Object>(new Comparator<String>() {
            @Override
            public int compare(String o1, String o2) {
                return o1.compareTo(o2);
            }
        });
        sortMap.putAll(map);
        return sortMap;
    }
}
```

3. The custom distributed-lock interface for Redisson:

```java
package com.gitegg.platform.redis.lock;

import java.util.concurrent.TimeUnit;

/**
 * Distributed lock interface
 * @author GitEgg
 * @date 2022-4-10
 */
public interface IDistributedLockService {

    /**
     * Acquire the lock
     * @param lockKey key
     */
    void lock(String lockKey);

    /**
     * Release the lock
     * @param lockKey key
     */
    void unlock(String lockKey);

    /**
     * Acquire the lock with a lease time; the default unit is supplied by the implementation
     * @param lockKey key
     * @param timeout lease time
     */
    void lock(String lockKey, int timeout);

    /**
     * Acquire the lock with a lease time in the given unit
     * @param lockKey key
     * @param timeout lease time
     * @param unit    time unit
     */
    void lock(String lockKey, int timeout, TimeUnit unit);

    /**
     * Try to acquire the lock; hold it and return true if acquired,
     * otherwise return false immediately
     * @return true on success, false on failure
     */
    boolean tryLock(String lockKey);

    /**
     * Try to acquire the lock; if acquired, hold it for leaseTime.
     * If not acquired, keep trying for waitTime and return false once waitTime is exceeded.
     * @param lockKey   key
     * @param waitTime  how long to keep trying
     * @param leaseTime how long the lock is held
     * @param unit      time unit
     * @return true on success, false on failure
     * @throws InterruptedException if interrupted while waiting
     */
    boolean tryLock(String lockKey, long waitTime, long leaseTime, TimeUnit unit) throws InterruptedException;

    /**
     * Whether the lock is held by any thread
     * @return true if locked, false otherwise
     */
    boolean isLocked(String lockKey);
}
```

4. The Redisson implementation of the interface:

```java
package com.gitegg.platform.redis.lock.impl;

import com.gitegg.platform.redis.lock.IDistributedLockService;
import lombok.RequiredArgsConstructor;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.concurrent.TimeUnit;

/**
 * Redisson implementation of the distributed lock interface
 * @author GitEgg
 * @date 2022-4-10
 */
@Service
@RequiredArgsConstructor(onConstructor_ = @Autowired)
public class DistributedLockServiceImpl implements IDistributedLockService {

    private final RedissonClient redissonClient;

    @Override
    public void lock(String lockKey) {
        RLock lock = redissonClient.getLock(lockKey);
        lock.lock();
    }

    @Override
    public void unlock(String lockKey) {
        RLock lock = redissonClient.getLock(lockKey);
        lock.unlock();
    }

    @Override
    public void lock(String lockKey, int timeout) {
        RLock lock = redissonClient.getLock(lockKey);
        lock.lock(timeout, TimeUnit.MILLISECONDS);
    }

    @Override
    public void lock(String lockKey, int timeout, TimeUnit unit) {
        RLock lock = redissonClient.getLock(lockKey);
        lock.lock(timeout, unit);
    }

    @Override
    public boolean tryLock(String lockKey) {
        RLock lock = redissonClient.getLock(lockKey);
        return lock.tryLock();
    }

    @Override
    public boolean tryLock(String lockKey, long waitTime, long leaseTime, TimeUnit unit)
            throws InterruptedException {
        RLock lock = redissonClient.getLock(lockKey);
        return lock.tryLock(waitTime, leaseTime, unit);
    }

    @Override
    public boolean isLocked(String lockKey) {
        RLock lock = redissonClient.getLock(lockKey);
        return lock.isLocked();
    }
}
```

5. A custom Spring Expression utility class: it evaluates the expressions configured on the annotation against the request arguments; with multiple arguments, an expression can target exactly the value it needs.

```java
package com.gitegg.platform.boot.util;

import org.apache.commons.lang3.ArrayUtils;
import org.apache.commons.lang3.StringUtils;
import org.springframework.expression.Expression;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.lang.NonNull;
import org.springframework.lang.Nullable;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Spring Expression utility class
 * @author GitEgg
 * @date 2022-4-11
 */
public class ExpressionUtils {

    private static final Map<String, Expression> EXPRESSION_CACHE = new ConcurrentHashMap<>(64);

    /**
     * Get an Expression object
     * @param expressionString a Spring EL expression string, e.g. #{param.id}
     * @return Expression
     */
    @Nullable
    public static Expression getExpression(@Nullable String expressionString) {
        if (StringUtils.isBlank(expressionString)) {
            return null;
        }
        if (EXPRESSION_CACHE.containsKey(expressionString)) {
            return EXPRESSION_CACHE.get(expressionString);
        }
        Expression expression = new SpelExpressionParser().parseExpression(expressionString);
        EXPRESSION_CACHE.put(expressionString, expression);
        return expression;
    }

    /**
     * Evaluate a Spring EL expression against a root object
     * @param root             root object
     * @param expressionString Spring EL expression
     * @param clazz            expected value type
     * @param <T>              generic type
     * @return the value
     */
    @Nullable
    public static <T> T getExpressionValue(@Nullable Object root, @Nullable String expressionString,
                                           @NonNull Class<? extends T> clazz) {
        if (root == null) {
            return null;
        }
        Expression expression = getExpression(expressionString);
        if (expression == null) {
            return null;
        }
        return expression.getValue(root, clazz);
    }

    @Nullable
    public static <T> T getExpressionValue(@Nullable Object root, @Nullable String expressionString) {
        if (root == null) {
            return null;
        }
        Expression expression = getExpression(expressionString);
        if (expression == null) {
            return null;
        }
        //noinspection unchecked
        return (T) expression.getValue(root);
    }

    /**
     * Evaluate several expressions against a root object
     * @param root              root object
     * @param expressionStrings Spring EL expressions
     * @param <T>               use Object here in most cases to avoid cast exceptions
     * @return the result array
     */
    public static <T> T[] getExpressionValue(@Nullable Object root, @Nullable String... expressionStrings) {
        if (root == null) {
            return null;
        }
        if (ArrayUtils.isEmpty(expressionStrings)) {
            return null;
        }
        //noinspection ConstantConditions
        Object[] values = new Object[expressionStrings.length];
        for (int i = 0; i < expressionStrings.length; i++) {
            //noinspection unchecked
            values[i] = (T) getExpressionValue(root, expressionStrings[i]);
        }
        //noinspection unchecked
        return (T[]) values;
    }

    /**
     * Evaluate an expression as a condition:
     * null returns false; a Boolean is returned as-is;
     * a Number is true when greater than 0; anything else is true.
     * @param root             root object
     * @param expressionString Spring EL expression
     * @return the condition value
     */
    public static boolean getConditionValue(@Nullable Object root, @Nullable String expressionString) {
        Object value = getExpressionValue(root, expressionString);
        if (value == null) {
            return false;
        }
        if (value instanceof Boolean) {
            return (boolean) value;
        }
        if (value instanceof Number) {
            return ((Number) value).longValue() > 0;
        }
        return true;
    }

    /**
     * Evaluate several condition expressions; all must hold
     * @param root              root object
     * @param expressionStrings Spring EL expression array
     * @return the combined condition value
     */
    public static boolean getConditionValue(@Nullable Object root, @Nullable String... expressionStrings) {
        if (root == null) {
            return false;
        }
        if (ArrayUtils.isEmpty(expressionStrings)) {
            return false;
        }
        //noinspection ConstantConditions
        for (String expressionString : expressionStrings) {
            if (!getConditionValue(root, expressionString)) {
                return false;
            }
        }
        return true;
    }
}
```

6. Dedup testing, done on the system's user endpoint (the UserController class in the GitEgg-Cloud project), covering multi-argument endpoints with and without a keys configuration. To make the effect easy to observe, interval can be raised to 30 seconds.

Use user.realName, user.mobile and page.size as the dedup key:

```java
@ResubmitLock(interval = 30, keys = {"[0].realName", "[0].mobile", "[1].size"})
public PageResult<UserInfo> list(@ApiIgnore QueryUserDTO user, @ApiIgnore Page<UserInfo> page) {
    Page<UserInfo> pageUser = userService.selectUserList(page, user);
    PageResult<UserInfo> pageResult = new PageResult<>(pageUser.getTotal(), pageUser.getRecords());
    return pageResult;
}
```

No keys configured: take only the first argument (user) and exclude the configured fields from dedup-key generation:

```java
@ResubmitLock(interval = 30, argsIndex = {0}, ignoreKeys = {"email", "status"})
public PageResult<UserInfo> list(@ApiIgnore QueryUserDTO user, @ApiIgnore Page<UserInfo> page) {
    Page<UserInfo> pageUser = userService.selectUserList(page, user);
    PageResult<UserInfo> pageResult = new PageResult<>(pageUser.getTotal(), pageUser.getRecords());
    return pageResult;
}
```

[Figure: test results]

References:
1. Dedup configuration options and reading parameters via Spring Expression: https://www.jianshu.com/p/77895a822237
2. Redisson distributed lock and related utilities: https://blog.csdn.net/wsh_ningjing/article/details/115326052
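The lock-key format described above (`resubmit_lock:tenant:user:uri:method:md5(params)`) can be sketched standalone. This is a simplified illustration, not the framework code: the tenant/user/uri values are made-up placeholders, `java.security.MessageDigest` stands in for Spring's `DigestUtils.md5DigestAsHex`, and a plain `TreeMap` plays the role of `sortMapByKey`, which exists because reflection returns fields in no fixed order, so sorting makes the serialized form, and therefore the MD5, deterministic.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class ResubmitKeyDemo {

    private static final String SEP = ":";
    private static final String PREFIX = "resubmit_lock" + SEP;

    // MD5 hex digest, standing in for Spring's DigestUtils.md5DigestAsHex
    static String md5Hex(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Build the dedup key: prefix + userId + uri + method + md5 of the sorted parameters.
    // Sorting mirrors sortMapByKey in the aspect: it makes the key independent of the
    // order in which the fields were discovered.
    static String buildKey(String userId, String uri, String method, Map<String, Object> params) {
        TreeMap<String, Object> sorted = new TreeMap<>(params);
        StringBuilder parameters = new StringBuilder();
        sorted.forEach((k, v) -> parameters.append(SEP).append(k).append('=').append(v));
        return PREFIX + userId + SEP + uri + SEP + method + SEP + md5Hex(parameters.toString());
    }

    public static void main(String[] args) {
        Map<String, Object> a = new HashMap<>();
        a.put("realName", "alice");
        a.put("mobile", "13800000000");
        Map<String, Object> b = new HashMap<>();
        b.put("mobile", "13800000000");
        b.put("realName", "alice");
        // The same fields in a different insertion order yield the same key,
        // so a repeated request always targets the same Redis lock
        System.out.println(buildKey("1001", "/system/user/list", "GET", a)
                .equals(buildKey("1001", "/system/user/list", "GET", b)));
    }
}
```

Two submissions with identical parameters thus compete for one Redis key; only the first `tryLock` within the configured interval succeeds.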

SpringCloud Microservices in Action - Building an Enterprise Development Framework (38): Setting Up an ELK Log Collection and Analysis System

一套好的日志分析系统可以详细记录系统的运行情况,方便我们定位分析系统性能瓶颈、查找定位系统问题。上一篇说明了日志的多种业务场景以及日志记录的实现方式,那么日志记录下来,相关人员就需要对日志数据进行处理与分析,基于E(ElasticSearch)L(Logstash)K(Kibana)组合的日志分析系统可以说是目前各家公司普遍的首选方案。Elasticsearch: 分布式、RESTful 风格的搜索和数据分析引擎,可快速存储、搜索、分析海量的数据。在ELK中用于存储所有日志数据。Logstash: 开源的数据采集引擎,具有实时管道传输功能。Logstash 能够将来自单独数据源的数据动态集中到一起,对这些数据加以标准化并传输到您所选的地方。在ELK中用于将采集到的日志数据进行处理、转换然后存储到Elasticsearch。Kibana: 免费且开放的用户界面,能够让您对 Elasticsearch 数据进行可视化,并让您在 Elastic Stack 中进行导航。您可以进行各种操作,从跟踪查询负载,到理解请求如何流经您的整个应用,都能轻松完成。在ELK中用于通过界面展示存储在Elasticsearch中的日志数据。  作为微服务集群,必须要考虑当微服务访问量暴增时的高并发场景,此时系统的日志数据同样是爆发式增长,我们需要通过消息队列做流量削峰处理,Logstash官方提供Redis、Kafka、RabbitMQ等输入插件。Redis虽然可以用作消息队列,但其各项功能显示不如单一实现的消息队列,所以通常情况下并不使用它的消息队列功能;Kafka的性能要优于RabbitMQ,通常在日志采集,数据采集时使用较多,所以这里我们采用Kafka实现消息队列功能。  ELK日志分析系统中,数据传输、数据保存、数据展示、流量削峰功能都有了,还少一个组件,就是日志数据的采集,虽然log4j2可以将日志数据发送到Kafka,甚至可以将日志直接输入到Logstash,但是基于系统设计解耦的考虑,业务系统运行不会影响到日志分析系统,同时日志分析系统也不会影响到业务系统,所以,业务只需将日志记录下来,然后由日志分析系统去采集分析即可,Filebeat是ELK日志系统中常用的日志采集器,它是 Elastic Stack 的一部分,因此能够与 Logstash、Elasticsearch 和 Kibana 无缝协作。Kafka: 高吞吐量的分布式发布订阅消息队列,主要应用于大数据的实时处理。Filebeat: 轻量型日志采集器。在 Kubernetes、Docker 或云端部署中部署 Filebeat,即可获得所有的日志流:信息十分完整,包括日志流的 pod、容器、节点、VM、主机以及自动关联时用到的其他元数据。此外,Beats Autodiscover 功能可检测到新容器,并使用恰当的 Filebeat 模块对这些容器进行自适应监测。软件下载:  因经常遇到在内网搭建环境的问题,所以这里习惯使用下载软件包的方式进行安装,虽没有使用Yum、Docker等安装方便,但是可以对软件目录、配置信息等有更深的了解,在后续采用Yum、Docker等方式安装时,也能清楚安装了哪些东西,安装配置的文件是怎样的,即使出现问题,也可以快速的定位解决。Elastic Stack全家桶下载主页: https://www.elastic.co/cn/downloads/我们选择如下版本:Elasticsearch8.0.0,下载地址:https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.0.0-linux-x86_64.tar.gzLogstash8.0.0,下载地址:https://artifacts.elastic.co/downloads/logstash/logstash-8.0.0-linux-x86_64.tar.gzKibana8.0.0,下载地址:https://artifacts.elastic.co/downloads/kibana/kibana-8.0.0-linux-x86_64.tar.gzFilebeat8.0.0,下载地址:https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.0.0-linux-x86_64.tar.gzKafka下载:Kafka3.1.0,下载地址:https://dlcdn.apache.org/kafka/3.1.0/kafka_2.13-3.1.0.tgz安装配置:  
安装前先准备好三台CentOS7服务器用于集群安装,这是IP地址为:172.16.20.220、172.16.20.221、172.16.20.222,然后将上面下载的软件包上传至三台服务器的/usr/local目录。因服务器资源有限,这里所有的软件都安装在这三台集群服务器上,在实际生产环境中,请根据业务需求设计规划进行安装。  在集群搭建时,如果能够编写shell安装脚本就会很方便,如果不能编写,就需要在每台服务器上执行安装命令,多数ssh客户端提供了多会话同时输入的功能,这里一些通用安装命令可以选择启用该功能。一、安装Elasticsearch集群1、Elasticsearch是使用Java语言开发的,所以需要在环境上安装jdk并配置环境变量。下载jdk软件包安装,https://www.oracle.com/java/technologies/downloads/#java8新建/usr/local/java目录mkdir /usr/local/java将下载的jdk软件包jdk-8u64-linux-x64.tar.gz上传到/usr/local/java目录,然后解压tar -zxvf jdk-8u77-linux-x64.tar.gz配置环境变量/etc/profilevi /etc/profile在底部添加以下内容JAVA_HOME=/usr/local/java/jdk1.8.0_64 PATH=$JAVA_HOME/bin:$PATH CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar export PATH JAVA_HOME CLASSPATH使环境变量生效source /etc/profile另外一种十分快捷的方式,如果不是内网环境,可以直接使用命令行安装,这里安装的是免费版本的openjdkyum install java-1.8.0-openjdk* -y2、安装配置Elasticsearch进入/usr/local目录,解压Elasticsearch安装包,请确保执行命令前已将环境准备时的Elasticsearch安装包上传至该目录。tar -zxvf elasticsearch-8.0.0-linux-x86_64.tar.gz重命名文件夹mv elasticsearch-8.0.0 elasticsearchelasticsearch不能使用root用户运行,这里创建运行elasticsearch的用户组和用户# 创建用户组 groupadd elasticsearch # 创建用户并添加至用户组 useradd elasticsearch -g elasticsearch # 更改elasticsearch密码,设置一个自己需要的密码,这里设置为和用户名一样:El12345678 passwd elasticsearch新建elasticsearch数据和日志存放目录,并给elasticsearch用户赋权限mkdir -p /data/elasticsearch/data mkdir -p /data/elasticsearch/log chown -R elasticsearch:elasticsearch /data/elasticsearch/* chown -R elasticsearch:elasticsearch /usr/local/elasticsearch/*elasticsearch默认启用了x-pack,集群通信需要进行安全认证,所以这里需要用到SSL证书。注意:这里生成证书的命令只在一台服务器上执行,执行之后copy到另外两台服务器的相同目录下。# 提示输入密码时,直接回车 ./elasticsearch-certutil ca -out /usr/local/elasticsearch/config/elastic-stack-ca.p12 # 提示输入密码时,直接回车 ./elasticsearch-certutil cert --ca /usr/local/elasticsearch/config/elastic-stack-ca.p12 -out /usr/local/elasticsearch/config/elastic-certificates.p12 -pass "" # 如果使用root用户生成的证书,记得给elasticsearch用户赋权限 chown -R elasticsearch:elasticsearch 
/usr/local/elasticsearch/config/elastic-certificates.p12设置密码,这里在出现输入密码时,所有的都是输入的123456./elasticsearch-setup-passwords interactive Enter password for [elastic]: Reenter password for [elastic]: Enter password for [apm_system]: Reenter password for [apm_system]: Enter password for [kibana_system]: Reenter password for [kibana_system]: Enter password for [logstash_system]: Reenter password for [logstash_system]: Enter password for [beats_system]: Reenter password for [beats_system]: Enter password for [remote_monitoring_user]: Reenter password for [remote_monitoring_user]: Changed password for user [apm_system] Changed password for user [kibana_system] Changed password for user [kibana] Changed password for user [logstash_system] Changed password for user [beats_system] Changed password for user [remote_monitoring_user] Changed password for user [elastic]修改elasticsearch配置文件vi /usr/local/elasticsearch/config/elasticsearch.yml# 修改配置 # 集群名称 cluster.name: log-elasticsearch # 节点名称 node.name: node-1 # 数据存放路径 path.data: /data/elasticsearch/data # 日志存放路径 path.logs: /data/elasticsearch/log # 当前节点IP network.host: 192.168.60.201 # 对外端口 http.port: 9200 # 集群ip discovery.seed_hosts: ["172.16.20.220", "172.16.20.221", "172.16.20.222"] # 初始主节点 cluster.initial_master_nodes: ["node-1", "node-2", "node-3"] # 新增配置 # 集群端口 transport.tcp.port: 9300 transport.tcp.compress: true http.cors.enabled: true http.cors.allow-origin: "*" http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User" xpack.security.enabled: true xpack.security.transport.ssl.enabled: true xpack.security.transport.ssl.verification_mode: certificate xpack.security.transport.ssl.keystore.path: elastic-certificates.p12 xpack.security.transport.ssl.truststore.path: elastic-certificates.p12配置Elasticsearch的JVM参数vi /usr/local/elasticsearch/config/jvm.options-Xms1g -Xmx1g修改Linux默认资源限制数vi /etc/security/limits.conf# 在最后加入,修改完成后,重启系统生效。 * soft nofile 
131072 * hard nofile 131072vi /etc/sysctl.conf # 将值vm.max_map_count值修改为655360 vm.max_map_count=655360 # 使配置生效 sysctl -p切换用户启动服务su elasticsearch cd /usr/local/elasticsearch/bin # 控制台启动命令,可以看到具体报错信息 ./elasticsearch访问我们的服务器地址和端口,可以看到,服务已启动:http://172.16.20.220:9200/http://172.16.20.221:9200/http://172.16.20.222:9200/elasticsearch服务已启动正常运行没有问题后,Ctrl+c关闭服务,然后使用后台启动命令./elasticsearch -d备注:后续可通过此命令停止elasticsearch运行# 查看进程id ps -ef | grep elastic # 关闭进程 kill -9 1376(进程id)3、安装ElasticSearch界面管理插件elasticsearch-head,只需要在一台服务器上安装即可,这里我们安装到172.16.20.220服务器上配置nodejs环境下载地址: (https://nodejs.org/dist/v16.14.0/node-v16.14.0-linux-x64.tar.xz)[https://nodejs.org/dist/v16.14.0/node-v16.14.0-linux-x64.tar.xz],将node-v16.14.0-linux-x64.tar.xz上传到服务器172.16.20.220的/usr/local目录# 解压 tar -xvJf node-v16.14.0-linux-x64.tar.xz # 重命名 mv node-v16.14.0-linux-x64 nodejs # 配置环境变量 vi /etc/profile # 新增以下内容 export NODE_HOME=/usr/local/nodejs PATH=$JAVA_HOME/bin:$NODE_HOME/bin:/usr/local/mysql/bin:/usr/local/subversion/bin:$PATH export PATH JAVA_HOME NODE_HOME JENKINS_HOME CLASSPATH # 使配置生效 source /etc/profile # 测试是否配置成功 node -v配置elasticsearch-head项目开源地址:https://github.com/mobz/elasticsearch-headzip包下载地址:https://github.com/mobz/elasticsearch-head/archive/master.zip下载后上传至172.16.20.220的/usr/local目录,然后进行解压安装# 解压 unzip elasticsearch-head-master.zip # 重命名 mv elasticsearch-head-master elasticsearch-head # 进入到elasticsearch-head目录 cd elasticsearch-head #切换软件源,可以提升安装速度 npm config set registry https://registry.npm.taobao.org # 执行安装命令 npm install -g npm@8.5.1 npm install phantomjs-prebuilt@2.1.16 --ignore-scripts npm install # 启动命令 npm run start浏览器访问http://172.16.20.220:9100/?auth_user=elastic&auth_password=123456 ,需要加上我们上面设置的用户名密码,就可以看到我们的Elasticsearch集群状态了。elasticsearch集群状态二、安装Kafka集群环境准备:  新建kafka的日志目录和zookeeper数据目录,因为这两项默认放在tmp目录,而tmp目录中内容会随重启而丢失,所以我们自定义以下目录:mkdir /data/zookeeper mkdir /data/zookeeper/data mkdir /data/zookeeper/logs mkdir /data/kafka mkdir /data/kafka/data mkdir 
/data/kafka/logszookeeper.properties配置vi /usr/local/kafka/config/zookeeper.properties修改如下:# 修改为自定义的zookeeper数据目录 dataDir=/data/zookeeper/data # 修改为自定义的zookeeper日志目录 dataLogDir=/data/zookeeper/logs clientPort=2181 # 注释掉 #maxClientCnxns=0 # 设置连接参数,添加如下配置 # 为zk的基本时间单元,毫秒 tickTime=2000 # Leader-Follower初始通信时限 tickTime*10 initLimit=10 # Leader-Follower同步通信时限 tickTime*5 syncLimit=5 # 设置broker Id的服务地址,本机ip一定要用0.0.0.0代替 server.1=0.0.0.0:2888:3888 server.2=172.16.20.221:2888:3888 server.3=172.16.20.222:2888:3888在各台服务器的zookeeper数据目录/data/zookeeper/data添加myid文件,写入服务broker.id属性值在data文件夹中新建myid文件,myid文件的内容为1(一句话创建:echo 1 > myid)cd /data/zookeeper/data vi myid #添加内容:1 其他两台主机分别配置 2和3 1kafka配置,进入config目录下,修改server.properties文件vi /usr/local/kafka/config/server.properties# 每台服务器的broker.id都不能相同 broker.id=1 # 是否可以删除topic delete.topic.enable=true # topic 在当前broker上的分片个数,与broker保持一致 num.partitions=3 # 每个主机地址不一样: listeners=PLAINTEXT://172.16.20.220:9092 advertised.listeners=PLAINTEXT://172.16.20.220:9092 # 具体一些参数 log.dirs=/data/kafka/kafka-logs # 设置zookeeper集群地址与端口如下: zookeeper.connect=172.16.20.220:2181,172.16.20.221:2181,172.16.20.222:2181Kafka启动kafka启动时先启动zookeeper,再启动kafka;关闭时相反,先关闭kafka,再关闭zookeeper。1、zookeeper启动命令./zookeeper-server-start.sh ../config/zookeeper.properties &后台运行启动命令:nohup ./zookeeper-server-start.sh ../config/zookeeper.properties >/data/zookeeper/logs/zookeeper.log 2>1 &或者./zookeeper-server-start.sh -daemon ../config/zookeeper.properties &查看集群状态:./zookeeper-server-start.sh status ../config/zookeeper.properties2、kafka启动命令./kafka-server-start.sh ../config/server.properties &后台运行启动命令:nohup bin/kafka-server-start.sh ../config/server.properties >/data/kafka/logs/kafka.log 2>1 &或者./kafka-server-start.sh -daemon ../config/server.properties &3、创建topic,最新版本已经不需要使用zookeeper参数创建。./kafka-topics.sh --create --replication-factor 2 --partitions 1 --topic test --bootstrap-server 172.16.20.220:9092参数解释:复制两份  --replication-factor 2创建1个分区  --partitions 1topic 名称  --topic 
test4、查看已经存在的topic(三台设备都执行时可以看到)./kafka-topics.sh --list --bootstrap-server 172.16.20.220:90925、启动生产者:./kafka-console-producer.sh --broker-list 172.16.20.220:9092 --topictest6、启动消费者:./kafka-console-consumer.sh --bootstrap-server 172.16.20.221:9092 --topic test ./kafka-console-consumer.sh --bootstrap-server 172.16.20.222:9092 --topic test添加参数 --from-beginning 从开始位置消费,不是从最新消息./kafka-console-consumer.sh --bootstrap-server 172.16.20.221 --topic test --from-beginning7、测试:在生产者输入test,可以在消费者的两台服务器上看到同样的字符test,说明Kafka服务器集群已搭建成功。三、安装配置LogstashLogstash没有提供集群安装方式,相互之间并没有交互,但是我们可以配置同属一个Kafka消费者组,来实现统一消息只消费一次的功能。解压安装包tar -zxvf logstash-8.0.0-linux-x86_64.tar.gz mv logstash-8.0.0 logstash配置kafka主题和组cd logstash # 新建配置文件 vi logstash-kafka.conf # 新增以下内容 input { kafka { codec => "json" group_id => "logstash" client_id => "logstash-api" topics_pattern => "api_log" type => "api" bootstrap_servers => "172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092" auto_offset_reset => "latest" kafka { codec => "json" group_id => "logstash" client_id => "logstash-operation" topics_pattern => "operation_log" type => "operation" bootstrap_servers => "172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092" auto_offset_reset => "latest" kafka { codec => "json" group_id => "logstash" client_id => "logstash-debugger" topics_pattern => "debugger_log" type => "debugger" bootstrap_servers => "172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092" auto_offset_reset => "latest" kafka { codec => "json" group_id => "logstash" client_id => "logstash-nginx" topics_pattern => "nginx_log" type => "nginx" bootstrap_servers => "172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092" auto_offset_reset => "latest" output { if [type] == "api"{ elasticsearch { hosts => ["172.16.20.220:9200","172.16.20.221:9200","172.16.20.222:9200"] index => "logstash_api-%{+YYYY.MM.dd}" user => "elastic" password => "123456" if [type] == "operation"{ elasticsearch { hosts => 
["172.16.20.220:9200","172.16.20.221:9200","172.16.20.222:9200"] index => "logstash_operation-%{+YYYY.MM.dd}" user => "elastic" password => "123456" if [type] == "debugger"{ elasticsearch { hosts => ["172.16.20.220:9200","172.16.20.221:9200","172.16.20.222:9200"] index => "logstash_operation-%{+YYYY.MM.dd}" user => "elastic" password => "123456" if [type] == "nginx"{ elasticsearch { hosts => ["172.16.20.220:9200","172.16.20.221:9200","172.16.20.222:9200"] index => "logstash_operation-%{+YYYY.MM.dd}" user => "elastic" password => "123456" }启动logstash# 切换到bin目录 cd /usr/local/logstash/bin # 启动命令 nohup ./logstash -f ../config/logstash-kafka.conf & #查看启动日志 tail -f nohup.out四、安装配置Kibana解压安装文件tar -zxvf kibana-8.0.0-linux-x86_64.tar.gz mv kibana-8.0.0 kibana修改配置文件cd /usr/local/kibana/config vi kibana.yml # 修改以下内容 server.port: 5601 server.host: "172.16.20.220" elasticsearch.hosts: ["http://172.16.20.220:9200","http://172.16.20.221:9200","http://172.16.20.222:9200"] elasticsearch.username: "kibana_system" elasticsearch.password: "123456"启动服务cd /usr/local/kibana/bin # 默认不允许使用root运行,可以添加 --allow-root 参数使用root用户运行,也可以跟Elasticsearch一样新增一个用户组用户 nohup ./kibana --allow-root &访问http://172.16.20.220:5601/,并使用elastic / 123456登录。登录页首页五、安装Filebeat  Filebeat用于安装在业务软件运行服务器,收集业务产生的日志,并推送到我们配置的Kafka、Redis、RabbitMQ等消息中间件,或者直接保存到Elasticsearch,下面来讲解如何安装配置:1、进入到/usr/local目录,执行解压命令tar -zxvf filebeat-8.0.0-linux-x86_64.tar.gz mv filebeat-8.0.0-linux-x86_64 filebeat2、编辑配置filebeat.yml  配置文件中默认是输出到elasticsearch,这里我们改为kafka,同文件目录下的filebeat.reference.yml文件是所有配置的实例,可以直接将kafka的配置复制到filebeat.yml配置采集开关和采集路径:# filestream is an input for collecting log messages from files. - type: filestream # Change to true to enable this input configuration. # enable改为true enabled: true # Paths that should be crawled and fetched. Glob based paths. 
# 修改微服务日志的实际路径 paths: - /data/gitegg/log/gitegg-service-system/*.log - /data/gitegg/log/gitegg-service-base/*.log - /data/gitegg/log/gitegg-service-oauth/*.log - /data/gitegg/log/gitegg-service-gateway/*.log - /data/gitegg/log/gitegg-service-extension/*.log - /data/gitegg/log/gitegg-service-bigdata/*.log #- c:\programdata\elasticsearch\logs\* # Exclude lines. A list of regular expressions to match. It drops the lines that are # matching any regular expression from the list. #exclude_lines: ['^DBG'] # Include lines. A list of regular expressions to match. It exports the lines that are # matching any regular expression from the list. #include_lines: ['^ERR', '^WARN'] # Exclude files. A list of regular expressions to match. Filebeat drops the files that # are matching any regular expression from the list. By default, no files are dropped. #prospector.scanner.exclude_files: ['.gz$'] # Optional additional fields. These fields can be freely picked # to add additional information to the crawled log files for filtering #fields: # level: debug # review: 1Elasticsearch 模板配置# ======================= Elasticsearch template setting ======================= setup.template.settings: index.number_of_shards: 3 index.number_of_replicas: 1 #index.codec: best_compression #_source.enabled: false # 允许自动生成index模板 setup.template.enabled: true # # 生成index模板时字段配置文件 setup.template.fields: fields.yml # # 如果存在模块则覆盖 setup.template.overwrite: true # # 生成index模板的名称 setup.template.name: "api_log" # # 生成index模板匹配的index格式 setup.template.pattern: "api-*" #索引生命周期管理ilm功能默认开启,开启的情况下索引名称只能为filebeat-*, 通过setup.ilm.enabled: false进行关闭; setup.ilm.pattern: "{now/d}" setup.ilm.enabled: false开启仪表盘并配置使用Kibana仪表盘:# ================================= Dashboards ================================= # These settings control loading the sample dashboards to the Kibana index. Loading # the dashboards is disabled by default and can be enabled either by setting the # options here or by using the `setup` command. 
```yaml
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "172.16.20.220:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
```

Configure the Kafka output. The complete filebeat.yml is as follows:

```yaml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched.
```
```yaml
  # Glob based paths.
  paths:
    - /data/gitegg/log/*/*operation.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
    topic: operation_log
  #  level: debug
  #  review: 1

# filestream is an input for collecting log messages from files.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/gitegg/log/*/*api.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
    topic: api_log
  #  level: debug
  #  review: 1

# filestream is an input for collecting log messages from files.
- type: filestream

  # Change to true to enable this input configuration.
```
```yaml
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/gitegg/log/*/*debug.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
    topic: debugger_log
  #  level: debug
  #  review: 1

# filestream is an input for collecting log messages from files.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/nginx/logs/access.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields.
```
```yaml
  # These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
    topic: nginx_log
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 1
  #index.codec: best_compression
  #_source.enabled: false
# allow the index template to be generated automatically
setup.template.enabled: true
# field definition file used when generating the index template
setup.template.fields: fields.yml
# overwrite the template if it already exists
setup.template.overwrite: true
# name of the generated index template
setup.template.name: "gitegg_log"
# index pattern the generated template matches
setup.template.pattern: "filebeat-*"
# Index lifecycle management (ILM) is enabled by default; while it is enabled,
# index names can only be filebeat-*. Disable it with setup.ilm.enabled: false.
setup.ilm.pattern: "{now/d}"
setup.ilm.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive.
```
```yaml
# By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "172.16.20.220:5601"

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  username: "elastic"
  password: "123456"

  # Optional HTTP path
  #path: ""

  # Optional Kibana space ID.
  #space.id: ""

  # Custom HTTP headers to add to each request
  #headers:
  #  X-My-Header: Contents of the header

  # Use SSL settings for HTTPS.
  #ssl.enabled: true

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
```
```yaml
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# -------------------------------- Kafka Output --------------------------------
output.kafka:
  # Boolean flag to enable or disable the output module.
  enabled: true

  # The list of Kafka broker addresses from which to fetch the cluster metadata.
  # The cluster metadata contain the actual Kafka brokers events are published
  # to.
  hosts: ["172.16.20.220:9092","172.16.20.221:9092","172.16.20.222:9092"]

  # The Kafka topic used for produced events. The setting can be a format string
  # using any event field. To set the topic from document type use `%{[type]}`.
  topic: '%{[fields.topic]}'

  # The Kafka event key setting. Use format string to create a unique event key.
  # By default no event key will be generated.
  #key: ''

  # The Kafka event partitioning strategy. Default hashing strategy is `hash`
  # using the `output.kafka.key` setting or randomly distributes events if
  # `output.kafka.key` is not configured.
  partition.hash:
    # If enabled, events will only be published to partitions with reachable
    # leaders. Default is false.
    reachable_only: true

    # Configure alternative event field names used to compute the hash value.
    # If empty `output.kafka.key` setting will be used.
    # Default value is empty list.
    #hash: []

  # Authentication details. Password is required if username is set.
  #username: ''
  #password: ''

  # SASL authentication mechanism used. Can be one of PLAIN, SCRAM-SHA-256 or SCRAM-SHA-512.
```
```yaml
  # Defaults to PLAIN when `username` and `password` are configured.
  #sasl.mechanism: ''

  # Kafka version Filebeat is assumed to run against. Defaults to the "1.0.0".
  #version: '1.0.0'

  # Configure JSON encoding
  #codec.json:
    # Pretty-print JSON event
    #pretty: false

    # Configure escaping HTML symbols in strings.
    #escape_html: false

  # Metadata update configuration. Metadata contains leader information
  # used to decide which broker to use when publishing.
  #metadata:
    # Max metadata request retry attempts when cluster is in middle of leader
    # election. Defaults to 3 retries.
    #retry.max: 3

    # Wait time between retries during leader elections. Default is 250ms.
    #retry.backoff: 250ms

    # Refresh metadata interval. Defaults to every 10 minutes.
    #refresh_frequency: 10m

    # Strategy for fetching the topics metadata from the broker. Default is false.
    #full: false

  # The number of concurrent load-balanced Kafka output workers.
  #worker: 1

  # The number of times to retry publishing an event after a publishing failure.
  # After the specified number of retries, events are typically dropped.
  # Some Beats, such as Filebeat, ignore the max_retries setting and retry until
  # all events are published. Set max_retries to a value less than 0 to retry
  # until all events are published. The default is 3.
  #max_retries: 3

  # The number of seconds to wait before trying to republish to Kafka
  # after a network error. After waiting backoff.init seconds, the Beat
  # tries to republish. If the attempt fails, the backoff timer is increased
  # exponentially up to backoff.max. After a successful publish, the backoff
  # timer is reset. The default is 1s.
  #backoff.init: 1s

  # The maximum number of seconds to wait before attempting to republish to
  # Kafka after a network error. The default is 60s.
  #backoff.max: 60s

  # The maximum number of events to bulk in a single Kafka request. The default
  # is 2048.
  #bulk_max_size: 2048

  # Duration to wait before sending bulk Kafka request. 0 is no delay. The default
  # is 0.
```
```yaml
  #bulk_flush_frequency: 0s

  # The number of seconds to wait for responses from the Kafka brokers before
  # timing out. The default is 30s.
  #timeout: 30s

  # The maximum duration a broker will wait for number of required ACKs. The
  # default is 10s.
  #broker_timeout: 10s

  # The number of messages buffered for each Kafka broker. The default is 256.
  #channel_buffer_size: 256

  # The keep-alive period for an active network connection. If 0s, keep-alives
  # are disabled. The default is 0 seconds.
  #keep_alive: 0

  # Sets the output compression codec. Must be one of none, snappy and gzip. The
  # default is gzip.
  compression: gzip

  # Set the compression level. Currently only gzip provides a compression level
  # between 0 and 9. The default value is chosen by the compression algorithm.
  #compression_level: 4

  # The maximum permitted size of JSON-encoded messages. Bigger messages will be
  # dropped. The default value is 1000000 (bytes). This value should be equal to
  # or less than the broker's message.max.bytes.
  max_message_bytes: 1000000

  # The ACK reliability level required from broker. 0=no response, 1=wait for
  # local commit, -1=wait for all replicas to commit. The default is 1. Note:
  # If set to 0, no ACKs are returned by Kafka. Messages might be lost silently
  # on error.
  required_acks: 1

  # The configurable ClientID used for logging, debugging, and auditing
  # purposes. The default is "beats".
  #client_id: beats

  # Use SSL settings for HTTPS.
  #ssl.enabled: true

  # Controls the verification of certificates. Valid values are:
  # * full, which verifies that the provided certificate is signed by a trusted
  # authority (CA) and also verifies that the server's hostname (or IP address)
  # matches the names identified within the certificate.
  # * strict, which verifies that the provided certificate is signed by a trusted
  # authority (CA) and also verifies that the server's hostname (or IP address)
  # matches the names identified within the certificate.
```
```yaml
  # If the Subject Alternative
  # Name is empty, it returns an error.
  # * certificate, which verifies that the provided certificate is signed by a
  # trusted authority (CA), but does not perform any hostname verification.
  # * none, which performs no verification of the server's certificate. This
  # mode disables many of the security benefits of SSL/TLS and should only be used
  # after very careful consideration. It is primarily intended as a temporary
  # diagnostic mechanism when attempting to resolve TLS errors; its use in
  # production environments is strongly discouraged.
  # The default value is full.
  #ssl.verification_mode: full

  # List of supported/valid TLS versions. By default all TLS versions from 1.1
  # up to 1.3 are enabled.
  #ssl.supported_protocols: [TLSv1.1, TLSv1.2, TLSv1.3]

  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"

  # Optional passphrase for decrypting the certificate key.
  #ssl.key_passphrase: ''

  # Configure cipher suites to be used for SSL connections
  #ssl.cipher_suites: []

  # Configure curve types for ECDHE-based cipher suites
  #ssl.curve_types: []

  # Configure what types of renegotiation are supported. Valid options are
  # never, once, and freely. Default is never.
  #ssl.renegotiation: never

  # Configure a pin that can be used to do extra validation of the verified certificate chain,
  # this allow you to ensure that a specific certificate is used to validate the chain of trust.
  # The pin is a base64 encoded string of the SHA-256 fingerprint.
  #ssl.ca_sha256: ""

  # A root CA HEX encoded fingerprint.
```
```yaml
  # During the SSL handshake if the
  # fingerprint matches the root CA certificate, it will be added to
  # the provided list of root CAs (`certificate_authorities`), if the
  # list is empty or not defined, the matching certificate will be the
  # only one in the list. Then the normal SSL validation happens.
  #ssl.ca_trusted_fingerprint: ""

  # Enable Kerberos support. Kerberos is automatically enabled if any Kerberos setting is set.
  #kerberos.enabled: true

  # Authentication type to use with Kerberos. Available options: keytab, password.
  #kerberos.auth_type: password

  # Path to the keytab file. It is used when auth_type is set to keytab.
  #kerberos.keytab: /etc/security/keytabs/kafka.keytab

  # Path to the Kerberos configuration.
  #kerberos.config_path: /etc/krb5.conf

  # The service name. Service principal name is constructed from
  # service_name/hostname@realm.
  #kerberos.service_name: kafka

  # Name of the Kerberos user.
  #kerberos.username: elastic

  # Password of the Kerberos user. It is used when auth_type is set to password.
  #kerberos.password: changeme

  # Kerberos realm.
  #kerberos.realm: ELASTIC

  # Enables Kerberos FAST authentication. This may
  # conflict with certain Active Directory configurations.
  #kerberos.enable_krb5_fast: false

# ================================= Processors =================================

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
```
```yaml
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
```
```yaml
    #secret_token:

# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
```

Run the Filebeat start command:

```shell
./filebeat -e -c filebeat.yml
```

Background start command:

```shell
nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &
```

Stop command:

```shell
ps -ef | grep filebeat
kill -9 <pid>
```

VI. Verifying the Configuration

1. Verify that Filebeat collects the log files and sends them to Kafka

Start consumers on the Kafka servers, listening to the api_log and operation_log topics:

```shell
./kafka-console-consumer.sh --bootstrap-server 172.16.20.221:9092 --topic api_log
./kafka-console-consumer.sh --bootstrap-server 172.16.20.222:9092 --topic operation_log
```

Write log files by hand into the collection directories configured for Filebeat:

```shell
echo "api log1111" > /data/gitegg/log/gitegg-service-system/api.log
echo "operation log1111" > /data/gitegg/log/gitegg-service-system/operation.log
```

Then confirm that the consumers receive the pushed log content. (Screenshots: api_log and operation_log consumer output.)

2. Verify that Logstash consumes the Kafka log topics and stores the log content in Elasticsearch

Write log files by hand again:

```shell
echo "api log8888888888888888888888" > /data/gitegg/log/gitegg-service-system/api.log
echo "operation loggggggggggggggggggg" > /data/gitegg/log/gitegg-service-system/operation.log
```

Open the Elasticsearch Head UI at http://172.16.20.220:9100/?auth_user=elastic&auth_password=123456 and check whether the data has arrived. Two new indexes appear automatically, named according to the rules configured in Logstash. On the data browsing page you can see the stored log content, which shows that the configuration has taken effect. (Screenshots: index list, data browsing page.)

VII. Configuring Kibana for Log Analytics and Display

Navigate through the left-hand menu: Management -> Kibana -> Data Views -> Create data view. Enter logstash_*, select @timestamp, then click the Create data view button to finish. (Screenshots: Kibana data view creation.)

For log analysis, open Analytics -> Discover and select logstash_* to query and analyze the logs. (Screenshots: menu, query result page.)
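The per-type topics verified above come from Filebeat's `topic: '%{[fields.topic]}'` format string, which resolves the Kafka topic from each event's custom `fields`. Below is a minimal sketch of that resolution logic, for illustration only: the `resolveTopic` helper and the flat `"fields.topic"` map key are assumptions of this sketch, not Filebeat internals.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TopicRouting {

    // Expands a '%{[a.b]}' style format string against an event's field map,
    // mimicking how Filebeat derives the Kafka topic from `fields.topic`.
    static String resolveTopic(String format, Map<String, String> eventFields) {
        Matcher m = Pattern.compile("%\\{\\[([^\\]]+)\\]\\}").matcher(format);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Missing fields resolve to an empty string in this sketch.
            String value = eventFields.getOrDefault(m.group(1), "");
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // An event harvested from /data/gitegg/log/*/*api.log carries
        // fields.topic = api_log (set in the filestream input above).
        Map<String, String> event = Map.of("fields.topic", "api_log");
        System.out.println(resolveTopic("%{[fields.topic]}", event)); // prints api_log
    }
}
```

This is why each input defines its own `fields.topic`: the output section stays a single block, and routing is data-driven per event.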

SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (37): Microservice Logging System Design and Implementation (Part 1)

For the requirements business developers typically face, we divide logs into operation (request) logs and system runtime logs. Operation (request) logs let administrators and operations staff easily query and trace, in the system UI, exactly what a user did, which helps analyze user behavior. System runtime logs are further divided into levels (Log4j2): OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL. Developers choose these levels when writing code; the logs are recorded at runtime and help developers locate and fix problems and find performance bottlenecks.

We can record operation logs with a custom annotation plus AOP interception of Controller requests; runtime logs can use log4j2 or logback. Under a SpringCloud microservice architecture, the Gateway can record operation (request) logs centrally, but because microservices are deployed as distributed clusters with multiple instances of the same service, log tracing requires Skywalking and ELK for concrete trace analysis and recording.

Because of the recently disclosed log4j2 and logback vulnerabilities, be sure to pick the latest patched versions. Published comparisons show log4j2 significantly outperforming logback, so we replace SpringBoot's default Logback with Log4j2.

When designing the framework, we tried to anticipate the logging use cases and made the implementation dynamically configurable, so that the appropriate mechanism can be chosen per business need. For common requirements, we implement the microservice logging system as follows:

Operation logs:
- AOP with a custom annotation intercepting Controller requests. Advantage: simple; a single annotation records the operation log. Drawback: hard-coded into the source, so flexibility is poor.
- Centralized recording in the Gateway driven by configuration. Advantage: configurable; which operations are logged can be changed at any time. Drawback: the configuration-based implementation is somewhat more complex.

The two approaches each have strengths and weaknesses. Either way, the records are written through Log4j2, whose configuration can dynamically route them to files, a relational database such as MySQL, a NoSQL database such as MongoDB, a message middleware such as Kafka, and so on.

System logs: recorded with Log4j2 and collected, analyzed, and displayed with ELK. For system logs we simply take the common approach: write to log files via Log4j2, then collect, analyze, and display with ELK. The concrete steps follow.

I. Configuring SkyWalking + Log4j2 to Print the Tracing TraceId

Long ago, the most widely used Java logging tool was log4j. Its creator later designed logback, which improved on log4j (see the official comparison), which is why SpringBoot uses logback by default. More recently, Apache upgraded Log4j to log4j2, which beats both log4j and logback in design and performance; comparative test reports are easy to find online, so we will not repeat them here. We therefore choose the most suitable logging tool currently available.

1. Replace SpringBoot's default Logback with Log4j2

Exclude the logback pulled in by spring-boot-starter-web and similar dependencies:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <!-- drop SpringBoot's default logback configuration -->
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```

Add the spring-boot-starter-log4j2 dependency. Because the Log4j2 version brought in by this SpringBoot version has known vulnerabilities, exclude the default Log4j2 here and import the latest patched version:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-api</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.logging.log4j</groupId>
```
```xml
            <artifactId>log4j-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```

Add the patched log4j2 dependencies:

```xml
<!-- patched log4j2 version -->
<log4j2.version>2.17.1</log4j2.version>
<!-- log4j2 supports async logging through the disruptor dependency; if async logging is not needed, this dependency can be dropped -->
<log4j2.disruptor.version>3.4.4</log4j2.disruptor.version>

<!-- fix the log4j2 vulnerabilities -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>${log4j2.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>${log4j2.version}</version>
</dependency>
<!-- lets log4j2 read spring configuration -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-spring-boot</artifactId>
    <version>${log4j2.version}</version>
</dependency>
<!-- disruptor for log4j2 async logging; optional if async logging is not needed -->
<dependency>
    <groupId>com.lmax</groupId>
    <artifactId>disruptor</artifactId>
    <version>${log4j2.disruptor.version}</version>
</dependency>
```

Because SpringBoot has many transitive dependencies, and other jars in the project also depend on logback, use Maven to locate every jar that depends on logback and exclude them one by one. Run this Maven command in the project directory:

```shell
mvn dependency:tree -Dverbose -Dincludes="ch.qos.logback:logback-classic"
```

Every jar shown in the output (screenshot: logback dependency tree) depends on logback; all of them must be excluded, or they will conflict with log4j2.

2. Add the dependency that prints the SkyWalking tracing TraceId

```xml
<!-- skywalking-log4j2 trace id version -->
<skywalking.log4j2.version>6.4.0</skywalking.log4j2.version>

<!-- skywalking-log4j2 trace id -->
<dependency>
    <groupId>org.apache.skywalking</groupId>
    <artifactId>apm-toolkit-log4j-2.x</artifactId>
    <version>${skywalking.log4j2.version}</version>
</dependency>
```

3. A Log4j2 configuration example

Configure your own log4j2.xml; putting [%traceId] in the Pattern prints the tracing id. To read values set in SpringBoot's yaml configuration, the log4j-spring-boot dependency must be included.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Log levels in priority order: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL -->
<configuration monitorInterval="5" packages="org.apache.skywalking.apm.toolkit.log.log4j.v2.x">
    <!-- variables -->
    <Properties>
        <!-- Output format: %date the date, %traceId the SkyWalking trace id, %thread the thread name,
             %-5level the level left-padded to 5 characters, %m the log message, %n a newline -->
        <!-- %c class details, %M method name, %pid the pid, %line the line at which the log was printed -->
        <!-- %logger{80}: logger name up to 80 characters -->
        <!-- value="${LOCAL_IP_HOSTNAME} %date [%p] %C [%thread] pid:%pid line:%line %throwable %c{10} %m%n"/> -->
        <property name="CONSOLE_LOG_PATTERN"
                  value="%d %highlight{%-5level [%traceId] pid:%pid-%line}{ERROR=Bright RED, WARN=Bright Yellow, INFO=Bright Green, DEBUG=Bright Cyan, TRACE=Bright White} %style{[%t]}{bright,magenta} %style{%c{1.}.%M(%L)}{cyan}: %msg%n"/>
        <property name="LOG_PATTERN"
                  value="%d %highlight{%-5level [%traceId] pid:%pid-%line}{ERROR=Bright RED, WARN=Bright Yellow, INFO=Bright Green, DEBUG=Bright Cyan, TRACE=Bright White} %style{[%t]}{bright,magenta} %style{%c{1.}.%M(%L)}{cyan}: %msg%n"/>
        <!-- log path; can read logging.file.path set in application.yaml -->
        <property name="FILE_PATH" value="/var/log"/>
        <property name="FILE_STORE_MAX" value="50MB"/>
        <property name="FILE_WRITE_INTERVAL" value="1"/>
        <property name="LOG_MAX_HISTORY" value="60"/>
    </Properties>

    <appenders>
        <!-- console output -->
        <console name="Console" target="SYSTEM_OUT">
            <!-- output format -->
            <PatternLayout pattern="${CONSOLE_LOG_PATTERN}"/>
            <!-- the console only prints messages at this level and above (onMatch); everything else is rejected (onMismatch) -->
            <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
        </console>

        <!-- Prints all info-level-and-above messages; each time the file exceeds the configured size,
             that chunk is compressed and archived into a folder named by year-month -->
        <RollingRandomAccessFile name="RollingFileInfo" fileName="${FILE_PATH}/info.log"
                                 filePattern="${FILE_PATH}/INFO-%d{yyyy-MM-dd}_%i.log.gz">
            <!-- only accept messages at this level and above (onMatch); reject the rest (onMismatch) -->
            <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <Policies>
                <!-- interval: how often to roll over; the default is 1 hour -->
                <TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
                <SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
            </Policies>
            <!-- If DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting -->
            <DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
        </RollingRandomAccessFile>

        <!-- Prints all debug-level-and-above messages; rolled and compressed the same way -->
```
```xml
        <RollingRandomAccessFile name="RollingFileDebug" fileName="${FILE_PATH}/debug.log"
                                 filePattern="${FILE_PATH}/DEBUG-%d{yyyy-MM-dd}_%i.log.gz">
            <!-- only accept messages at this level and above (onMatch); reject the rest (onMismatch) -->
            <ThresholdFilter level="debug" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <Policies>
                <!-- interval: how often to roll over; the default is 1 hour -->
                <TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
                <SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
            </Policies>
            <!-- If DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting -->
            <DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
        </RollingRandomAccessFile>

        <!-- Prints all warn-level-and-above messages; rolled and compressed the same way -->
        <RollingRandomAccessFile name="RollingFileWarn" fileName="${FILE_PATH}/warn.log"
                                 filePattern="${FILE_PATH}/WARN-%d{yyyy-MM-dd}_%i.log.gz">
            <!-- only accept messages at this level and above (onMatch); reject the rest (onMismatch) -->
            <ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <Policies>
                <!-- interval: how often to roll over; the default is 1 hour -->
                <TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
                <SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
            </Policies>
            <!-- If DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting -->
            <DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
        </RollingRandomAccessFile>

        <!-- Prints all error-level-and-above messages; rolled and compressed the same way -->
        <RollingRandomAccessFile name="RollingFileError" fileName="${FILE_PATH}/error.log"
                                 filePattern="${FILE_PATH}/ERROR-%d{yyyy-MM-dd}_%i.log.gz">
            <!-- only accept messages at this level and above (onMatch); reject the rest (onMismatch) -->
            <ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="${LOG_PATTERN}"/>
            <Policies>
                <!-- interval: how often to roll over; the default is 1 hour -->
                <TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/>
                <SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/>
            </Policies>
            <!-- If DefaultRolloverStrategy is not set, at most 7 files are kept in the same folder before overwriting -->
            <DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/>
        </RollingRandomAccessFile>
    </appenders>

    <!-- Logger nodes configure logging for specific packages/classes, e.g. a different level per package -->
    <!-- Define loggers; an appender only takes effect once a defined logger references it -->
    <loggers>
        <!-- filter out noisy DEBUG output from spring and mybatis -->
        <logger name="org.mybatis" level="info" additivity="false">
            <AppenderRef ref="Console"/>
        </logger>
        <!-- with additivity="false", a child logger writes only to its own appenders, not to its parent's -->
        <Logger name="org.springframework" level="info" additivity="false">
            <AppenderRef ref="Console"/>
        </Logger>
        <AsyncLogger name="AsyncLogger" level="debug" additivity="false">
            <AppenderRef ref="Console"/>
            <AppenderRef ref="RollingFileDebug"/>
            <AppenderRef ref="RollingFileInfo"/>
            <AppenderRef ref="RollingFileWarn"/>
            <AppenderRef ref="RollingFileError"/>
        </AsyncLogger>
        <root level="trace">
            <appender-ref ref="Console"/>
            <appender-ref ref="RollingFileDebug"/>
            <appender-ref ref="RollingFileInfo"/>
            <appender-ref ref="RollingFileWarn"/>
            <appender-ref ref="RollingFileError"/>
        </root>
    </loggers>
</configuration>
```

4. Colored logs in the IDEA console

The *LOG_PATTERN settings in the log4j2.xml above already assign each log level its own color, but IDEA shows no colors by default. Open Edit Configurations in the run window (top right), add -Dlog4j.skipJansi=false to VM options, and run again; the IDEA console then shows colored logs.

II. Custom Log Levels for a Configurable Log Store

Although Log4j2 can save logs to MySQL, MongoDB, and the like, we do not recommend writing logs to those databases directly from Log4j2. Doing so means every microservice that includes the Log4j2 component adds another database connection pool (do not count on reusing the business system's existing pool, because not every microservice has a log table), and under high concurrency this becomes a fatal problem for the business system as a whole. If the system sees little traffic and you adopt this approach anyway to reduce maintenance complexity and avoid extra components, then it is worth asking whether the business system needs a microservice architecture at all.

Under high concurrency we recommend two ways to record operation logs: first, write the logs to a message queue as a buffer in front of the database and let a consumer persist them in batches, which reduces coupling and keeps logging from affecting business operations; second, use Log4j2's asynchronous file logging together with an ELK log collection and analysis stack.

1. Custom levels for operation logs and API access logs

The default log levels cannot express our operation-log and API-log requirements, so we extend Log4j2 with custom levels. Create LogLevelConstant to define them:

```java
/**
 * Custom log levels.
 * Business operation log levels (the more severe the level, the smaller the number):
 * off 0, fatal 100, error 200, warn 300, info 400, debug 500
 * warn < operation < api < info
 * @author GitEgg
 */
public class LogLevelConstant {

    /**
     * Operation log
     */
    public static final Level OPERATION_LEVEL = Level.forName("OPERATION", 310);

    /**
     * API log
     */
    public static final Level API_LEVEL = Level.forName("API", 320);

    /**
     * Operation log message template
     */
    public static final String OPERATION_LEVEL_MESSAGE = "{type:'operation', content:{}}";

    /**
     * API log message template
     */
    public static final String API_LEVEL_MESSAGE = "{type:'api', content:{}}";
}
```

Note that when logging you need the @Log4j2 annotation rather than @Slf4j, because the methods @Slf4j provides cannot take a custom level. Test code:

```java
log.log(LogLevelConstant.OPERATION_LEVEL, "operation log: {} , {}", "param1", "param2");
log.log(LogLevelConstant.API_LEVEL, "api log: {} , {}", "param1", "param2");
```

2. Custom operation-log annotations

When recording operation logs, we would rather not write logging code directly inside business methods, so we define annotations that record the logs for us. Following Spring AOP's advice types, we define three annotation-driven log types: BeforeLog (before the method runs), AfterLog (after the method runs), and AroundLog (before and after).

BeforeLog:

```java
/**
 * @ClassName: BeforeLog
 * @Description: record a before-advice log
 * @author GitEgg
 * @date 2019-04-27 15:36:29
 */
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.METHOD })
public @interface BeforeLog {
    String name() default "";
}
```

AfterLog:

```java
/**
 * @ClassName: AfterLog
 * @Description: record an after-advice log
 * @author GitEgg
 * @date 2019-04-27 15:36:29
 */
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.METHOD })
public @interface AfterLog {
    String name() default "";
}
```

AroundLog:

```java
/**
 * @ClassName: AroundLog
 * @Description: record an around-advice log
 * @author GitEgg
 * @date 2019-04-27 15:36:29
 */
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.METHOD })
public @interface AroundLog {
    String name() default "";
}
```

With the annotations defined, write the LogAspect that implements the log-recording advice:

```java
/**
 * @ClassName: LogAspect
 * @Description:
 * @author GitEgg
 * @date 2019-04-27 16:02:12
 */
@Log4j2
@Aspect
@Component
public class LogAspect {

    /**
     * Before pointcut
     */
    @Pointcut("@annotation(com.gitegg.platform.base.annotation.log.BeforeLog)")
    public void beforeAspect() {
    }

    /**
     * After pointcut
     */
    @Pointcut("@annotation(com.gitegg.platform.base.annotation.log.AfterLog)")
    public void afterAspect() {
    }

    /**
     * Around pointcut
     */
    @Pointcut("@annotation(com.gitegg.platform.base.annotation.log.AroundLog)")
    public void aroundAspect() {
    }

    /**
     * Before advice: record the user's operation
     * @param joinPoint the join point
     */
    @Before("beforeAspect()")
    public void doBefore(JoinPoint joinPoint) {
        try {
            // handle the input parameters
            Object[] args = joinPoint.getArgs();
            StringBuffer inParams = new StringBuffer("");
            for (Object obj : args) {
                if (null != obj && !(obj instanceof ServletRequest) && !(obj instanceof ServletResponse)) {
                    String objJson = JsonUtils.objToJson(obj);
                    inParams.append(objJson);
                }
            }
            Method method = getMethod(joinPoint);
            String operationName = getBeforeLogName(method);
            addSysLog(joinPoint, String.valueOf(inParams), "BeforeLog", operationName);
        } catch (Exception e) {
            log.error("doBefore logging failed, error: {}", e.getMessage());
        }
    }

    /**
     * After advice: record the user's operation
     * @param joinPoint the join point
     */
    @AfterReturning(value = "afterAspect()", returning = "returnObj")
    public void doAfter(JoinPoint joinPoint, Object returnObj) {
        try {
            // handle the return value
            String outParams = JsonUtils.objToJson(returnObj);
            Method method = getMethod(joinPoint);
            String operationName = getAfterLogName(method);
            addSysLog(joinPoint, "AfterLog", outParams, operationName);
        } catch (Exception e) {
            log.error("doAfter logging failed, error: {}", e.getMessage());
        }
    }

    /**
     * Around advice: intercept and record the user's operation
     * @param joinPoint the join point
     * @throws Throwable
     */
    @Around("aroundAspect()")
    public Object doAround(ProceedingJoinPoint joinPoint) throws Throwable {
        // return value
        Object value = null;
        // whether the intercepted method has been executed
        boolean execute = false;
        // input parameters
        Object[] args = joinPoint.getArgs();
        try {
            // handle the input parameters
            StringBuffer inParams = new StringBuffer();
            for (Object obj : args) {
                if (null != obj && !(obj instanceof ServletRequest) && !(obj instanceof ServletResponse)) {
                    String objJson = JsonUtils.objToJson(obj);
                    inParams.append(objJson);
                }
            }
            execute = true;
            // execute the target method
            value = joinPoint.proceed(args);
            // handle the return value
            String outParams = JsonUtils.objToJson(value);
            Method method = getMethod(joinPoint);
            String operationName = getAroundLogName(method);
            // record the log
            addSysLog(joinPoint, String.valueOf(inParams), String.valueOf(outParams), operationName);
        } catch (Exception e) {
            log.error("doAround logging failed, error: {}", e.getMessage());
            // if the target method did not run yet, run it anyway:
            // a logging failure must not break the business flow
            if (!execute) {
                value = joinPoint.proceed(args);
            }
            throw e;
        }
        return value;
    }

    /**
     * Persist the log entry.
     * @Title: addSysLog
     * @Description:
     * @param joinPoint
     * @param inParams
     * @param outParams
     * @param
```
operationName * @return void @SneakyThrows public void addSysLog(JoinPoint joinPoint, String inParams, String outParams, String operationName) throws Exception { try { HttpServletRequest request = ((ServletRequestAttributes) RequestContextHolder.getRequestAttributes()) .getRequest(); String ip = request.getRemoteAddr(); GitEggLog gitEggLog = new GitEggLog(); gitEggLog.setMethodName(joinPoint.getSignature().getName()); gitEggLog.setInParams(String.valueOf(inParams)); gitEggLog.setOutParams(String.valueOf(outParams)); gitEggLog.setOperationIp(ip); gitEggLog.setOperationName(operationName); log.log(LogLevelConstant.OPERATION_LEVEL,LogLevelConstant.OPERATION_LEVEL_MESSAGE, JsonUtils.objToJson(gitEggLog)); } catch (Exception e) { log.error("addSysLog日志记录异常,异常信息:{}", e.getMessage()); throw e; * 获取注解中对方法的描述信息 * @param joinPoint 切点 * @return 方法描述 * @throws Exception public Method getMethod(JoinPoint joinPoint) throws Exception { String targetName = joinPoint.getTarget().getClass().getName(); String methodName = joinPoint.getSignature().getName(); Object[] arguments = joinPoint.getArgs(); Class<?> targetClass = Class.forName(targetName); Method[] methods = targetClass.getMethods(); Method methodReturn = null; for (Method method : methods) { if (method.getName().equals(methodName)) { Class<?>[] clazzs = method.getParameterTypes(); if (clazzs.length == arguments.length) { methodReturn = method; break; return methodReturn; * getBeforeLogName(获取before名称) * @Title: getBeforeLogName * @Description: * @param method * @return String public String getBeforeLogName(Method method) { String name = method.getAnnotation(BeforeLog.class).name(); return name; * getAfterLogName(获取after名称) * @Title: getAfterLogName * @Description: * @param method * @return String public String getAfterLogName(Method method) { String name = method.getAnnotation(AfterLog.class).name(); return name; * getAroundLogName(获取around名称) * @Title: getAroundLogName * @Description: * @param method * @return String public 
String getAroundLogName(Method method) { String name = method.getAnnotation(AroundLog.class).name(); return name;
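The custom levels rely on Log4j2's numeric ordering: a smaller intLevel means a more severe level, and a threshold filter accepts events whose level value is less than or equal to the threshold's. A minimal self-contained sketch of that ordering rule (the class and method names here are hypothetical stand-ins, not the real Log4j2 API):

```java
// Sketch (NOT the real Log4j2 API) of how Log4j2 orders levels:
// a smaller integer means a more severe level, so OPERATION(310)
// and API(320) sit between WARN(300) and INFO(400).
public class LevelOrderingSketch {

    public static final int WARN = 300;
    public static final int OPERATION = 310; // custom level from LogLevelConstant
    public static final int API = 320;       // custom level from LogLevelConstant
    public static final int INFO = 400;

    // ThresholdFilter-style check: accept events at least as severe as the threshold.
    public static boolean passesThreshold(int eventLevel, int thresholdLevel) {
        return eventLevel <= thresholdLevel;
    }

    public static void main(String[] args) {
        System.out.println(passesThreshold(OPERATION, INFO)); // true: an INFO threshold lets OPERATION through
        System.out.println(passesThreshold(OPERATION, WARN)); // false: a WARN threshold filters OPERATION out
    }
}
```

This is why 310 and 320 were chosen: a logger set to info still emits OPERATION and API events, while a logger set to warn suppresses them.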

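The aspect resolves the operation name by reading name() from the annotation on the intercepted method. A standalone sketch of that reflection step (the nested annotation and DemoService below are hypothetical stand-ins for the real BeforeLog annotation and a business class):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Standalone sketch of how an aspect can read name() from a method annotation.
public class AnnotationNameSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD})
    public @interface BeforeLog {
        String name() default "";
    }

    public static class DemoService {
        @BeforeLog(name = "create-user")
        public void createUser() {
        }
    }

    // Mirrors getBeforeLogName(Method) in the aspect: RUNTIME retention
    // is what makes the annotation visible to reflection here.
    public static String beforeLogName(Method method) {
        return method.getAnnotation(BeforeLog.class).name();
    }

    public static void main(String[] args) throws Exception {
        Method m = DemoService.class.getMethod("createUser");
        System.out.println(beforeLogName(m)); // prints create-user
    }
}
```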
SpringCloud Microservices in Action: Building an Enterprise-Grade Development Framework (37): Microservice Logging System Design and Implementation (Part 2)

二、自定义扩展日志级别,实现可配置的日志存取方式上面代码工作完成之后,接下来需要在log4j2.xml中配置自定义日志级别,实现将自定义的日志打印到指定的文件中:<!-- 这个会打印出所有的operation级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档 --> <RollingRandomAccessFile name="RollingFileOperation" fileName="${FILE_PATH}/operation.log" filePattern="${FILE_PATH}/OPERATION-%d{yyyy-MM-dd}_%i.log.gz"> <!--只输出action level级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <LevelRangeFilter minLevel="OPERATION" maxLevel="OPERATION" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/> <SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> <DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/> </RollingRandomAccessFile> <!-- 这个会打印出所有的api级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档 --> <RollingRandomAccessFile name="RollingFileApi" fileName="${FILE_PATH}/api.log" filePattern="${FILE_PATH}/API-%d{yyyy-MM-dd}_%i.log.gz"> <!--只输出visit level级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <LevelRangeFilter minLevel="API" maxLevel="API" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/> <SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> <DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/> </RollingRandomAccessFile> <loggers> <AsyncLogger name="AsyncLogger" level="debug" additivity="false"> <AppenderRef ref="Console"/> <AppenderRef ref="RollingFileDebug"/> <AppenderRef ref="RollingFileInfo"/> <AppenderRef ref="RollingFileWarn"/> <AppenderRef ref="RollingFileError"/> <AppenderRef ref="RollingFileOperation"/> <AppenderRef ref="RollingFileApi"/> </AsyncLogger> <root level="trace"> <appender-ref ref="Console"/> <appender-ref ref="RollingFileDebug"/> <appender-ref 
ref="RollingFileInfo"/> <appender-ref ref="RollingFileWarn"/> <appender-ref ref="RollingFileError"/> <AppenderRef ref="RollingFileOperation"/> <AppenderRef ref="RollingFileApi"/> </root> </loggers>3、实现将日志保存到Kafka  前面的配置已基本满足了我们对于日志系统的基础需求,在这里,我们可以考虑通过配置Log4j2的配置文件,来实现动态配置将日志文件记录到指定的文件或消息中间件。  Log4j2将日志消息发送到Kafka需要用到Kfaka的客户端jar包,所以,这里首先引入kafka-clients包:<!-- log4j2记录到kafka需要的依赖 --> <kafka.clients.version>3.1.0</kafka.clients.version> <!-- log4j2 kafka appender --> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka-clients</artifactId> <version>${kafka.clients.version}</version> </dependency>修改log4j2.xml配置将操作日志记录到Kafka,这里需要注意,Log4j2官网说明了这里必须加<Logger name="org.apache.kafka" level="INFO" />配置,否则会出现递归调用。<Kafka name="KafkaOperationLog" topic="operation_log" ignoreExceptions="false"> <LevelRangeFilter minLevel="OPERATION" maxLevel="OPERATION" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Property name="bootstrap.servers">172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092</Property> <Property name="max.block.ms">2000</Property> </Kafka> <Kafka name="KafkaApiLog" topic="api_log" ignoreExceptions="false"> <LevelRangeFilter minLevel="API" maxLevel="API" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Property name="bootstrap.servers">172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092</Property> <Property name="max.block.ms">2000</Property> </Kafka> <!-- Logger节点用来单独指定日志的形式,比如要为指定包下的class指定不同的日志级别等 --> <!-- 然后定义loggers,只有定义了logger并引入的appender,appender才会生效 --> <loggers> <!--过滤掉spring和mybatis的一些无用的DEBUG信息--> <logger name="org.mybatis" level="info" additivity="false"> <AppenderRef ref="Console"/> </logger> <!--若是additivity设为false,则子Logger 只会在自己的appender里输出,而不会在父Logger 的appender里输出 --> <Logger name="org.springframework" level="info" additivity="false"> <AppenderRef ref="Console"/> </Logger> <!-- 避免递归记录日志 --> <Logger name="org.apache.kafka" level="INFO" /> <AsyncLogger name="AsyncLogger" 
level="debug" additivity="false"> <AppenderRef ref="Console"/> <AppenderRef ref="RollingFileDebug"/> <AppenderRef ref="RollingFileInfo"/> <AppenderRef ref="RollingFileWarn"/> <AppenderRef ref="RollingFileError"/> <AppenderRef ref="RollingFileOperation"/> <AppenderRef ref="RollingFileApi"/> <AppenderRef ref="KafkaOperationLog"/> <AppenderRef ref="KafkaApiLog"/> </AsyncLogger> <root level="trace"> <appender-ref ref="Console"/> <appender-ref ref="RollingFileDebug"/> <appender-ref ref="RollingFileInfo"/> <appender-ref ref="RollingFileWarn"/> <appender-ref ref="RollingFileError"/> <AppenderRef ref="RollingFileOperation"/> <AppenderRef ref="RollingFileApi"/> <AppenderRef ref="KafkaOperationLog"/> <AppenderRef ref="KafkaApiLog"/> </root> </loggers>综上,修改后完整的log4j.xml如下,可根据配置自己选择不将操作日志记录到文件:<?xml version="1.0" encoding="UTF-8"?> <!--日志级别以及优先级排序: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL --> <configuration monitorInterval="5" packages="org.apache.skywalking.apm.toolkit.log.log4j.v2.x"> <!--变量配置--> <Properties> <!-- 格式化输出:%date表示日期,traceId表示微服务Skywalking追踪id,%thread表示线程名,%-5level:级别从左显示5个字符宽度 %m:日志消息,%n是换行符--> <!-- %c 输出类详情 %M 输出方法名 %pid 输出pid %line 日志在哪一行被打印 --> <!-- %logger{80} 表示 Logger 名字最长80个字符 --> <!-- value="${LOCAL_IP_HOSTNAME} %date [%p] %C [%thread] pid:%pid line:%line %throwable %c{10} %m%n"/>--> <property name="CONSOLE_LOG_PATTERN" value="%d %highlight{%-5level [%traceId] pid:%pid-%line}{ERROR=Bright RED, WARN=Bright Yellow, INFO=Bright Green, DEBUG=Bright Cyan, TRACE=Bright White} %style{[%t]}{bright,magenta} %style{%c{1.}.%M(%L)}{cyan}: %msg%n"/> <property name="LOG_PATTERN" value="%d %highlight{%-5level [%traceId] pid:%pid-%line}{ERROR=Bright RED, WARN=Bright Yellow, INFO=Bright Green, DEBUG=Bright Cyan, TRACE=Bright White} %style{[%t]}{bright,magenta} %style{%c{1.}.%M(%L)}{cyan}: %msg%n"/> <!-- 读取application.yaml文件中设置的日志路径 logging.file.path--> <Property name="FILE_PATH">${spring:logging.file.path}</Property> <!-- <property 
name="FILE_PATH">D:\\log4j2_cloud</property> --> <property name="applicationName">${spring:spring.application.name}</property> <property name="FILE_STORE_MAX" value="50MB"/> <property name="FILE_WRITE_INTERVAL" value="1"/> <property name="LOG_MAX_HISTORY" value="60"/> </Properties> <appenders> <!-- 控制台输出 --> <console name="Console" target="SYSTEM_OUT"> <!-- 输出日志的格式 --> <PatternLayout pattern="${CONSOLE_LOG_PATTERN}"/> <!-- 控制台只输出level及其以上级别的信息(onMatch),其他的直接拒绝(onMismatch) --> <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/> </console> <!-- 这个会打印出所有的info及以下级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档 --> <RollingRandomAccessFile name="RollingFileInfo" fileName="${FILE_PATH}/info.log" filePattern="${FILE_PATH}/INFO-%d{yyyy-MM-dd}_%i.log.gz"> <!-- 控制台只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/> <SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> <DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/> </RollingRandomAccessFile> <!-- 这个会打印出所有的debug及以下级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档--> <RollingRandomAccessFile name="RollingFileDebug" fileName="${FILE_PATH}/debug.log" filePattern="${FILE_PATH}/DEBUG-%d{yyyy-MM-dd}_%i.log.gz"> <!--控制台只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="debug" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/> <SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> <DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/> </RollingRandomAccessFile> <!-- 
这个会打印出所有的warn及以上级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档--> <RollingRandomAccessFile name="RollingFileWarn" fileName="${FILE_PATH}/warn.log" filePattern="${FILE_PATH}/WARN-%d{yyyy-MM-dd}_%i.log.gz"> <!-- 控制台只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!-- interval属性用来指定多久滚动一次,默认是1 hour --> <TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/> <SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖 --> <DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/> </RollingRandomAccessFile> <!-- 这个会打印出所有的error及以上级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档 --> <RollingRandomAccessFile name="RollingFileError" fileName="${FILE_PATH}/error.log" filePattern="${FILE_PATH}/ERROR-%d{yyyy-MM-dd}_%i.log.gz"> <!--只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/> <SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> <DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/> </RollingRandomAccessFile> <!-- 这个会打印出所有的operation级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档 --> <RollingRandomAccessFile name="RollingFileOperation" fileName="${FILE_PATH}/operation.log" filePattern="${FILE_PATH}/OPERATION-%d{yyyy-MM-dd}_%i.log.gz"> <!--只输出action level级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <LevelRangeFilter minLevel="OPERATION" maxLevel="OPERATION" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/> <SizeBasedTriggeringPolicy 
size="${FILE_STORE_MAX}"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> <DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/> </RollingRandomAccessFile> <!-- 这个会打印出所有的api级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档 --> <RollingRandomAccessFile name="RollingFileApi" fileName="${FILE_PATH}/api.log" filePattern="${FILE_PATH}/API-%d{yyyy-MM-dd}_%i.log.gz"> <!--只输出visit level级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <LevelRangeFilter minLevel="API" maxLevel="API" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="${FILE_WRITE_INTERVAL}"/> <SizeBasedTriggeringPolicy size="${FILE_STORE_MAX}"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> <DefaultRolloverStrategy max="${LOG_MAX_HISTORY}"/> </RollingRandomAccessFile> <Kafka name="KafkaOperationLog" topic="operation_log" ignoreExceptions="false"> <LevelRangeFilter minLevel="OPERATION" maxLevel="OPERATION" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Property name="bootstrap.servers">172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092</Property> <Property name="max.block.ms">2000</Property> </Kafka> <Kafka name="KafkaApiLog" topic="api_log" ignoreExceptions="false"> <LevelRangeFilter minLevel="API" maxLevel="API" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Property name="bootstrap.servers">172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092</Property> <Property name="max.block.ms">2000</Property> </Kafka> </appenders> <!-- Logger节点用来单独指定日志的形式,比如要为指定包下的class指定不同的日志级别等 --> <!-- 然后定义loggers,只有定义了logger并引入的appender,appender才会生效 --> <loggers> <!--过滤掉spring和mybatis的一些无用的DEBUG信息--> <logger name="org.mybatis" level="info" additivity="false"> <AppenderRef ref="Console"/> </logger> <!--若是additivity设为false,则子Logger 只会在自己的appender里输出,而不会在父Logger 的appender里输出 --> <Logger 
name="org.springframework" level="info" additivity="false"> <AppenderRef ref="Console"/> </Logger> <!-- 避免递归记录日志 --> <Logger name="org.apache.kafka" level="INFO" /> <AsyncLogger name="AsyncLogger" level="debug" additivity="false"> <AppenderRef ref="Console"/> <AppenderRef ref="RollingFileDebug"/> <AppenderRef ref="RollingFileInfo"/> <AppenderRef ref="RollingFileWarn"/> <AppenderRef ref="RollingFileError"/> <AppenderRef ref="RollingFileOperation"/> <AppenderRef ref="RollingFileApi"/> <AppenderRef ref="KafkaOperationLog"/> <AppenderRef ref="KafkaApiLog"/> </AsyncLogger> <root level="trace"> <appender-ref ref="Console"/> <appender-ref ref="RollingFileDebug"/> <appender-ref ref="RollingFileInfo"/> <appender-ref ref="RollingFileWarn"/> <appender-ref ref="RollingFileError"/> <AppenderRef ref="RollingFileOperation"/> <AppenderRef ref="RollingFileApi"/> <AppenderRef ref="KafkaOperationLog"/> <AppenderRef ref="KafkaApiLog"/> </root> </loggers> </configuration>以上配置完成之后,我们对日志记录进行测试,查看日志是否记录到异步文件和kafka中,在Kfaka服务器启动消费者服务,可以实时观察日志是否推送到Kafka:操作日志接口日志4、由Gateway记录可配置的请求日志  在业务开发过程中,除了操作日志的需求,我们通常还会遇到接口日志的需求,系统需要对接口的请求做统计分析。网关负责把请求转发到各个微服务,在此处比较适合进行API日志收集。  我们必然面临着哪些服务需要收集API日志,需要收集哪些类型的API日志的问题,那么在设计的时候,我们需要考虑使API日志收集可灵活配置。基于简单配置的考虑,我们将这些配置放到Nacos配置中心,如果有更多详细定制化的需求可以设计实现系统配置界面,将配置放到Redis缓存。  因为请求中的RequestBody和ResponseBody都是只能读取一次的,所以这里需要在过滤器中对数据进行一下处理,尽管Gateway提供了缓存RequestBody的过滤器AdaptCachedBodyGlobalFilter,但是我们这里除了一些对请求的定制化需求外,有可能会用到ResponseBody,所以这里最好还是自定义过滤器。  有一款开源插件spring-cloud-gateway-plugin非常全面的实现Gateway收集请求日志的过滤器,这里我们直接引用其实现,因为此款插件除了日志记录还有其他不需要的功能,且插件依赖SpringCloud版本,所以,这里只取其日志记录的功能,并根据我们的需求进行部分调整。1、在我们的配置文件中增加如下配置项:日志插件开关记录请求参数开关记录返回参数开关需要记录API日志的微服务ID列表需要记录API日志的URL列表spring: cloud: gateway: plugin: config: # 是否开启Gateway日志插件 enable: true # requestLog==true && responseLog==false时,只记录请求参数日志;responseLog==true时,记录请求参数和返回参数。 # 记录入参 requestLog==false时,不记录日志 requestLog: true # 生产环境,尽量只记录入参,因为返回参数数据太大,且大多数情况是无意义的 # 记录出参 responseLog: true # all: 所有日志 configure:serviceId和pathList交集 
serviceId: 只记录serviceId配置列表 pathList:只记录pathList配置列表 logType: all serviceIdList: - "gitegg-oauth" - "gitegg-service-system" pathList: - "/gitegg-oauth/oauth/token" - "/gitegg-oauth/oauth/user/info"2、GatewayPluginConfig配置类,可以根据配置项,选择启用初始化哪些过滤器,根据spring-cloud-gateway-plugin GatewayPluginConfig.java修改。/** * Quoted from @see https://github.com/chenggangpro/spring-cloud-gateway-plugin * Gateway Plugin Config * @author chenggang * @date 2019/01/29 @Slf4j @Configuration public class GatewayPluginConfig { @Bean @ConditionalOnMissingBean(GatewayPluginProperties.class) @ConfigurationProperties(GatewayPluginProperties.GATEWAY_PLUGIN_PROPERTIES_PREFIX) public GatewayPluginProperties gatewayPluginProperties(){ return new GatewayPluginProperties(); @Bean @ConditionalOnBean(GatewayPluginProperties.class) @ConditionalOnMissingBean(GatewayRequestContextFilter.class) @ConditionalOnProperty(prefix = GatewayPluginProperties.GATEWAY_PLUGIN_PROPERTIES_PREFIX, value = { "enable", "requestLog" },havingValue = "true") public GatewayRequestContextFilter gatewayContextFilter(@Autowired GatewayPluginProperties gatewayPluginProperties , @Autowired(required = false) ContextExtraDataGenerator contextExtraDataGenerator){ GatewayRequestContextFilter gatewayContextFilter = new GatewayRequestContextFilter(gatewayPluginProperties, contextExtraDataGenerator); log.debug("Load GatewayContextFilter Config Bean"); return gatewayContextFilter; @Bean @ConditionalOnMissingBean(GatewayResponseContextFilter.class) @ConditionalOnProperty(prefix = GatewayPluginProperties.GATEWAY_PLUGIN_PROPERTIES_PREFIX, value = { "enable", "responseLog" }, havingValue = "true") public GatewayResponseContextFilter responseLogFilter(){ GatewayResponseContextFilter responseLogFilter = new GatewayResponseContextFilter(); log.debug("Load Response Log Filter Config Bean"); return responseLogFilter; @Bean @ConditionalOnBean(GatewayPluginProperties.class) @ConditionalOnMissingBean(RemoveGatewayContextFilter.class) 
@ConditionalOnProperty(prefix = GatewayPluginProperties.GATEWAY_PLUGIN_PROPERTIES_PREFIX, value = { "enable" }, havingValue = "true") public RemoveGatewayContextFilter removeGatewayContextFilter(){ RemoveGatewayContextFilter gatewayContextFilter = new RemoveGatewayContextFilter(); log.debug("Load RemoveGatewayContextFilter Config Bean"); return gatewayContextFilter; @Bean @ConditionalOnMissingBean(RequestLogFilter.class) @ConditionalOnProperty(prefix = GatewayPluginProperties.GATEWAY_PLUGIN_PROPERTIES_PREFIX, value = { "enable" },havingValue = "true") public RequestLogFilter requestLogFilter(@Autowired GatewayPluginProperties gatewayPluginProperties){ RequestLogFilter requestLogFilter = new RequestLogFilter(gatewayPluginProperties); log.debug("Load Request Log Filter Config Bean"); return requestLogFilter; }3、GatewayRequestContextFilter处理请求参数的过滤器,根据spring-cloud-gateway-plugin GatewayContextFilter.java修改。/** * Quoted from @see https://github.com/chenggangpro/spring-cloud-gateway-plugin * Gateway Context Filter * @author chenggang * @date 2019/01/29 @Slf4j @AllArgsConstructor public class GatewayRequestContextFilter implements GlobalFilter, Ordered { private GatewayPluginProperties gatewayPluginProperties; private ContextExtraDataGenerator contextExtraDataGenerator; private static final AntPathMatcher ANT_PATH_MATCHER = new AntPathMatcher(); * default HttpMessageReader private static final List<HttpMessageReader<?>> MESSAGE_READERS = HandlerStrategies.withDefaults().messageReaders(); @Override public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) { ServerHttpRequest request = exchange.getRequest(); GatewayContext gatewayContext = new GatewayContext(); gatewayContext.setReadRequestData(shouldReadRequestData(exchange)); gatewayContext.setReadResponseData(gatewayPluginProperties.getResponseLog()); HttpHeaders headers = request.getHeaders(); gatewayContext.setRequestHeaders(headers); if(Objects.nonNull(contextExtraDataGenerator)){ 
GatewayContextExtraData gatewayContextExtraData = contextExtraDataGenerator.generateContextExtraData(exchange); gatewayContext.setGatewayContextExtraData(gatewayContextExtraData); if(!gatewayContext.getReadRequestData()){ exchange.getAttributes().put(GatewayContext.CACHE_GATEWAY_CONTEXT, gatewayContext); log.debug("[GatewayContext]Properties Set To Not Read Request Data"); return chain.filter(exchange); gatewayContext.getAllRequestData().addAll(request.getQueryParams()); * save gateway context into exchange exchange.getAttributes().put(GatewayContext.CACHE_GATEWAY_CONTEXT, gatewayContext); MediaType contentType = headers.getContentType(); if(headers.getContentLength()>0){ if(MediaType.APPLICATION_JSON.equals(contentType) || MediaType.APPLICATION_JSON_UTF8.equals(contentType)){ return readBody(exchange, chain,gatewayContext); if(MediaType.APPLICATION_FORM_URLENCODED.equals(contentType)){ return readFormData(exchange, chain,gatewayContext); log.debug("[GatewayContext]ContentType:{},Gateway context is set with {}",contentType, gatewayContext); return chain.filter(exchange); @Override public int getOrder() { return FilterOrderEnum.GATEWAY_CONTEXT_FILTER.getOrder(); * check should read request data whether or not * @return boolean private boolean shouldReadRequestData(ServerWebExchange exchange){ if(gatewayPluginProperties.getRequestLog() && GatewayLogTypeEnum.ALL.getType().equals(gatewayPluginProperties.getLogType())){ log.debug("[GatewayContext]Properties Set Read All Request Data"); return true; boolean serviceFlag = false; boolean pathFlag = false; boolean lbFlag = false; List<String> readRequestDataServiceIdList = gatewayPluginProperties.getServiceIdList(); List<String> readRequestDataPathList = gatewayPluginProperties.getPathList(); if(!CollectionUtils.isEmpty(readRequestDataPathList) && (GatewayLogTypeEnum.PATH.getType().equals(gatewayPluginProperties.getLogType()) || GatewayLogTypeEnum.CONFIGURE.getType().equals(gatewayPluginProperties.getLogType()))){ String 
requestPath = exchange.getRequest().getPath().pathWithinApplication().value(); for(String path : readRequestDataPathList){ if(ANT_PATH_MATCHER.match(path,requestPath)){ log.debug("[GatewayContext]Properties Set Read Specific Request Data With Request Path:{},Math Pattern:{}", requestPath, path); pathFlag = true; break; Route route = exchange.getAttribute(ServerWebExchangeUtils.GATEWAY_ROUTE_ATTR); URI routeUri = route.getUri(); if(!"lb".equalsIgnoreCase(routeUri.getScheme())){ lbFlag = true; String routeServiceId = routeUri.getHost().toLowerCase(); if(!CollectionUtils.isEmpty(readRequestDataServiceIdList) && (GatewayLogTypeEnum.SERVICE.getType().equals(gatewayPluginProperties.getLogType()) || GatewayLogTypeEnum.CONFIGURE.getType().equals(gatewayPluginProperties.getLogType()))){ if(readRequestDataServiceIdList.contains(routeServiceId)){ log.debug("[GatewayContext]Properties Set Read Specific Request Data With ServiceId:{}",routeServiceId); serviceFlag = true; if (GatewayLogTypeEnum.CONFIGURE.getType().equals(gatewayPluginProperties.getLogType()) && serviceFlag && pathFlag && !lbFlag) return true; else if (GatewayLogTypeEnum.SERVICE.getType().equals(gatewayPluginProperties.getLogType()) && serviceFlag && !lbFlag) return true; else if (GatewayLogTypeEnum.PATH.getType().equals(gatewayPluginProperties.getLogType()) && pathFlag) return true; return false; * ReadFormData * @param exchange * @param chain * @return private Mono<Void> readFormData(ServerWebExchange exchange, GatewayFilterChain chain, GatewayContext gatewayContext){ HttpHeaders headers = exchange.getRequest().getHeaders(); return exchange.getFormData() .doOnNext(multiValueMap -> { gatewayContext.setFormData(multiValueMap); gatewayContext.getAllRequestData().addAll(multiValueMap); log.debug("[GatewayContext]Read FormData Success"); .then(Mono.defer(() -> { Charset charset = headers.getContentType().getCharset(); charset = charset == null? 
StandardCharsets.UTF_8:charset; String charsetName = charset.name(); MultiValueMap<String, String> formData = gatewayContext.getFormData(); * formData is empty just return if(null == formData || formData.isEmpty()){ return chain.filter(exchange); StringBuilder formDataBodyBuilder = new StringBuilder(); String entryKey; List<String> entryValue; try { * repackage form data for (Map.Entry<String, List<String>> entry : formData.entrySet()) { entryKey = entry.getKey(); entryValue = entry.getValue(); if (entryValue.size() > 1) { for(String value : entryValue){ formDataBodyBuilder.append(entryKey).append("=").append(URLEncoder.encode(value, charsetName)).append("&"); } else { formDataBodyBuilder.append(entryKey).append("=").append(URLEncoder.encode(entryValue.get(0), charsetName)).append("&"); }catch (UnsupportedEncodingException e){} * substring with the last char '&' String formDataBodyString = ""; if(formDataBodyBuilder.length()>0){ formDataBodyString = formDataBodyBuilder.substring(0, formDataBodyBuilder.length() - 1); * get data bytes byte[] bodyBytes = formDataBodyString.getBytes(charset); int contentLength = bodyBytes.length; HttpHeaders httpHeaders = new HttpHeaders(); httpHeaders.putAll(exchange.getRequest().getHeaders()); httpHeaders.remove(HttpHeaders.CONTENT_LENGTH); * in case of content-length not matched httpHeaders.setContentLength(contentLength); * use BodyInserter to InsertFormData Body BodyInserter<String, ReactiveHttpOutputMessage> bodyInserter = BodyInserters.fromObject(formDataBodyString); CachedBodyOutputMessage cachedBodyOutputMessage = new CachedBodyOutputMessage(exchange, httpHeaders); log.debug("[GatewayContext]Rewrite Form Data :{}",formDataBodyString); return bodyInserter.insert(cachedBodyOutputMessage, new BodyInserterContext()) .then(Mono.defer(() -> { ServerHttpRequestDecorator decorator = new ServerHttpRequestDecorator( exchange.getRequest()) { @Override public HttpHeaders getHeaders() { return httpHeaders; @Override public Flux<DataBuffer> 
getBody() {
                return cachedBodyOutputMessage.getBody();
            }
        };
        return chain.filter(exchange.mutate().request(decorator).build());
    }

    /**
     * Read JSON body
     * @param exchange
     * @param chain
     * @return
     */
    private Mono<Void> readBody(ServerWebExchange exchange, GatewayFilterChain chain, GatewayContext gatewayContext) {
        return DataBufferUtils.join(exchange.getRequest().getBody())
                .flatMap(dataBuffer -> {
                    /*
                     * read the body Flux<DataBuffer>, and release the buffer
                     * //TODO when SpringCloudGateway Version Release To G.SR2, this can be updated with the new version's feature
                     * see PR https://github.com/spring-cloud/spring-cloud-gateway/pull/1095
                     */
                    byte[] bytes = new byte[dataBuffer.readableByteCount()];
                    dataBuffer.read(bytes);
                    DataBufferUtils.release(dataBuffer);
                    Flux<DataBuffer> cachedFlux = Flux.defer(() -> {
                        DataBuffer buffer = exchange.getResponse().bufferFactory().wrap(bytes);
                        DataBufferUtils.retain(buffer);
                        return Mono.just(buffer);
                    });
                    // repackage ServerHttpRequest
                    ServerHttpRequest mutatedRequest = new ServerHttpRequestDecorator(exchange.getRequest()) {
                        @Override
                        public Flux<DataBuffer> getBody() {
                            return cachedFlux;
                        }
                    };
                    ServerWebExchange mutatedExchange = exchange.mutate().request(mutatedRequest).build();
                    return ServerRequest.create(mutatedExchange, MESSAGE_READERS)
                            .bodyToMono(String.class)
                            .doOnNext(objectValue -> {
                                gatewayContext.setRequestBody(objectValue);
                                log.debug("[GatewayContext]Read JsonBody Success");
                            }).then(chain.filter(mutatedExchange));
                });
    }
}
4. GatewayResponseContextFilter — the filter that captures the response body, adapted from spring-cloud-gateway-plugin's ResponseLogFilter.java.
/**
 * Quoted from @see https://github.com/chenggangpro/spring-cloud-gateway-plugin
 * @author: chenggang
 * @createTime: 2019-04-11
 * @version: v1.2.0
 */
@Slf4j
public class GatewayResponseContextFilter implements GlobalFilter, Ordered {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        GatewayContext gatewayContext = exchange.getAttribute(GatewayContext.CACHE_GATEWAY_CONTEXT);
        if (!gatewayContext.getReadResponseData()) {
            log.debug("[ResponseLogFilter]Properties Set Not To Read Response Data");
            return chain.filter(exchange);
        }
        ServerHttpResponseDecorator responseDecorator = new ServerHttpResponseDecorator(exchange.getResponse()) {
            @Override
            public Mono<Void> writeWith(Publisher<? extends DataBuffer> body) {
                return DataBufferUtils.join(Flux.from(body))
                        .flatMap(dataBuffer -> {
                            byte[] bytes = new byte[dataBuffer.readableByteCount()];
                            dataBuffer.read(bytes);
                            DataBufferUtils.release(dataBuffer);
                            Flux<DataBuffer> cachedFlux = Flux.defer(() -> {
                                DataBuffer buffer = exchange.getResponse().bufferFactory().wrap(bytes);
                                DataBufferUtils.retain(buffer);
                                return Mono.just(buffer);
                            });
                            BodyInserter<Flux<DataBuffer>, ReactiveHttpOutputMessage> bodyInserter =
                                    BodyInserters.fromDataBuffers(cachedFlux);
                            CachedBodyOutputMessage outputMessage =
                                    new CachedBodyOutputMessage(exchange, exchange.getResponse().getHeaders());
                            DefaultClientResponse clientResponse = new DefaultClientResponse(
                                    new ResponseAdapter(cachedFlux, exchange.getResponse().getHeaders()),
                                    ExchangeStrategies.withDefaults());
                            Optional<MediaType> optionalMediaType = clientResponse.headers().contentType();
                            if (!optionalMediaType.isPresent()) {
                                log.debug("[ResponseLogFilter]Response ContentType Is Not Exist");
                                return Mono.defer(() -> bodyInserter.insert(outputMessage, new BodyInserterContext())
                                        .then(Mono.defer(() -> {
                                            Flux<DataBuffer> messageBody = cachedFlux;
                                            HttpHeaders headers = getDelegate().getHeaders();
                                            if (!headers.containsKey(HttpHeaders.TRANSFER_ENCODING)) {
                                                messageBody = messageBody.doOnNext(data ->
                                                        headers.setContentLength(data.readableByteCount()));
                                            }
                                            return getDelegate().writeWith(messageBody);
                                        })));
                            }
                            MediaType contentType = optionalMediaType.get();
                            if (!contentType.equals(MediaType.APPLICATION_JSON)
                                    && !contentType.equals(MediaType.APPLICATION_JSON_UTF8)) {
                                log.debug("[ResponseLogFilter]Response ContentType Is Not APPLICATION_JSON Or APPLICATION_JSON_UTF8");
                                return Mono.defer(() -> bodyInserter.insert(outputMessage, new BodyInserterContext())
                                        .then(Mono.defer(() -> {
                                            Flux<DataBuffer> messageBody = cachedFlux;
                                            HttpHeaders headers = getDelegate().getHeaders();
                                            if (!headers.containsKey(HttpHeaders.TRANSFER_ENCODING)) {
                                                messageBody = messageBody.doOnNext(data ->
                                                        headers.setContentLength(data.readableByteCount()));
                                            }
                                            return getDelegate().writeWith(messageBody);
                                        })));
                            }
                            return clientResponse.bodyToMono(Object.class)
                                    .doOnNext(originalBody -> {
                                        GatewayContext context = exchange.getAttribute(GatewayContext.CACHE_GATEWAY_CONTEXT);
                                        context.setResponseBody(originalBody);
                                        log.debug("[ResponseLogFilter]Read Response Data To Gateway Context Success");
                                    })
                                    .then(Mono.defer(() -> bodyInserter.insert(outputMessage, new BodyInserterContext())
                                            .then(Mono.defer(() -> {
                                                Flux<DataBuffer> messageBody = cachedFlux;
                                                HttpHeaders headers = getDelegate().getHeaders();
                                                if (!headers.containsKey(HttpHeaders.TRANSFER_ENCODING)) {
                                                    messageBody = messageBody.doOnNext(data ->
                                                            headers.setContentLength(data.readableByteCount()));
                                                }
                                                return getDelegate().writeWith(messageBody);
                                            }))));
                        });
            }

            @Override
            public Mono<Void> writeAndFlushWith(Publisher<? extends Publisher<? extends DataBuffer>> body) {
                return writeWith(Flux.from(body).flatMapSequential(p -> p));
            }
        };
        return chain.filter(exchange.mutate().response(responseDecorator).build());
    }

    @Override
    public int getOrder() {
        return FilterOrderEnum.RESPONSE_DATA_FILTER.getOrder();
    }

    public class ResponseAdapter implements ClientHttpResponse {

        private final Flux<DataBuffer> flux;
        private final HttpHeaders headers;

        public ResponseAdapter(Publisher<?
extends DataBuffer> body, HttpHeaders headers) {
            this.headers = headers;
            if (body instanceof Flux) {
                flux = (Flux) body;
            } else {
                flux = ((Mono) body).flux();
            }
        }

        @Override
        public Flux<DataBuffer> getBody() {
            return flux;
        }

        @Override
        public HttpHeaders getHeaders() {
            return headers;
        }

        @Override
        public HttpStatus getStatusCode() {
            return null;
        }

        @Override
        public int getRawStatusCode() {
            return 0;
        }

        @Override
        public MultiValueMap<String, ResponseCookie> getCookies() {
            return null;
        }
    }
}
5. RemoveGatewayContextFilter — the filter that clears the cached request context, adapted from spring-cloud-gateway-plugin's RemoveGatewayContextFilter.java.
/**
 * Quoted from @see https://github.com/chenggangpro/spring-cloud-gateway-plugin
 * remove gatewayContext Attribute
 * @author chenggang
 * @date 2019/06/19
 */
@Slf4j
public class RemoveGatewayContextFilter implements GlobalFilter, Ordered {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        return chain.filter(exchange)
                .doFinally(s -> exchange.getAttributes().remove(GatewayContext.CACHE_GATEWAY_CONTEXT));
    }

    @Override
    public int getOrder() {
        return HIGHEST_PRECEDENCE;
    }
}
6. RequestLogFilter — the filter that writes the log records, adapted from spring-cloud-gateway-plugin's RequestLogFilter.java.
/**
 * Quoted from @see https://github.com/chenggangpro/spring-cloud-gateway-plugin
 * Filter To Log Request And Response(exclude response body)
 * @author chenggang
 * @date 2019/01/29
 */
@Log4j2
@AllArgsConstructor
public class RequestLogFilter implements GlobalFilter, Ordered {

    private static final String START_TIME = "startTime";
    private static final String HTTP_SCHEME = "http";
    private static final String HTTPS_SCHEME = "https";

    private GatewayPluginProperties gatewayPluginProperties;

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        ServerHttpRequest request = exchange.getRequest();
        URI requestURI = request.getURI();
        String scheme = requestURI.getScheme();
        GatewayContext gatewayContext = exchange.getAttribute(GatewayContext.CACHE_GATEWAY_CONTEXT);
        // skip anything that is not an http or https request
        if ((!HTTP_SCHEME.equalsIgnoreCase(scheme) && !HTTPS_SCHEME.equalsIgnoreCase(scheme))
                || !gatewayContext.getReadRequestData()) {
            return chain.filter(exchange);
        }
        long startTime = System.currentTimeMillis();
        exchange.getAttributes().put(START_TIME, startTime);
        // when the plugin is enabled, log request and response data after the chain completes
        if (gatewayPluginProperties.getEnable()) {
            return chain.filter(exchange).then(Mono.fromRunnable(() -> logApiRequest(exchange)));
        } else {
            return chain.filter(exchange);
        }
    }

    @Override
    public int getOrder() {
        return FilterOrderEnum.REQUEST_LOG_FILTER.getOrder();
    }

    /**
     * log api request
     * @param exchange
     */
    private Mono<Void> logApiRequest(ServerWebExchange exchange) {
        ServerHttpRequest request = exchange.getRequest();
        URI requestURI = request.getURI();
        String scheme = requestURI.getScheme();
        Long startTime = exchange.getAttribute(START_TIME);
        Long endTime = System.currentTimeMillis();
        Long duration = (endTime - startTime);
        ServerHttpResponse response = exchange.getResponse();
        GatewayApiLog gatewayApiLog = new GatewayApiLog();
        gatewayApiLog.setClientHost(requestURI.getHost());
        gatewayApiLog.setClientIp(IpUtils.getIP(request));
        gatewayApiLog.setStartTime(startTime);
        gatewayApiLog.setEndTime(endTime);
        gatewayApiLog.setDuration(duration);
        gatewayApiLog.setMethod(request.getMethodValue());
        gatewayApiLog.setScheme(scheme);
        gatewayApiLog.setRequestUri(requestURI.getPath());
        gatewayApiLog.setResponseCode(String.valueOf(response.getRawStatusCode()));
        GatewayContext gatewayContext = exchange.getAttribute(GatewayContext.CACHE_GATEWAY_CONTEXT);
        // request-side logging
        if (gatewayPluginProperties.getRequestLog()) {
            MultiValueMap<String, String> queryParams = request.getQueryParams();
            if (!queryParams.isEmpty()) {
                queryParams.forEach((key, value) ->
                        log.debug("[RequestLogFilter](Request)Query Param :Key->({}),Value->({})", key, value));
                gatewayApiLog.setQueryParams(JsonUtils.mapToJson(queryParams));
            }
            HttpHeaders headers = request.getHeaders();
            MediaType contentType = headers.getContentType();
            long length = headers.getContentLength();
            log.debug("[RequestLogFilter](Request)ContentType:{},Content Length:{}", contentType, length);
            if (length > 0 && null != contentType
                    && (contentType.includes(MediaType.APPLICATION_JSON)
                    || contentType.includes(MediaType.APPLICATION_JSON_UTF8))) {
                log.debug("[RequestLogFilter](Request)JsonBody:{}", gatewayContext.getRequestBody());
                gatewayApiLog.setRequestBody(gatewayContext.getRequestBody());
            }
            if (length > 0 && null != contentType
                    && contentType.includes(MediaType.APPLICATION_FORM_URLENCODED)) {
                log.debug("[RequestLogFilter](Request)FormData:{}", gatewayContext.getFormData());
                gatewayApiLog.setRequestBody(JsonUtils.mapToJson(gatewayContext.getFormData()));
            }
        }
        // response-side logging
        if (gatewayPluginProperties.getResponseLog()) {
            log.debug("[RequestLogFilter](Response)HttpStatus:{}", response.getStatusCode());
            HttpHeaders headers = response.getHeaders();
            headers.forEach((key, value) ->
                    log.debug("[RequestLogFilter]Headers:Key->{},Value->{}", key, value));
            MediaType contentType = headers.getContentType();
            long length = headers.getContentLength();
            log.info("[RequestLogFilter](Response)ContentType:{},Content Length:{}", contentType, length);
            log.debug("[RequestLogFilter](Response)Response Body:{}", gatewayContext.getResponseBody());
            try {
                gatewayApiLog.setResponseBody(JsonUtils.objToJson(gatewayContext.getResponseBody()));
            } catch (Exception e) {
                log.error("Failed to convert the API response body to JSON: {}", e);
            }
        }
        log.debug("[RequestLogFilter](Response)Original Path:{},Cost:{} ms",
                exchange.getRequest().getURI().getPath(), duration);
        Route route = exchange.getAttribute(ServerWebExchangeUtils.GATEWAY_ROUTE_ATTR);
        URI routeUri = route.getUri();
        String routeServiceId = routeUri.getHost().toLowerCase();
        // write the API log record at the dedicated log level
        try {
            log.log(LogLevelConstant.API_LEVEL, "{\"serviceId\":{}, \"data\":{}}",
                    routeServiceId, JsonUtils.objToJson(gatewayApiLog));
        } catch (Exception e) {
            log.error("Failed to serialize the API log record: {}", e);
        }
        return Mono.empty();
    }
}
7. Start the service and test the data flow: launch a console Kafka consumer and check whether messages arrive on the api_log topic. (screenshot: consuming the api_log topic)
8. Storing and processing the log data
Once the log messages are persisted to files or to Kafka, the next question is how to process them. In a microservice cluster of any real scale, writing this volume of log data directly into a relational database such as MySQL is discouraged, if not outright prohibited. If it is genuinely required, you can consume the log messages with Spring Cloud Stream, as introduced in the previous article, and save them to the database of your choice. The next article covers building an ELK log-analysis stack to process, analyze, and extract value from these large volumes of log data.
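The advice above — avoid writing every log message straight into a relational database — is usually implemented by buffering: a consumer drains the queue and flushes records to storage in fixed-size batches, turning thousands of single-row inserts into a handful of batch inserts. A minimal sketch of that idea (the `LogBatcher` class and its flush callback are hypothetical, not part of the GitEgg codebase):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** Buffers incoming log records and flushes them in fixed-size batches. */
public class LogBatcher {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();
    private final Consumer<List<String>> flushTarget; // e.g. a JDBC batch insert
    private int flushCount = 0;

    public LogBatcher(int batchSize, Consumer<List<String>> flushTarget) {
        this.batchSize = batchSize;
        this.flushTarget = flushTarget;
    }

    /** Adds one record; flushes automatically when the batch is full. */
    public void add(String record) {
        buffer.add(record);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    /** Pushes whatever is buffered to the flush target. */
    public void flush() {
        if (!buffer.isEmpty()) {
            flushTarget.accept(new ArrayList<>(buffer));
            buffer.clear();
            flushCount++;
        }
    }

    public int getFlushCount() {
        return flushCount;
    }

    public static void main(String[] args) {
        LogBatcher batcher = new LogBatcher(100, batch ->
                System.out.println("flushing " + batch.size() + " records"));
        for (int i = 0; i < 250; i++) {
            batcher.add("log-" + i);
        }
        batcher.flush(); // flush the remaining partial batch
        System.out.println("total flushes: " + batcher.getFlushCount()); // 3
    }
}
```

A production version would also flush on a timer so that a partially filled batch is not held indefinitely, and would make the buffer thread-safe.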

SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (36): Implementing Configurable Message-Middleware Switching with Spring Cloud Stream

In the past, message-queue features were implemented by integrating the open-source client package of a specific message middleware. There are many middleware implementations — currently the mainstream choices include ActiveMQ, RocketMQ, RabbitMQ, and Kafka — each with its own strengths and weaknesses.  When designing the framework, we asked whether we could do what we had already done for SMS sending and distributed storage: abstract a unified messaging interface that hides the underlying implementation, write business code against that interface, and switch between middleware products purely through configuration. Spring Cloud Stream implements exactly this, so below we integrate it into the framework and test it. The spring-cloud-stream site currently lists the following supported middleware; we use RabbitMQ and Apache Kafka for the integration tests:
RabbitMQ
Apache Kafka
Kafka Streams
Amazon Kinesis
Google PubSub (partner maintained)
Solace PubSub+ (partner maintained)
Azure Event Hubs (partner maintained)
AWS SQS (partner maintained)
AWS SNS (partner maintained)
Apache RocketMQ (partner maintained)
I. Integrating RabbitMQ and testing message send/receive
RabbitMQ is implemented in Erlang, so a native installation requires the Erlang runtime and its dependencies. For a quick test installation we run a single-node RabbitMQ in Docker instead.
1. Pull the RabbitMQ Docker image; the tag suffixed with management includes the web management console:
docker pull rabbitmq:3.9.13-management
2. Create and start the RabbitMQ container:
docker run -d \
  -e RABBITMQ_DEFAULT_USER=admin \
  -e RABBITMQ_DEFAULT_PASS=123456 \
  --name rabbitmq \
  -p 15672:15672 \
  -p 5672:5672 \
  -v `pwd`/bigdata:/var/lib/rabbitmq \
  rabbitmq:3.9.13-management
3. Check that RabbitMQ started:
[root@localhost ~]# docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED              STATUS              PORTS                                                                                                                                 NAMES
ff1922cc6b73   rabbitmq:3.9.13-management   "docker-entrypoint.s…"   About a minute ago   Up About a minute   4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, :::5672->5672/tcp, 15671/tcp, 15691-15692/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp, :::15672->15672/tcp   rabbitmq
4. Open the management console at http://172.16.20.225:15672 and log in with the configured credentials admin/123456. If the console is unreachable, enable the management plugin and try again:
docker exec -it rabbitmq rabbitmq-plugins enable rabbitmq_management
(screenshots: RabbitMQ login page and management console)
5. Add the configuration in Nacos. Using operation logs and API logs as examples, we declare custom input and output channels for message send/receive: the operation-log destination carries operation logs, api-log carries API logs. Note: the official documentation states that RabbitAutoConfiguration must be excluded when using multiple RabbitMQ binders. In practice, if you neither exclude it nor configure a direct RabbitMQ connection, the RabbitMQ health check defaults to connecting to 127.0.0.1:5672 and the log fills with connection errors. Exclude RabbitAutoConfiguration:
spring:
  autoconfigure:
    # must be excluded when using multiple RabbitMQ binders
    exclude:
      - org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration
  cloud:
    stream:
      binders:
        defaultRabbit:
          type: rabbit
environment: #配置rabbimq连接环境 spring: rabbitmq: host: 172.16.20.225 username: admin password: 123456 virtual-host: / bindings: output_operation_log: destination: operation-log #exchange名称,交换模式默认是topic content-type: application/json binder: defaultRabbit output_api_log: destination: api-log #exchange名称,交换模式默认是topic content-type: application/json binder: defaultRabbit input_operation_log: destination: operation-log content-type: application/json binder: defaultRabbit group: ${spring.application.name} consumer: concurrency: 2 # 初始/最少/空闲时 消费者数量,默认1 input_api_log: destination: api-log content-type: application/json binder: defaultRabbit group: ${spring.application.name} consumer: concurrency: 2 # 初始/最少/空闲时 消费者数量,默认16、在gitegg-service-bigdata中添加spring-cloud-starter-stream-rabbit依赖,这里注意,只需要在具体使用消息中间件的微服务上引入,不需要统一引入,并不是每个微服务都会用到消息中间件,况且可能不同的微服务使用不同的消息中间件。<dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-stream-rabbit</artifactId> </dependency>7、自定义日志输出通道LogSink.java/** * @author GitEgg public interface LogSink { String INPUT_OPERATION_LOG = "output_operation_log"; String INPUT_API_LOG = "output_api_log"; * 操作日志自定义输入通道 * @return @Input(INPUT_OPERATION_LOG) SubscribableChannel inputOperationLog(); * API日志自定义输入通道 * @return @Input(INPUT_API_LOG) SubscribableChannel inputApiLog(); }8、自定义日志输入通道LogSource.java/** * 自定义Stream输出通道 * @author GitEgg public interface LogSource { String OUTPUT_OPERATION_LOG = "input_operation_log"; String OUTPUT_API_LOG = "input_api_log"; * 操作日志自定义输出通道 * @return @Output(OUTPUT_OPERATION_LOG) MessageChannel outputOperationLog(); * API日志自定义输出通道 * @return @Output(OUTPUT_API_LOG) MessageChannel outputApiLog(); }9、实现日志推送接口的调用, @Scheduled(fixedRate = 3000)是为了测试推送消息,每隔3秒执行一次定时任务,注意:要使定时任务执行,还需要在Application启动类添加@EnableScheduling注解。ILogSendService.java/** * @author GitEgg public interface ILogSendService { * 发送操作日志消息 * @return void sendOperationLog(); * 发送api日志消息 * @return void sendApiLog(); }LogSendImpl.java/** * @author 
GitEgg @EnableBinding(value = { LogSource.class }) @Slf4j @Component @RequiredArgsConstructor(onConstructor_ = @Autowired) public class LogSendImpl implements ILogSendService { private final LogSource logSource; @Scheduled(fixedRate = 3000) @Override public void sendOperationLog() { log.info("推送操作日志-------开始------"); logSource.outputOperationLog() .send(MessageBuilder.withPayload(UUID.randomUUID().toString()).build()); log.info("推送操作日志-------结束------"); @Scheduled(fixedRate = 3000) @Override public void sendApiLog() { log.info("推送API日志-------开始------"); logSource.outputApiLog() .send(MessageBuilder.withPayload(UUID.randomUUID().toString()).build()); log.info("推送API日志-------结束------"); }10、实现日志消息接收接口ILogReceiveService.java/** * @author GitEgg public interface ILogReceiveService { * 接收到操作日志消息 * @param msg <T> void receiveOperationLog(GenericMessage<T> msg); * 接收到API日志消息 * @param msg <T> void receiveApiLog(GenericMessage<T> msg); }LogReceiveImpl.java/** * @author GitEgg @Slf4j @Component @EnableBinding(value = { LogSink.class }) public class LogReceiveImpl implements ILogReceiveService { @StreamListener(LogSink.INPUT_OPERATION_LOG) @Override public synchronized <T> void receiveOperationLog(GenericMessage<T> msg) { log.info("接收到操作日志: " + msg.getPayload()); @StreamListener(LogSink.INPUT_API_LOG) @Override public synchronized <T> void receiveApiLog(GenericMessage<T> msg) { log.info("接收到API日志: " + msg.getPayload()); }10、启动微服务,可以看到日志打印推送和接收消息已经执行的情况日志接收和推送消息情况二、集成Kafka测试消息收发并测试消息中间件切换  使用Spring Cloud Stream的其中一项优势就是方便切换消息中间件又不需要改动代码,那么下面我们测试在Nacos的Spring Cloud Stream配置中同时添加Kafka配置,并且API日志继续使用RabbitMQ,操作日志使用Kafka,查看是否能够同时运行。这里先将配置测试放在前面方便对比,Kafka集群搭建放在后面说明。1、Nacos添加Kafka配置,并且将operation_log的binder改为Kafkaspring: autoconfigure: # 使用multiple RabbitMQ binders 时需要排除RabbitAutoConfiguration exclude: - org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration cloud: stream: binders: defaultRabbit: type: rabbit environment: #配置rabbimq连接环境 spring: rabbitmq: host: 
172.16.20.225 username: admin password: 123456 virtual-host: / kafka: type: kafka environment: spring: cloud: stream: kafka: binder: brokers: 172.16.20.220:9092,172.16.20.221:9092,172.16.20.222:9092 zkNodes: 172.16.20.220:2181,172.16.20.221:2181,172.16.20.222:2181 # 自动创建Topic auto-create-topics: true bindings: output_operation_log: destination: operation-log #exchange名称,交换模式默认是topic content-type: application/json binder: kafka output_api_log: destination: api-log #exchange名称,交换模式默认是topic content-type: application/json binder: defaultRabbit input_operation_log: destination: operation-log content-type: application/json binder: kafka group: ${spring.application.name} consumer: concurrency: 2 # 初始/最少/空闲时 消费者数量,默认1 input_api_log: destination: api-log content-type: application/json binder: defaultRabbit group: ${spring.application.name} consumer: concurrency: 2 # 初始/最少/空闲时 消费者数量,默认12、登录Kafka服务器,切换到Kafka的bin目录下启动一个消费operation-log主题的消费者./kafka-console-consumer.sh --bootstrap-server 172.16.20.221:9092 --topic operation-log3、启动微服务,查看RabbitMQ和Kafka的日志推送和接收是否能够正常运行微服务后台日志显示能够正常推送和接收消息:服务后台日志Kafka服务器显示收到了操作日志消息Kafka服务器三、Kafka集群搭建1、环境准备:  首先准备好三台CentOS系统的主机,设置ip为:172.16.20.220、172.16.20.221、172.16.20.222。  Kafka会使用大量文件和网络socket,Linux默认配置的File descriptors(文件描述符)不能够满足Kafka高吞吐量的要求,所以这里需要调整(更多性能优化,请查看Kafka官方文档):vi /etc/security/limits.conf # 在最后加入,修改完成后,重启系统生效。 * soft nofile 131072 * hard nofile 131072  新建kafka的日志目录和zookeeper数据目录,因为这两项默认放在tmp目录,而tmp目录中内容会随重启而丢失,所以我们自定义以下目录:mkdir /data/zookeeper mkdir /data/zookeeper/data mkdir /data/zookeeper/logs mkdir /data/kafka mkdir /data/kafka/data mkdir /data/kafka/logs2、zookeeper.properties配置vi /usr/local/kafka/config/zookeeper.properties修改如下:# 修改为自定义的zookeeper数据目录 dataDir=/data/zookeeper/data # 修改为自定义的zookeeper日志目录 dataLogDir=/data/zookeeper/logs clientPort=2181 # 注释掉 #maxClientCnxns=0 # 设置连接参数,添加如下配置 # 为zk的基本时间单元,毫秒 tickTime=2000 # Leader-Follower初始通信时限 tickTime*10 initLimit=10 # Leader-Follower同步通信时限 tickTime*5 syncLimit=5 # 设置broker 
Id的服务地址 (set the server address for each broker id; use 0.0.0.0 in place of the local machine's own ip)
server.1=0.0.0.0:2888:3888
server.2=172.16.20.221:2888:3888
server.3=172.16.20.222:2888:3888
3. In each server's zookeeper data directory /data/zookeeper/data, create a myid file whose content is that server's broker.id value (one-liner: echo 1 > myid):
cd /data/zookeeper/data
vi myid
# content: 1 on this host; the other two hosts use 2 and 3 respectively
4. Kafka configuration: edit server.properties in the config directory
vi /usr/local/kafka/config/server.properties
# broker.id must be unique per server
broker.id=1
# whether topics may be deleted
delete.topic.enable=true
# number of partitions per topic on this broker, kept consistent across brokers
num.partitions=3
# different on each host:
listeners=PLAINTEXT://172.16.20.220:9092
advertised.listeners=PLAINTEXT://172.16.20.220:9092
# log directory
log.dirs=/data/kafka/kafka-logs
# zookeeper cluster addresses and ports:
zookeeper.connect=172.16.20.220:2181,172.16.20.221:2181,172.16.20.222:2181
5. Starting Kafka. Start zookeeper first, then kafka; shut down in the reverse order (kafka first, then zookeeper).
zookeeper start command:
./zookeeper-server-start.sh ../config/zookeeper.properties &
run in the background:
nohup ./zookeeper-server-start.sh ../config/zookeeper.properties >/data/zookeeper/logs/zookeeper.log 2>&1 &
or:
./zookeeper-server-start.sh -daemon ../config/zookeeper.properties &
check cluster status:
./zookeeper-server-start.sh status ../config/zookeeper.properties
kafka start command:
./kafka-server-start.sh ../config/server.properties &
run in the background:
nohup ./kafka-server-start.sh ../config/server.properties >/data/kafka/logs/kafka.log 2>&1 &
or:
./kafka-server-start.sh -daemon ../config/server.properties &
Create a topic (recent versions no longer need the zookeeper argument):
./kafka-topics.sh --create --replication-factor 2 --partitions 1 --topic test --bootstrap-server 172.16.20.220:9092
Parameters: --replication-factor 2 keeps two replicas; --partitions 1 creates one partition; --topic test names the topic.
List existing topics (visible from all three machines):
./kafka-topics.sh --list --bootstrap-server 172.16.20.220:9092
Start a producer:
./kafka-console-producer.sh --broker-list 172.16.20.220:9092 --topic test
Start consumers:
./kafka-console-consumer.sh --bootstrap-server 172.16.20.221:9092 --topic test
./kafka-console-consumer.sh --bootstrap-server 172.16.20.222:9092 --topic test
Add --from-beginning to consume from the start of the topic rather than only new messages:
./kafka-console-consumer.sh
--bootstrap-server 172.16.20.221 --topic test --from-beginning测试:在生产者输入test,可以在消费者的两台服务器上看到同样的字符test,说明Kafka服务器集群已搭建成功。四、完整的Nacos配置spring: jackson: time-zone: Asia/Shanghai date-format: yyyy-MM-dd HH:mm:ss servlet: multipart: max-file-size: 2048MB max-request-size: 2048MB security: oauth2: resourceserver: jwk-set-uri: 'http://127.0.0.1/gitegg-oauth/oauth/public_key' autoconfigure: # 动态数据源排除默认配置 exclude: - com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceAutoConfigure - org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration datasource: druid: stat-view-servlet: enabled: true loginUsername: admin loginPassword: 123456 dynamic: # 设置默认的数据源或者数据源组,默认值即为master primary: master # 设置严格模式,默认false不启动. 启动后在未匹配到指定数据源时候会抛出异常,不启动则使用默认数据源. strict: false # 开启seata代理,开启后默认每个数据源都代理,如果某个不需要代理可单独关闭 seata: false #支持XA及AT模式,默认AT seata-mode: AT druid: initialSize: 1 minIdle: 3 maxActive: 20 # 配置获取连接等待超时的时间 maxWait: 60000 # 配置间隔多久才进行一次检测,检测需要关闭的空闲连接,单位是毫秒 timeBetweenEvictionRunsMillis: 60000 # 配置一个连接在池中最小生存的时间,单位是毫秒 minEvictableIdleTimeMillis: 30000 validationQuery: select 'x' testWhileIdle: true testOnBorrow: false testOnReturn: false # 打开PSCache,并且指定每个连接上PSCache的大小 poolPreparedStatements: true maxPoolPreparedStatementPerConnectionSize: 20 # 配置监控统计拦截的filters,去掉后监控界面sql无法统计,'wall'用于防火墙 filters: config,stat,slf4j # 通过connectProperties属性来打开mergeSql功能;慢SQL记录 connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000; # 合并多个DruidDataSource的监控数据 useGlobalDataSourceStat: true datasource: master: url: jdbc:mysql://127.0.0.188/gitegg_cloud?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&serverTimezone=Asia/Shanghai username: root password: root cloud: sentinel: filter: enabled: true transport: port: 8719 dashboard: 127.0.0.188:8086 eager: true datasource: nacos: data-type: json server-addr: 127.0.0.188:8848 dataId: ${spring.application.name}-sentinel groupId: DEFAULT_GROUP rule-type: flow gateway: discovery: 
locator: enabled: true routes: - id: gitegg-oauth uri: lb://gitegg-oauth predicates: - Path=/gitegg-oauth/** filters: - StripPrefix=1 - id: gitegg-service-system uri: lb://gitegg-service-system predicates: - Path=/gitegg-service-system/** filters: - StripPrefix=1 - id: gitegg-service-extension uri: lb://gitegg-service-extension predicates: - Path=/gitegg-service-extension/** filters: - StripPrefix=1 - id: gitegg-service-base uri: lb://gitegg-service-base predicates: - Path=/gitegg-service-base/** filters: - StripPrefix=1 - id: gitegg-code-generator uri: lb://gitegg-code-generator predicates: - Path=/gitegg-code-generator/** filters: - StripPrefix=1 plugin: config: # 是否开启Gateway日志插件 enable: true # requestLog==true && responseLog==false时,只记录请求参数日志;responseLog==true时,记录请求参数和返回参数。 # 记录入参 requestLog==false时,不记录日志 requestLog: true # 生产环境,尽量只记录入参,因为返回参数数据太大,且大多数情况是无意义的 # 记录出参 responseLog: true # all: 所有日志 configure:serviceId和pathList交集 serviceId: 只记录serviceId配置列表 pathList:只记录pathList配置列表 logType: all serviceIdList: - "gitegg-oauth" - "gitegg-service-system" pathList: - "/gitegg-oauth/oauth/token" - "/gitegg-oauth/oauth/user/info" stream: binders: defaultRabbit: type: rabbit environment: #配置rabbimq连接环境 spring: rabbitmq: host: 127.0.0.225 username: admin password: 123456 virtual-host: / kafka: type: kafka environment: spring: cloud: stream: kafka: binder: brokers: 127.0.0.220:9092,127.0.0.221:9092,127.0.0.222:9092 zkNodes: 127.0.0.220:2181,127.0.0.221:2181,127.0.0.222:2181 # 自动创建Topic auto-create-topics: true bindings: output_operation_log: destination: operation-log #exchange名称,交换模式默认是topic content-type: application/json binder: kafka output_api_log: destination: api-log #exchange名称,交换模式默认是topic content-type: application/json binder: defaultRabbit input_operation_log: destination: operation-log content-type: application/json binder: kafka group: ${spring.application.name} consumer: concurrency: 2 # 初始/最少/空闲时 消费者数量,默认1 input_api_log: destination: api-log content-type: 
application/json binder: defaultRabbit group: ${spring.application.name} consumer: concurrency: 2 # 初始/最少/空闲时 消费者数量,默认1 redis: database: 1 host: 127.0.0.188 port: 6312 password: 123456 ssl: false timeout: 2000 redisson: config: | singleServerConfig: idleConnectionTimeout: 10000 connectTimeout: 10000 timeout: 3000 retryAttempts: 3 retryInterval: 1500 password: 123456 subscriptionsPerConnection: 5 clientName: null address: "redis://127.0.0.188:6312" subscriptionConnectionMinimumIdleSize: 1 subscriptionConnectionPoolSize: 50 connectionMinimumIdleSize: 32 connectionPoolSize: 64 database: 0 dnsMonitoringInterval: 5000 threads: 0 nettyThreads: 0 codec: !<org.redisson.codec.JsonJacksonCodec> {} "transportMode":"NIO" #业务系统相关初始化参数 system: #登录密码默认最大尝试次数 maxTryTimes: 5 #不需要验证码登录的最大次数 maxNonCaptchaTimes: 2 #注册用户默认密码 defaultPwd: 12345678 #注册用户默认角色ID defaultRoleId: 4 #注册用户默认组织机构ID defaultOrgId: 79 #不需要数据权限过滤的角色key noDataFilterRole: DATA_NO_FILTER #AccessToken过期时间(秒)默认为2小时 accessTokenExpiration: 60 #RefreshToken过期时间(秒)默认为24小时 refreshTokenExpiration: 300 logging: config: http://${spring.cloud.nacos.discovery.server-addr}/nacos/v1/cs/configs?dataId=log4j2.xml&group=${spring.nacos.config.group} file: # 配置日志的路径,包含 spring.application.name Linux: /var/log/${spring.application.name} path: D:\\log4j2_nacos\\${spring.application.name} feign: hystrix: enabled: false compression: # 配置响应 GZIP 压缩 response: enabled: true # 配置请求 GZIP 压缩 request: enabled: true # 支持压缩的mime types mime-types: text/xml,application/xml,application/json # 配置压缩数据大小的最小阀值,默认 2048 min-request-size: 2048 client: config: default: connectTimeout: 8000 readTimeout: 8000 loggerLevel: FULL #Ribbon配置 ribbon: #请求连接的超时时间 ConnectTimeout: 50000 #请求处理/响应的超时时间 ReadTimeout: 50000 #对所有操作请求都进行重试,如果没有实现幂等的情况下是很危险的,所以这里设置为false OkToRetryOnAllOperations: false #切换实例的重试次数 MaxAutoRetriesNextServer: 5 #当前实例的重试次数 MaxAutoRetries: 5 #负载均衡策略 NFLoadBalancerRuleClassName: com.alibaba.cloud.nacos.ribbon.NacosRule #Sentinel端点配置 management: endpoints: 
exposure: include: '*' mybatis-plus: mapper-locations: classpath*:/com/gitegg/*/*/mapper/*Mapper.xml typeAliasesPackage: com.gitegg.*.*.entity global-config: #主键类型 0:"数据库ID自增", 1:"用户输入ID",2:"全局唯一ID (数字类型唯一ID)", 3:"全局唯一ID UUID"; id-type: 2 #字段策略 0:"忽略判断",1:"非 NULL 判断"),2:"非空判断" field-strategy: 2 #驼峰下划线转换 db-column-underline: true #刷新mapper 调试神器 refresh-mapper: true #数据库大写下划线转换 #capital-mode: true #逻辑删除配置 logic-delete-value: 1 logic-not-delete-value: 0 configuration: map-underscore-to-camel-case: true cache-enabled: false log-impl: org.apache.ibatis.logging.stdout.StdOutImpl # 多租户配置 tenant: # 是否开启租户模式 enable: true # 需要排除的多租户的表 exclusionTable: - "t_sys_district" - "t_sys_tenant" - "t_sys_role" - "t_sys_resource" - "t_sys_role_resource" - "oauth_client_details" # 租户字段名称 column: tenant_id # 数据权限 data-permission: # 注解方式默认关闭,否则影响性能 annotation-enable: true seata: enabled: false application-id: ${spring.application.name} tx-service-group: gitegg_seata_tx_group # 一定要是false enable-auto-data-source-proxy: false service: vgroup-mapping: #key与上面的gitegg_seata_tx_group的值对应 gitegg_seata_tx_group: default config: type: nacos nacos: namespace: serverAddr: 127.0.0.188:8848 group: SEATA_GROUP userName: "nacos" password: "nacos" registry: type: nacos nacos: #seata服务端(TC)在nacos中的应用名称 application: seata-server server-addr: 127.0.0.188:8848 namespace: userName: "nacos" password: "nacos" #验证码配置 captcha: #验证码的类型 sliding: 滑动验证码 image: 图片验证码 type: sliding captcha: #缓存local/redis... 
cache-type: redis #local缓存的阈值,达到这个值,清除缓存 #cache-number=1000 #local定时清除过期缓存(单位秒),设置为0代表不执行 #timing-clear=180 #验证码类型default两种都实例化。 type: default #汉字统一使用Unicode,保证程序通过@value读取到是中文,在线转换 https://tool.chinaz.com/tools/unicode.aspx 中文转Unicode #右下角水印文字(我的水印) water-mark: GitEgg #右下角水印字体(宋体) water-font: 宋体 #点选文字验证码的文字字体(宋体) font-type: 宋体 #校验滑动拼图允许误差偏移量(默认5像素) slip-offset: 5 #aes加密坐标开启或者禁用(true|false) aes-status: true #滑动干扰项(0/1/2) 1.2.2版本新增 interference-options: 2 # 接口请求次数一分钟限制是否开启 true|false req-frequency-limit-enable: true # 验证失败5次,get接口锁定 req-get-lock-limit: 5 # 验证失败后,锁定时间间隔,s req-get-lock-seconds: 360 # get接口一分钟内请求数限制 req-get-minute-limit: 30 # check接口一分钟内请求数限制 req-check-minute-limit: 60 # verify接口一分钟内请求数限制 req-verify-minute-limit: 60 #SMS短信通用配置 #手机号码正则表达式,为空则不做验证 #负载均衡类型 可选值: Random、RoundRobin、WeightRandom、WeightRoundRobin load-balancer-type: Random #启用web端点 enable: true #访问路径前缀 base-path: /commons/sms verification-code: #验证码长度 code-length: 6 #为true则验证失败后删除验证码 delete-by-verify-fail: false #为true则验证成功后删除验证码 delete-by-verify-succeed: true #重试间隔时间,单位秒 retry-interval-time: 60 #验证码有效期,单位秒 expiration-time: 180 #识别码长度 identification-code-length: 3 #是否启用识别码 use-identification-code: false redis: #验证码业务在保存到redis时的key的前缀 key-prefix: VerificationCode # 网关放行设置 1、whiteUrls不需要鉴权的公共url,白名单,配置白名单路径 2、authUrls需要鉴权的公共url oauth-list: staticFiles: - "/doc.html" - "/webjars/**" - "/favicon.ico" - "/swagger-resources/**" whiteUrls: - "/*/v2/api-docs" - "/gitegg-oauth/login/phone" - "/gitegg-oauth/login/qr" - "/gitegg-oauth/oauth/token" - "/gitegg-oauth/oauth/public_key" - "/gitegg-oauth/oauth/captcha/type" - "/gitegg-oauth/oauth/captcha" - "/gitegg-oauth/oauth/captcha/check" - "/gitegg-oauth/oauth/captcha/image" - "/gitegg-oauth/oauth/sms/captcha/send" - "/gitegg-service-base/dict/list/{dictCode}" authUrls: - "/gitegg-oauth/oauth/logout" - "/gitegg-oauth/oauth/user/info" - "/gitegg-service-extension/extension/upload/file" - "/gitegg-service-extension/extension/dfs/query/default"
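Notice how the Stream section of the config above decouples channel names from transports: `output_operation_log` and `input_operation_log` are different channels, but both declare `destination: operation-log`, which is the only thing producer and consumer must agree on, while the `binder` key (kafka vs defaultRabbit) picks the transport. The wiring can be sketched as a toy in-memory destination registry (the `DestinationRegistry` class is illustrative, not a Spring Cloud Stream API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

/** Toy model of Stream's binding layer: channels bind to named destinations. */
public class DestinationRegistry {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    /** Binds an input channel: messages on the destination reach the handler. */
    public void bindInput(String destination, Consumer<String> handler) {
        subscribers.computeIfAbsent(destination, d -> new ArrayList<>()).add(handler);
    }

    /** Binds an output channel: returns a sender that publishes to the destination. */
    public Consumer<String> bindOutput(String destination) {
        return payload -> subscribers
                .getOrDefault(destination, Collections.emptyList())
                .forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        DestinationRegistry registry = new DestinationRegistry();
        List<String> received = new ArrayList<>();
        // channel names differ (output_operation_log vs input_operation_log),
        // but both bind to the same destination: operation-log
        registry.bindInput("operation-log", received::add);
        Consumer<String> outputOperationLog = registry.bindOutput("operation-log");
        outputOperationLog.accept("user 42 logged in");
        System.out.println(received); // [user 42 logged in]
    }
}
```

This is why switching the operation-log binding from RabbitMQ to Kafka in Nacos requires no code change: the channel-to-destination mapping stays the same, and only the transport behind the destination is swapped.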

SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (35): Packaging and Deploying a Microservice Cluster with SpringCloud + Docker + k8s — Cluster Environment Deployment (Part 2)

备注:sonarqube默认用户名密码: admin/admin卸载命令:docker-compose -f jenkins-compose.yml down -v六、Jenkins自动打包部署配置  项目部署有多种方式,从最原始的可运行jar包直接部署到JDK环境下运行,到将可运行的jar包放到docker容器中运行,再到现在比较流行的把可运行的jar包和docker放到k8s的pod环境中运行。每一种新的部署方式都是对原有部署方式的改进和优化,这里不着重介绍每种方式的优缺点,只简单说明一下使用Kubernetes 的原因:Kubernetes 主要提供弹性伸缩、服务发现、自我修复,版本回退、负载均衡、存储编排等功能。  日常开发部署过程中的基本步骤如下:提交代码到gitlab代码仓库gitlab通过webhook触发Jenkins构建代码质量检查Jenkins需通过手动触发,来拉取代码、编译、打包、构建Docker镜像、发布到私有镜像仓库Harbor、执行kubectl命令从Harbor拉取Docker镜像部署至k8s1、安装Kubernetes plugin插件、Git Parameter插件(用于流水线参数化构建)、Extended Choice Parameter插件(用于多个微服务时,选择需要构建的微服务)、 Pipeline Utility Steps插件(用于读取maven工程的.yaml、pom.xml等)和 Kubernetes Continuous Deploy(一定要使用1.0版本,从官网下载然后上传) ,Jenkins --> 系统管理 --> 插件管理 --> 可选插件 --> Kubernetes plugin /Git Parameter/Extended Choice Parameter ,选中后点击Install without restart按钮进行安装Kubernetes plugin Extended Choice Parameterimage.pngGit Parameter  Blueocean目前还不支持Git Parameter插件和Extended Choice Parameter插件,Git Parameter是通过Git Plugin读取分支信息,我们这里使用Pipeline script而不是使用Pipeline script from SCM,是因为我们不希望把构建信息放到代码里,这样做可以开发和部署分离。2、配置Kubernetes plugin插件,Jenkins --> 系统管理 --> 节点管理 --> Configure Clouds --> Add a new cloud -> Kubernetes2dbd8ea1886ae30659926345724bb1b.png3、增加kubernetes证书cat ~/.kube/config # 以下步骤暂不使用,将certificate-authority-data、client-certificate-data、client-key-data替换为~/.kube/config里面具体的值 #echo certificate-authority-data | base64 -d > ca.crt #echo client-certificate-data | base64 -d > client.crt #echo client-key-data | base64 -d > client.key # 执行以下命令,自己设置密码 #openssl pkcs12 -export -out cert.pfx -inkey client.key -in client.crt -certfile ca.crt系统管理-->凭据-->系统-->全局凭据image.png4、添加访问Kubernetes的凭据信息,这里填入上面登录Kubernetes Dashboard所创建的token即可,添加完成之后选择刚刚添加的凭据,然后点击连接测试,如果提示连接成功,那么说明我们的Jenkins可以连接Kubernetes了设置token连接测试5、jenkins全局配置jdk、git和mavenjenkinsci/blueocean镜像默认安装了jdk和git,这里需要登录容器找到路径,然后配置进去。通过命令进入jenkins容器,并查看JAVA_HOEM和git路径[root@localhost ~]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 0520ebb9cc5d jenkinsci/blueocean "/sbin/tini -- 
/usr/…" 2 days ago Up 30 hours 50000/tcp, 0.0.0.0:18080->8080/tcp, :::18080->8080/tcp root-jenkins-1
[root@localhost ~]# docker exec -it 0520ebb9cc5d /bin/bash
bash-5.1# echo $JAVA_HOME
/opt/java/openjdk
bash-5.1# which git
/usr/bin/git

The queries above show that JAVA_HOME=/opt/java/openjdk and GIT=/usr/bin/git; configure both in Jenkins under Global Tool Configuration.

Maven can be installed on the host under the mapped directory /data/docker/ci/jenkins/home; when configuring it in Jenkins, use the container path, i.e. the Maven installation path under /var/jenkins_home. In System Configuration, set MAVEN_HOME so the Pipeline script can reference it. If the script fails with a permission error, run chmod 777 * in the bin directory of the Maven installation on the host.

6. Create a harbor-key secret for k8s, used to pull images from the private registry; it is referenced in the project's k8s-deployment.yaml:

kubectl create secret docker-registry harbor-key --docker-server=172.16.20.175 --docker-username='robot$gitegg' --docker-password='Jqazyv7vvZiL6TXuNcv7TrZeRdL8U9n3'

7. Create a new pipeline job.

8. Configure the pipeline job parameters.

9. Configure the pipeline deployment script. Under the Pipeline section, select "Pipeline script":

pipeline {
    agent any
    parameters {
        gitParameter branchFilter: 'origin/(.*)', defaultValue: 'master', name: 'Branch', type: 'PT_BRANCH', description: 'Select the code branch to build'
        choice(name: 'BaseImage', choices: ['openjdk:8-jdk-alpine'], description: 'Select the base runtime image')
        choice(name: 'Environment', choices: ['dev', 'test', 'prod'], description: 'Select the target environment: dev, test or prod')
        extendedChoice(
            defaultValue: 'gitegg-gateway,gitegg-oauth,gitegg-plugin/gitegg-code-generator,gitegg-service/gitegg-service-base,gitegg-service/gitegg-service-extension,gitegg-service/gitegg-service-system',
            description: 'Select the microservices to build', multiSelectDelimiter: ',', name: 'ServicesBuild',
            quoteValue: false, saveJSONParameterToFile: false, type: 'PT_CHECKBOX',
            value: 'gitegg-gateway,gitegg-oauth,gitegg-plugin/gitegg-code-generator,gitegg-service/gitegg-service-base,gitegg-service/gitegg-service-extension,gitegg-service/gitegg-service-system',
            visibleItemCount: 6)
        string(name: 'BuildParameter', defaultValue: 'none', description: 'Extra build parameters')
    }
    environment {
        PRO_NAME = "gitegg"
        BuildParameter = "${params.BuildParameter}"
        ENV = "${params.Environment}"
        BRANCH = "${params.Branch}"
        ServicesBuild = "${params.ServicesBuild}"
        BaseImage = "${params.BaseImage}"
        k8s_token = "7696144b-3b77-4588-beb0-db4d585f5c04"
    }
    stages {
        stage('Clean workspace') {
            steps {
                deleteDir()
            }
        }
        stage('Process parameters') {
            steps {
                script {
                    if ("${params.ServicesBuild}".trim() != "") {
                        def ServicesBuildString = "${params.ServicesBuild}"
                        ServicesBuild = ServicesBuildString.split(",")
                        for (service in ServicesBuild) {
                            println "now got ${service}"
                        }
                    }
                    if ("${params.BuildParameter}".trim() != "" && "${params.BuildParameter}".trim() != "none") {
                        BuildParameter = "${params.BuildParameter}"
                    } else {
                        BuildParameter = ""
                    }
                }
            }
        }
        stage('Pull SourceCode Platform') {
            steps {
                echo "${BRANCH}"
                git branch: "${Branch}", credentialsId: 'gitlabTest', url: 'http://172.16.20.188:2080/root/gitegg-platform.git'
            }
        }
        stage('Install Platform') {
            steps {
                echo "==============Start Platform Build=========="
                sh "${MAVEN_HOME}/bin/mvn -DskipTests=true clean install ${BuildParameter}"
                echo "==============End Platform Build=========="
            }
        }
        stage('Pull SourceCode') {
            steps {
                echo "${BRANCH}"
                git branch: "${Branch}", credentialsId: 'gitlabTest', url: 'http://172.16.20.188:2080/root/gitegg-cloud.git'
            }
        }
        stage('Build') {
            steps {
                script {
                    echo "==============Start Cloud Parent Install=========="
                    sh "${MAVEN_HOME}/bin/mvn -DskipTests=true clean install -P${params.Environment} ${BuildParameter}"
                    echo "==============End Cloud Parent Install=========="
                    def workspace = pwd()
                    for (service in ServicesBuild) {
                        stage("buildCloud${service}") {
                            echo "==============Start Cloud Build ${service}=========="
                            sh "cd ${workspace}/${service} && ${MAVEN_HOME}/bin/mvn -DskipTests=true clean package -P${params.Environment} ${BuildParameter} jib:build -Djib.httpTimeout=200000 -DsendCredentialsOverHttp=true -f pom.xml"
                            echo "==============End Cloud Build ${service}============"
                        }
                    }
                }
            }
        }
        stage('Sync to k8s') {
            steps {
                script {
                    echo "==============Start Sync to k8s=========="
                    def workspace = pwd()
                    mainpom = readMavenPom file: 'pom.xml'
                    profiles = mainpom.getProfiles()
                    def version = mainpom.getVersion()
                    def nacosAddr = ""
                    def nacosConfigPrefix = ""
                    def nacosConfigGroup = ""
                    def dockerHarborAddr = ""
                    def dockerHarborProject = ""
                    def dockerHarborUsername = ""
                    def dockerHarborPassword = ""
                    def serverPort = ""
                    def commonDeployment = "${workspace}/k8s-deployment.yaml"
                    // Read the properties of the Maven profile matching the selected environment
                    for (profile in profiles) {
                        if (profile.getId() == "${params.Environment}") {
                            nacosAddr = profile.getProperties().getProperty("nacos.addr")
                            nacosConfigPrefix = profile.getProperties().getProperty("nacos.config.prefix")
                            nacosConfigGroup = profile.getProperties().getProperty("nacos.config.group")
                            dockerHarborAddr = profile.getProperties().getProperty("docker.harbor.addr")
                            dockerHarborProject = profile.getProperties().getProperty("docker.harbor.project")
                            dockerHarborUsername = profile.getProperties().getProperty("docker.harbor.username")
                            dockerHarborPassword = profile.getProperties().getProperty("docker.harbor.password")
                        }
                    }
                    for (service in ServicesBuild) {
                        stage("Sync${service}ToK8s") {
                            echo "==============Start Sync ${service} to k8s=========="
                            dir("${workspace}/${service}") {
                                pom = readMavenPom file: 'pom.xml'
                                echo "group: artifactId: ${pom.artifactId}"
                                def deployYaml = "k8s-deployment-${pom.artifactId}.yaml"
                                yaml = readYaml file: './src/main/resources/bootstrap.yml'
                                serverPort = "${yaml.server.port}"
                                // Prefer a service-local deployment yaml; fall back to the shared one
                                if (fileExists("${workspace}/${service}/k8s-deployment.yaml")) {
                                    commonDeployment = "${workspace}/${service}/k8s-deployment.yaml"
                                } else {
                                    commonDeployment = "${workspace}/k8s-deployment.yaml"
                                }
                                script {
                                    sh "sed 's#{APP_NAME}#${pom.artifactId}#g;s#{IMAGE_URL}#${dockerHarborAddr}#g;s#{IMAGE_PROGECT}#${PRO_NAME}#g;s#{IMAGE_TAG}#${version}#g;s#{APP_PORT}#${serverPort}#g;s#{SPRING_PROFILE}#${params.Environment}#g' ${commonDeployment} > ${deployYaml}"
                                    kubernetesDeploy configs: "${deployYaml}", kubeconfigId: "${k8s_token}"
                                }
                                echo "==============End Sync ${service} to k8s=========="
                            }
                        }
                    }
                    echo "==============End Sync to k8s=========="
                }
            }
        }
    }
}

Common issues:

1. The first run of Pipeline Utility Steps fails with "Scripts not permitted to use method" or "Scripts not permitted to use staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods getProperties java.lang.Object".
Fix: Manage Jenkins --> In-process Script Approval --> click Approve.

2. Use an NFS server so the logs of all containers are stored in one place on the NFS server side.

3. Use version 1.0.0 of the Kubernetes Continuous Deploy plugin; newer versions are incompatible and fail with errors.

4. Keep services from registering with the docker bridge network address:

spring:
  cloud:
    inetutils:
      ignored-interfaces:
        - docker0

5. Configure ipvs mode: kube-proxy watches Pod changes and creates the corresponding ipvs rules. ipvs forwards more efficiently than iptables and supports more load-balancing algorithms.

kubectl edit cm kube-proxy -n kube-system   # change to mode: "ipvs"
# reload the kube-proxy configuration
kubectl delete pod -l k8s-app=kube-proxy -n kube-system
# inspect the ipvs rules
ipvsadm -Ln

6. Accessing services outside the k8s cluster (Nacos, Redis, etc.):

a. Shared host network: set hostNetwork: true on the deployed service:

spec:
  hostNetwork: true

b. Endpoints mode:

kind: Endpoints
apiVersion: v1
metadata:
  name: nacos
  namespace: default
subsets:
  - addresses:
      - ip: 172.16.20.188
    ports:
      - port: 8848
---
apiVersion: v1
kind: Service
metadata:
  name: nacos
  namespace: default
spec:
  type: ClusterIP
  ports:
    - port: 8848
      targetPort: 8848
      protocol: TCP

c. A Service of type: ExternalName. "ExternalName" uses a CNAME redirect, so port remapping is not possible; use it with a domain name. Create the Endpoints and type: ExternalName resources as standalone yaml files rather than inside the application manifests; they should be prepared when the environment is set up.

7. Common k8s commands:

kubectl get pods                       # list pods
kubectl get svc                        # list services
kubectl get endpoints                  # list endpoints
kubectl apply -f XXX.yaml              # install
kubectl delete -f xxx.yaml             # uninstall
kubectl delete pod podName             # delete a pod
kubectl delete service serviceName     # delete a service
kubectl exec -it podsNamexxxxxx -n default -- /bin/sh   # enter a container
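The sed substitution performed by the "Sync to k8s" stage can be exercised locally, without Jenkins, to check that the deployment template placeholders are filled correctly. The sample values below (demo-app, 8080, 1.0-SNAPSHOT) are made up for illustration; the real pipeline reads them from the service's pom.xml and bootstrap.yml:

```shell
# Minimal deployment template using the same {PLACEHOLDER} tokens as k8s-deployment.yaml
cat > /tmp/k8s-deployment.yaml <<'EOF'
metadata:
  name: {APP_NAME}-deployment
image: {IMAGE_URL}/{IMAGE_PROGECT}/{APP_NAME}:{IMAGE_TAG}
containerPort: {APP_PORT}
EOF

# Same sed expression shape as in the pipeline's Sync stage
sed 's#{APP_NAME}#demo-app#g;s#{IMAGE_URL}#172.16.20.175#g;s#{IMAGE_PROGECT}#gitegg#g;s#{IMAGE_TAG}#1.0-SNAPSHOT#g;s#{APP_PORT}#8080#g' \
  /tmp/k8s-deployment.yaml > /tmp/k8s-deployment-demo-app.yaml

cat /tmp/k8s-deployment-demo-app.yaml
```

The resulting file is what kubernetesDeploy would then apply with the configured kubeconfig.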

SpringCloud Microservices in Practice — Building an Enterprise Development Framework (35): Packaging and Deploying a Microservice Cluster with SpringCloud + Docker + k8s — Cluster Environment Setup (Part 1)

I. Cluster environment planning and configuration

Do not use a single-master setup in production; use multiple masters. Here three hosts are used for testing: one Master (172.16.20.111) and two Nodes (172.16.20.112 and 172.16.20.113).

1. Set host names. After installing CentOS 7, give each of the three hosts a fixed IP:

vi /etc/sysconfig/network-scripts/ifcfg-ens33
# change ONBOOT to yes at the bottom and add the fixed address IPADDR
# (172.16.20.111 / 172.16.20.112 / 172.16.20.113 respectively)
ONBOOT=yes
IPADDR=172.16.20.111

Once the IPs are set, set the host names and update the hosts file:

# on the master
hostnamectl set-hostname master
# on node1
hostnamectl set-hostname node1
# on node2
hostnamectl set-hostname node2

vi /etc/hosts
172.16.20.111 master
172.16.20.112 node1
172.16.20.113 node2

2. Time synchronization — start the chronyd service, enable it at boot, and test:

systemctl start chronyd
systemctl enable chronyd
date

3. Disable firewalld and iptables (test environment only):

systemctl stop firewalld
systemctl disable firewalld
systemctl stop iptables
systemctl disable iptables

4. Disable selinux:

vi /etc/selinux/config
SELINUX=disabled

5. Disable the swap partition — comment out the swap line:

vi /etc/fstab
# /dev/mapper/centos-swap swap

6. Tune the Linux kernel parameters:

vi /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

# reload the configuration
sysctl -p
# load the bridge netfilter module
modprobe br_netfilter
# verify it is loaded
lsmod | grep br_netfilter

7. Configure ipvs. Install ipset and ipvsadm:

yum install ipset ipvsadm -y

Add the modules to load (run as a whole):

cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
/bin/bash /etc/sysconfig/modules/ipvs.modules
# check that the modules loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

After completing all the settings above, be sure to reboot so they take effect: reboot

II. Docker installation and configuration

1. Install the system tools Docker depends on:

yum install -y yum-utils device-mapper-persistent-data lvm2

2. Add the repository:

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all
yum makecache fast

3. Install docker-ce:

# list installable docker versions
yum list docker-ce --showduplicates
# to install the latest version directly, run: yum -y install docker-ce
yum install --setopt=obsoletes=0 docker-ce-19.03.13-3.el7 -y

4. Start the service:

# start via systemctl
systemctl start docker
# enable at boot via systemctl
systemctl enable docker

5. Check the installed version:

docker version

6. Configure a registry mirror by editing /etc/docker/daemon.json. If k8s is used, "exec-opts": ["native.cgroupdriver=systemd"] must be set. The "insecure-registries": ["172.16.20.175"] entry allows pulling from our Harbor over http.

vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": ["https://eiov0s1n.mirror.aliyuncs.com"],
  "insecure-registries" : ["172.16.20.175"]
}

sudo systemctl daemon-reload && sudo systemctl restart docker

7. Install docker-compose. If the download is too slow, fetch the matching release from https://github.com/docker/compose/releases and upload it to /usr/local/bin/ on the server.

sudo curl -L "https://github.com/docker/compose/releases/download/v2.0.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Note (not required): enabling remote access to Docker. Do not enable this in production; once enabled, the development environment can connect to Docker directly.

vi /lib/systemd/system/docker.service
# modify ExecStart, adding -H tcp://0.0.0.0:2375
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock

# then run:
systemctl daemon-reload && service docker restart
# test connectivity:
curl http://localhost:2375/version

III. Harbor private registry installation and configuration (set up a separate server, 172.16.20.175 — do not put it on the k8s master/node servers)

Docker must first be installed on this host, following the steps above.

1. Download a suitable release from https://github.com/goharbor/harbor/releases

2. Unpack:

tar -zxf harbor-offline-installer-v2.2.4.tgz

3. Configure:

cd harbor
mv harbor.yml.tmpl harbor.yml
vi harbor.yml

4. Set hostname to the current server's address and comment out the https configuration:

......
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 172.16.20.175
# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80
# https related config
#https:
  # https port for harbor, default is 443
  # port: 443
  # The path of cert and key files for nginx
  # certificate: /your/certificate/path
  # private_key: /your/private/key/path
......

5. Run the installation:

mkdir /var/log/harbor/
./install.sh

6. Verify the installation:

[root@localhost harbor]# docker ps
CONTAINER ID  IMAGE                               COMMAND                  CREATED         STATUS                             PORTS                                  NAMES
de1b702759e7  goharbor/harbor-jobservice:v2.2.4   "/harbor/entrypoint.…"   13 seconds ago  Up 9 seconds (health: starting)                                           harbor-jobservice
55b465d07157  goharbor/nginx-photon:v2.2.4        "nginx -g 'daemon of…"   13 seconds ago  Up 9 seconds (health: starting)    0.0.0.0:80->8080/tcp, :::80->8080/tcp  nginx
d52f5557fa73  goharbor/harbor-core:v2.2.4         "/harbor/entrypoint.…"   13 seconds ago  Up 10 seconds (health: starting)                                          harbor-core
4ba09aded494  goharbor/harbor-db:v2.2.4           "/docker-entrypoint.…"   13 seconds ago  Up 11 seconds (health: starting)                                          harbor-db
647f6f46e029  goharbor/harbor-portal:v2.2.4       "nginx -g 'daemon of…"   13 seconds ago  Up 11 seconds (health: starting)                                          harbor-portal
70251c4e234f  goharbor/redis-photon:v2.2.4        "redis-server /etc/r…"   13 seconds ago  Up 11 seconds (health: starting)                                          redis
21a5c408afff  goharbor/harbor-registryctl:v2.2.4  "/home/harbor/start.…"   13 seconds ago  Up 11 seconds (health: starting)                                          registryctl
b0937800f88b  goharbor/registry-photon:v2.2.4     "/home/harbor/entryp…"   13 seconds ago  Up 11 seconds (health: starting)                                          registry
d899e377e02b  goharbor/harbor-log:v2.2.4          "/bin/sh -c /usr/loc…"   13 seconds ago  Up 12 seconds (health: starting)   127.0.0.1:1514->10514/tcp              harbor-log

7. Start/stop commands for Harbor:

docker-compose down   # stop
docker-compose up -d  # start

8. Open the Harbor console at the hostname configured above, http://172.16.20.175 (default username/password: admin/Harbor12345).

IV. Kubernetes installation and configuration

1. Switch the package source:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install kubeadm, kubelet and kubectl:

yum install -y kubelet kubeadm kubectl

3. Configure the kubelet cgroup driver:

vi /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

4. Start kubelet and enable it at boot:

systemctl start kubelet && systemctl enable kubelet

5. Initialize the k8s cluster (Master only):

kubeadm init --kubernetes-version=v1.22.3 \
  --apiserver-advertise-address=172.16.20.111 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.20.0.0/16 --pod-network-cidr=10.222.0.0/16

After a successful init, create the required files:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

6. Join the cluster (Node only). On the Nodes (172.16.20.112 and 172.16.20.113), run the join command printed by the previous init step:

kubeadm join 172.16.20.111:6443 --token fgf380.einr7if1eb838mpe \
  --discovery-token-ca-cert-hash sha256:fa5a6a2ff8996b09effbf599aac70505b49f35c5bca610d6b5511886383878f7

Check the cluster state on the Master:

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   2m54s   v1.22.3
node1    NotReady   <none>                 68s     v1.22.3
node2    NotReady   <none>                 30s     v1.22.3

7. Install the network plugin (Master only):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Mirror acceleration: edit kube-flannel.yml, changing quay.io/coreos/flannel:v0.15.0 to quay.mirrors.ustc.edu.cn/coreos/flannel:v0.15.0, then install:

kubectl apply -f kube-flannel.yml

Check the cluster state again (it takes roughly one to two minutes); every STATUS should now be Ready:

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   42m   v1.22.3
node1    Ready    <none>                 40m   v1.22.3
node2    Ready    <none>                 39m   v1.22.3

8. Cluster test — deploy an nginx service with kubectl:

kubectl create deployment nginx --image=nginx --replicas=1
kubectl expose deploy nginx --port=80 --target-port=80 --type=NodePort

Check the service:

[root@master ~]# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-z5tm8   1/1     Running   0          26s
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.20.0.1      <none>        443/TCP        68m
service/nginx        NodePort    10.20.17.199   <none>        80:32605/TCP   9s

The PORT(S) column for service/nginx shows 80:32605/TCP, so open port 32605 on any of the hosts to check that nginx is running:

http://172.16.20.111:32605/
http://172.16.20.112:32605/
http://172.16.20.113:32605/

On success, the default Nginx page is displayed.

9. Install the Kubernetes Dashboard

Kubernetes can be driven entirely with the kubectl command-line tool, but it also provides a convenient management UI: with the Kubernetes Dashboard users can deploy containerized applications, monitor application state, troubleshoot, and manage all kinds of Kubernetes resources.

1) Download the installation file recommended.yaml; check https://github.com/kubernetes/dashboard/releases for the Dashboard version matching your Kubernetes version:

# download
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

2) Edit the configuration, adding type: NodePort and nodePort: 30010 under the Service:

vi recommended.yaml
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30010
......

Comment out the following tolerations, otherwise the Dashboard cannot be installed on the master server:

# Comment the following tolerations if Dashboard must not be deployed on master
#tolerations:
#  - key: node-role.kubernetes.io/master
#    effect: NoSchedule

Add nodeName: master to the Deployment so it is installed on the master server:

......
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      nodeName: master
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          imagePullPolicy: Always
......

3) Run the deployment:

kubectl apply -f recommended.yaml

4) Check the status — service/kubernetes-dashboard is running, accessible on port 30010:

[root@master ~]# kubectl get pod,svc -n kubernetes-dashboard
NAME                                            READY   STATUS              RESTARTS   AGE
pod/dashboard-metrics-scraper-c45b7869d-6k87n   0/1     ContainerCreating   0          10s
pod/kubernetes-dashboard-576cb95f94-zfvc9       0/1     ContainerCreating   0          10s
NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.20.222.83    <none>        8000/TCP        10s
service/kubernetes-dashboard        NodePort    10.20.201.182   <none>        443:30010/TCP   10s

5) Create an account for accessing the Kubernetes Dashboard:

kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin-rb --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin

6) Query the account's access token:

[root@master ~]# kubectl get secrets -n kubernetes-dashboard | grep dashboard-admin
dashboard-admin-token-84gg6   kubernetes.io/service-account-token   3   64s
[root@master ~]# kubectl describe secrets dashboard-admin-token-84gg6 -n kubernetes-dashboard
Name:         dashboard-admin-token-84gg6
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 2d93a589-6b0b-4ed6-adc3-9a2eeb5d1311
Type:         kubernetes.io/service-account-token
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImRmbVVfRy15QzdfUUF4ZmFuREZMc3dvd0IxQ3ItZm5SdHVZRVhXV3JpZGcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tODRnZzYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMmQ5M2E1ODktNmIwYi00ZWQ2LWFkYzMtOWEyZWViNWQxMzExIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.xsDBLeZdn7IO0Btpb4LlCD1RQ2VYsXXPa-bir91VXIqRrL1BewYAyFfZtxU-8peU8KebaJiRIaUeF813x6WbGG9QKynL1fTARN5XoH-arkBTVlcjHQ5GBziLDE-KU255veVqORF7J5XtB38Ke2n2pi8tnnUUS_bIJpMTF1s-hV0aLlqUzt3PauPmDshtoerz4iafWK0u9oWBASQDPPoE8IWYU1KmSkUNtoGzf0c9vpdlUw4j0UZE4-zSoMF_XkrfQDLD32LrG56Wgpr6E8SeipKRfgXvx7ExD54b8Lq9DyAltr_nQVvRicIEiQGdbeCu9dwzGyhg-cDucULTx7TUgA

7) Open the Kubernetes Dashboard in a browser — https is mandatory: https://172.16.20.111:30010. Enter the token to log in; once logged in, the operations previously done on the command line can be performed in the management UI.

V. GitLab installation and configuration

GitLab is a Git repository that can be deployed on-premises; this section covers its installation and use. During development we push code to this local repository, and Jenkins pulls the code from it to build and deploy.

1. Download the required package from https://packages.gitlab.com/gitlab/gitlab-ce/ — here the latest gitlab-ce-14.4.1-ce.0.el7.x86_64.rpm. For real projects, pick a stable version that matches your needs.

2. Click the version you want to install; the page shows the install commands, which are run as prompted:

curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | sudo bash
sudo yum install gitlab-ce-14.4.1-ce.0.el7.x86_64

3. Configure and start GitLab:

gitlab-ctl reconfigure

4. Check GitLab's status:

gitlab-ctl status

5. Set the initial login password:

cd /opt/gitlab/bin
sudo ./gitlab-rails console
# inside the console, run:
u=User.where(id:1).first
u.password='root1234'
u.password_confirmation='root1234'
u.save!
quit

6. Browse to the server address — port 80 by default, so the bare address works — and log in with the password set above: root/root1234.

7. Switch the UI language: User Settings ----> Preferences ----> Language ----> 简体中文 ----> refresh the page.

8. Common GitLab commands:

gitlab-ctl stop
gitlab-ctl start
gitlab-ctl restart

VI. Installing Jenkins + Sonar (code quality checks) with Docker

In real projects, give the SpringCloud system a dedicated ops server; do not install this on the Kubernetes servers. Install docker and docker-compose on it following the steps above, then build Jenkins and Sonar with docker-compose.

1. Create the host mount directories and open up their permissions:

mkdir -p /data/docker/ci/nexus /data/docker/ci/jenkins/lib /data/docker/ci/jenkins/home /data/docker/ci/sonarqube /data/docker/ci/postgresql
chmod -R 777 /data/docker/ci/nexus /data/docker/ci/jenkins/lib /data/docker/ci/jenkins/home /data/docker/ci/sonarqube /data/docker/ci/postgresql

2. Create the Jenkins+Sonar installation script jenkins-compose.yml. The Jenkins image here is jenkinsci/blueocean, which Docker officially recommends; in practice it can download plugins without changing the plugin mirror address, which is why this image is preferred.

version: '3'
networks:
  prodnetwork:
    driver: bridge
services:
  sonardb:
    image: postgres:12.2
    restart: always
    ports:
      - "5433:5432"
    networks:
      - prodnetwork
    volumes:
      - /data/docker/ci/postgresql:/var/lib/postgresql
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
  sonar:
    image: sonarqube:8.2-community
    restart: always
    ports:
      - "19000:9000"
      - "19092:9092"
    networks:
      - prodnetwork
    depends_on:
      - sonardb
    volumes:
      - /data/docker/ci/sonarqube/conf:/opt/sonarqube/conf
      - /data/docker/ci/sonarqube/data:/opt/sonarqube/data
      - /data/docker/ci/sonarqube/logs:/opt/sonarqube/logs
      - /data/docker/ci/sonarqube/extension:/opt/sonarqube/extensions
      - /data/docker/ci/sonarqube/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    environment:
      - TZ=Asia/Shanghai
      - SONARQUBE_JDBC_URL=jdbc:postgresql://sonardb:5432/sonar
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
  nexus:
    image: sonatype/nexus3
    restart: always
    ports:
      - "18081:8081"
    networks:
      - prodnetwork
    volumes:
      - /data/docker/ci/nexus:/nexus-data
  jenkins:
    image: jenkinsci/blueocean
    user: root
    restart: always
    ports:
      - "18080:8080"
    networks:
      - prodnetwork
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/localtime:/etc/localtime:ro
      - $HOME/.ssh:/root/.ssh
      - /data/docker/ci/jenkins/lib:/var/lib/jenkins/
      - /usr/bin/docker:/usr/bin/docker
      - /data/docker/ci/jenkins/home:/var/jenkins_home
    depends_on:
      - nexus
      - sonar
    environment:
      - NEXUS_PORT=8081
      - SONAR_PORT=9000
      - SONAR_DB_PORT=5432
    cap_add:
      - ALL

3. In the directory containing jenkins-compose.yml, run the install/start command:

docker-compose -f jenkins-compose.yml up -d

On success, the following is shown:

[+] Running 5/5
 ⠿ Network root_prodnetwork  Created  0.0s
 ⠿ Container root-sonardb-1  Started  1.0s
 ⠿ Container root-nexus-1    Started  1.0s
 ⠿ Container root-sonar-1    Started  2.1s
 ⠿ Container root-jenkins-1  Started  4.2s

4. Check how the services started:

[root@localhost ~]# docker ps
CONTAINER ID  IMAGE                               COMMAND                  CREATED        STATUS                            PORTS                                               NAMES
52779025a83e  jenkins/jenkins:lts                 "/sbin/tini -- /usr/…"   4 minutes ago  Up 3 minutes                      50000/tcp, 0.0.0.0:18080->8080/tcp, :::18080->8080/tcp  root-jenkins-1
2f5fbc25de58  sonarqube:8.2-community             "./bin/run.sh"           4 minutes ago  Restarting (0) 21 seconds ago                                                         root-sonar-1
4248a8ba71d8  sonatype/nexus3                     "sh -c ${SONATYPE_DI…"   4 minutes ago  Up 4 minutes                      0.0.0.0:18081->8081/tcp, :::18081->8081/tcp         root-nexus-1
719623c4206b  postgres:12.2                       "docker-entrypoint.s…"   4 minutes ago  Up 4 minutes                      0.0.0.0:5433->5432/tcp, :::5433->5432/tcp           root-sonardb-1
2b6852a57cc2  goharbor/harbor-jobservice:v2.2.4   "/harbor/entrypoint.…"   5 days ago     Up 29 seconds (health: starting)                                                      harbor-jobservice
ebf2dea994fb  goharbor/nginx-photon:v2.2.4        "nginx -g 'daemon of…"   5 days ago     Restarting (1) 46 seconds ago                                                         nginx
adfaa287f23b  goharbor/harbor-registryctl:v2.2.4  "/home/harbor/start.…"   5 days ago     Up 7 minutes (healthy)                                                                registryctl
8e5bcca3aaa1  goharbor/harbor-db:v2.2.4           "/docker-entrypoint.…"   5 days ago     Up 7 minutes (healthy)                                                                harbor-db
ebe845e020dc  goharbor/harbor-portal:v2.2.4       "nginx -g 'daemon of…"   5 days ago     Up 7 minutes (healthy)                                                                harbor-portal
68263dea2cfc  goharbor/harbor-log:v2.2.4          "/bin/sh -c /usr/loc…"   5 days ago     Up 7 minutes (healthy)            127.0.0.1:1514->10514/tcp                           harbor-log

Jenkins is mapped to port 18080, but SonarQube did not start. Its log complains about missing permissions on the sonarqube directories — the log shows container paths, but it is actually the host directories that lack permissions, so grant them on the host:

chmod 777 /data/docker/ci/sonarqube/logs
chmod 777 /data/docker/ci/sonarqube/bundled-plugins
chmod 777 /data/docker/ci/sonarqube/conf
chmod 777 /data/docker/ci/sonarqube/data
chmod 777 /data/docker/ci/sonarqube/extension

Restart the stack:

docker-compose -f jenkins-compose.yml restart

Checking the services again, Jenkins is mapped to port 18080 and SonarQube to port 19000, and both consoles can be opened in the browser.

5. Jenkins login initialization. The login page states that the default password is at /var/jenkins_home/secrets/initialAdminPassword — that is the path inside the container; on the host it maps to /data/docker/ci/jenkins/home/secrets/initialAdminPassword. Open that file and enter the password to reach the Jenkins management console.

6. Choose "Install suggested plugins", then follow the prompts step by step until the management console appears.
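The mount-directory preparation and permission fixes above can be collected into one small script so new volumes are handled in a single place. This is a sketch under an assumed BASE prefix (defaulting to /tmp/ci-demo for a safe dry run — the real tree lives directly under /data):

```shell
#!/bin/sh
# Create the host directories mounted by jenkins-compose.yml and open up their
# permissions, as the Jenkins/SonarQube containers require.
# BASE is a hypothetical prefix for dry runs; use BASE=/ for the real layout.
BASE="${BASE:-/tmp/ci-demo}"

for d in nexus jenkins/lib jenkins/home \
         sonarqube/conf sonarqube/data sonarqube/logs \
         sonarqube/extension sonarqube/bundled-plugins postgresql; do
  mkdir -p "$BASE/data/docker/ci/$d"
  chmod -R 777 "$BASE/data/docker/ci/$d"
done

ls -ld "$BASE/data/docker/ci/jenkins/home"
```

Running this before `docker-compose -f jenkins-compose.yml up -d` avoids the SonarQube restart loop caused by missing host-side permissions.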

SpringCloud Microservices in Practice — Building an Enterprise Development Framework (34): Packaging and Deploying a Microservice Cluster with SpringCloud + Docker + k8s — Packaging Configuration

A SpringCloud microservice system contains multiple runnable SpringBoot applications. With a single application, packaging and deploying a release is relatively simple; once several applications have to be released together, the old single-application deployment approach becomes complex and hard to control. So we look for a simpler deployment method that solves automated release, automated deployment and microservice monitoring.

We use the solution currently common in the industry — Jenkins + GitLab + Maven + Docker + Kubernetes — to implement continuously automated deployment of microservices. Starting from the project's Maven packaging configuration and Dockerfile, through to the Kubernetes configuration, the following explains how to achieve this for SpringCloud microservices.

1. Loading different bootstrap.yml configurations per environment

When packaging the project, the system configuration must distinguish the dev, test and production environments. In a plain SpringBoot project we would use the spring.profiles.active property together with application.yml, application-dev.yml, application-test.yml, application-sit.yml, application-uat.yml and application-prod.yml. In SpringCloud we use the Nacos registry, and Nacos Config by default reads bootstrap.yml; if the Nacos Config settings are written into application.yml, the project keeps failing at startup. SpringCloud loads configuration files in this order:

bootstrap.yml (bootstrap.properties) is loaded first, during the bootstrap phase of the application context; it can define parameters used by application.yml and is loaded by the parent Spring ApplicationContext.
application.yml (application.properties) is loaded afterwards and configures the parameters used by the individual modules.

So in a SpringCloud project we distinguish environments with bootstrap.yml, bootstrap-dev.yml and so on. Some frameworks put everything into a single yml file, with the different settings under different spring.profiles.active sections, similar to:

spring:
  profiles: dev
dev-setting: dev-value
---
spring:
  profiles: test
test-setting: test-value

In practice, however, the dev and test configuration changes frequently, while the production configuration rarely does. With several developers, someone may accidentally touch the production settings; even when nothing breaks, everyone edits the file nervously, afraid of changing the wrong line. Once the environments are split into separate files, dev and test changes can never affect the production file — exactly the result we want. So we split the configuration into bootstrap.yml, bootstrap-dev.yml, bootstrap-test.yml and bootstrap-prod.yml:

<!-- bootstrap.yml -->
server:
  port: 8001
spring:
  profiles:
    active: @spring.profiles.active@
  application:
    name: @artifactId@
  cloud:
    nacos:
      discovery:
        server-addr: ${spring.nacos.addr}
      config:
        server-addr: ${spring.nacos.addr}
        file-extension: yaml
        prefix: ${spring.nacos.config.prefix}
        group: ${spring.nacos.config.group}
        enabled: true

<!-- bootstrap-dev.yml -->
spring:
  profiles: dev
  nacos:
    addr: 127.0.0.1:8848
    config:
      prefix: gitegg-cloud-config
      group: GITEGG_CLOUD

<!-- bootstrap-test.yml -->
spring:
  profiles: test
  nacos:
    addr: test-address:8848
    config:
      prefix: gitegg-cloud-config
      group: GITEGG_CLOUD

<!-- bootstrap-prod.yml -->
spring:
  profiles: prod
  nacos:
    addr: prod-address:8848
    config:
      prefix: gitegg-cloud-config
      group: GITEGG_CLOUD

The configuration above achieves per-environment packaging with different configuration files, but in practice we have many microservices: changing the Nacos settings would mean editing every microservice's configuration. A batch replace in the IDE works, but it does not feel like a good approach. Ideally:

- all microservices read their configuration from one unified place by default;
- when a particular microservice needs special settings, only its own configuration file is changed.

To implement this, we move the Nacos settings into Maven profiles; the bootstrap yml of each environment then reads the values for its environment. The modified configuration:

<!-- bootstrap.yml -->
server:
  port: 8001
spring:
  profiles:
    active: @spring.profiles.active@
  application:
    name: @artifactId@
  cloud:
    nacos:
      discovery:
        server-addr: ${spring.nacos.addr}
      config:
        server-addr: ${spring.nacos.addr}
        file-extension: yaml
        prefix: ${spring.nacos.config.prefix}
        group: ${spring.nacos.config.group}
        enabled: true

<!-- bootstrap-dev.yml -->
spring:
  profiles: dev
  nacos:
    addr: @nacos.addr@
    config:
      prefix: @nacos.config.prefix@
      group: @nacos.config.group@

<!-- bootstrap-test.yml -->
spring:
  profiles: test
  nacos:
    addr: @nacos.addr@
    config:
      prefix: @nacos.config.prefix@
      group: @nacos.config.group@

<!-- bootstrap-prod.yml -->
spring:
  profiles: prod
  nacos:
    addr: @nacos.addr@
    config:
      prefix: @nacos.config.prefix@
      group: @nacos.config.group@

<!-- pom.xml -->
<profiles>
  <profile>
    <activation>
      <!-- dev is the default packaging profile -->
      <activeByDefault>true</activeByDefault>
    </activation>
    <id>dev</id>
    <properties>
      <spring.profiles.active>dev</spring.profiles.active>
      <nacos.addr>127.0.0.1:8848</nacos.addr>
      <nacos.config.prefix>gitegg-cloud-config</nacos.config.prefix>
      <nacos.config.group>GITEGG_CLOUD</nacos.config.group>
    </properties>
  </profile>
  <profile>
    <id>test</id>
    <properties>
      <spring.profiles.active>test</spring.profiles.active>
      <nacos.addr>test-address:8848</nacos.addr>
      <nacos.config.prefix>gitegg-cloud-config</nacos.config.prefix>
      <nacos.config.group>GITEGG_CLOUD</nacos.config.group>
    </properties>
  </profile>
  <profile>
    <id>prod</id>
    <properties>
      <spring.profiles.active>prod</spring.profiles.active>
      <nacos.addr>prod-address:8848</nacos.addr>
      <nacos.config.prefix>gitegg-cloud-config</nacos.config.prefix>
      <nacos.config.group>GITEGG_CLOUD</nacos.config.group>
    </properties>
  </profile>
</profiles>

In this way, a change in one place — the profiles in pom.xml — updates the Nacos configuration read by all microservices at the same time.

After this change you may wonder: the three files bootstrap-dev.yml, bootstrap-test.yml and bootstrap-prod.yml now have essentially identical contents, differing only in the profiles value, so everything could be written in a single bootstrap.yml and distinguished through pom.xml alone. The point of keeping them separate is future extensibility: environment-specific settings can be added later to one environment without touching the others.

2. Maven packaging configuration

Before writing pom.xml, a quick comparison of the common Maven packaging plugins and how they relate:

maven-compiler-plugin: adds custom parameters to the compile phase, e.g. the JDK version of the project sources, the target JDK version, and the project encoding.
maven-jar-plugin: packages the Maven project into a jar and provides manifest configuration; the jar normally contains the .class files and the contents of the resources directory — dependency jars are not packed into a runnable jar.
spring-boot-maven-plugin: in Maven's package lifecycle phase it repackages the artifact produced by mvn package into an executable archive, renaming the original to *.original. Its main job is to bundle the SpringBoot project code plus all dependency jars into an executable jar or war that runs directly on a JRE.

Because maven-jar-plugin keeps the packaged jar and the lib directory side by side rather than packing them into one archive, its jars are very small. spring-boot-maven-plugin repackages the maven-jar-plugin jar together with the dependency libraries into one runnable jar, which is large. If network transfer during system upgrades matters, maven-jar-plugin is ideal: when the dependency libraries have not changed, only a tiny jar needs to be shipped. Since this is an enterprise microservice framework, we care less about transfer size than about upgrade stability — a dependency-version change made during development must not ripple into every microservice when the production dependencies are upgraded — so we package with spring-boot-maven-plugin.

The parent pom.xml of the GitEgg project is configured as follows:

<properties>
  <!-- JDK version 1.8 -->
  <java.version>1.8</java.version>
  <!-- maven-compiler-plugin version, for Java compilation -->
  <maven.plugin.version>3.8.1</maven.plugin.version>
  <!-- compile with UTF-8 encoding -->
  <maven.compiler.encoding>UTF-8</maven.compiler.encoding>
</properties>
<build>
  <finalName>${project.name}</finalName>
  <resources>
    <!-- enable per-environment filtering of resources -->
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
      <excludes>
        <exclude>**/*.jks</exclude>
      </excludes>
    </resource>
    <!-- keep .jks files from being filtered out -->
    <resource>
      <directory>src/main/resources</directory>
      <filtering>false</filtering>
      <includes>
        <include>**/*.jks</include>
      </includes>
    </resource>
    <resource>
      <directory>src/main/java</directory>
      <includes>
        <include>**/*.xml</include>
      </includes>
    </resource>
  </resources>
  <pluginManagement>
    <plugins>
      <!-- compile-phase customization: source/target JDK version and project encoding -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>${maven.plugin.version}</version>
        <configuration>
          <source>${java.version}</source>
          <target>${java.version}</target>
          <encoding>${maven.compiler.encoding}</encoding>
          <compilerArgs>
            <arg>-parameters</arg>
          </compilerArgs>
        </configuration>
      </plugin>
      <!-- repackage the Spring Boot application into an executable jar/war that runs the usual way -->
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <version>${spring.boot.version}</version>
        <configuration>
          <fork>true</fork>
          <finalName>${project.build.finalName}</finalName>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>repackage</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
    </plugin>
  </plugins>
</build>
<profiles>
  <profile>
    <activation>
      <!-- dev is the default packaging profile -->
      <activeByDefault>true</activeByDefault>
    </activation>
    <id>dev</id>
    <properties>
      <spring.profiles.active>dev</spring.profiles.active>
      <nacos.addr>127.0.0.1:8848</nacos.addr>
      <nacos.config.prefix>gitegg-cloud-config</nacos.config.prefix>
      <nacos.config.group>GITEGG_CLOUD</nacos.config.group>
    </properties>
  </profile>
  <profile>
    <id>test</id>
    <properties>
      <spring.profiles.active>test</spring.profiles.active>
      <nacos.addr>127.0.0.1:8848</nacos.addr>
      <nacos.config.prefix>gitegg-cloud-config</nacos.config.prefix>
      <nacos.config.group>GITEGG_CLOUD</nacos.config.group>
    </properties>
  </profile>
  <profile>
    <id>prod</id>
    <properties>
      <spring.profiles.active>prod</spring.profiles.active>
      <nacos.addr>127.0.0.1:8848</nacos.addr>
      <nacos.config.prefix>gitegg-cloud-config</nacos.config.prefix>
      <nacos.config.group>GITEGG_CLOUD</nacos.config.group>
    </properties>
  </profile>
</profiles>
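The @…@ tokens above are replaced by Maven resource filtering at package time with the active profile's properties. The effect of that substitution can be sketched with plain sed — the file path and the values here (dev, gitegg-oauth) are illustrative, not the real build:

```shell
# Simulate what filtering does to bootstrap.yml when packaging with -Pdev:
# each @token@ is replaced by the corresponding property of the active profile.
cat > /tmp/bootstrap.yml <<'EOF'
spring:
  profiles:
    active: @spring.profiles.active@
  application:
    name: @artifactId@
EOF

sed 's/@spring.profiles.active@/dev/;s/@artifactId@/gitegg-oauth/' \
  /tmp/bootstrap.yml > /tmp/bootstrap.filtered.yml

cat /tmp/bootstrap.filtered.yml
```

The real replacement is done by Maven itself (not sed) and only when resources -> resource -> filtering is set to true, which is why the filtering flag matters in the build section above.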
With the Maven configuration above, a normal runnable SpringBoot package can be built. In the usual case — no docker and no k8s cluster — Jenkins could already package and deploy to test or production with one click.

The following describes, step by step, how to package the microservices as Docker images, publish them to the private Harbor registry, and have k8s pull the Docker images from Harbor for distributed deployment.

Docker: an open-source application container engine that packages an application and its dependencies into a portable image, publishable to any machine running a popular Linux or Windows operating system.
Harbor: unlike Docker's official public registry, a private Docker registry that can be deployed on-premises.
Kubernetes (k8s): an open-source system for automatically deploying, scaling and managing containerized applications; it can be freely deployed on-premises, in a private, hybrid or public cloud.

3. Docker packaging configuration

Several Docker packaging plugins are documented online, most often Spotify's. Spotify no longer recommends its docker-maven-plugin, but rather its newer dockerfile-maven-plugin — which itself has not been updated for a long time and has limitations, e.g. image build, tag and push only work against the local Docker. Searching around, Google's open-source Jib plugin turns out to be more capable: it needs no Dockerfile and no local Docker installation to build Docker images, and it is actively maintained, so we choose it as our Docker packaging plugin.

SpringBoot packaging bundles all dependencies and resources into a single Fat Jar, often hundreds of megabytes in size. Under a constrained network, a release transfers slowly; and after repeated releases, a lot of disk space is consumed. In a microservice architecture there is a whole pile of such Fat Jars. We can instead exploit Docker's layered image structure: pack the application's common dependencies into base image layers and, when releasing, publish only the layer containing the modified business code. Below is how Jib (the jib-maven-plugin) packages a SpringBoot application into layered Docker images, making full use of Docker's layer-reuse mechanism to solve the bandwidth and disk-space problems.

The three build goals of jib-maven-plugin:

buildTar: local build — produces the image as a tar file under the project's target directory, no Docker daemon required
dockerBuild: stores the built image in the local environment's Docker daemon
build: pushes the built image to a remote registry, either the official registry or a private Harbor

Configure jib-maven-plugin in the GitEgg parent pom.xml:

<properties>
  ......
  <!-- jib-maven-plugin version, for Docker packaging -->
  <jib.maven.plugin.version>3.1.4</jib.maven.plugin.version>
  ......
</properties>
<pluginManagement>
  <plugins>
    ......
    <!-- Docker packaging plugin -->
    <plugin>
      <groupId>com.google.cloud.tools</groupId>
      <artifactId>jib-maven-plugin</artifactId>
      <version>${jib.maven.plugin.version}</version>
      <!-- bound to Maven's install lifecycle; without https, sendCredentialsOverHttp=true must be set -->
      <executions>
        <execution>
          <phase>install</phase>
          <goals>
            <goal>build</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <!-- allow non-https registries -->
        <allowInsecureRegistries>true</allowInsecureRegistries>
        <!-- equivalent to FROM in a Dockerfile -->
        <from>
          <image>openjdk:8-jdk-alpine</image>
        </from>
        <to>
          <image>${docker.harbor.addr}/${docker.harbor.project}/${project.artifactId}:${project.version}</image>
          <auth>
            <username>${docker.harbor.username}</username>
            <password>${docker.harbor.password}</password>
          </auth>
        </to>
        <container>
          <!-- JVM memory flags -->
          <jvmFlags>
            <jvmFlag>-Xms512m</jvmFlag>
            <jvmFlag>-Xmx4g</jvmFlag>
          </jvmFlags>
          <volumes>/giteggData</volumes>
          <workingDirectory>/gitegg</workingDirectory>
          <environment>
            <TZ>Asia/Shanghai</TZ>
          </environment>
          <!-- keep the image creation time in sync with the system time -->
          <creationTime>USE_CURRENT_TIMESTAMP</creationTime>
          <format>OCI</format>
        </container>
      </configuration>
    </plugin>
  </plugins>
</pluginManagement>
......
```xml
<profiles>
    <profile>
        <activation>
            <!-- dev is the default packaging profile -->
            <activeByDefault>true</activeByDefault>
        </activation>
        <id>dev</id>
        <properties>
            <spring.profiles.active>dev</spring.profiles.active>
            <nacos.addr>172.16.20.188:8848</nacos.addr>
            <nacos.config.prefix>gitegg-cloud-config</nacos.config.prefix>
            <nacos.config.group>GITEGG_CLOUD</nacos.config.group>
            <docker.harbor.addr>172.16.20.175</docker.harbor.addr>
            <docker.harbor.project>gitegg</docker.harbor.project>
            <docker.harbor.username>robot$gitegg</docker.harbor.username>
            <docker.harbor.password>Jqazyv7vvZiL6TXuNcv7TrZeRdL8U9n3</docker.harbor.password>
        </properties>
    </profile>
    <profile>
        <id>test</id>
        <properties>
            <spring.profiles.active>test</spring.profiles.active>
            <nacos.addr>127.0.0.1:8848</nacos.addr>
            <nacos.config.prefix>gitegg-cloud-config</nacos.config.prefix>
            <nacos.config.group>GITEGG_CLOUD</nacos.config.group>
            <docker.harbor.addr>172.16.20.175</docker.harbor.addr>
            <docker.harbor.project>gitegg</docker.harbor.project>
            <docker.harbor.username>robot$gitegg</docker.harbor.username>
            <docker.harbor.password>Jqazyv7vvZiL6TXuNcv7TrZeRdL8U9n3</docker.harbor.password>
        </properties>
    </profile>
    <profile>
        <id>prod</id>
        <properties>
            <spring.profiles.active>prod</spring.profiles.active>
            <nacos.addr>127.0.0.1:8848</nacos.addr>
            <nacos.config.prefix>gitegg-cloud-config</nacos.config.prefix>
            <nacos.config.group>GITEGG_CLOUD</nacos.config.group>
            <docker.harbor.addr>172.16.20.175</docker.harbor.addr>
            <docker.harbor.project>gitegg</docker.harbor.project>
            <docker.harbor.username>robot$gitegg</docker.harbor.username>
            <docker.harbor.password>Jqazyv7vvZiL6TXuNcv7TrZeRdL8U9n3</docker.harbor.password>
        </properties>
    </profile>
</profiles>
```
Add the plugin reference to the pom.xml of every module that should be packaged as a Docker image:
```xml
<build>
    <plugins>
        <plugin>
            <groupId>com.google.cloud.tools</groupId>
            <artifactId>jib-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
```
For modules that should not be packaged as Docker images, set skip=true:
```xml
<build>
    <plugins>
        <plugin>
            <groupId>com.google.cloud.tools</groupId>
            <artifactId>jib-maven-plugin</artifactId>
            <configuration>
                <!-- This module builds a plain jar, not an executable image -->
                <skip>true</skip>
            </configuration>
        </plugin>
    </plugins>
</build>
```
Build the image as a local tar archive:
```shell
clean package -Ptest jib:buildTar -f pom.xml
```
Build the image into the local Docker daemon:
```shell
clean package -Ptest jib:dockerBuild -f pom.xml
```
Build and push the image to the remote registry:
```shell
clean package -Ptest jib:build -Djib.httpTimeout=200000 -DsendCredentialsOverHttp=true -f pom.xml
```
The jib-maven-plugin build can be bound to the Maven lifecycle; in the example above it is bound to the install phase. For security reasons Jib refuses to send the username and password over plain HTTP by default, so when the registry is not served over HTTPS you must set sendCredentialsOverHttp=true.

Common problem: `@spring.profiles.active@` cannot be resolved in bootstrap.yml, with the error `found character '@' that cannot start any token`. Solution: if the project does not inherit from spring-boot-starter-parent, resources → resource → filtering must be set to true for the `@...@` placeholders to be substituted:
```xml
<build>
    <resources>
        <resource>
            <directory>src/main/resources</directory>
            <filtering>true</filtering>
        </resource>
    </resources>
</build>
```
GitEgg-Platform is a platform-level jar and does not need a Docker image of its own; GitEgg-Cloud pulls in the GitEgg-Platform jars when it is packaged, so the configuration above only needs to be applied in the GitEgg-Cloud project.

Kubernetes deployment YAML: the Jenkins script first checks whether the sub-project provides its own deployment YAML; if so it is used, otherwise the YAML in the root directory is used.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {APP_NAME}-deployment
  labels:
    app: {APP_NAME}
spec:
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: {APP_NAME}
  template:
    metadata:
      labels:
        app: {APP_NAME}
    spec:
      hostNetwork: true
      containers:
        - name: {APP_NAME}
          image: {IMAGE_URL}/{IMAGE_PROGECT}/{APP_NAME}:{IMAGE_TAG}
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 300m
              memory: 500Mi
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: {APP_PORT}
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: {SPRING_PROFILE}
      imagePullSecrets:
        - name: harbor-key
---
kind: Service
apiVersion: v1
metadata:
  name: {APP_NAME}-service
  labels:
    app: {APP_NAME}
spec:
  selector:
    app: {APP_NAME}
  ports:
    - protocol: TCP
      port: {APP_PORT}
      targetPort: {APP_PORT}
```
Docker install and start commands:
```shell
docker pull 172.16.20.175/gitegg/gitegg-service-system:1.0-SNAPSHOT
# --restart=always restarts the container automatically; /opt/gitegg is where the jar runs
docker run -d imageId --restart=always --name=gitegg-service-system -p 8006:8006 /opt/gitegg
# check whether the container started
docker ps
# tail the logs
docker logs --tail 100 -f gitegg-service-system
```
Start with docker-compose:
```shell
docker-compose up -d
```
Using the container network: use this when the Nacos registry itself is installed via docker-compose; the address registered with Nacos is the container-internal IP.
```yaml
......
    networks:
      - giteggNetworks
......
networks:
  giteggNetworks:
    driver: bridge
......
```
Full YAML:
```yaml
version: '3'
services:
  gitegg-service-system:
    image: 172.16.20.175/gitegg/gitegg-service-system:1.0-SNAPSHOT
    container_name: gitegg-service-system
    ports:
      - 8001:8001
    volumes:
      - "/data/gitegg/gateway/gitegg-service-system.jar:/app.jar"
      - "/data/gitegg/gateway/logs:/logs"
    logging:
      options:
        max-size: "100m"
    networks:
      - giteggNetworks
  gitegg-service-base:
    image: 172.16.20.175/gitegg/gitegg-service-base:1.0-SNAPSHOT
    container_name: gitegg-service-base
    ports:
      - 8002:8002
    volumes:
      - "/data/gitegg/base/gitegg-service-base.jar:/app.jar"
      - "/data/gitegg/base/logs:/logs"
    networks:
      - giteggNetworks
  gitegg-oauth:
    image: 172.16.20.175/gitegg/gitegg-oauth:1.0-SNAPSHOT
    container_name: gitegg-oauth
    ports:
      - 8003:8003
    volumes:
      - "/data/gitegg/oauth/gitegg-oauth.jar:/app.jar"
      - "/data/gitegg/oauth/logs:/logs"
    networks:
      - giteggNetworks
  gitegg-service-extension:
    image: 172.16.20.175/gitegg/gitegg-service-extension:1.0-SNAPSHOT
    container_name: gitegg-service-extension
    ports:
      - 8005:8005
    volumes:
      - "/data/gitegg/extension/gitegg-service-extension.jar:/app.jar"
      - "/data/gitegg/extension/logs:/logs"
    networks:
      - giteggNetworks
  gitegg-code-generator:
    image: 172.16.20.175/gitegg/gitegg-code-generator:1.0-SNAPSHOT
    container_name: gitegg-code-generator
    ports:
      - 8006:8006
    volumes:
      - "/data/gitegg/generator/gitegg-code-generator:/app.jar"
      - "/data/gitegg/generator/logs:/logs"
    networks:
      - giteggNetworks
  gitegg-gateway:
    image: 172.16.20.175/gitegg/gitegg-gateway:1.0-SNAPSHOT
    container_name: gitegg-gateway
    ports:
      - 801:80
    volumes:
      - "/data/gitegg/gateway/gitegg-gateway:/app.jar"
      - "/data/gitegg/gateway/logs:/logs"
    networks:
      - giteggNetworks
networks:
  giteggNetworks:
    driver: bridge
```
Using the host network (cannot be combined with the container network above): use this when the Nacos registry is deployed separately; the address Nacos sees is the Docker host's IP.
```yaml
......
    network_mode: "host"
......
```
Full YAML. Once `network_mode: "host"` is used, `ports` mappings can no longer be declared:
```yaml
version: '3'
services:
  gitegg-service-system:
    image: 172.16.20.175/gitegg/gitegg-service-system:1.0-SNAPSHOT
    container_name: gitegg-service-system
    network_mode: "host"
    volumes:
      - "/data/gitegg/gateway/gitegg-service-system.jar:/app.jar"
      - "/data/gitegg/gateway/logs:/logs"
    logging:
      options:
        max-size: "100m"
  gitegg-service-base:
    image: 172.16.20.175/gitegg/gitegg-service-base:1.0-SNAPSHOT
    container_name: gitegg-service-base
    network_mode: "host"
    volumes:
      - "/data/gitegg/base/gitegg-service-base.jar:/app.jar"
      - "/data/gitegg/base/logs:/logs"
  gitegg-oauth:
    image: 172.16.20.175/gitegg/gitegg-oauth:1.0-SNAPSHOT
    container_name: gitegg-oauth
    network_mode: "host"
    volumes:
      - "/data/gitegg/oauth/gitegg-oauth.jar:/app.jar"
      - "/data/gitegg/oauth/logs:/logs"
  gitegg-service-extension:
    image: 172.16.20.175/gitegg/gitegg-service-extension:1.0-SNAPSHOT
    container_name: gitegg-service-extension
    network_mode: "host"
    volumes:
      - "/data/gitegg/extension/gitegg-service-extension.jar:/app.jar"
      - "/data/gitegg/extension/logs:/logs"
  gitegg-code-generator:
    image: 172.16.20.175/gitegg/gitegg-code-generator:1.0-SNAPSHOT
    container_name: gitegg-code-generator
    network_mode: "host"
    volumes:
      - "/data/gitegg/generator/gitegg-code-generator:/app.jar"
      - "/data/gitegg/generator/logs:/logs"
  gitegg-gateway:
    image: 172.16.20.175/gitegg/gitegg-gateway:1.0-SNAPSHOT
    container_name: gitegg-gateway
    network_mode: "host"
    volumes:
      - "/data/gitegg/gateway/gitegg-gateway:/app.jar"
      - "/data/gitegg/gateway/logs:/logs"
```
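One addition not covered above but useful with either networking mode: a container healthcheck lets `docker ps` and restart policies see whether the Spring Boot application inside the container is actually serving traffic, not just whether the process exists. A sketch for one service, assuming spring-boot-starter-actuator is on the classpath and the health endpoint path is the default (both are assumptions, not part of the original compose files):

```yaml
  gitegg-gateway:
    # image/volumes/networks as in the compose file above
    healthcheck:
      # assumes the actuator /actuator/health endpoint is exposed on port 80
      test: ["CMD", "wget", "-q", "-O", "-", "http://127.0.0.1:80/actuator/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```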

SpringCloud Microservices in Practice — Building an Enterprise-Grade Development Framework (33): Integrating SkyWalking for Distributed Tracing

SkyWalking was open-sourced by Wu Sheng (formerly an engineer at OneAPM) and donated to the Apache incubator. It draws on the designs of Zipkin, Pinpoint, and CAT, supports non-invasive instrumentation, and is an application performance monitoring system built on distributed tracing. The community also spawned an organization called OpenTracing, which promotes specifications and standards for call-chain monitoring.

1. Download SkyWalking from https://skywalking.apache.org/downloads/#download-the-latest-versions and pick the release you need; here we take the latest release, v8.4.0 for H2/MySQL/TiDB/InfluxDB/ElasticSearch 7.

2. Download Elasticsearch from https://www.elastic.co/cn/downloads/elasticsearch . Since the SkyWalking build chosen above targets ElasticSearch 7, download Elasticsearch 7.12.0.

3. Upload apache-skywalking-apm-es7-8.4.0.tar.gz and elasticsearch-7.12.0-linux-x86_64.tar.gz to the Linux server and unpack each:
```shell
tar -zxvf apache-skywalking-apm-es7-8.4.0.tar.gz
tar -zxvf elasticsearch-7.12.0-linux-x86_64.tar.gz
```
4. Edit elasticsearch.yml under /elasticsearch-7.12.0/config:
```yaml
cluster.name: CollectorDBCluster
node.name: CollectorDBCluster-1
# set this to the server's actual IP address
network.host: 127.0.0.1
http.port: 9200
cluster.initial_master_nodes: ["CollectorDBCluster-1"]
```
5. For security reasons Elasticsearch refuses to start as root by default, so create a dedicated es user:
```shell
# create the es group and user
groupadd es
useradd es -g es
# set the login password; here we use "Skywalking"
passwd es
# give the es user ownership of the elasticsearch-7.12.0 directory
chown -R es:es elasticsearch-7.12.0
# switch to the es user
su es
# in elasticsearch-7.12.0/bin, start the server; -d runs it in the background
./elasticsearch -d
```
6. Visit http://127.0.0.1:9200 to check whether Elasticsearch started successfully:
```json
{
  "name" : "CollectorDBCluster-1",
  "cluster_name" : "CollectorDBCluster",
  "cluster_uuid" : "J2LyQWfdTeeBN0dcdWpgqw",
  "version" : {
    "number" : "7.12.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "78722783c38caa25a70982b5b042074cde5d3b3a",
    "build_date" : "2021-03-18T06:17:15.410153305Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```
7. Edit apache-skywalking-apm-bin-es7\config\application.yml:
```yaml
storage:
  # the default storage is h2; switch it to elasticsearch7
  selector: ${SW_STORAGE:elasticsearch7}
  elasticsearch7:
    nameSpace: ${SW_NAMESPACE:"CollectorDBCluster"}
    # replace localhost with the address where Elasticsearch 7 is installed
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:127.0.0.1:9200}
```
8. Switch to the apache-skywalking-apm-bin-es7\bin directory and start the services:
```shell
# start the OAP server and the web UI separately:
./oapService.sh
./webappService.sh
# or start both at once:
./startup.sh
```
9. Visit http://127.0.0.1:8080/ to check that SkyWalking started successfully.

10. With SkyWalking running, in the development environment we attach the agent to each microservice through IDEA VM options, setting -Dskywalking.agent.service_name to the name of the microservice:
```
-javaagent:D:\DevTools\Skywalking\agent\skywalking-agent.jar
-Dskywalking.agent.service_name=gitegg-gateway
-Dskywalking.collector.backend_service=127.0.0.1:11800
```
11. Start the microservices after configuration, open http://127.0.0.1:8080 , and click the topology view to see the relationship topology and traces of the whole microservice system.
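The per-service agent flags in step 10 are just string-valued JVM options, so when many services need them it can help to assemble the option list from the service name instead of copy-pasting. A minimal illustrative sketch; the `agentArgs` helper and the paths passed to it are hypothetical, not part of SkyWalking or the framework:

```java
import java.util.Arrays;
import java.util.List;

public class SkywalkingArgs {

    /** Builds the JVM options that attach the SkyWalking agent to one service. */
    static List<String> agentArgs(String agentJar, String serviceName, String backend) {
        return Arrays.asList(
                "-javaagent:" + agentJar,
                "-Dskywalking.agent.service_name=" + serviceName,
                "-Dskywalking.collector.backend_service=" + backend);
    }

    public static void main(String[] args) {
        List<String> opts = agentArgs(
                "D:\\DevTools\\Skywalking\\agent\\skywalking-agent.jar",
                "gitegg-gateway", "127.0.0.1:11800");
        // join into the single VM-options line IDEA expects
        System.out.println(String.join(" ", opts));
    }
}
```

The same helper could feed a Jenkins pipeline that injects the flags per deployment instead of hard-coding them in each run configuration.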

SpringCloud Microservices in Practice — Building an Enterprise-Grade Development Framework (32): Code Generator Configuration Guide

1. Creating a data source configuration
Because multiple data sources must be supported — the code generator is a generic module and may later generate code for other projects — it does not read the data sources configured for the system itself; users maintain their own. Parameters for a new data source:
Data source name: a name used to find and distinguish the data source.
Connection URL: the connection string, including database type and address, e.g. jdbc:mysql://127.0.0.1/gitegg_cloud?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
Username: the database login user.
Password: the database login password.
Driver: the JDBC driver class, e.g. com.mysql.jdbc.Driver (MySQL 5) or com.mysql.cj.jdbc.Driver (MySQL 8).
Database type: pick the matching type; new types can be added to the generator's data dictionary.
Comments: free-form remarks.

2. Creating the business data dictionary
The generated CRUD screens contain dropdowns, radio buttons, multi-selects, and similar base data. Plan these ahead of time according to the tables you will generate code for, and create the dictionary entries in the business dictionary first. In the custom-type table, clicking a row selects it, and the dictionary-value list for that entry appears on the right. (figure: business dictionary)

3. Base configuration for code generation
The base configuration is also a kind of data dictionary, but one used only by the code generation module: the database types, form presentation types, tree types, and so on selected in the UI all come from here. Add or modify entries as your needs evolve. (figure: base configuration)

4. Validation rule configuration
During business development, both the UI and the APIs validate field length, range, and type. This screen configures the regular expressions for field validation; at generation time, validation methods are added to both the front-end and back-end code. (figure: regex validation rules)

5. Code configuration (the core)
Code configuration is where generation actually happens; the previous screens are all preparation. This module configures the business tables and finally generates the code.
1) Create a code configuration (figure: new code configuration). Parameters:
Data source: the database configured above to generate code against.
Module name: used for the menu name, system messages, etc.
Module code: the generator creates a directory with this name to hold the module's code.
Service name: the microservice's name as registered in Nacos, taken from the artifactId in the service's pom.xml. It is prepended to request URLs at generation time, so make sure it is correct or the backend service cannot be reached.
Table name: the table to generate code for.
Table alias: used as the table alias in the mapper query statements for multi-table code.
Table prefix: by our convention, t_ marks a table, the next underscore-delimited segment is the subsystem, and the rest is the module name; the prefix is stripped at generation time so only the module name remains as the entity class name.
Parent package: the package path the generated module code is placed under.
Form type: how add/edit/view forms are presented — modal dialog, new page, or slide-out drawer.
Form columns: how many columns of fields per form row.
Data display: how query results are shown — table, tree, etc.
Left tree type: when the display includes a left-hand tree, the data type of that tree.
Controller request path: the @RequestMapping value in the generated Controller, i.e. the module's request path.
Backend code path: the storage path for backend code, up to the microservice root (the directory above src; do not include src or anything below it).
Frontend code path: the storage path for frontend code, up to the frontend project root.
Custom page directory: frontend pages default to the views directory, in two levels derived from the service and module code; set this to add one more level of separation if needed.
Generation type: generate everything, backend only, or frontend only (e.g. to regenerate just the pages).
Status handling: status is a common field; if the target table has one, choose whether to generate status-related operations.
Export support / Import support: whether the module gets export and import functions.
Join type: whether the module operates on a single table or multiple joined tables.
Query reuse: list queries (paged or not) and single-record queries can share one SQL statement; for performance you can choose between a shared query and separate methods.
2) Configure generation rules
Click the "Configure rules" button in the code configuration list to open the rule page. If the previous step chose multi-table queries, the join configuration page opens first; for single tables you go straight to field configuration.
Configuring joins (figures: join list, join form). Parameters:
Table name: a table from the same data source.
Alias: the table alias used in the SQL in mapper.xml.
Table prefix: stripped so only the entity name remains.
Sort: display order, also used as the ordering in SQL queries.
Join type: LEFT JOIN, RIGHT JOIN, INNER JOIN, UNION, UNION ALL, etc.
Query fields: the fields to select from this table.
Custom ON condition: the fields joined to the main table plus any custom conditions.
Configuring fields (figure: field configuration). Parameters:
Field description: taken from the table's column comments, used as the field label on pages.
Field type: the database column type automatically mapped to the corresponding Java type.
Field name: the field as defined in the entity class.
Configuring the form (figure: form configuration). Parameters:
Form add / Form edit: whether the field appears on the add and edit forms.
Component type: how the field renders — INPUT, SELECT, CHECKBOX, etc.
Dictionary code: for select-style components, the data that fills the options, taken from the business dictionary.
Configuring form validation (figure: form validation). Parameters:
Min length / Max length: initial values come from the column definition.
Required: whether the field is mandatory.
Unique: if set, a uniqueness check is generated for the add and edit forms.
Validation type: pick one of the configured shared regular expressions.
Regular expression: a custom regex for special, non-shared fields.
Max value / Min value: for numeric fields; initial values come from the column definition.
Configuring the list display (figure: list configuration). Parameters:
Query condition: whether the field appears in the query-condition area.
Query type: equals, not equals, greater than, less than, etc.
List display: whether the field appears in the result table.
Import / Export support: effective only when import/export is enabled at the code-configuration level.
Save the configuration, then click the "Generate code" button in the list to generate the code. (figure: generate code)

6. Configuring resource permissions
After code generation, a .sql file with the same name is created next to mapper.xml; it contains the resource, menu, and permission script for accessing the newly generated module.
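The "table prefix" rule described above — strip t_ plus the subsystem segment, then camel-case the remaining module name into the entity class name — is easy to express in plain Java. A minimal sketch, assuming the prefix is passed in explicitly; `toEntityName` is illustrative, not a framework API:

```java
public class TableNaming {

    /** Strips the configured prefix and converts underline naming to a PascalCase entity name. */
    static String toEntityName(String tableName, String prefix) {
        String base = tableName.startsWith(prefix)
                ? tableName.substring(prefix.length())
                : tableName;
        StringBuilder sb = new StringBuilder();
        for (String part : base.split("_")) {
            if (part.isEmpty()) {
                continue;
            }
            sb.append(Character.toUpperCase(part.charAt(0))).append(part.substring(1));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // t_sys_code_generator_config with prefix t_sys_ -> CodeGeneratorConfig
        System.out.println(toEntityName("t_sys_code_generator_config", "t_sys_"));
    }
}
```

This mirrors what MyBatis-Plus's underline_to_camel naming strategy does once the prefix has been removed from the table name.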

SpringCloud Microservices in Practice — Building an Enterprise-Grade Development Framework (31): A Custom MyBatis-Plus Code Generator for Front-End and Back-End Code

Ideally, code generation saves a great deal of repetitive, low-skill work, and it produces code that follows one consistent set of conventions and formats, which is a big help in day-to-day development. But it has limits: simple code generation cannot handle complex business logic. Code generators on the market are endless; most work from existing code templates and produce CRUD code according to fixed rules. More ambitious generation is being explored in AI. Today, AI trained on code mainly provides completion: the aiXcoder programming assistant ships IDE plugins and can train on your project's code to offer contextual suggestions, and Copilot, built jointly by Microsoft, OpenAI, and GitHub, works in a similar spirit, improving productivity through high-quality completion during development. Hopefully, in the not-too-distant future, even complex business logic can be generated by models trained on large code corpora.

The generator built here follows the way we normally develop. Our usual steps are: requirements analysis -> data modeling -> database design -> backend code (CRUD) -> frontend code (CRUD) -> field validation -> business logic -> testing. So we want the generator to be able to:
read database tables and fields
generate entity classes and CRUD methods from the columns
generate frontend pages from the columns
make the page presentation configurable (forms, data lists)
generate multi-table join query code
configure per-field validation rules

I. Importing dependencies
1. In the GitEgg-Platform project, edit the pom.xml of the gitegg-platform-bom module. We use the current latest mybatis-plus-generator, 3.5.1, to build our custom generator.
```xml
<properties>
    ......
    <!-- MyBatis-Plus code generation tool -->
    <mybatis.plus.generator.version>3.5.1</mybatis.plus.generator.version>
    ......
</properties>
<dependencyManagement>
    <dependencies>
        ......
        <!-- MyBatis-Plus code generator -->
        <dependency>
            <groupId>com.baomidou</groupId>
            <artifactId>mybatis-plus-generator</artifactId>
            <version>${mybatis.plus.generator.version}</version>
        </dependency>
        ......
    </dependencies>
</dependencyManagement>
```
2. In GitEgg-Platform, create a new gitegg-platform-code-generator module that provides the basic custom generation capability and defines some constants. The GitEggCodeGeneratorConstant.java constant class:
```java
package com.gitegg.platform.code.generator.constant;

import java.io.File;

/**
 * @ClassName: GitEggCodeGeneratorConstant
 * @Description: constants
 * @author GitEgg
 * @since 2021-10-12
 */
public class GitEggCodeGeneratorConstant {

    /** CONFIG */
    public static final String CONFIG = "config";

    /** FIELDS */
    public static final String FIELDS = "fields";

    /** FORM_FIELDS */
    public static final String FORM_FIELDS = "formFields";

    /** BASE_ENTITY_FIELD_LIST */
    public static final String BASE_ENTITY_FIELD_LIST = "baseEntityFieldList";

    /** Author */
    public static final String AUTHOR = "GitEgg";

    /** JAVA_PATH */
    public static final String JAVA_PATH = File.separator + "src" + File.separator + "main" + File.separator + "java" + File.separator;

    /** RESOURCES_PATH */
    public static final String RESOURCES_PATH = File.separator + "src" + File.separator + "main" + File.separator + "resources" + File.separator;

    /** VUE_PATH */
    public static final String VUE_PATH = File.separator + "src" + File.separator + "views" + File.separator;

    /** JS_PATH */
    public static final String JS_PATH = File.separator + "src" + File.separator + "api" + File.separator;

    /** VUE_JS_PATH */
    public static final String VUE_JS_PATH = "vueJsPath";

    /** CUSTOM_FILE_PATH_MAP */
    public static final String CUSTOM_FILE_PATH_MAP = "customFilePathMap";
}
```
3. mybatis-plus-generator 3.5.1 generates service, serviceImpl, mapper, mapperXml, controller, and entity files by default, plus custom "other" files. All of these support custom templates and output paths, but mybatis-plus-generator writes every custom file into the single "other" directory, which does not fit our needs: our DTO files, vue files, and js files must land in different directories. We therefore extend the template engine and implement per-file output directories; since we use Freemarker, we subclass FreemarkerTemplateEngine:
```java
package com.gitegg.platform.code.generator.engine;

import com.baomidou.mybatisplus.generator.config.po.TableInfo;
import com.baomidou.mybatisplus.generator.engine.FreemarkerTemplateEngine;

import java.io.File;
import java.util.Map;

/**
 * Freemarker engine that writes each custom template file to its own output directory
 *
 * @author GitEgg
 * @since 2021-10-12
 */
public class GitEggFreemarkerTemplateEngine extends FreemarkerTemplateEngine {

    /**
     * Output the custom template files
     *
     * @param customFile custom template file configuration
     * @param tableInfo  table information
     * @param objectMap  render data
     * @since 3.5.1
     */
    @Override
    protected void outputCustomFile(Map<String, String> customFile, TableInfo tableInfo, Map<String, Object> objectMap) {
        Map<String, String> customFilePath = (Map<String, String>) objectMap.get("customFilePathMap");
        customFile.forEach((key, value) -> {
            String otherPath = customFilePath.get(key);
            String fileName = String.format(otherPath + File.separator + "%s", key);
            outputFile(new File(fileName), objectMap, value);
        });
    }
}
```
II. Business model and implementation
Code generation is a functional module of the system in its own right, so it needs its own business and database design. The main parts:
Data source configuration: with microservices there may be several databases, sharded tables, and so on, so data sources are configured explicitly and developers choose which data source's tables to generate against.
Generator base configuration (data dictionary): component types, display types, and other base data used during generation live in the generator's own dictionary rather than the system dictionary; only the business dictionary is selectable when configuring components.
Validation rule configuration: the regular expressions for field validation, selectable per field during field configuration.
Generation rule configuration: main table, joined tables, fields, form, validation, and list configuration.

1. From those requirements we designed six tables: t_sys_code_generator_datasource (data sources), t_sys_code_generator_config (main table configuration), t_sys_code_generator_table_join (join configuration), t_sys_code_generator_field (field configuration), t_sys_code_generator_validate (validation rules), and t_sys_code_generator_dict (data dictionary).
```sql
CREATE TABLE `t_sys_code_generator_datasource` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `datasource_name` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '数据源名称',
  `url` varchar(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '连接地址',
  `username` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '用户名',
  `password` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '密码',
  `driver` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '数据库驱动',
  `db_type` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '数据库类型',
  `comments` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '备注',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者',
  `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间',
  `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者',
  `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '数据源配置表' ROW_FORMAT = Dynamic;

CREATE TABLE `t_sys_code_generator_config` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `datasource_id` bigint(20) NULL DEFAULT NULL COMMENT '数据源',
  `module_name` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '模块名称',
  `module_code` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '模块代码',
  `service_name` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '服务名称',
  `table_name` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '表名',
  `table_alias` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '表别名',
  `table_prefix` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '表前缀',
  `parent_package` varchar(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '父级包名',
  `controller_path` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'controller路径',
  `form_type` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '表单类型 modal弹出框 drawer抽屉 tab新窗口',
  `table_type` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '表类型 single单表 multi多表',
  `table_show_type` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '展示类型 table数据表格 tree_table 树表格 3 left_tree_table左树右表 tree数据树 table_table左表右表 left_table_tree左表右树',
  `form_item_col` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '表单字段排列 1一列一行 2 两列一行',
  `left_tree_type` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '左树类型 organization机构树 resource资源权限树 ',
  `front_code_path` varchar(1000) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '前端代码路径',
  `service_code_path` varchar(1000) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '后端代码路径',
  `import_flag` tinyint(1) NOT NULL DEFAULT 0 COMMENT '是否支持导入 1支持 0不支持',
  `export_flag` tinyint(1) NOT NULL DEFAULT 0 COMMENT '是否支持导出 1支持 0不支持',
  `query_reuse` tinyint(1) NOT NULL DEFAULT 1 COMMENT '查询复用:分页查询和单条记录查询公用同一个sql语句',
  `status_handling` tinyint(1) NOT NULL DEFAULT 1 COMMENT '状态处理',
  `code_type` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT '0' COMMENT '代码生成类型 全部 仅后端代码 仅前端代码',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者',
  `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间',
  `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者',
  `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '代码生成配置表' ROW_FORMAT = Dynamic;

CREATE TABLE `t_sys_code_generator_table_join` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `generation_id` bigint(20) NOT NULL COMMENT '代码生成主键',
  `datasource_id` bigint(20) NULL DEFAULT NULL COMMENT '数据源和主表一致',
  `join_table_name` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '表名',
  `join_table_alias` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '表别名',
  `join_table_prefix` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '表前缀',
  `join_table_type` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'left左连接 right右连接 inner等值连接 union联合查询',
  `join_table_select` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '自定义查询字段',
  `join_table_on` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '自定义on条件',
  `table_sort` int(11) NULL DEFAULT NULL COMMENT '显示排序',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者',
  `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间',
  `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者',
  `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '多表查询时的联合表配置' ROW_FORMAT = Dynamic;

CREATE TABLE `t_sys_code_generator_field` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `generation_id` bigint(20) NOT NULL COMMENT '代码生成主键',
  `join_id` bigint(20) NOT NULL COMMENT '关联表主键',
  `join_table_name` varchar(64) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '表名',
  `field_name` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '字段名称',
  `field_type` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '字段类型',
  `comment` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '字段描述',
  `entity_type` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '实体类型',
  `entity_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '实体名称',
  `form_add` tinyint(1) NOT NULL DEFAULT 0 COMMENT '表单新增',
  `form_edit` tinyint(1) NOT NULL DEFAULT 0 COMMENT '表单编辑',
  `query_term` tinyint(1) NOT NULL DEFAULT 0 COMMENT '查询条件',
  `list_show` tinyint(1) NOT NULL DEFAULT 0 COMMENT '列表展示',
  `import_flag` tinyint(1) NOT NULL DEFAULT 0 COMMENT '是否支持导入 1支持 0不支持',
  `export_flag` tinyint(1) NOT NULL DEFAULT 0 COMMENT '是否支持导出 1支持 0不支持',
  `required` tinyint(1) NOT NULL DEFAULT 0 COMMENT '是否必填',
  `field_unique` tinyint(1) NOT NULL DEFAULT 0 COMMENT '是否唯一',
  `query_type` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '查询类型',
  `control_type` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '组件类型',
  `dict_code` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '字典编码',
  `min` bigint(20) NULL DEFAULT NULL COMMENT '最小值',
  `max` bigint(20) NULL DEFAULT NULL COMMENT '最大值',
  `min_length` int(11) NOT NULL DEFAULT 0 COMMENT '最小长度',
  `max_length` int(11) NULL DEFAULT NULL COMMENT '字段最大长度',
  `default_value` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '默认值',
  `validate_id` bigint(20) NULL DEFAULT NULL COMMENT '校验规则主键',
  `validate_regular` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '自定义正则表达式校验规则',
  `field_sort` int(11) NOT NULL DEFAULT 1 COMMENT '显示排序',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者',
  `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间',
  `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者',
  `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除',
  PRIMARY KEY (`id`) USING BTREE,
  UNIQUE INDEX `unique_field`(`generation_id`, `join_id`, `join_table_name`, `field_name`) USING BTREE COMMENT '联合约束'
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '字段属性配置表' ROW_FORMAT = Dynamic;

CREATE TABLE `t_sys_code_generator_validate` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `validate_name` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '校验名称',
  `validate_regular` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '正则表达式校验规则',
  `status` tinyint(2) NOT NULL DEFAULT 1 COMMENT '\'0\'禁用,\'1\' 启用',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者',
  `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间',
  `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者',
  `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '字段校验规则配置表' ROW_FORMAT = Dynamic;

CREATE TABLE `t_sys_code_generator_dict` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `parent_id` bigint(20) NULL DEFAULT NULL COMMENT '字典上级',
  `ancestors` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '所有上级字典id的集合,便于查找',
  `dict_name` varchar(40) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '字典名称',
  `dict_code` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '字典值',
  `dict_order` int(11) NULL DEFAULT NULL COMMENT '排序',
  `dict_status` tinyint(2) NULL DEFAULT 1 COMMENT '1有效,0禁用',
  `comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '备注',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建人',
  `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间',
  `operator` bigint(20) NULL DEFAULT NULL COMMENT '操作人',
  `del_flag` tinyint(2) NOT NULL DEFAULT 0 COMMENT '1:删除 0:不删除',
  PRIMARY KEY (`id`) USING BTREE,
  INDEX `INDEX_DICT_NAME`(`dict_name`) USING BTREE,
  INDEX `INDEX_DICT_CODE`(`dict_code`) USING BTREE,
  INDEX `INDEX_PARENT_ID`(`parent_id`) USING BTREE,
  INDEX `INDEX_TENANT_ID`(`tenant_id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '数据字典表' ROW_FORMAT = Dynamic;
```
With the tables in place, we first used mybatis-plus-generator's stock features to generate the basic CRUD code for them (not listed here). What matters is how to use mybatis-plus-generator to read the database tables and columns, surface them in the UI alongside the business configuration, and drive the generation rules from there.

2. In the GitEgg-Cloud project, under the gitegg-plugin subproject, create a gitegg-code-generator module with an IEngineService interface and an EngineServiceImpl implementation covering: querying all tables of a data source, querying a table's column information, querying all field configurations of one generation config, and executing code generation.
```java
package com.gitegg.code.generator.engine.service;

import com.baomidou.mybatisplus.generator.config.po.TableField;
import com.baomidou.mybatisplus.generator.config.po.TableInfo;
import com.gitegg.code.generator.config.dto.QueryConfigDTO;
import com.gitegg.code.generator.engine.dto.TableDTO;

import java.util.List;

/**
 * Code generator interface
 *
 * @author GitEgg
 */
public interface IEngineService {

    /**
     * Query all tables of a data source
     *
     * @param queryConfigDTO query parameters
     * @return table list
     */
    List<TableDTO> queryTableList(QueryConfigDTO queryConfigDTO);

    /**
     * Query the column information of tables in a data source
     *
     * @param datasourceId data source id
     * @param tableNames   table names
     * @return table information
     */
    List<TableInfo> queryTableFields(String datasourceId, List<String> tableNames);

    /**
     * Query all fields of one code generation configuration
     *
     * @param queryConfigDTO query parameters
     * @return table information
     */
    List<TableInfo> queryConfigFields(QueryConfigDTO queryConfigDTO);

    /**
     * Execute code generation
     *
     * @param queryConfigDTO query parameters
     * @return whether generation succeeded
     */
    boolean processGenerateCode(QueryConfigDTO queryConfigDTO);
}
```
```java
package com.gitegg.code.generator.engine.service.impl;

import cn.hutool.core.util.StrUtil;
import com.baomidou.mybatisplus.annotation.FieldFill;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.baomidou.mybatisplus.generator.FastAutoGenerator;
import com.baomidou.mybatisplus.generator.config.DataSourceConfig;
import com.baomidou.mybatisplus.generator.config.OutputFile;
import com.baomidou.mybatisplus.generator.config.StrategyConfig;
import com.baomidou.mybatisplus.generator.config.builder.ConfigBuilder;
import com.baomidou.mybatisplus.generator.config.po.TableInfo;
import com.baomidou.mybatisplus.generator.config.rules.NamingStrategy;
import com.baomidou.mybatisplus.generator.fill.Column;
import com.gitegg.code.generator.config.dto.QueryConfigDTO;
import com.gitegg.code.generator.config.entity.Config;
import com.gitegg.code.generator.config.service.IConfigService;
import com.gitegg.code.generator.datasource.entity.Datasource;
import com.gitegg.code.generator.datasource.service.IDatasourceService;
import com.gitegg.code.generator.engine.GitEggDatabaseQuery;
import com.gitegg.code.generator.engine.constant.CodeGeneratorConstant;
import com.gitegg.code.generator.engine.dto.TableDTO;
import com.gitegg.code.generator.engine.enums.CustomFileEnum;
import com.gitegg.code.generator.engine.service.IEngineService;
import com.gitegg.code.generator.field.dto.FieldDTO;
import com.gitegg.code.generator.field.dto.QueryFieldDTO;
import com.gitegg.code.generator.field.service.IFieldService;
import com.gitegg.code.generator.join.entity.TableJoin;
import com.gitegg.code.generator.join.service.ITableJoinService;
import com.gitegg.platform.base.enums.BaseEntityEnum;
import com.gitegg.platform.code.generator.constant.GitEggCodeGeneratorConstant;
import com.gitegg.platform.code.generator.engine.GitEggFreemarkerTemplateEngine;
import com.gitegg.platform.mybatis.entity.BaseEntity;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;
import org.springframework.util.CollectionUtils;

import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/**
 * Code generator implementation
 *
 * @author GitEgg
 */
@Slf4j
@Service
@RequiredArgsConstructor(onConstructor_ = @Autowired)
public class EngineServiceImpl implements IEngineService {

    private final IConfigService configService;

    private final IDatasourceService datasourceService;

    private final ITableJoinService tableJoinService;

    /**
     * Injected lazily to break a circular dependency
     */
    private IFieldService fieldService;

    @Autowired
    public void setFieldService(@Lazy IFieldService fieldService) {
        this.fieldService = fieldService;
    }

    @Override
    public List<TableDTO> queryTableList(QueryConfigDTO queryConfigDTO) {
        Datasource datasource = datasourceService.getById(queryConfigDTO.getDatasourceId());
        DataSourceConfig dataSourceConfig = new DataSourceConfig.Builder(datasource.getUrl(),
                datasource.getUsername(), datasource.getPassword()).build();
        ConfigBuilder configBuilder = new ConfigBuilder(null, dataSourceConfig, null, null, null, null);
        List<TableDTO> tableInfos = (new GitEggDatabaseQuery(configBuilder)).queryDatasourceTables();
        return tableInfos;
    }

    @Override
    public List<TableInfo> queryTableFields(String datasourceId, List<String> tableNames) {
        Datasource datasource = datasourceService.getById(datasourceId);
        DataSourceConfig dataSourceConfig = new DataSourceConfig.Builder(datasource.getUrl(),
                datasource.getUsername(), datasource.getPassword()).build();
        // configure which tables to include
        StrategyConfig strategyConfig = new StrategyConfig.Builder()
                .addInclude(tableNames.toArray(new String[]{}))
                .entityBuilder()
                .enableChainModel()
                .enableLombok()
                .enableRemoveIsPrefix()
                .enableTableFieldAnnotation()
                .enableActiveRecord()
                .logicDeleteColumnName(BaseEntityEnum.DEL_FLAG.field)
                .logicDeletePropertyName(BaseEntityEnum.DEL_FLAG.entity)
                .naming(NamingStrategy.underline_to_camel)
                .columnNaming(NamingStrategy.underline_to_camel)
                .addTableFills(new Column(BaseEntityEnum.CREATE_TIME.field, FieldFill.INSERT))
                .addTableFills(new Column(BaseEntityEnum.UPDATE_TIME.field, FieldFill.INSERT_UPDATE))
                .idType(IdType.AUTO)
                .build();
        ConfigBuilder configBuilder = new ConfigBuilder(null, dataSourceConfig, strategyConfig, null, null, null);
        List<TableInfo> tableInfoList = configBuilder.getTableInfoList();
        return tableInfoList;
    }

    @Override
    public List<TableInfo> queryConfigFields(QueryConfigDTO queryConfigDTO) {
        List<String> tableNames = new ArrayList<>();
        String tableName = queryConfigDTO.getTableName();
        tableNames.add(tableName);
        Long id = queryConfigDTO.getId();
        // check whether there are joined tables
        if (CodeGeneratorConstant.TABLE_DATA_TYPE_MULTI.equals(queryConfigDTO.getTableType())) {
            QueryWrapper<TableJoin> queryWrapper = new QueryWrapper<>();
            queryWrapper.eq(CodeGeneratorConstant.GENERATION_ID, id);
            List<TableJoin> tableJoinList = tableJoinService.list(queryWrapper);
            if (!CollectionUtils.isEmpty(tableJoinList)) {
                tableJoinList.stream().forEach(tableJoin -> tableNames.add(tableJoin.getJoinTableName()));
            }
        }
        Datasource datasource = datasourceService.getById(queryConfigDTO.getDatasourceId());
        DataSourceConfig dataSourceConfig = new DataSourceConfig.Builder(datasource.getUrl(),
                datasource.getUsername(), datasource.getPassword()).build();
        // configure which tables to include
        StrategyConfig strategyConfig = new StrategyConfig.Builder()
                .addInclude(tableNames.toArray(new String[]{})).build();
        ConfigBuilder configBuilder = new ConfigBuilder(null, dataSourceConfig, strategyConfig, null, null, null);
        List<TableInfo> tableInfoList = configBuilder.getTableInfoList();
        return tableInfoList;
    }

    @Override
    public boolean processGenerateCode(QueryConfigDTO queryConfigDTO) {
        Config config = configService.getById(queryConfigDTO.getId());
        QueryFieldDTO queryFieldDTO = new QueryFieldDTO();
        queryFieldDTO.setGenerationId(queryConfigDTO.getId());
        List<FieldDTO> fieldDTOS = fieldService.queryFieldList(queryFieldDTO);
        // extract the form fields
        List<FieldDTO> formFieldDTOS = fieldDTOS.stream()
                .filter(f -> f.getFormAdd() || f.getFormEdit())
                .collect(Collectors.toList());
        Map<String, Object> customMap = new HashMap<>();
        customMap.put(GitEggCodeGeneratorConstant.CONFIG, config);
        customMap.put(GitEggCodeGeneratorConstant.FIELDS, fieldDTOS);
        customMap.put(GitEggCodeGeneratorConstant.FORM_FIELDS, formFieldDTOS);
        // fields already in BaseEntity that must be excluded from the DTOs
        List<String> baseEntityFieldList = BaseEntityEnum.getBaseEntityFieldList();
        customMap.put(GitEggCodeGeneratorConstant.BASE_ENTITY_FIELD_LIST, baseEntityFieldList);
        // look up the data source configuration
        Datasource datasource = datasourceService.getById(config.getDatasourceId());
        String serviceName = config.getServiceName();
        // frontend code path
        String frontCodePath = config.getFrontCodePath();
        // backend code path
        String serviceCodePath = config.getServiceCodePath();
        // custom paths
        String parent = config.getParentPackage();
        String moduleName = config.getModuleCode();
        String codeDirPath = (parent + StrUtil.DOT + moduleName).replace(StrUtil.DOT, File.separator) + File.separator;
        FastAutoGenerator.create(datasource.getUrl(), datasource.getUsername(), datasource.getPassword())
                .globalConfig(builder -> {
                    // global configuration
                    String author = GitEggCodeGeneratorConstant.AUTHOR;
                    builder.author(author)   // set the author
                            .enableSwagger() // enable swagger mode
                            .fileOverride()  // overwrite already generated files
                            .disableOpenDir()
                            .outputDir(serviceCodePath + GitEggCodeGeneratorConstant.JAVA_PATH); // output directory
                })
                .packageConfig(builder -> {
                    // package configuration
                    Map<OutputFile, String> pathInfoMap = new HashMap<>();
                    pathInfoMap.put(OutputFile.mapperXml,
                            serviceCodePath + GitEggCodeGeneratorConstant.RESOURCES_PATH + codeDirPath + CodeGeneratorConstant.MAPPER);
                    builder.parent(parent)          // parent package
                            .moduleName(moduleName) // module sub-package
                            .pathInfo(pathInfoMap); // custom output paths
                })
                .injectionConfig(builder -> {
                    String dtoName = StrUtil.upperFirst(config.getModuleCode());
                    // DTO file names
                    String dtoFile = dtoName + CodeGeneratorConstant.DTO_JAVA;
                    String createDtoFile = CodeGeneratorConstant.CREATE + dtoFile;
                    String updateDtoFile = CodeGeneratorConstant.UPDATE + dtoFile;
                    String queryDtoFile = CodeGeneratorConstant.QUERY + dtoFile;
                    // export and import file names
                    String exportFile = dtoName + CodeGeneratorConstant.EXPORT_JAVA;
                    String importFile = dtoName + CodeGeneratorConstant.IMPORT_JAVA;
                    // SQL file name
                    String sqlFile = dtoName + CodeGeneratorConstant.RESOURCE_SQL;
                    // register the custom output files
                    Map<String, String> customFileMap = new HashMap<>();
                    customFileMap.put(dtoFile, CustomFileEnum.DTO_FILE.path);
                    customFileMap.put(createDtoFile, CustomFileEnum.CREATE_DTO.path);
                    customFileMap.put(updateDtoFile,
```
CustomFileEnum.UPDATE_DTO.path); customFileMap.put(queryDtoFile, CustomFileEnum.QUERY_DTO.path); // Export and Import customFileMap.put(exportFile, CustomFileEnum.EXPORT.path); customFileMap.put(importFile, CustomFileEnum.IMPORT.path); // SQL customFileMap.put(sqlFile, CustomFileEnum.SQL.path); //因为目前版本框架只支持自定义输出到other目录,所以这里利用重写AbstractTemplateEngine的outputCustomFile方法支持所有自定义文件输出目录 Map<String, String> customFilePath = new HashMap<>(); int start = serviceName.indexOf(StrUtil.DASHED); int end = serviceName.length(); String servicePath = serviceName.substring(start, end).replace(StrUtil.DASHED, File.separator); //判断是否生成后端代码 if (config.getCodeType().equals(CodeGeneratorConstant.CODE_ALL) || config.getCodeType().equals(CodeGeneratorConstant.CODE_SERVICE)) //dto String dtoPath = serviceCodePath + GitEggCodeGeneratorConstant.JAVA_PATH + codeDirPath + CodeGeneratorConstant.DTO; customFilePath.put(dtoFile, dtoPath); customFilePath.put(createDtoFile, dtoPath); customFilePath.put(updateDtoFile, dtoPath); customFilePath.put(queryDtoFile, dtoPath); // Export and Import String entityPath = serviceCodePath + GitEggCodeGeneratorConstant.JAVA_PATH + codeDirPath + CodeGeneratorConstant.ENTITY; customFilePath.put(exportFile, entityPath); customFilePath.put(importFile, entityPath); // SQL String sqlPath = serviceCodePath + GitEggCodeGeneratorConstant.RESOURCES_PATH + codeDirPath + CodeGeneratorConstant.MAPPER; customFilePath.put(sqlFile, sqlPath); //判断是否生成后端代码 if (config.getCodeType().equals(CodeGeneratorConstant.CODE_ALL) || config.getCodeType().equals(CodeGeneratorConstant.CODE_FRONT)) // vue and js String vueFile = config.getModuleCode() + CodeGeneratorConstant.TABLE_VUE; String jsFile = config.getModuleCode() + CodeGeneratorConstant.JS; String vuePath = frontCodePath + GitEggCodeGeneratorConstant.VUE_PATH + servicePath + File.separator + config.getModuleCode(); String jsPath = frontCodePath + GitEggCodeGeneratorConstant.JS_PATH + servicePath + File.separator + 
config.getModuleCode(); customFilePath.put(vueFile, vuePath); customFilePath.put(jsFile, jsPath); // VUE AND JS // TODO 要支持树形表、左树右表、左表右表、左表右树、左树右树形表、左树右树 customFileMap.put(vueFile, CustomFileEnum.VUE.path); customFileMap.put(jsFile, CustomFileEnum.JS.path); customMap.put(GitEggCodeGeneratorConstant.VUE_JS_PATH, servicePath.replace(File.separator, StrUtil.SLASH) + StrUtil.SLASH + config.getModuleCode() + StrUtil.SLASH + config.getModuleCode()); customMap.put(GitEggCodeGeneratorConstant.CUSTOM_FILE_PATH_MAP, customFilePath); builder.customMap(customMap) .customFile(customFileMap); .strategyConfig(builder -> { builder .addInclude(config.getTableName()) .addTablePrefix(config.getTablePrefix()) .entityBuilder() .enableLombok() .enableTableFieldAnnotation() // 实体字段注解 .superClass(BaseEntity.class) .addSuperEntityColumns(BaseEntityEnum.TENANT_ID.field, BaseEntityEnum.CREATE_TIME.field, BaseEntityEnum.CREATOR.field, BaseEntityEnum.UPDATE_TIME.field, BaseEntityEnum.OPERATOR.field, BaseEntityEnum.DEL_FLAG.field) .naming(NamingStrategy.underline_to_camel) .addTableFills(new Column(BaseEntityEnum.CREATE_TIME.field, FieldFill.INSERT)) //基于数据库字段填充 .addTableFills(new Column(BaseEntityEnum.UPDATE_TIME.field, FieldFill.INSERT_UPDATE)) //基于模型属性填充 .controllerBuilder() .enableRestStyle() .enableHyphenStyle() .mapperBuilder() // .enableMapperAnnotation() .enableBaseResultMap() .enableBaseColumnList() .templateConfig(builder -> { if (config.getCodeType().equals(CodeGeneratorConstant.CODE_FRONT)) { builder.disable(); // 使用Freemarker引擎模板,默认的是Velocity引擎模板 .templateEngine(new GitEggFreemarkerTemplateEngine()) .execute(); return true; }3、修改代码生成的模板文件,因为默认的代码模板生成文件不能满足我们的需求,我们需要新增DTO、vue、js、数据导入导出实体定义类等模板,在模板接口新增导入导出等方法,在DTO添加字段校验等。因为模板代码太多,这里不详细列举,可以在在GitHub 或者 Gitee下载查看。4、代码生成功能运行界面数据源配置:数据源配置代码生成配置:代码生成配置关联表配置:关联表配置表字段配置:表字段配置表单配置:表单配置表单校验配置:表单校验配置列表查询配置:列表查询配置数据字典配置:数据字典配置校验规则配置:校验规则配置
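Both strategy configurations above rely on `NamingStrategy.underline_to_camel` to map underscored database names (e.g. `create_time`) to camelCase Java names. As a rough illustration of what that mapping does, here is a hypothetical re-implementation of the rule; the class and method names are ours, not MyBatis-Plus code:

```java
public class NamingSketch {

    // Illustrative re-implementation of the underline_to_camel idea:
    // lower-case the name, drop underscores, and upper-case the
    // character following each underscore.
    public static String underlineToCamel(String name) {
        StringBuilder sb = new StringBuilder();
        boolean upper = false;
        for (char c : name.toLowerCase().toCharArray()) {
            if (c == '_') {
                upper = true;
                continue;
            }
            sb.append(upper ? Character.toUpperCase(c) : c);
            upper = false;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(underlineToCamel("create_time")); // createTime
        System.out.println(underlineToCamel("del_flag"));    // delFlag
    }
}
```

This is why the generated `BaseEntity` columns such as `del_flag` surface as `delFlag` properties in the entities and DTOs.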

SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (30): Integrating EasyExcel for Data Import and Export

Batch data import and statistics export have become near-mandatory system features. For performance and ease of use, we integrate EasyExcel here. EasyExcel is a Java-based, memory-frugal open-source library for reading and writing Excel that can handle files of hundreds of megabytes while keeping memory usage as low as possible. From its project description:

The best-known Java frameworks for parsing and generating Excel are Apache POI and JXL, but both are extremely memory-hungry. POI offers a SAX-mode API that mitigates some out-of-memory problems, but it still has flaws: for the 2007 (xlsx) format, both decompression and storage of the decompressed data happen in memory, so memory consumption remains high. EasyExcel rewrites POI's parsing of the 2007 format: a 3 MB Excel file parsed with POI SAX still needs around 100 MB of memory, while EasyExcel gets by with a few MB, and even very large files will not cause an out-of-memory error; for the 2003 format it relies on POI's SAX mode and adds a model-conversion layer on top to make it simpler to use. (https://github.com/alibaba/easyexcel/)

一、Importing the dependencies

1、In the GitEgg-Platform project, edit the gitegg-platform-bom module's pom.xml and add the EasyExcel Maven dependency:

```xml
<properties>
    ......
    <!-- Excel import/export -->
    <easyexcel.version>2.2.10</easyexcel.version>
</properties>

<dependencyManagement>
    <dependencies>
        ......
        <!-- Excel import/export -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>easyexcel</artifactId>
            <version>${easyexcel.version}</version>
        </dependency>
        ......
    </dependencies>
</dependencyManagement>
```

2、Edit the gitegg-platform-boot module's pom.xml to add the EasyExcel dependency. Data import/export is a must-have system feature and every microservice that references the Spring Boot base project needs it, so the dependency goes here. Also, the current EasyExcel version does not support the LocalDateTime date type, so we define a custom LocalDateTimeConverter to handle LocalDateTime during import and export.

pom.xml:

```xml
<dependencies>
    ......
    <!-- Excel import/export -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>easyexcel</artifactId>
    </dependency>
</dependencies>
```

The custom LocalDateTime converter, LocalDateTimeConverter.java:

```java
package com.gitegg.platform.boot.excel;

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Objects;

import com.alibaba.excel.annotation.format.DateTimeFormat;
import com.alibaba.excel.converters.Converter;
import com.alibaba.excel.enums.CellDataTypeEnum;
import com.alibaba.excel.metadata.CellData;
import com.alibaba.excel.metadata.GlobalConfiguration;
import com.alibaba.excel.metadata.property.ExcelContentProperty;

/**
 * Custom LocalDateTime converter.
 * EasyExcel does not support the LocalDateTime date type out of the box when exporting.
 *
 * @author GitEgg
 */
public class LocalDateTimeConverter implements Converter<LocalDateTime> {

    /**
     * Default pattern used when no {@code @DateTimeFormat} annotation specifies one.
     */
    private static final String DEFAULT_PATTERN = "yyyy-MM-dd HH:mm:ss";

    @Override
    public Class supportJavaTypeKey() {
        return LocalDateTime.class;
    }

    @Override
    public CellDataTypeEnum supportExcelTypeKey() {
        return CellDataTypeEnum.STRING;
    }

    /**
     * Called when reading.
     *
     * @param cellData            cell data (NotNull)
     * @param contentProperty     excel property (Nullable)
     * @param globalConfiguration global configuration (NotNull)
     * @return the value read into memory
     */
    @Override
    public LocalDateTime convertToJavaData(CellData cellData, ExcelContentProperty contentProperty,
                                           GlobalConfiguration globalConfiguration) {
        DateTimeFormat annotation = contentProperty.getField().getAnnotation(DateTimeFormat.class);
        return LocalDateTime.parse(cellData.getStringValue(),
                DateTimeFormatter.ofPattern(Objects.nonNull(annotation) ? annotation.value() : DEFAULT_PATTERN));
    }

    /**
     * Called when writing.
     *
     * @param value               java value (NotNull)
     * @param contentProperty     excel property (Nullable)
     * @param globalConfiguration global configuration (NotNull)
     * @return the value written to the excel file
     */
    @Override
    public CellData convertToExcelData(LocalDateTime value, ExcelContentProperty contentProperty,
                                       GlobalConfiguration globalConfiguration) {
        DateTimeFormat annotation = contentProperty.getField().getAnnotation(DateTimeFormat.class);
        return new CellData(value.format(
                DateTimeFormatter.ofPattern(Objects.nonNull(annotation) ? annotation.value() : DEFAULT_PATTERN)));
    }
}
```

With the dependency and converter in place, run install on the Platform project to publish them to the local repository; GitEgg-Cloud can then use the shared dependency and converter.

二、Business implementation and testing

Because the dependency and converter live in the gitegg-platform-boot module, every project that depends on gitegg-platform-boot can use EasyExcel directly; reload all Maven projects in GitEgg-Cloud. The gitegg-code-generator microservice serves as the example for data import/export below.

1、EasyExcel reads and generates Excel based on annotations on entity classes, so we create import and export entity templates under the entity directory.

The import entity, DatasourceImport.java:

```java
package com.gitegg.code.generator.datasource.entity;

import com.alibaba.excel.annotation.ExcelProperty;
import com.alibaba.excel.annotation.write.style.ColumnWidth;
import com.alibaba.excel.annotation.write.style.ContentRowHeight;
import com.alibaba.excel.annotation.write.style.HeadRowHeight;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;

/**
 * <p>
 * Data source configuration upload
 * </p>
 *
 * @author GitEgg
 * @since 2021-08-18 16:39:49
 */
@Data
@HeadRowHeight(20)
@ContentRowHeight(15)
@ApiModel(value = "DatasourceImport对象", description = "数据源配置导入")
public class DatasourceImport {

    @ApiModelProperty(value = "数据源名称")
    @ExcelProperty(value = "数据源名称", index = 0)
    @ColumnWidth(20)
    private String datasourceName;

    @ApiModelProperty(value = "连接地址")
    @ExcelProperty(value = "连接地址", index = 1)
    @ColumnWidth(20)
    private String url;

    @ApiModelProperty(value = "用户名")
    @ExcelProperty(value = "用户名", index = 2)
    @ColumnWidth(20)
    private String username;

    @ApiModelProperty(value = "密码")
    @ExcelProperty(value = "密码", index = 3)
    @ColumnWidth(20)
    private String password;

    @ApiModelProperty(value = "数据库驱动")
    @ExcelProperty(value = "数据库驱动", index = 4)
    @ColumnWidth(20)
    private String driver;

    @ApiModelProperty(value = "数据库类型")
    @ExcelProperty(value = "数据库类型", index = 5)
    @ColumnWidth(20)
    private String dbType;

    @ApiModelProperty(value = "备注")
    @ExcelProperty(value = "备注", index = 6)
    @ColumnWidth(20)
    private String comments;
}
```

The export entity, DatasourceExport.java:

```java
package com.gitegg.code.generator.datasource.entity;

import com.alibaba.excel.annotation.ExcelProperty;
import com.alibaba.excel.annotation.format.DateTimeFormat;
import com.alibaba.excel.annotation.write.style.ColumnWidth;
import com.alibaba.excel.annotation.write.style.ContentRowHeight;
import com.alibaba.excel.annotation.write.style.HeadRowHeight;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import com.gitegg.platform.boot.excel.LocalDateTimeConverter;
import lombok.Data;

import java.time.LocalDateTime;

/**
 * <p>
 * Data source configuration download
 * </p>
 *
 * @author GitEgg
 * @since 2021-08-18 16:39:49
 */
@Data
@HeadRowHeight(20)
@ContentRowHeight(15)
@ApiModel(value = "DatasourceExport对象", description = "数据源配置导出")
public class DatasourceExport {

    @ApiModelProperty(value = "主键")
    @ExcelProperty(value = "序号", index = 0)
    @ColumnWidth(15)
    private Long id;

    @ApiModelProperty(value = "数据源名称")
    @ExcelProperty(value = "数据源名称", index = 1)
    @ColumnWidth(20)
    private String datasourceName;

    @ApiModelProperty(value = "连接地址")
    @ExcelProperty(value = "连接地址", index = 2)
    @ColumnWidth(20)
    private String url;

    @ApiModelProperty(value = "用户名")
    @ExcelProperty(value = "用户名", index = 3)
    @ColumnWidth(20)
    private String username;

    @ApiModelProperty(value = "密码")
    @ExcelProperty(value = "密码", index = 4)
    @ColumnWidth(20)
    private String password;

    @ApiModelProperty(value = "数据库驱动")
    @ExcelProperty(value = "数据库驱动", index = 5)
    @ColumnWidth(20)
    private String driver;

    @ApiModelProperty(value = "数据库类型")
    @ExcelProperty(value = "数据库类型", index = 6)
    @ColumnWidth(20)
    private String dbType;

    @ApiModelProperty(value = "备注")
    @ExcelProperty(value = "备注", index = 7)
    @ColumnWidth(20)
    private String comments;

    @ApiModelProperty(value = "创建日期")
    @ExcelProperty(value = "创建日期", index = 8, converter = LocalDateTimeConverter.class)
    @ColumnWidth(22)
    @DateTimeFormat("yyyy-MM-dd HH:mm:ss")
    private LocalDateTime createTime;
}
```

2、Add upload and download methods to DatasourceController:

```java
/**
 * Batch export
 *
 * @param response
 * @param queryDatasourceDTO
 * @throws IOException
 */
@GetMapping("/download")
public void download(HttpServletResponse response, QueryDatasourceDTO queryDatasourceDTO) throws IOException {
    response.setContentType("application/vnd.ms-excel");
    response.setCharacterEncoding("utf-8");
    // URLEncoder.encode prevents a garbled Chinese file name; this is unrelated to EasyExcel itself
    String fileName = URLEncoder.encode("数据源列表", "UTF-8").replaceAll("\\+", "%20");
    response.setHeader("Content-disposition", "attachment;filename*=utf-8''" + fileName + ".xlsx");
    List<DatasourceDTO> dataSourceList = datasourceService.queryDatasourceList(queryDatasourceDTO);
    List<DatasourceExport> dataSourceExportList = new ArrayList<>();
    for (DatasourceDTO datasourceDTO : dataSourceList) {
        DatasourceExport dataSourceExport = BeanCopierUtils.copyByClass(datasourceDTO, DatasourceExport.class);
        dataSourceExportList.add(dataSourceExport);
    }
    String sheetName = "数据源列表";
    EasyExcel.write(response.getOutputStream(), DatasourceExport.class).sheet(sheetName).doWrite(dataSourceExportList);
}

/**
 * Download the import template
 *
 * @param response
 * @throws IOException
 */
@GetMapping("/download/template")
public void downloadTemplate(HttpServletResponse response) throws IOException {
    response.setContentType("application/vnd.ms-excel");
    response.setCharacterEncoding("utf-8");
    // URLEncoder.encode prevents a garbled Chinese file name; this is unrelated to EasyExcel itself
    String fileName = URLEncoder.encode("数据源导入模板", "UTF-8").replaceAll("\\+", "%20");
    response.setHeader("Content-disposition", "attachment;filename*=utf-8''" + fileName + ".xlsx");
    String sheetName = "数据源列表";
    EasyExcel.write(response.getOutputStream(), DatasourceImport.class).sheet(sheetName).doWrite(null);
}

/**
 * Upload data
 *
 * @param file
 * @return
 * @throws IOException
 */
@PostMapping("/upload")
public Result<?> upload(@RequestParam("uploadFile") MultipartFile file) throws IOException {
    List<DatasourceImport> datasourceImportList =
            EasyExcel.read(file.getInputStream(), DatasourceImport.class, null).sheet().doReadSync();
    if (!CollectionUtils.isEmpty(datasourceImportList)) {
        List<Datasource> datasourceList = new ArrayList<>();
        datasourceImportList.stream().forEach(datasourceImport ->
                datasourceList.add(BeanCopierUtils.copyByClass(datasourceImport, Datasource.class)));
        datasourceService.saveBatch(datasourceList);
    }
    return Result.success();
}
```

3、Front-end export (download) setup. Our front end uses axios for requests. Normally a request succeeds or fails with a json responseType, but a download request returns a file stream, so its responseType must be set to blob. Since download is a generic feature, we extract it into a shared helper: first check the server's response format; if a download request comes back as json, the request failed and the error message must be parsed and shown; otherwise, proceed with the normal file-stream download flow.

The api request:

```javascript
// Set the request's responseType to blob
export function downloadDatasourceList (query) {
  return request({
    url: '/gitegg-plugin-code/code/generator/datasource/download',
    method: 'get',
    responseType: 'blob',
    params: query
  })
}
```

The shared export/download helper:

```javascript
// Inspect the response: a json body means the download failed
export function handleDownloadBlod (fileName, response) {
  const res = response.data
  if (res.type === 'application/json') {
    const reader = new FileReader()
    reader.readAsText(response.data, 'utf-8')
    reader.onload = function () {
      const { msg } = JSON.parse(reader.result)
      notification.error({
        message: '下载失败',
        description: msg
      })
    }
  } else {
    exportBlod(fileName, res)
  }
}

// Export the Excel blob by clicking a temporary link
export function exportBlod (fileName, data) {
  const blob = new Blob([data])
  const elink = document.createElement('a')
  elink.download = fileName
  elink.style.display = 'none'
  elink.href = URL.createObjectURL(blob)
  document.body.appendChild(elink)
  elink.click()
  URL.revokeObjectURL(elink.href)
  document.body.removeChild(elink)
}
```

Calling it from the vue page:

```javascript
handleDownload () {
  this.downloadLoading = true
  downloadDatasourceList(this.listQuery).then(response => {
    handleDownloadBlod('数据源配置列表.xlsx', response)
    this.listLoading = false
  })
},
```

4、Front-end import (upload) setup. Both Ant Design of Vue and ElementUI provide an upload component, and the usage is the same: assemble a FormData object before uploading; besides the file itself, you can add custom parameters to send to the back end.

The upload component:

```html
<a-upload
  name="uploadFile"
  :show-upload-list="false"
  :before-upload="beforeUpload">
  <a-button> <a-icon type="upload" /> 导入 </a-button>
</a-upload>
```

The upload methods:

```javascript
beforeUpload (file) {
  this.handleUpload(file)
  return false
},
handleUpload (file) {
  this.uploadedFileName = ''
  const formData = new FormData()
  formData.append('uploadFile', file)
  this.uploading = true
  uploadDatasource(formData).then(() => {
    this.uploading = false
    this.$message.success('数据导入成功')
    this.handleFilter()
  }).catch(err => {
    console.log('uploading', err)
    this.$message.error('数据导入失败')
  })
},
```

With these steps, the EasyExcel integration is complete and the basic data import/export features work. Real business development may require complex Excel exports, for example spreadsheets containing images or charts; for those, consult EasyExcel's detailed documentation and customize the export methods to the specific business needs.
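The download endpoints above set the attachment file name via the `Content-disposition: attachment;filename*=utf-8''...` header. `URLEncoder.encode` encodes spaces as `+`, which browsers do not decode back to a space in that position, hence the `replaceAll("\\+", "%20")`. A minimal sketch of just that encoding step (the helper class and method names are ours):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class DownloadNameSketch {

    // Percent-encode a download file name for the RFC 5987-style
    // "filename*=utf-8''<name>" header value. URLEncoder produces '+' for
    // spaces (form encoding), so they are rewritten as "%20".
    public static String encodeFileName(String name) {
        try {
            return URLEncoder.encode(name, "UTF-8").replaceAll("\\+", "%20");
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is always available, so this cannot actually happen
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(encodeFileName("data source list.xlsx"));
        // data%20source%20list.xlsx
    }
}
```

Without the `replaceAll`, a file named "data source list.xlsx" would be saved by the browser as "data+source+list.xlsx".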

SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (29): Integrating Object Storage with MinIO, Qiniu Cloud, Alibaba Cloud, and Tencent Cloud

微服务应用中图片、文件等存储区别于单体应用,单体应用可以放到本地读写磁盘文件,微服务应用必需用到分布式存储,将图片、文件等存储到服务稳定的分布式存储服务器。目前,很多云服务商提供了存储的云服务,比如阿里云OSS、腾讯云COS、七牛云对象存储Kodo、百度云对象存储BOS等等、还有开源对象存储服务器,比如FastDFS、MinIO等。  如果我们的框架只支持一种存储服务,那么在后期扩展或者修改时会有局限性,所以,这里希望能够定义一个抽象接口,想使用哪种服务就实现哪种服务,在配置多个服务时,调用的存储时可以进行选择。在这里云服务选择七牛云,开源服务选择MinIO进行集成,如果需要其他服务可以自行扩展。  首先,在框架搭建前,我们先准备环境,这里以MinIO和七牛云为例,MinIO的安装十分简单,我们这里选择Linux安装包的方式来安装,具体方式参考:http://docs.minio.org.cn/docs/,七牛云只需要到官网注册并实名认证即可获得10G免费存储容量https://www.qiniu.com/。一、基础底层库实现1、在GitEgg-Platform中新建gitegg-platform-dfs (dfs: Distributed File System分布式文件系统)子工程用于定义对象存储服务的抽象接口,新建IDfsBaseService用于定义文件上传下载常用接口/** * 分布式文件存储操作接口定义 * 为了保留系统操作记录,原则上不允许上传文件物理删除,修改等操作。 * 业务操作的修改删除文件,只是关联关系的修改,重新上传文件后并与业务关联即可。 public interface IDfsBaseService { * 获取简单上传凭证 * @param bucket * @return String uploadToken(String bucket); * 获取覆盖上传凭证 * @param bucket * @return String uploadToken(String bucket, String key); * 创建 bucket * @param bucket void createBucket(String bucket); * 通过流上传文件,指定文件名 * @param inputStream * @param fileName * @return GitEggDfsFile uploadFile(InputStream inputStream, String fileName); * 通过流上传文件,指定文件名和bucket * @param inputStream * @param bucket * @param fileName * @return GitEggDfsFile uploadFile(InputStream inputStream, String bucket, String fileName); * 通过文件名获取文件访问链接 * @param fileName * @return String getFileUrl(String fileName); * 通过文件名和bucket获取文件访问链接 * @param fileName * @param bucket * @return String getFileUrl(String bucket, String fileName); * 通过文件名和bucket获取文件访问链接,设置有效期 * @param bucket * @param fileName * @param duration * @param unit * @return String getFileUrl(String bucket, String fileName, int duration, TimeUnit unit); * 通过文件名以流的形式下载一个对象 * @param fileName * @return OutputStream getFileObject(String fileName, OutputStream outputStream); * 通过文件名和bucket以流的形式下载一个对象 * @param fileName * @param bucket * @return OutputStream getFileObject(String bucket, String fileName, OutputStream outputStream); * 根据文件名删除文件 * @param fileName String removeFile(String fileName); * 根据文件名删除指定bucket下的文件 * 
@param bucket * @param fileName String removeFile(String bucket, String fileName); * 根据文件名列表批量删除文件 * @param fileNames String removeFiles(List<String> fileNames); * 根据文件名列表批量删除bucket下的文件 * @param bucket * @param fileNames String removeFiles(String bucket, List<String> fileNames); }2、在GitEgg-Platform中新建gitegg-platform-dfs-minio子工程,新建MinioDfsServiceImpl和MinioDfsProperties用于实现IDfsBaseService文件上传下载接口@Data @Component @ConfigurationProperties(prefix = "dfs.minio") public class MinioDfsProperties { * AccessKey private String accessKey; * SecretKey private String secretKey; * 区域,需要在MinIO配置服务器的物理位置,默认是us-east-1(美国东区1),这也是亚马逊S3的默认区域。 private String region; * Bucket private String bucket; * 公开还是私有 private Integer accessControl; * 上传服务器域名地址 private String uploadUrl; * 文件请求地址前缀 private String accessUrlPrefix; * 上传文件夹前缀 private String uploadDirPrefix; }@Slf4j @AllArgsConstructor public class MinioDfsServiceImpl implements IDfsBaseService { private final MinioClient minioClient; private final MinioDfsProperties minioDfsProperties; @Override public String uploadToken(String bucket) { return null; @Override public String uploadToken(String bucket, String key) { return null; @Override public void createBucket(String bucket) { BucketExistsArgs bea = BucketExistsArgs.builder().bucket(bucket).build(); try { if (!minioClient.bucketExists(bea)) { MakeBucketArgs mba = MakeBucketArgs.builder().bucket(bucket).build(); minioClient.makeBucket(mba); } catch (ErrorResponseException e) { e.printStackTrace(); } catch (InsufficientDataException e) { e.printStackTrace(); } catch (InternalException e) { e.printStackTrace(); } catch (InvalidKeyException e) { e.printStackTrace(); } catch (InvalidResponseException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } catch (ServerException e) { e.printStackTrace(); } catch (XmlParserException e) { e.printStackTrace(); @Override public GitEggDfsFile uploadFile(InputStream 
inputStream, String fileName) { return this.uploadFile(inputStream, minioDfsProperties.getBucket(), fileName); @Override public GitEggDfsFile uploadFile(InputStream inputStream, String bucket, String fileName) { GitEggDfsFile dfsFile = new GitEggDfsFile(); try { dfsFile.setBucket(bucket); dfsFile.setBucketDomain(minioDfsProperties.getUploadUrl()); dfsFile.setFileUrl(minioDfsProperties.getAccessUrlPrefix()); dfsFile.setEncodedFileName(fileName); minioClient.putObject(PutObjectArgs.builder() .bucket(bucket) .stream(inputStream, -1, 5*1024*1024) .object(fileName) .build()); } catch (ErrorResponseException e) { e.printStackTrace(); } catch (InsufficientDataException e) { e.printStackTrace(); } catch (InternalException e) { e.printStackTrace(); } catch (InvalidKeyException e) { e.printStackTrace(); } catch (InvalidResponseException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } catch (ServerException e) { e.printStackTrace(); } catch (XmlParserException e) { e.printStackTrace(); return dfsFile; @Override public String getFileUrl(String fileName) { return this.getFileUrl(minioDfsProperties.getBucket(), fileName); @Override public String getFileUrl(String bucket, String fileName) { return this.getFileUrl(bucket, fileName, DfsConstants.DFS_FILE_DURATION, DfsConstants.DFS_FILE_DURATION_UNIT); @Override public String getFileUrl(String bucket, String fileName, int duration, TimeUnit unit) { String url = null; try { url = minioClient.getPresignedObjectUrl( GetPresignedObjectUrlArgs.builder() .method(Method.GET) .bucket(bucket) .object(fileName) .expiry(duration, unit) .build()); } catch (ErrorResponseException e) { e.printStackTrace(); } catch (InsufficientDataException e) { e.printStackTrace(); } catch (InternalException e) { e.printStackTrace(); } catch (InvalidKeyException e) { e.printStackTrace(); } catch (InvalidResponseException e) { e.printStackTrace(); } catch (IOException e) { 
e.printStackTrace(); } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } catch (XmlParserException e) { e.printStackTrace(); } catch (ServerException e) { e.printStackTrace(); return url; @Override public OutputStream getFileObject(String fileName, OutputStream outputStream) { return this.getFileObject(minioDfsProperties.getBucket(), fileName, outputStream); @Override public OutputStream getFileObject(String bucket, String fileName, OutputStream outputStream) { BufferedInputStream bis = null; InputStream stream = null; try { stream = minioClient.getObject( GetObjectArgs.builder() .bucket(bucket) .object(fileName) .build()); bis = new BufferedInputStream(stream); IOUtils.copy(bis, outputStream); } catch (ErrorResponseException e) { e.printStackTrace(); } catch (InsufficientDataException e) { e.printStackTrace(); } catch (InternalException e) { e.printStackTrace(); } catch (InvalidKeyException e) { e.printStackTrace(); } catch (InvalidResponseException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } catch (ServerException e) { e.printStackTrace(); } catch (XmlParserException e) { e.printStackTrace(); } finally { if (stream != null) { try { stream.close(); } catch (IOException e) { e.printStackTrace(); if (bis != null) { try { bis.close(); } catch (IOException e) { e.printStackTrace(); return outputStream; @Override public String removeFile(String fileName) { return this.removeFile(minioDfsProperties.getBucket(), fileName); @Override public String removeFile(String bucket, String fileName) { return this.removeFiles(bucket, Collections.singletonList(fileName)); @Override public String removeFiles(List<String> fileNames) { return this.removeFiles(minioDfsProperties.getBucket(), fileNames); @Override public String removeFiles(String bucket, List<String> fileNames) { List<DeleteObject> deleteObject = new ArrayList<>(); if (!CollectionUtils.isEmpty(fileNames)) 
fileNames.stream().forEach(item -> { deleteObject.add(new DeleteObject(item)); Iterable<Result<DeleteError>> result = minioClient.removeObjects(RemoveObjectsArgs.builder() .bucket(bucket) .objects(deleteObject) .build()); try { return JsonUtils.objToJsonIgnoreNull(result); } catch (Exception e) { e.printStackTrace(); return null; }3、在GitEgg-Platform中新建gitegg-platform-dfs-qiniu子工程,新建QiNiuDfsServiceImpl和QiNiuDfsProperties用于实现IDfsBaseService文件上传下载接口@Data @Component @ConfigurationProperties(prefix = "dfs.qiniu") public class QiNiuDfsProperties { * AccessKey private String accessKey; * SecretKey private String secretKey; * 七牛云机房 private String region; * Bucket 存储块 private String bucket; * 公开还是私有 private Integer accessControl; * 上传服务器域名地址 private String uploadUrl; * 文件请求地址前缀 private String accessUrlPrefix; * 上传文件夹前缀 private String uploadDirPrefix; }@Slf4j @AllArgsConstructor public class QiNiuDfsServiceImpl implements IDfsBaseService { private final Auth auth; private final UploadManager uploadManager; private final BucketManager bucketManager; private final QiNiuDfsProperties qiNiuDfsProperties; * @param bucket * @return @Override public String uploadToken(String bucket) { Auth auth = Auth.create(qiNiuDfsProperties.getAccessKey(), qiNiuDfsProperties.getSecretKey()); String upToken = auth.uploadToken(bucket); return upToken; * @param bucket * @param key * @return @Override public String uploadToken(String bucket, String key) { Auth auth = Auth.create(qiNiuDfsProperties.getAccessKey(), qiNiuDfsProperties.getSecretKey()); String upToken = auth.uploadToken(bucket, key); return upToken; @Override public void createBucket(String bucket) { try { String[] buckets = bucketManager.buckets(); if (!ArrayUtil.contains(buckets, bucket)) { bucketManager.createBucket(bucket, qiNiuDfsProperties.getRegion()); } catch (QiniuException e) { e.printStackTrace(); * @param inputStream * @param fileName * @return @Override public GitEggDfsFile uploadFile(InputStream inputStream, String 
fileName) { return this.uploadFile(inputStream, qiNiuDfsProperties.getBucket(), fileName); * @param inputStream * @param bucket * @param fileName * @return @Override public GitEggDfsFile uploadFile(InputStream inputStream, String bucket, String fileName) { GitEggDfsFile dfsFile = null; //默认不指定key的情况下,以文件内容的hash值作为文件名 String key = null; if (!StringUtils.isEmpty(fileName)) key = fileName; try { String upToken = auth.uploadToken(bucket); Response response = uploadManager.put(inputStream, key, upToken,null, null); //解析上传成功的结果 dfsFile = JsonUtils.jsonToPojo(response.bodyString(), GitEggDfsFile.class); if (dfsFile != null) { dfsFile.setBucket(bucket); dfsFile.setBucketDomain(qiNiuDfsProperties.getUploadUrl()); dfsFile.setFileUrl(qiNiuDfsProperties.getAccessUrlPrefix()); dfsFile.setEncodedFileName(fileName); } catch (QiniuException ex) { Response r = ex.response; log.error(r.toString()); try { log.error(r.bodyString()); } catch (QiniuException ex2) { log.error(ex2.toString()); } catch (Exception e) { log.error(e.toString()); return dfsFile; @Override public String getFileUrl(String fileName) { return this.getFileUrl(qiNiuDfsProperties.getBucket(), fileName); @Override public String getFileUrl(String bucket, String fileName) { return this.getFileUrl(bucket, fileName, DfsConstants.DFS_FILE_DURATION, DfsConstants.DFS_FILE_DURATION_UNIT); @Override public String getFileUrl(String bucket, String fileName, int duration, TimeUnit unit) { String finalUrl = null; try { Integer accessControl = qiNiuDfsProperties.getAccessControl(); if (accessControl != null && DfsConstants.DFS_FILE_PRIVATE == accessControl.intValue()) { String encodedFileName = URLEncoder.encode(fileName, "utf-8").replace("+", "%20"); String publicUrl = String.format("%s/%s", qiNiuDfsProperties.getAccessUrlPrefix(), encodedFileName); String accessKey = qiNiuDfsProperties.getAccessKey(); String secretKey = qiNiuDfsProperties.getSecretKey(); Auth auth = Auth.create(accessKey, secretKey); long expireInSeconds = 
unit.toSeconds(duration); finalUrl = auth.privateDownloadUrl(publicUrl, expireInSeconds); else { finalUrl = String.format("%s/%s", qiNiuDfsProperties.getAccessUrlPrefix(), fileName); } catch (UnsupportedEncodingException e) { e.printStackTrace(); return finalUrl; @Override public OutputStream getFileObject(String fileName, OutputStream outputStream) { return this.getFileObject(qiNiuDfsProperties.getBucket(), fileName, outputStream); @Override public OutputStream getFileObject(String bucket, String fileName, OutputStream outputStream) { URL url = null; HttpURLConnection conn = null; BufferedInputStream bis = null; try { String path = this.getFileUrl(bucket, fileName, DfsConstants.DFS_FILE_DURATION, DfsConstants.DFS_FILE_DURATION_UNIT); url = new URL(path); conn = (HttpURLConnection)url.openConnection(); //设置超时间 conn.setConnectTimeout(DfsConstants.DOWNLOAD_TIMEOUT); //防止屏蔽程序抓取而返回403错误 conn.setRequestProperty("User-Agent", "Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)"); conn.connect(); //得到输入流 bis = new BufferedInputStream(conn.getInputStream()); IOUtils.copy(bis, outputStream); } catch (Exception e) { log.error("读取网络文件异常:" + fileName); finally { conn.disconnect(); if (bis != null) { try { bis.close(); } catch (IOException e) { e.printStackTrace(); return outputStream; * @param fileName * @return @Override public String removeFile(String fileName) { return this.removeFile( qiNiuDfsProperties.getBucket(), fileName); * @param bucket * @param fileName * @return @Override public String removeFile(String bucket, String fileName) { String resultStr = null; try { Response response = bucketManager.delete(bucket, fileName); resultStr = JsonUtils.objToJson(response); } catch (QiniuException e) { Response r = e.response; log.error(r.toString()); try { log.error(r.bodyString()); } catch (QiniuException ex2) { log.error(ex2.toString()); } catch (Exception e) { log.error(e.toString()); return resultStr; * @param fileNames * @return @Override public String 
removeFiles(List<String> fileNames) {
        return this.removeFiles(qiNiuDfsProperties.getBucket(), fileNames);
    }

    /**
     * @param bucket
     * @param fileNames
     * @return
     */
    @Override
    public String removeFiles(String bucket, List<String> fileNames) {
        String resultStr = null;
        try {
            if (!CollectionUtils.isEmpty(fileNames) && fileNames.size() > GitEggConstant.Number.THOUSAND) {
                throw new BusinessException("单次批量请求的文件数量不得超过1000");
            }
            BucketManager.BatchOperations batchOperations = new BucketManager.BatchOperations();
            // toArray(new String[0]) is required here: the untyped toArray() returns Object[],
            // and a plain (String[]) cast would fail with ClassCastException at runtime
            batchOperations.addDeleteOp(bucket, fileNames.toArray(new String[0]));
            Response response = bucketManager.batch(batchOperations);
            BatchStatus[] batchStatusList = response.jsonToObject(BatchStatus[].class);
            resultStr = JsonUtils.objToJson(batchStatusList);
        } catch (QiniuException ex) {
            log.error(ex.response.toString());
        } catch (Exception e) {
            log.error(e.toString());
        }
        return resultStr;
    }

4. In GitEgg-Platform, create a gitegg-platform-dfs-starter sub-project that aggregates all of the upload/download sub-projects, so business modules can pull in every implementation through a single dependency:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>GitEgg-Platform</artifactId>
        <groupId>com.gitegg.platform</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>gitegg-platform-dfs-starter</artifactId>
    <name>${project.artifactId}</name>
    <packaging>jar</packaging>
    <dependencies>
        <!-- gitegg 分布式文件自定义扩展-minio -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-dfs-minio</artifactId>
        </dependency>
        <!-- gitegg 分布式文件自定义扩展-七牛云 -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-dfs-qiniu</artifactId>
        </dependency>
    </dependencies>
</project>

5. Add the file-storage dependencies to gitegg-platform-bom:

<!-- gitegg 分布式文件自定义扩展 -->
<dependency>
    <groupId>com.gitegg.platform</groupId>
    <artifactId>gitegg-platform-dfs</artifactId>
    <version>${gitegg.project.version}</version>
</dependency>
<!-- gitegg 分布式文件自定义扩展-minio -->
<dependency>
    <groupId>com.gitegg.platform</groupId>
    <artifactId>gitegg-platform-dfs-minio</artifactId>
    <version>${gitegg.project.version}</version>
</dependency>
<!-- gitegg 分布式文件自定义扩展-七牛云 -->
<dependency>
    <groupId>com.gitegg.platform</groupId>
    <artifactId>gitegg-platform-dfs-qiniu</artifactId>
    <version>${gitegg.project.version}</version>
</dependency>
<!-- gitegg 分布式文件自定义扩展-starter -->
<dependency>
    <groupId>com.gitegg.platform</groupId>
    <artifactId>gitegg-platform-dfs-starter</artifactId>
    <version>${gitegg.project.version}</version>
</dependency>
<!-- minio文件存储服务 https://mvnrepository.com/artifact/io.minio/minio -->
<dependency>
    <groupId>io.minio</groupId>
    <artifactId>minio</artifactId>
    <version>${dfs.minio.version}</version>
</dependency>
<!--七牛云文件存储服务-->
<dependency>
    <groupId>com.qiniu</groupId>
    <artifactId>qiniu-java-sdk</artifactId>
    <version>${dfs.qiniu.version}</version>
</dependency>

II. Business feature implementation

The distributed file storage feature lives in the gitegg-service-extension project as a system extension. It breaks down into several modules:
a file server configuration module;
an upload/download record module (downloads are recorded only for private files; publicly accessible files need no record);
the frontend upload/download implementation.

1. Create the file server configuration table that stores the file-server settings; once the table structure is defined, generate the CRUD code with the code generator.

CREATE TABLE `t_sys_dfs` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `dfs_type` bigint(20) NULL DEFAULT NULL COMMENT '分布式存储分类',
  `dfs_code` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '分布式存储编号',
  `access_url_prefix` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '文件访问地址前缀',
  `upload_url` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '分布式存储上传地址',
  `bucket` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '空间名称',
  `app_id` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '应用ID',
  `region` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL
DEFAULT NULL COMMENT '区域', `access_key` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'accessKey', `secret_key` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'secretKey', `dfs_default` tinyint(2) NOT NULL DEFAULT 0 COMMENT '是否默认存储 0否,1是', `dfs_status` tinyint(2) NOT NULL DEFAULT 1 COMMENT '状态 0禁用,1 启用', `access_control` tinyint(2) NOT NULL DEFAULT 0 COMMENT '访问控制 0私有,1公开', `comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '备注', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '分布式存储配置表' ROW_FORMAT = DYNAMIC;2、新建DfsQiniuFactory和DfsMinioFactory接口实现工厂类,用于根据当前用户的选择,实例化需要的接口实现类/** * 七牛云上传服务接口工厂类 public class DfsQiniuFactory { public static IDfsBaseService getDfsBaseService(DfsDTO dfsDTO) { Auth auth = Auth.create(dfsDTO.getAccessKey(), dfsDTO.getSecretKey()); Configuration cfg = new Configuration(Region.autoRegion()); UploadManager uploadManager = new UploadManager(cfg); BucketManager bucketManager = new BucketManager(auth, cfg); QiNiuDfsProperties qiNiuDfsProperties = new QiNiuDfsProperties(); qiNiuDfsProperties.setAccessKey(dfsDTO.getAccessKey()); qiNiuDfsProperties.setSecretKey(dfsDTO.getSecretKey()); qiNiuDfsProperties.setRegion(dfsDTO.getRegion()); qiNiuDfsProperties.setBucket(dfsDTO.getBucket()); qiNiuDfsProperties.setUploadUrl(dfsDTO.getUploadUrl()); qiNiuDfsProperties.setAccessUrlPrefix(dfsDTO.getAccessUrlPrefix()); qiNiuDfsProperties.setAccessControl(dfsDTO.getAccessControl()); return new QiNiuDfsServiceImpl(auth, uploadManager, bucketManager, qiNiuDfsProperties); * 
MINIO上传服务接口工厂类 public class DfsMinioFactory { public static IDfsBaseService getDfsBaseService(DfsDTO dfsDTO) { MinioClient minioClient = MinioClient.builder() .endpoint(dfsDTO.getUploadUrl()) .credentials(dfsDTO.getAccessKey(), dfsDTO.getSecretKey()).build();; MinioDfsProperties minioDfsProperties = new MinioDfsProperties(); minioDfsProperties.setAccessKey(dfsDTO.getAccessKey()); minioDfsProperties.setSecretKey(dfsDTO.getSecretKey()); minioDfsProperties.setRegion(dfsDTO.getRegion()); minioDfsProperties.setBucket(dfsDTO.getBucket()); minioDfsProperties.setUploadUrl(dfsDTO.getUploadUrl()); minioDfsProperties.setAccessUrlPrefix(dfsDTO.getAccessUrlPrefix()); minioDfsProperties.setAccessControl(dfsDTO.getAccessControl()); return new MinioDfsServiceImpl(minioClient, minioDfsProperties); }3、新建DfsFactory工厂类,添加@Component使用容器管理该类(默认单例),用于根据系统用户配置,生成及缓存对应的上传下载接口实现/** * DfsFactory工厂类,根据系统用户配置,生成及缓存对应的上传下载接口实现 @Component public class DfsFactory { * DfsService 缓存 private final static Map<Long, IDfsBaseService> dfsBaseServiceMap = new ConcurrentHashMap<>(); * 获取 DfsService * @param dfsDTO 分布式存储配置 * @return dfsService public IDfsBaseService getDfsBaseService(DfsDTO dfsDTO) { //根据dfsId获取对应的分布式存储服务接口,dfsId是唯一的,每个租户有其自有的dfsId Long dfsId = dfsDTO.getId(); IDfsBaseService dfsBaseService = dfsBaseServiceMap.get(dfsId); if (null == dfsBaseService) { Class cls = null; try { cls = Class.forName(DfsFactoryClassEnum.getValue(String.valueOf(dfsDTO.getDfsType()))); Method staticMethod = cls.getDeclaredMethod(DfsConstants.DFS_SERVICE_FUNCTION, DfsDTO.class); dfsBaseService = (IDfsBaseService) staticMethod.invoke(cls, dfsDTO); dfsBaseServiceMap.put(dfsId, dfsBaseService); } catch (ClassNotFoundException | NoSuchMethodException e) { e.printStackTrace(); } catch (IllegalAccessException e) { e.printStackTrace(); } catch (InvocationTargetException e) { e.printStackTrace(); return dfsBaseService; }4、新建枚举类DfsFactoryClassEnum,用于DfsFactory 工厂类通过反射实例化对应文件服务器的接口实现类/** * @ClassName: DfsFactoryClassEnum * 
@Description: 分布式存储工厂类枚举 ,因dfs表存的是数据字典表的id,这里省一次数据库查询,所以就用数据字典的id * @author GitEgg * @date 2020年09月19日 下午11:49:45 public enum DfsFactoryClassEnum { * MINIO MINIO MINIO("2", "com.gitegg.service.extension.dfs.factory.DfsMinioFactory"), * 七牛云Kodo QINIUYUN_KODO QI_NIU("3", "com.gitegg.service.extension.dfs.factory.DfsQiniuFactory"), * 阿里云OSS ALIYUN_OSS ALI_YUN("4", "com.gitegg.service.extension.dfs.factory.DfsAliyunFactory"), * 腾讯云COS TENCENT_COS TENCENT("5", "com.gitegg.service.extension.dfs.factory.DfsTencentFactory"); public String code; public String value; DfsFactoryClassEnum(String code, String value) { this.code = code; this.value = value; public static String getValue(String code) { DfsFactoryClassEnum[] smsFactoryClassEnums = values(); for (DfsFactoryClassEnum smsFactoryClassEnum : smsFactoryClassEnums) { if (smsFactoryClassEnum.getCode().equals(code)) { return smsFactoryClassEnum.getValue(); return null; public String getCode() { return code; public void setCode(String code) { this.code = code; public String getValue() { return value; public void setValue(String value) { this.value = value; }5、新建IGitEggDfsService接口,用于定义业务需要的文件上传下载接口/** * 业务文件上传下载接口实现 public interface IGitEggDfsService { * 获取文件上传的 token * @param dfsCode * @return String uploadToken(String dfsCode); * 上传文件 * @param dfsCode * @param file * @return GitEggDfsFile uploadFile(String dfsCode, MultipartFile file); * 获取文件访问链接 * @param dfsCode * @param fileName * @return String getFileUrl(String dfsCode, String fileName); * 下载文件 * @param dfsCode * @param fileName * @return OutputStream downloadFile(String dfsCode, String fileName, OutputStream outputStream); }6、新建IGitEggDfsService接口实现类GitEggDfsServiceImpl,用于实现业务需要的文件上传下载接口@Slf4j @Service @RequiredArgsConstructor(onConstructor_ = @Autowired) public class GitEggDfsServiceImpl implements IGitEggDfsService { private final DfsFactory dfsFactory; private final IDfsService dfsService; private final IDfsFileService dfsFileService; @Override public String 
uploadToken(String dfsCode) { QueryDfsDTO queryDfsDTO = new QueryDfsDTO(); queryDfsDTO.setDfsCode(dfsCode); DfsDTO dfsDTO = dfsService.queryDfs(queryDfsDTO); IDfsBaseService dfsBaseService = dfsFactory.getDfsBaseService(dfsDTO); String token = dfsBaseService.uploadToken(dfsDTO.getBucket()); return token; @Override public GitEggDfsFile uploadFile(String dfsCode, MultipartFile file) { QueryDfsDTO queryDfsDTO = new QueryDfsDTO(); DfsDTO dfsDTO = null; // 如果上传时没有选择存储方式,那么取默认存储方式 if(StringUtils.isEmpty(dfsCode)) { queryDfsDTO.setDfsDefault(GitEggConstant.ENABLE); else { queryDfsDTO.setDfsCode(dfsCode); GitEggDfsFile gitEggDfsFile = null; DfsFile dfsFile = new DfsFile(); try { dfsDTO = dfsService.queryDfs(queryDfsDTO); IDfsBaseService dfsFileService = dfsFactory.getDfsBaseService(dfsDTO); //获取文件名 String originalName = file.getOriginalFilename(); //获取文件后缀 String extension = FilenameUtils.getExtension(originalName); String hash = Etag.stream(file.getInputStream(), file.getSize()); String fileName = hash + "." 
+ extension; // 保存文件上传记录 dfsFile.setDfsId(dfsDTO.getId()); dfsFile.setOriginalName(originalName); dfsFile.setFileName(fileName); dfsFile.setFileExtension(extension); dfsFile.setFileSize(file.getSize()); dfsFile.setFileStatus(GitEggConstant.ENABLE); //执行文件上传操作 gitEggDfsFile = dfsFileService.uploadFile(file.getInputStream(), fileName); if (gitEggDfsFile != null) gitEggDfsFile.setFileName(originalName); gitEggDfsFile.setKey(hash); gitEggDfsFile.setHash(hash); gitEggDfsFile.setFileSize(file.getSize()); dfsFile.setAccessUrl(gitEggDfsFile.getFileUrl()); } catch (IOException e) { log.error("文件上传失败:{}", e); dfsFile.setFileStatus(GitEggConstant.DISABLE); dfsFile.setComments(String.valueOf(e)); } finally { dfsFileService.save(dfsFile); return gitEggDfsFile; @Override public String getFileUrl(String dfsCode, String fileName) { String fileUrl = null; QueryDfsDTO queryDfsDTO = new QueryDfsDTO(); DfsDTO dfsDTO = null; // 如果上传时没有选择存储方式,那么取默认存储方式 if(StringUtils.isEmpty(dfsCode)) { queryDfsDTO.setDfsDefault(GitEggConstant.ENABLE); else { queryDfsDTO.setDfsCode(dfsCode); try { dfsDTO = dfsService.queryDfs(queryDfsDTO); IDfsBaseService dfsFileService = dfsFactory.getDfsBaseService(dfsDTO); fileUrl = dfsFileService.getFileUrl(fileName); catch (Exception e) log.error("文件上传失败:{}", e); return fileUrl; @Override public OutputStream downloadFile(String dfsCode, String fileName, OutputStream outputStream) { QueryDfsDTO queryDfsDTO = new QueryDfsDTO(); DfsDTO dfsDTO = null; // 如果上传时没有选择存储方式,那么取默认存储方式 if(StringUtils.isEmpty(dfsCode)) { queryDfsDTO.setDfsDefault(GitEggConstant.ENABLE); else { queryDfsDTO.setDfsCode(dfsCode); try { dfsDTO = dfsService.queryDfs(queryDfsDTO); IDfsBaseService dfsFileService = dfsFactory.getDfsBaseService(dfsDTO); outputStream = dfsFileService.getFileObject(fileName, outputStream); catch (Exception e) log.error("文件上传失败:{}", e); return outputStream; }7、新建GitEggDfsController用于文件上传下载通用访问控制器@RestController @RequestMapping("/extension") 
@RequiredArgsConstructor(onConstructor_ = @Autowired) @Api(value = "GitEggDfsController|文件上传前端控制器") @RefreshScope public class GitEggDfsController { private final IGitEggDfsService gitEggDfsService; * 上传文件 * @param uploadFile * @param dfsCode * @return @PostMapping("/upload/file") public Result<?> uploadFile(@RequestParam("uploadFile") MultipartFile[] uploadFile, String dfsCode) { GitEggDfsFile gitEggDfsFile = null; if (ArrayUtils.isNotEmpty(uploadFile)) for (MultipartFile file : uploadFile) { gitEggDfsFile = gitEggDfsService.uploadFile(dfsCode, file); return Result.data(gitEggDfsFile); * 通过文件名获取文件访问链接 @GetMapping("/get/file/url") @ApiOperation(value = "查询分布式存储配置表详情") public Result<?> query(String dfsCode, String fileName) { String fileUrl = gitEggDfsService.getFileUrl(dfsCode, fileName); return Result.data(fileUrl); * 通过文件名以文件流的方式下载文件 @GetMapping("/get/file/download") public void downloadFile(HttpServletResponse response,HttpServletRequest request,String dfsCode, String fileName) { if (fileName != null) { response.setCharacterEncoding(request.getCharacterEncoding()); response.setContentType("application/octet-stream"); response.addHeader("Content-Disposition", "attachment;fileName=" + fileName); OutputStream os = null; try { os = response.getOutputStream(); os = gitEggDfsService.downloadFile(dfsCode, fileName, os); os.flush(); os.close(); } catch (Exception e) { e.printStackTrace(); } finally { if (os != null) { try { os.close(); } catch (IOException e) { e.printStackTrace(); }8、前端上传下载实现,注意的是:axios请求下载文件流时,需要设置 responseType: 'blob'上传handleUploadTest (row) { this.fileList = [] this.uploading = false this.uploadForm.dfsType = row.dfsType this.uploadForm.dfsCode = row.dfsCode this.uploadForm.uploadFile = null this.dialogTestUploadVisible = true handleRemove (file) { const index = this.fileList.indexOf(file) const newFileList = this.fileList.slice() newFileList.splice(index, 1) this.fileList = newFileList beforeUpload (file) { this.fileList = [...this.fileList, file] 
return false handleUpload () { this.uploadedFileName = '' const { fileList } = this const formData = new FormData() formData.append('dfsCode', this.uploadForm.dfsCode) fileList.forEach(file => { formData.append('uploadFile', file) this.uploading = true dfsUpload(formData).then(() => { this.fileList = [] this.uploading = false this.$message.success('上传成功') }).catch(err => { console.log('uploading', err) this.$message.error('上传失败') }下载getFileUrl (row) { this.listLoading = true this.fileDownload.dfsCode = row.dfsCode this.fileDownload.fileName = row.fileName dfsGetFileUrl(this.fileDownload).then(response => { window.open(response.data) this.listLoading = false downLoadFile (row) { this.listLoading = true this.fileDownload.dfsCode = row.dfsCode this.fileDownload.fileName = row.fileName this.fileDownload.responseType = 'blob' dfsDownloadFileUrl(this.fileDownload).then(response => { const blob = new Blob([response.data]) var fileName = row.originalName const elink = document.createElement('a') elink.download = fileName elink.style.display = 'none' elink.href = URL.createObjectURL(blob) document.body.appendChild(elink) elink.click() URL.revokeObjectURL(elink.href) document.body.removeChild(elink) this.listLoading = false }前端接口import request from '@/utils/request' export function dfsUpload (formData) { return request({ url: '/gitegg-service-extension/extension/upload/file', method: 'post', data: formData export function dfsGetFileUrl (query) { return request({ url: '/gitegg-service-extension/extension/get/file/url', method: 'get', params: query export function dfsDownloadFileUrl (query) { return request({ url: '/gitegg-service-extension/extension/get/file/download', method: 'get', responseType: 'blob', params: query }三、功能测试界面1、批量上传上传界面2、文件流下载及获取文件地址文件流下载及获取文件地址备注1、防止文件名重复,这里文件名统一采用七牛云的hash算法,可以防止文件重复,在界面需要展示的文件名,则存储到数据库一个文件名字段进行展示。所有的上传文件都留有记录。
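The closing note above says file names are derived from the file content's hash (via Qiniu's Etag), so identical uploads map to the same object key, while the human-readable name is kept in a separate database column for display. A minimal sketch of that dedup idea, using a plain SHA-256 digest as a stand-in (Qiniu's real Etag is a block-based algorithm and produces different values):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.security.MessageDigest;

public class HashFileName {

    // Derive a stable file name from the content digest plus the original extension,
    // so the same bytes always produce the same object key (upload deduplication).
    // NOTE: plain SHA-256 is an assumption for illustration; Qiniu's Etag uses its
    // own block-based algorithm, so real Etag values differ from these digests.
    public static String hashedName(InputStream in, String extension) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            md.update(buf, 0, n);
        }
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest()) {
            sb.append(String.format("%02x", b));
        }
        return sb + "." + extension;
    }

    public static void main(String[] args) throws Exception {
        String a = hashedName(new ByteArrayInputStream("hello".getBytes()), "txt");
        String b = hashedName(new ByteArrayInputStream("hello".getBytes()), "txt");
        System.out.println(a.equals(b)); // identical content yields the identical name
    }
}
```

Because the key is content-derived, re-uploading the same file hits the same object instead of creating a duplicate; the original display name lives only in the upload record.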

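Step 3 above has DfsFactory cache one IDfsBaseService per dfsId in a ConcurrentHashMap and build missing instances by invoking the matching factory's static method through reflection. The pattern in isolation looks like this; all identifiers here (ReflectiveFactoryCache, DemoFactory, getService) are illustrative stand-ins, not the project's real names:

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReflectiveFactoryCache {

    // One service instance per configuration id, built lazily and reused.
    private static final Map<Long, Object> CACHE = new ConcurrentHashMap<>();

    // Demo stand-in for DfsQiniuFactory / DfsMinioFactory: a static builder method.
    public static class DemoFactory {
        public static Object getDfsBaseService(String config) {
            return "service:" + config;
        }
    }

    public static Object getService(Long id, String factoryClassName, String config)
            throws ReflectiveOperationException {
        Object service = CACHE.get(id);
        if (service == null) {
            // Resolve the factory class by name and call its static builder,
            // mirroring the Class.forName + getDeclaredMethod + invoke flow above.
            Class<?> cls = Class.forName(factoryClassName);
            Method builder = cls.getDeclaredMethod("getDfsBaseService", String.class);
            service = builder.invoke(null, config);
            CACHE.put(id, service);
        }
        return service;
    }

    public static void main(String[] args) throws Exception {
        String factory = DemoFactory.class.getName();
        Object s1 = getService(1L, factory, "bucket-a");
        Object s2 = getService(1L, factory, "bucket-a");
        System.out.println(s1 == s2); // second call returns the cached instance
    }
}
```

In the real DfsFactory the class name comes from DfsFactoryClassEnum keyed by dfsType; note that ConcurrentHashMap#computeIfAbsent would make the get/put pair atomic, where the plain get-then-put above can build an instance twice under contention.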
SpringCloud Microservices in Action — Building an Enterprise Development Framework (28): Extending the MybatisPlus DataPermissionInterceptor Plugin for Data Permission Control

A complete permission system must support both functional permissions and data permissions. Earlier installments showed how the system implements functional permission control with the RBAC model; this installment implements data permission control by extending the Mybatis-Plus plugin DataPermissionInterceptor.

In brief, functional permissions govern which operations a user may perform in the system, while data permissions govern which data the user may access. Data permissions further divide into row-level and column-level permissions.

Basic concepts:

Row-level data permission: described against a table, it grants a user whole rows of data. For example, when rows are partitioned by department and a row belongs to a given department, a user who only holds rights over that department holds the row-level permission for that row.

Column-level data permission: described against a table, it grants a user only some of a table's fields. For example, sensitive fields such as bank card and phone number may be visible only to privileged users, while ordinary users may query the basic fields; different roles hold different column permissions.

Implementation:

Row-level data permission, subdivided with the role as the unit of grant:
1. view one's own data only;
2. view one's own department's data only;
3. view one's own department's and its sub-departments' data;
4. view all departments' data;
and with the user as the unit of grant:
5. the same functional role holding data permissions over different departments;
6. different roles holding data permissions over different departments.

Types 1/2/3/4 are configured in the role list: for a given interface, which data permission type a role holds. Type 5 is configured in the user list by assigning multiple departments to a user. Type 6 is the most complex; most existing solutions take one of two routes:
1. at login, if the user belongs to several departments, ask them to choose one and operate only under that department's permissions for the session; or
2. create a separate account and role per department and let the user log in with the matching account.

Preferring to keep complex systems simple and to build complex features with low coupling, I lean toward the second approach, because:
1. it reduces system complexity: the more complex the branching, the more error-prone it is, not only during development but also during later extension and upgrades;
2. as a workload trade-off: one person holding different permissions across multiple departments is a supported but uncommon case — within one company it is rare for the same user to be both a business department manager and a finance department manager; dedicated staffing is the norm. Distinguish this from type 5: a business manager who manages several departments with the same permission set is type 5, while a general manager who can see all business and finance data is type 4.

So the system does not determine data permissions by asking the user to pick a department after login.

Column-level data permission: this is implemented per role — which fields a role can see. There is no case of binding specific fields directly to a user; for such a need, simply create a dedicated role, staying with an RBAC-like model and never linking users directly to data permissions. Besides restricting the fields fetched on the backend, the UI must also decide whether to render a column (for example in a table) according to the column permission. Role configuration therefore splits in two: row-level configuration specifies the interfaces under data permission control and the permission type (1/2/3/4 above); column-level configuration additionally specifies the fields to include or to exclude.

Data permission rules are configured in resource management and bound to interfaces (t_sys_data_permission_role), reaching users through roles in RBAC fashion; a user's multi-department permissions under the same role are configured in user management, linking the user directly to departments (t_sys_data_permission_user). The data permission management design looks as follows:

(figure: data permission management design)

Data permission table design:

CREATE TABLE `t_sys_data_permission_user` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `user_id` bigint(20) NOT NULL COMMENT '用户id',
  `organization_id` bigint(20) NOT NULL COMMENT '机构id',
  `status` tinyint(2) NULL DEFAULT 1 COMMENT '状态 0禁用,1 启用,',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT
'创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;CREATE TABLE `t_sys_data_permission_role` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `resource_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '功能权限id', `data_name` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '数据权限名称', `data_mapper_function` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '数据权限对应的mapper方法全路径', `data_table_name` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '需要做数据权限主表', `data_table_alias` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '需要做数据权限表的别名', `data_column_exclude` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '数据权限需要排除的字段', `data_column_include` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '数据权限需要保留的字段', `inner_table_name` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '数据权限表,默认t_sys_organization', `inner_table_alias` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '数据权限表的别名,默认organization', `data_permission_type` varchar(2) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL DEFAULT '1' COMMENT '数据权限类型:1只能查看本人 2只能查看本部门 3只能查看本部门及子部门 4可以查看所有数据', `custom_expression` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT '自定义数据权限(增加 where条件)', `status` tinyint(2) NOT NULL DEFAULT 1 COMMENT '状态 0禁用,1 启用,', `comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '备注', 
`create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '数据权限配置表' ROW_FORMAT = DYNAMIC;数据权限缓存(Redis)设计:Redis Key:多租户模式:auth:tenant:data:permission:0(租户):mapper_Mapper全路径type数据权限类型普通模式:auth:data:permission:mapper_Mapper全路径type数据权限类型Redis Value:存放角色分配的DataPermissionEntity配置   数据权限插件在组装SQL时,首先通过前缀匹配查询mapper的statementId是否在缓存中,如果存在,那么取出当前用户的数据权限类型,组装好带有数据权限类型的DataPermission缓存Key,从缓存中取出数据权限配置。在设计角色时,除了需要给角色设置功能权限之外,还要设置数据权限类型,角色的数据权限类型只能单选(1只能查看本人 2只能查看本部门 3只能查看本部门及子部门 4可以查看所有数据5自定义)代码实现:因DataPermissionInterceptor默认不支持修改selectItems,导致无法做到列级别的数据权限,所以这里自定义扩展DataPermissionInterceptor,使其支持列级权限扩展@Data @NoArgsConstructor @AllArgsConstructor @ToString(callSuper = true) @EqualsAndHashCode(callSuper = true) public class GitEggDataPermissionInterceptor extends DataPermissionInterceptor { private GitEggDataPermissionHandler dataPermissionHandler; public void beforeQuery(Executor executor, MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, BoundSql boundSql) throws SQLException { if (!InterceptorIgnoreHelper.willIgnoreDataPermission(ms.getId())) { PluginUtils.MPBoundSql mpBs = PluginUtils.mpBoundSql(boundSql); mpBs.sql(this.parserSingle(mpBs.sql(), ms.getId())); protected void processSelect(Select select, int index, String sql, Object obj) { SelectBody selectBody = select.getSelectBody(); if (selectBody instanceof PlainSelect) { PlainSelect plainSelect = (PlainSelect)selectBody; this.processDataPermission(plainSelect, (String)obj); } else if (selectBody instanceof SetOperationList) { SetOperationList setOperationList = (SetOperationList)selectBody; List<SelectBody> selectBodyList = 
setOperationList.getSelects(); selectBodyList.forEach((s) -> { PlainSelect plainSelect = (PlainSelect)s; this.processDataPermission(plainSelect, (String)obj); protected void processDataPermission(PlainSelect plainSelect, String whereSegment) { this.dataPermissionHandler.processDataPermission(plainSelect, whereSegment); }自定义实现DataPermissionHandler数据权限控制@Component @RequiredArgsConstructor(onConstructor_ = @Autowired) public class GitEggDataPermissionHandler implements DataPermissionHandler { @Value(("${tenant.enable}")) private Boolean enable; * 注解方式默认关闭,这里只是说明一种实现方式,实际使用时,使用配置的方式即可 @Value(("${data-permission.annotation-enable}")) private Boolean annotationEnable = false; private final RedisTemplate redisTemplate; public void processDataPermission(PlainSelect plainSelect, String mappedStatementId) { try { GitEggUser loginUser = GitEggAuthUtils.getCurrentUser(); // 1 当有数据权限配置时才去判断用户是否有数据权限控制 if (ObjectUtils.isNotEmpty(loginUser) && CollectionUtils.isNotEmpty(loginUser.getDataPermissionTypeList())) { // 1 根据系统配置的数据权限拼装sql StringBuffer statementSb = new StringBuffer(); if (enable) statementSb.append(DataPermissionConstant.TENANT_DATA_PERMISSION_KEY).append(loginUser.getTenantId()); statementSb.append(DataPermissionConstant.DATA_PERMISSION_KEY); String dataPermissionKey = statementSb.toString(); StringBuffer statementSbt = new StringBuffer(DataPermissionConstant.DATA_PERMISSION_KEY_MAPPER); statementSbt.append(mappedStatementId).append(DataPermissionConstant.DATA_PERMISSION_KEY_TYPE); String mappedStatementIdKey = statementSbt.toString(); DataPermissionEntity dataPermissionEntity = null; for (String dataPermissionType: loginUser.getDataPermissionTypeList()) String dataPermissionUserKey = mappedStatementIdKey + dataPermissionType; dataPermissionEntity = (DataPermissionEntity) redisTemplate.boundHashOps(dataPermissionKey).get(dataPermissionUserKey); if (ObjectUtils.isNotEmpty(dataPermissionEntity)) { break; // mappedStatementId是否有配置数据权限 if 
(ObjectUtils.isNotEmpty(dataPermissionEntity)) dataPermissionFilter(loginUser, dataPermissionEntity, plainSelect); //默认不开启注解,因每次查询都遍历注解,影响性能,直接选择使用配置的方式实现数据权限即可 else if(annotationEnable) // 2 根据注解的数据权限拼装sql Class<?> clazz = Class.forName(mappedStatementId.substring(GitEggConstant.Number.ZERO, mappedStatementId.lastIndexOf(StringPool.DOT))); String methodName = mappedStatementId.substring(mappedStatementId.lastIndexOf(StringPool.DOT) + GitEggConstant.Number.ONE); Method[] methods = clazz.getDeclaredMethods(); for (Method method : methods) { //当有多个时,这个方法可以获取到 DataPermission[] annotations = method.getAnnotationsByType(DataPermission.class); if (ObjectUtils.isNotEmpty(annotations) && method.getName().equals(methodName)) { for (DataPermission dataPermission : annotations) { String dataPermissionType = dataPermission.dataPermissionType(); for (String dataPermissionUser : loginUser.getDataPermissionTypeList()) { if (ObjectUtils.isNotEmpty(dataPermission) && StringUtils.isNotEmpty(dataPermissionType) && dataPermissionUser.equals(dataPermissionType)) { DataPermissionEntity dataPermissionEntityAnnotation = annotationToEntity(dataPermission); dataPermissionFilter(loginUser, dataPermissionEntityAnnotation, plainSelect); break; } catch (ClassNotFoundException e) { e.printStackTrace(); * 构建过滤条件 * @param user 当前登录用户 * @param plainSelect plainSelect * @return 构建后查询条件 public static void dataPermissionFilter(GitEggUser user, DataPermissionEntity dataPermissionEntity, PlainSelect plainSelect) { Expression expression = plainSelect.getWhere(); String dataPermissionType = dataPermissionEntity.getDataPermissionType(); String dataTableName = dataPermissionEntity.getDataTableName(); String dataTableAlias = dataPermissionEntity.getDataTableAlias(); String innerTableName = StringUtils.isNotEmpty(dataPermissionEntity.getInnerTableName()) ? 
dataPermissionEntity.getInnerTableName(): DataPermissionConstant.DATA_PERMISSION_TABLE_NAME; String innerTableAlias = StringUtils.isNotEmpty(dataPermissionEntity.getInnerTableAlias()) ? dataPermissionEntity.getInnerTableAlias() : DataPermissionConstant.DATA_PERMISSION_TABLE_ALIAS_NAME; List<String> organizationIdList = user.getOrganizationIdList(); // 列级数据权限 String dataColumnExclude = dataPermissionEntity.getDataColumnExclude(); String dataColumnInclude = dataPermissionEntity.getDataColumnInclude(); List<String> includeColumns = new ArrayList<>(); List<String> excludeColumns = new ArrayList<>(); // 只包含这几个字段,也就是不是这几个字段的,直接删除 if (StringUtils.isNotEmpty(dataColumnInclude)) includeColumns = Arrays.asList(dataColumnInclude.split(StringPool.COMMA)); // 需要排除这几个字段 if (StringUtils.isNotEmpty(dataColumnExclude)) excludeColumns = Arrays.asList(dataColumnExclude.split(StringPool.COMMA)); List<SelectItem> selectItems = plainSelect.getSelectItems(); List<SelectItem> removeItems = new ArrayList<>(); if (CollectionUtils.isNotEmpty(selectItems) && (CollectionUtils.isNotEmpty(includeColumns) || CollectionUtils.isNotEmpty(excludeColumns))) { for (SelectItem selectItem : selectItems) { // 暂不处理其他类型的selectItem if (selectItem instanceof SelectExpressionItem) { SelectExpressionItem selectExpressionItem = (SelectExpressionItem) selectItem; Alias alias = selectExpressionItem.getAlias(); if ((CollectionUtils.isNotEmpty(includeColumns) && !includeColumns.contains(alias.getName())) || (!CollectionUtils.isEmpty(excludeColumns) && excludeColumns.contains(alias.getName()))) removeItems.add(selectItem); } else if (selectItem instanceof AllTableColumns) { removeItems.add(selectItem); if (CollectionUtils.isNotEmpty(removeItems)) selectItems.removeAll(removeItems); plainSelect.setSelectItems(selectItems); // 行级数据权限 // 查询用户机构和子机构的数据,这里是使用where条件添加子查询的方式来实现的,这样的实现方式好处是不需要判断Update,Insert还是Select,都是通用的,缺点是性能问题。 if 
(DataPermissionTypeEnum.DATA_PERMISSION_ORG_AND_CHILD.getLevel().equals(dataPermissionType)) { // 如果是table的话,那么直接加inner,如果不是,那么直接在where条件里加子查询 if (plainSelect.getFromItem() instanceof Table) Table fromTable = (Table)plainSelect.getFromItem(); //数据主表 Table dataTable = null; //inner数据权限表 Table innerTable = null; if (fromTable.getName().equalsIgnoreCase(dataTableName)) dataTable = (Table)plainSelect.getFromItem(); // 如果是查询,这里使用inner join关联过滤,不使用子查询,因为join不需要建立临时表,因此速度比子查询快。 List<Join> joins = plainSelect.getJoins(); boolean hasPermissionTable = false; if (CollectionUtils.isNotEmpty(joins)) { Iterator joinsIterator = joins.iterator(); while(joinsIterator.hasNext()) { Join join = (Join)joinsIterator.next(); // 判断join里面是否存在t_sys_organization表,如果存在,那么直接使用,如果不存在则新增 FromItem rightItem = join.getRightItem(); if (rightItem instanceof Table) { Table table = (Table)rightItem; // 判断需要inner的主表是否存在 if (null == dataTable && table.getName().equalsIgnoreCase(dataTableName)) dataTable = table; // 判断需要inner的表是否存在 if (table.getName().equalsIgnoreCase(innerTableName)) hasPermissionTable = true; innerTable = table; //如果没有找到数据主表,那么直接抛出异常 if (null == dataTable) throw new BusinessException("在SQL语句中没有找到数据权限配置的主表,数据权限过滤失败。"); //如果不存在这个table,那么新增一个innerjoin if (!hasPermissionTable) innerTable = new Table(innerTableName).withAlias(new Alias(innerTableAlias, false)); Join join = new Join(); join.withRightItem(innerTable); EqualsTo equalsTo = new EqualsTo(); equalsTo.setLeftExpression(new Column(dataTable, DataPermissionConstant.DATA_PERMISSION_ORGANIZATION_ID)); equalsTo.setRightExpression(new Column(innerTable, DataPermissionConstant.DATA_PERMISSION_ID)); join.withOnExpression(equalsTo); plainSelect.addJoins(join); EqualsTo equalsToWhere = new EqualsTo(); equalsToWhere.setLeftExpression(new Column(innerTable, DataPermissionConstant.DATA_PERMISSION_ID)); equalsToWhere.setRightExpression(new LongValue(user.getOrganizationId())); Function function = new Function(); 
        function.setName(DataPermissionConstant.DATA_PERMISSION_FIND_IN_SET);
        function.setParameters(new ExpressionList(new LongValue(user.getOrganizationId())
                , new Column(innerTable, DataPermissionConstant.DATA_PERMISSION_ANCESTORS)));
        OrExpression orExpression = new OrExpression(equalsToWhere, function);
        // 判断是否有数据权限,如果有数据权限配置,那么添加数据权限的机构列表
        if (CollectionUtils.isNotEmpty(organizationIdList)) {
            for (String organizationId : organizationIdList) {
                EqualsTo equalsToPermission = new EqualsTo();
                equalsToPermission.setLeftExpression(new Column(innerTable, DataPermissionConstant.DATA_PERMISSION_ID));
                equalsToPermission.setRightExpression(new LongValue(organizationId));
                orExpression = new OrExpression(orExpression, equalsToPermission);
                Function functionPermission = new Function();
                functionPermission.setName(DataPermissionConstant.DATA_PERMISSION_FIND_IN_SET);
                functionPermission.setParameters(new ExpressionList(new LongValue(organizationId)
                        , new Column(innerTable, DataPermissionConstant.DATA_PERMISSION_ANCESTORS)));
                orExpression = new OrExpression(orExpression, functionPermission);
            }
        }
        expression = ObjectUtils.isNotEmpty(expression) ?
                new AndExpression(expression, new Parenthesis(orExpression)) : orExpression;
        plainSelect.setWhere(expression);

        InExpression inExpression = new InExpression();
        inExpression.setLeftExpression(buildColumn(dataTableAlias, DataPermissionConstant.DATA_PERMISSION_ORGANIZATION_ID));
        SubSelect subSelect = new SubSelect();
        PlainSelect select = new PlainSelect();
        select.setSelectItems(Collections.singletonList(new SelectExpressionItem(new Column(DataPermissionConstant.DATA_PERMISSION_ID))));
        select.setFromItem(new Table(DataPermissionConstant.DATA_PERMISSION_TABLE_NAME));
        EqualsTo equalsTo = new EqualsTo();
        equalsTo.setLeftExpression(new Column(DataPermissionConstant.DATA_PERMISSION_ID));
        equalsTo.setRightExpression(new LongValue(user.getOrganizationId()));
        Function function = new Function();
        function.setName(DataPermissionConstant.DATA_PERMISSION_FIND_IN_SET);
        function.setParameters(new ExpressionList(new LongValue(user.getOrganizationId())
                , new Column(DataPermissionConstant.DATA_PERMISSION_ANCESTORS)));
        OrExpression orExpression = new OrExpression(equalsTo, function);
        // 判断是否有数据权限,如果有数据权限配置,那么添加数据权限的机构列表
        if (CollectionUtils.isNotEmpty(organizationIdList)) {
            for (String organizationId : organizationIdList) {
                EqualsTo equalsToPermission = new EqualsTo();
                equalsToPermission.setLeftExpression(new Column(DataPermissionConstant.DATA_PERMISSION_ID));
                equalsToPermission.setRightExpression(new LongValue(organizationId));
                orExpression = new OrExpression(orExpression, equalsToPermission);
                Function functionPermission = new Function();
                functionPermission.setName(DataPermissionConstant.DATA_PERMISSION_FIND_IN_SET);
                functionPermission.setParameters(new ExpressionList(new LongValue(organizationId)
                        , new Column(DataPermissionConstant.DATA_PERMISSION_ANCESTORS)));
                orExpression = new OrExpression(orExpression, functionPermission);
            }
        }
        select.setWhere(orExpression);
        subSelect.setSelectBody(select);
        inExpression.setRightExpression(subSelect);
        expression = ObjectUtils.isNotEmpty(expression) ?
                new AndExpression(expression, new Parenthesis(inExpression)) : inExpression;
        plainSelect.setWhere(expression);
    }
    // 只查询用户拥有机构的数据,不包含子机构
    else if (DataPermissionTypeEnum.DATA_PERMISSION_ORG.getLevel().equals(dataPermissionType)) {
        InExpression inExpression = new InExpression();
        inExpression.setLeftExpression(buildColumn(dataTableAlias, DataPermissionConstant.DATA_PERMISSION_ORGANIZATION_ID));
        ExpressionList expressionList = new ExpressionList();
        List<Expression> expressions = new ArrayList<>();
        expressions.add(new LongValue(user.getOrganizationId()));
        if (CollectionUtils.isNotEmpty(organizationIdList)) {
            for (String organizationId : organizationIdList) {
                expressions.add(new LongValue(organizationId));
            }
        }
        expressionList.setExpressions(expressions);
        inExpression.setRightItemsList(expressionList);
        expression = ObjectUtils.isNotEmpty(expression) ?
                new AndExpression(expression, new Parenthesis(inExpression)) : inExpression;
        plainSelect.setWhere(expression);
    }
    // 只能查询个人数据
    else if (DataPermissionTypeEnum.DATA_PERMISSION_SELF.getLevel().equals(dataPermissionType)) {
        EqualsTo equalsTo = new EqualsTo();
        equalsTo.setLeftExpression(buildColumn(dataTableAlias, DataPermissionConstant.DATA_PERMISSION_SELF));
        equalsTo.setRightExpression(new StringValue(String.valueOf(user.getId())));
        expression = ObjectUtils.isNotEmpty(expression) ?
                new AndExpression(expression, new Parenthesis(equalsTo)) : equalsTo;
        plainSelect.setWhere(expression);
    }
    // 当类型为查看所有数据时,不处理
    // if (DataPermissionTypeEnum.DATA_PERMISSION_ALL.getType().equals(dataPermissionType)) {
    // }
    // 自定义过滤语句
    else if (DataPermissionTypeEnum.DATA_PERMISSION_CUSTOM.getLevel().equals(dataPermissionType)) {
        String customExpression = dataPermissionEntity.getCustomExpression();
        if (StringUtils.isEmpty(customExpression)) {
            throw new BusinessException("没有配置自定义表达式");
        }
        try {
            Expression expressionCustom = CCJSqlParserUtil.parseCondExpression(customExpression);
            expression = ObjectUtils.isNotEmpty(expression) ?
                    new AndExpression(expression, new Parenthesis(expressionCustom)) : expressionCustom;
            plainSelect.setWhere(expression);
        } catch (JSQLParserException e) {
            throw new BusinessException("自定义表达式配置错误");
        }
    }
}

/**
 * 构建Column
 * @param dataTableAlias 表别名
 * @param columnName 字段名称
 * @return 带表别名字段
 */
public static Column buildColumn(String dataTableAlias, String columnName) {
    if (StringUtils.isNotEmpty(dataTableAlias)) {
        columnName = dataTableAlias + StringPool.DOT + columnName;
    }
    return new Column(columnName);
}

/**
 * 注解转为实体类
 * @param annotation 注解
 * @return 实体类
 */
public static DataPermissionEntity annotationToEntity(DataPermission annotation) {
    DataPermissionEntity dataPermissionEntity = new DataPermissionEntity();
    dataPermissionEntity.setDataPermissionType(annotation.dataPermissionType());
    dataPermissionEntity.setDataColumnExclude(annotation.dataColumnExclude());
    dataPermissionEntity.setDataColumnInclude(annotation.dataColumnInclude());
    dataPermissionEntity.setDataTableName(annotation.dataTableName());
    dataPermissionEntity.setDataTableAlias(annotation.dataTableAlias());
    dataPermissionEntity.setInnerTableName(annotation.innerTableName());
    dataPermissionEntity.setInnerTableAlias(annotation.innerTableAlias());
    dataPermissionEntity.setCustomExpression(annotation.customExpression());
    return dataPermissionEntity;
}

@Override
public Expression getSqlSegment(Expression where, String mappedStatementId) {
    return null;
}

系统启动时初始化数据权限配置到Redis:

@Override
public void initDataRolePermissions() {
    List<DataPermissionRoleDTO> dataPermissionRoleList = dataPermissionRoleMapper.queryDataPermissionRoleListAll();
    // 判断是否开启了租户模式,如果开启了,那么角色权限需要按租户进行分类存储
    if (enable) {
        Map<Long, List<DataPermissionRoleDTO>> dataPermissionRoleListMap = dataPermissionRoleList.stream()
                .collect(Collectors.groupingBy(DataPermissionRoleDTO::getTenantId));
        dataPermissionRoleListMap.forEach((key, value) -> {
            // auth:tenant:data:permission:0
            String redisKey = DataPermissionConstant.TENANT_DATA_PERMISSION_KEY + key;
            redisTemplate.delete(redisKey);
            addDataRolePermissions(redisKey, value);
        });
    } else {
        // auth:data:permission
        redisTemplate.delete(DataPermissionConstant.DATA_PERMISSION_KEY);
        addDataRolePermissions(DataPermissionConstant.DATA_PERMISSION_KEY, dataPermissionRoleList);
    }
}

private void addDataRolePermissions(String key, List<DataPermissionRoleDTO> dataPermissionRoleList) {
    Map<String, DataPermissionEntity> dataPermissionMap = new TreeMap<>();
    Optional.ofNullable(dataPermissionRoleList).orElse(new ArrayList<>()).forEach(dataPermissionRole -> {
        String dataRolePermissionCache = new StringBuffer(DataPermissionConstant.DATA_PERMISSION_KEY_MAPPER)
                .append(dataPermissionRole.getDataMapperFunction()).append(DataPermissionConstant.DATA_PERMISSION_KEY_TYPE)
                .append(dataPermissionRole.getDataPermissionType()).toString();
        DataPermissionEntity dataPermissionEntity = BeanCopierUtils.copyByClass(dataPermissionRole, DataPermissionEntity.class);
        dataPermissionMap.put(dataRolePermissionCache, dataPermissionEntity);
    });
    redisTemplate.boundHashOps(key).putAll(dataPermissionMap);
}

数据权限配置指南:

- 数据权限名称:自定义一个名称,方便查找和区分
- Mapper全路径:Mapper路径配置到具体方法名称,例:com.gitegg.service.system.mapper.UserMapper.selectUserList
- 数据权限类型:
  - 只能查看本人(实现原理是在查询条件添加数据表的creator条件)
  - 只能查看本部门(实现原理是在查询条件添加数据表的部门条件)
  - 只能查看本部门及子部门(实现原理是在查询条件添加数据表的部门条件)
  - 可以查看所有数据(不处理)
  - 自定义(添加where子条件)

注解配置数据权限指南:

/**
 * 查询用户列表
 * @param page
 * @param user
 * @return
 */
@DataPermission(dataTableName = "t_sys_organization_user", dataTableAlias = "organizationUser", dataPermissionType = "3", innerTableName = "t_sys_organization", innerTableAlias = "orgDataPermission")
@DataPermission(dataTableName = "t_sys_organization_user", dataTableAlias = "organizationUser", dataPermissionType = "2", innerTableName = "t_sys_organization", innerTableAlias = "orgDataPermission")
@DataPermission(dataTableName = "t_sys_organization_user", dataTableAlias = "organizationUser", dataPermissionType = "1", innerTableName = "t_sys_organization", innerTableAlias = "orgDataPermission")
Page<UserInfo> 
selectUserList(Page<UserInfo> page, @Param("user") QueryUserDTO user);

行级数据权限配置:
- 数据主表:主数据表,即数据操作SQL语句中的主表
- 数据主表别名:主数据表的别名,用于和数据权限表进行inner join操作
- 数据权限表:用于inner join的数据权限表,主要通过ancestors字段查询所有子组织机构
- 数据权限表别名:用于和主数据表进行inner join

列级数据权限配置:
- 排除的字段:配置没有权限查看的字段,查询时需要排除这些字段
- 保留的字段:配置有权限查看的字段,查询时只保留这些字段

备注:此数据权限设计较灵活,也较复杂,有些简单应用场景的系统可能根本用不到,只需配置行级数据权限即可。Mybatis-Plus插件DataPermissionInterceptor的使用说明见 https://gitee.com/baomidou/mybatis-plus/issues/I37I90

update、insert逻辑说明:inner方式只支持普通查询(即inner join查询),不支持子查询;update、insert以及子查询等场景,直接使用添加子查询的方式实现数据权限。

还要在这里说明一下:在实际业务开发过程中,只能查看本人数据的数据权限一般不会通过系统来配置,而是在编写业务代码时直接实现。比如查询个人订单的接口,个人用户id必然是接口的入参,在接口被请求时,只需通过自定义方法获取当前登录用户,然后作为参数传入即可。这种对个人数据的数据权限,通过业务代码实现更加方便和安全,且没有太多工作量,便于理解也容易扩展。
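上面几种行级权限最终拼接出的WHERE条件形态,可以用下面的最小示例直观还原。仅为示意:这里直接用字符串拼接展示生成的SQL片段,字段名creator、organization_id、ancestors按上文配置指南的描述假定,实际实现是用JSqlParser构建表达式树:

```java
// 示意:三种行级数据权限类型对应的WHERE片段(字符串拼接仅用于演示,
// 实际拦截器使用JSqlParser的EqualsTo/InExpression/Function等构建表达式树)
public class DataPermissionWhereSketch {

    // 只能查看本人:按creator(创建者)字段过滤
    static String selfOnly(long userId) {
        return "creator = '" + userId + "'";
    }

    // 只能查看本部门:organization_id IN (本机构id)
    static String orgOnly(long orgId) {
        return "organization_id IN (" + orgId + ")";
    }

    // 只能查看本部门及子部门:通过t_sys_organization的ancestors字段
    // 配合FIND_IN_SET子查询,把所有子机构一并查出
    static String orgAndChildren(long orgId) {
        return "organization_id IN (SELECT id FROM t_sys_organization"
                + " WHERE id = " + orgId
                + " OR FIND_IN_SET(" + orgId + ", ancestors))";
    }
}
```

例如 orgAndChildren(7) 生成 organization_id IN (SELECT id FROM t_sys_organization WHERE id = 7 OR FIND_IN_SET(7, ancestors)),与上文拦截器中子查询方式的拼接逻辑一致。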

SpringCloud微服务实战——搭建企业级开发框架(二十七):集成多数据源+Seata分布式事务+读写分离+分库分表

读写分离:为了确保数据库产品的稳定性,很多数据库拥有双机热备功能。也就是说,第一台数据库服务器是对外提供增删改业务的生产服务器,第二台数据库服务器主要进行读的操作。

目前有多种方式实现读写分离:一种是Mycat这类数据库中间件,需要单独部署服务,通过配置来实现读写分离,不侵入业务代码;另一种是dynamic-datasource、shardingsphere-jdbc这类组件,需要在业务代码中引入jar包进行开发。

本框架集成 dynamic-datasource(多数据源+读写分离+分库)+ druid(数据库连接池)+ seata(分布式事务)+ mybatis-plus + shardingsphere-jdbc(分库分表)。dynamic-datasource可以实现简单的分库操作,目前还不支持分表,复杂的分库分表需要用到shardingsphere-jdbc。本文参考dynamic-datasource中的实例,模拟用户下单、扣商品库存、扣用户余额操作,初步可分为订单服务+商品服务+用户服务。

一、Seata安装配置

1、我们将服务安装到CentOS环境上,所以这里下载tar.gz版本,下载地址:https://github.com/seata/seata/releases

seata-server-1.4.1.tar.gz

2、上传到CentOS服务器,执行解压命令:

tar -zxvf seata-server-1.4.1.tar.gz

3、下载Seata需要的SQL脚本,新建seata数据库,并将数据库脚本seata-1.4.1\seata-1.4.1\script\server\db\mysql.sql导入seata数据库。

4、修改Seata配置文件,将seata服务端的注册中心和配置中心设置为Nacos:

vi /bigdata/soft_home/seata/conf/registry.conf

registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"
  loadBalance = "RandomLoadBalance"
  loadBalanceVirtualNodes = 10

  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "SEATA_GROUP"
    namespace = ""
    cluster = "default"
    username = "nacos"
    password = "nacos"
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = 0
    password = ""
    cluster = "default"
    timeout = 0
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file、nacos 、apollo、zk、consul、etcd3
  type = "nacos"

  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = ""
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    appId = "seata-server"
    apolloMeta = "http://192.168.1.204:8801"
    namespace = "application"
    apolloAccesskeySecret = ""
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}

5、在Nacos添加Seata配置文件:修改script/config-center/config.txt,将script目录上传到CentOS服务器,执行script/config-center/nacos/nacos-config.sh命令。

service.vgroupMapping.gitegg_seata_tx_group=default
service.default.grouplist=127.0.0.1:8091
store.mode=db
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true
store.db.user=root
store.db.password=root

chmod 777 nacos-config.sh
sh nacos-config.sh -h 127.0.0.1 -p 8848

6、在CentOS上进入Seata安装目录的bin目录执行命令,启动Seata服务端:

nohup ./seata-server.sh -h 127.0.0.1 -p 8091 >log.out 2>&1 &

如果服务器有多网卡、存在多个ip地址,-h后面一定要加可以访问的ip地址。

7、在Nacos上可以看到配置文件和服务已经注册成功。

二、Seata安装成功后,我们需要在微服务中集成Seata客户端

1、因为我们在微服务中使用Seata,所以将Seata客户端的依赖添加在gitegg-platform-cloud中:

<!-- Seata 分布式事务管理 -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
</dependency>

2、我们这里打算使用多数据源,所以也把动态多数据源组件Dynamic Datasource加入到gitegg-platform-mybatis依赖中:

<!-- 动态数据源 -->
<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>dynamic-datasource-spring-boot-starter</artifactId>
</dependency>

3、配置Nacos数据库多数据源及Seata:

spring:
  datasource:
    druid:
      stat-view-servlet:
        enabled: true
        loginUsername: admin
        loginPassword: 123456
    dynamic:
      # 设置默认的数据源或者数据源组,默认值即为master
      primary: master
      # 设置严格模式,默认false不启动. 启动后在未匹配到指定数据源时候会抛出异常,不启动则使用默认数据源.
      strict: false
      # 开启seata代理,开启后默认每个数据源都代理,如果某个不需要代理可单独关闭
      seata: true
      # 支持XA及AT模式,默认AT
      seata-mode: AT
      druid:
        initialSize: 1
        minIdle: 3
        maxActive: 20
        # 配置获取连接等待超时的时间
        maxWait: 60000
        # 配置间隔多久才进行一次检测,检测需要关闭的空闲连接,单位是毫秒
        timeBetweenEvictionRunsMillis: 60000
        # 配置一个连接在池中最小生存的时间,单位是毫秒
        minEvictableIdleTimeMillis: 30000
        validationQuery: select 'x'
        testWhileIdle: true
        testOnBorrow: false
        testOnReturn: false
        # 打开PSCache,并且指定每个连接上PSCache的大小
        poolPreparedStatements: true
        maxPoolPreparedStatementPerConnectionSize: 20
        # 配置监控统计拦截的filters,去掉后监控界面sql无法统计,'wall'用于防火墙
        filters: config,stat,slf4j
        # 通过connectProperties属性来打开mergeSql功能;慢SQL记录
        connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000;
        # 合并多个DruidDataSource的监控数据
        useGlobalDataSourceStat: true
      datasource:
        master:
          url: jdbc:mysql://127.0.0.1/gitegg_cloud?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
          username: root
          password: root
        mall_user:
          url: jdbc:mysql://127.0.0.1/gitegg_cloud_mall_user?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
          username: root
          password: root
        mall_goods:
          url: jdbc:mysql://127.0.0.1/gitegg_cloud_mall_goods?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
          username: root
          password: root
        mall_order:
          url: jdbc:mysql://127.0.0.1/gitegg_cloud_mall_order?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
          username: root
          password: root
        mall_pay:
          url: jdbc:mysql://127.0.0.1/gitegg_cloud_mall_pay?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
          username: root
          password: root

seata:
  enabled: true
  application-id: ${spring.application.name}
  tx-service-group: gitegg_seata_tx_group
  # 一定要是false
  enable-auto-data-source-proxy: false
  service:
    vgroup-mapping:
      # key与上面的gitegg_seata_tx_group的值对应
      gitegg_seata_tx_group: default
  config:
    type: nacos
    nacos:
      namespace:
      serverAddr: 127.0.0.1:8848
      group: SEATA_GROUP
      userName: "nacos"
      password: "nacos"
  registry:
    type: nacos
    nacos:
      # seata服务端(TC)在nacos中的应用名称
      application: seata-server
      server-addr: 127.0.0.1:8848
      namespace:
      userName: "nacos"
      password: "nacos"

三、数据库表设计

这里参考Dynamic Datasource官方提供的示例项目,并结合电商项目数据库设计,新建gitegg_cloud_mall_goods(商品数据库)、gitegg_cloud_mall_order(订单数据库)、gitegg_cloud_mall_pay(支付数据库)、gitegg_cloud_mall_user(用户数据库)四个数据库,下面是具体表结构和简要说明:

1、商品数据库表设计

表设计:
- 商品分类表:t_mall_goods_category
- 商品品牌表:t_mall_goods_brand
- 分类品牌关联关系表:t_mall_goods_brand_category
- 商品规格参数组表:t_mall_goods_spec_group
- 商品规格参数表:t_mall_goods_spec_param
- 商品SPU表:t_mall_goods_spu
- 商品SPU详情表:t_mall_goods_spu_detail
- 商品SKU表:t_mall_goods_sku

关系:
- 一个分类有多个品牌,一个品牌属于多个分类,所以是多对多
- 一个分类有多个规格组,一个规格组有多个规格参数,所以是一对多
- 一个分类下有多个SPU,所以是一对多
- 一个品牌下有多个SPU,所以是一对多
- 一个SPU下有多个SKU,所以是一对多

DROP TABLE IF EXISTS `t_mall_goods_brand`;
CREATE TABLE `t_mall_goods_brand` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '品牌id',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `name` varchar(64) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT '品牌名称',
  `image` varchar(256) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT '品牌图片地址',
  `letter` char(1) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT '品牌的首字母',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者',
  `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间',
  `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者',
  `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '品牌表,一个品牌下有多个商品(spu),一对多关系' ROW_FORMAT = DYNAMIC;

-- ----------------------------
-- Table structure for t_mall_goods_brand_category
-- ----------------------------
DROP TABLE IF 
EXISTS `t_mall_goods_brand_category`; CREATE TABLE `t_mall_goods_brand_category` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '品牌id', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `brand_id` bigint(20) NOT NULL COMMENT '品牌id', `category_id` bigint(20) NOT NULL COMMENT '商品类目id', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `key_tenant_id`(`tenant_id`) USING BTREE, INDEX `key_category_id`(`category_id`) USING BTREE, INDEX `key_brand_id`(`brand_id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '商品分类和品牌的中间表,两者是多对多关系' ROW_FORMAT = DYNAMIC; -- ---------------------------- -- Table structure for t_mall_goods_category -- ---------------------------- DROP TABLE IF EXISTS `t_mall_goods_category`; CREATE TABLE `t_mall_goods_category` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '类目id', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `name` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT '类目名称', `parent_id` bigint(20) NOT NULL COMMENT '父类目id,顶级类目填0', `is_parent` tinyint(2) NOT NULL COMMENT '是否为父节点,0为否,1为是', `sort` tinyint(2) NOT NULL COMMENT '排序指数,越小越靠前', `comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '备注', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `key_parent_id`(`parent_id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = 
utf8_general_ci COMMENT = '商品类目表,类目和商品(spu)是一对多关系,类目与品牌是多对多关系' ROW_FORMAT = DYNAMIC; -- ---------------------------- -- Table structure for t_mall_goods_sku -- ---------------------------- DROP TABLE IF EXISTS `t_mall_goods_sku`; CREATE TABLE `t_mall_goods_sku` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `spu_id` bigint(20) NOT NULL COMMENT 'spu id', `title` varchar(256) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT '商品标题', `images` varchar(1024) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT '商品的图片,多个图片以‘,’分割', `stock` int(8) UNSIGNED NULL DEFAULT 0 COMMENT '库存', `price` decimal(10, 2) NOT NULL DEFAULT 0.00 COMMENT '销售价格', `indexes` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT '特有规格属性在spu属性模板中的对应下标组合', `own_spec` varchar(1024) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT 'sku的特有规格参数键值对,json格式,反序列化时请使用linkedHashMap,保证有序', `status` tinyint(1) NOT NULL DEFAULT 1 COMMENT '是否有效,0无效,1有效', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `key_spu_id`(`spu_id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 3 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = 'sku表,该表表示具体的商品实体' ROW_FORMAT = DYNAMIC; -- ---------------------------- -- Table structure for t_mall_goods_spec_group -- ---------------------------- DROP TABLE IF EXISTS `t_mall_goods_spec_group`; CREATE TABLE `t_mall_goods_spec_group` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `category_id` bigint(20) NOT NULL COMMENT '商品分类id,一个分类下有多个规格组', `name` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL 
COMMENT '规格组的名称', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `key_tenant_id`(`tenant_id`) USING BTREE, INDEX `key_category_id`(`category_id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '规格参数的分组表,每个商品分类下有多个规格参数组' ROW_FORMAT = DYNAMIC; -- ---------------------------- -- Table structure for t_mall_goods_spec_param -- ---------------------------- DROP TABLE IF EXISTS `t_mall_goods_spec_param`; CREATE TABLE `t_mall_goods_spec_param` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `category_id` bigint(20) NOT NULL COMMENT '商品分类id', `group_id` bigint(20) NOT NULL COMMENT '所属组的id', `name` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT '参数名', `numeric` tinyint(1) NOT NULL COMMENT '是否是数字类型参数,true或false', `unit` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT '数字类型参数的单位,非数字类型可以为空', `generic` tinyint(1) NOT NULL COMMENT '是否是sku通用属性,true或false', `searching` tinyint(1) NOT NULL COMMENT '是否用于搜索过滤,true或false', `segments` varchar(1024) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT '数值类型参数,如果需要搜索,则添加分段间隔值,如CPU频率间隔:0.5-1.0', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `key_tenant_id`(`tenant_id`) USING BTREE, INDEX `key_category_id`(`category_id`) USING BTREE, INDEX `key_group_id`(`group_id`) USING BTREE ) ENGINE = InnoDB 
AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '规格参数组下的参数名' ROW_FORMAT = DYNAMIC; -- ---------------------------- -- Table structure for t_mall_goods_spu -- ---------------------------- DROP TABLE IF EXISTS `t_mall_goods_spu`; CREATE TABLE `t_mall_goods_spu` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `brand_id` bigint(20) NOT NULL COMMENT '商品所属品牌id', `category_id` bigint(20) NOT NULL COMMENT '商品分类id', `name` varchar(256) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL DEFAULT '' COMMENT '商品名称', `sub_title` varchar(256) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT '副标题,一般是促销信息', `on_sale` tinyint(2) NOT NULL DEFAULT 1 COMMENT '是否上架,0下架,1上架', `price` decimal(10, 2) NOT NULL DEFAULT 0.00 COMMENT '售价', `use_spec` tinyint(2) NOT NULL DEFAULT 1 COMMENT '是否使用规格:0=不使用,1=使用', `spec_groups` text CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT '商品规格组', `goods_stock` int(11) NOT NULL DEFAULT 0 COMMENT '商品库存', `virtual_sales` int(11) NOT NULL DEFAULT 0 COMMENT '虚拟销售数量', `confine_count` int(11) NOT NULL DEFAULT -1 COMMENT '购物数量限制', `pieces` int(11) NOT NULL DEFAULT 0 COMMENT '满件包邮', `forehead` decimal(10, 2) NOT NULL DEFAULT 0.00 COMMENT '满额包邮', `freight_id` int(11) NOT NULL COMMENT '运费模板ID', `give_integral` int(11) NOT NULL DEFAULT 0 COMMENT '赠送积分', `give_integral_type` tinyint(2) NOT NULL DEFAULT 1 COMMENT '赠送积分类型1固定值 2百分比', `deductible_integral` decimal(10, 2) NOT NULL DEFAULT 0.00 COMMENT '可抵扣积分', `deductible_integral_type` tinyint(2) NOT NULL DEFAULT 1 COMMENT '可抵扣积分类型1固定值 2百分比', `accumulative` tinyint(2) NOT NULL DEFAULT 0 COMMENT '允许多件累计折扣 0否 1是', `individual_share` tinyint(2) NOT NULL DEFAULT 0 COMMENT '是否单独分销设置:0否 1是', `share_setting_type` tinyint(2) NOT NULL DEFAULT 0 COMMENT '分销设置类型 0普通设置 1详细设置', `share_commission_type` tinyint(1) NOT NULL DEFAULT 0 COMMENT '佣金配比 0 固定金额 1 百分比', `membership_price` tinyint(2) NOT NULL DEFAULT 0 
COMMENT '是否享受会员价购买', `membership_price_single` tinyint(2) NOT NULL DEFAULT 0 COMMENT '是否单独设置会员价', `share_image` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL DEFAULT '' COMMENT '自定义分享图片', `share_title` varchar(65) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL DEFAULT '' COMMENT '自定义分享标题', `is_default_services` tinyint(2) NOT NULL DEFAULT 1 COMMENT '默认服务 0否 1是', `sort` int(11) NOT NULL DEFAULT 100 COMMENT '排序', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `key_tenant_id`(`tenant_id`) USING BTREE, INDEX `key_category_id`(`category_id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = 'spu表,该表描述的是一个抽象性的商品' ROW_FORMAT = DYNAMIC; -- ---------------------------- -- Table structure for t_mall_goods_spu_detail -- ---------------------------- DROP TABLE IF EXISTS `t_mall_goods_spu_detail`; CREATE TABLE `t_mall_goods_spu_detail` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `spu_id` bigint(20) NOT NULL, `description` text CHARACTER SET utf8 COLLATE utf8_general_ci NULL COMMENT '商品描述信息', `generic_spec` varchar(2048) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL DEFAULT '' COMMENT '通用规格参数数据', `special_spec` varchar(1024) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT '特有规格参数及可选值信息,json格式', `packing_list` varchar(1024) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT '包装清单', `after_service` varchar(1024) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT '售后服务', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) 
NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `key_tenant_id`(`tenant_id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = DYNAMIC; -- ---------------------------- -- Table structure for undo_log -- ---------------------------- DROP TABLE IF EXISTS `undo_log`; CREATE TABLE `undo_log` ( `branch_id` bigint(20) NOT NULL COMMENT 'branch transaction id', `xid` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'global transaction id', `context` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'undo_log context,such as serialization', `rollback_info` longblob NOT NULL COMMENT 'rollback info', `log_status` int(11) NOT NULL COMMENT '0:normal status,1:defense status', `log_created` datetime(6) NOT NULL COMMENT 'create datetime', `log_modified` datetime(6) NOT NULL COMMENT 'modify datetime', UNIQUE INDEX `ux_undo_log`(`xid`, `branch_id`) USING BTREE ) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = 'AT transaction mode undo table' ROW_FORMAT = Dynamic; SET FOREIGN_KEY_CHECKS = 1;2、订单数据库表设计-- ---------------------------- -- Table structure for t_mall_order -- ---------------------------- DROP TABLE IF EXISTS `t_mall_order`; CREATE TABLE `t_mall_order` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `user_id` bigint(20) NOT NULL COMMENT '主键', `store_id` int(11) NOT NULL DEFAULT 0 COMMENT '店铺id', `order_no` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT '' COMMENT '订单号', `total_price` decimal(10, 2) NULL DEFAULT NULL COMMENT '订单总金额(含运费)', `total_pay_price` decimal(10, 2) NOT NULL COMMENT '实际支付总费用(含运费)', `express_original_price` decimal(10, 2) NOT NULL COMMENT '运费', `express_price` decimal(10, 2) NULL DEFAULT NULL 
COMMENT '修改后运费', `total_goods_original_price` decimal(10, 2) NULL DEFAULT NULL COMMENT '订单商品总金额', `total_goods_price` decimal(10, 2) NULL DEFAULT NULL COMMENT '优惠后订单商品总金额', `store_discount_price` decimal(10, 2) NULL DEFAULT 0.00 COMMENT '商家改价优惠', `member_discount_price` decimal(10, 2) NULL DEFAULT NULL COMMENT '会员优惠价格', `coupon_id` int(11) NULL DEFAULT NULL COMMENT '优惠券id', `coupon_discount_price` decimal(10, 2) NULL DEFAULT NULL COMMENT '优惠券优惠金额', `integral` int(11) NULL DEFAULT NULL COMMENT '使用的积分数量', `integral_deduction_price` decimal(10, 2) NULL DEFAULT NULL COMMENT '积分抵扣金额', `name` varchar(65) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT '' COMMENT '收件人姓名', `mobile` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT '' COMMENT '收件人手机号', `address` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT '' COMMENT '收件人地址', `comments` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT '' COMMENT '用户订单备注', `order_form` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT '自定义表单(JSON)', `leaving_message` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT '' COMMENT '留言', `store_comments` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT '' COMMENT '商家订单备注', `pay_status` tinyint(2) NULL DEFAULT 0 COMMENT '是否支付:0.未支付 1.已支付', `pay_type` tinyint(2) NULL DEFAULT 1 COMMENT '支付方式:1.在线支付 2.货到付款 3.余额支付', `pay_time` timestamp(0) NULL DEFAULT '0000-00-00 00:00:00' COMMENT '支付时间', `deliver_status` tinyint(2) NULL DEFAULT 0 COMMENT '是否发货:0.未发货 1.已发货', `deliver_time` timestamp(0) NULL DEFAULT '0000-00-00 00:00:00' COMMENT '发货时间', `express` varchar(65) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT '' COMMENT '物流公司', `express_no` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT '' COMMENT '物流订单号', `confirm_receipt` tinyint(2) NULL DEFAULT 0 COMMENT '收货状态:0.未收货 1.已收货', `confirm_receipt_time` 
timestamp(0) NULL DEFAULT '0000-00-00 00:00:00' COMMENT '确认收货时间', `cancel_status` tinyint(2) NULL DEFAULT 0 COMMENT '订单取消状态:0.未取消 1.已取消 2.申请取消', `cancel_time` timestamp(0) NULL DEFAULT '0000-00-00 00:00:00' COMMENT '订单取消时间', `recycle_status` tinyint(2) NULL DEFAULT 0 COMMENT '是否加入回收站 0.否 1.是', `offline` tinyint(2) NULL DEFAULT 0 COMMENT '是否到店自提:0.否 1.是', `offline_code` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT '' COMMENT '核销码', `verifier_id` int(11) NULL DEFAULT 0 COMMENT '核销员ID', `verifier_store_id` int(11) NULL DEFAULT 0 COMMENT '自提门店ID', `support_pay_types` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT '支持的支付方式', `evaluation_status` tinyint(2) NULL DEFAULT 0 COMMENT '是否评价 0.否 1.是', `evaluation_time` timestamp(0) NULL DEFAULT '0000-00-00 00:00:00', `after_sales_out` tinyint(2) NULL DEFAULT 0 COMMENT '是否过售后时间 0.否 1.是', `after_sales_status` tinyint(2) NULL DEFAULT 0 COMMENT '是否申请售后 0.否 1.是', `status` tinyint(2) NULL DEFAULT 1 COMMENT '订单状态 1.已完成 0.进行中', `auto_cancel_time` timestamp(0) NULL DEFAULT NULL COMMENT '自动取消时间', `auto_confirm_verifier_time` timestamp(0) NULL DEFAULT NULL COMMENT '自动确认收货时间', `auto_after_sales_time` timestamp(0) NULL DEFAULT NULL COMMENT '自动售后时间', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(1) NOT NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `INDEX_TENANT_ID`(`tenant_id`) USING BTREE, INDEX `INDEX_USER_ID`(`user_id`) USING BTREE, INDEX `INDEX_STORE_ID`(`store_id`) USING BTREE, INDEX `INDEX_ORDER_NO`(`order_no`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 10 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = DYNAMIC; -- ---------------------------- -- Table structure for t_mall_order_sku -- ---------------------------- DROP TABLE IF EXISTS 
`t_mall_order_sku`; CREATE TABLE `t_mall_order_sku` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `order_id` bigint(20) NOT NULL COMMENT '订单id', `goods_sku_id` bigint(20) NULL DEFAULT NULL COMMENT '购买商品id', `goods_sku_number` int(11) NULL DEFAULT NULL COMMENT '购买商品数量', `goods_sku_price` decimal(10, 2) NULL DEFAULT NULL COMMENT '商品单价', `total_original_price` decimal(10, 2) NULL DEFAULT NULL COMMENT '商品总价', `total_price` decimal(10, 2) NULL DEFAULT NULL COMMENT '优惠后商品总价', `member_discount_price` decimal(10, 2) NULL DEFAULT NULL COMMENT '会员优惠金额', `store_discount_price` decimal(10, 2) NULL DEFAULT 0.00 COMMENT '商家改价优惠', `goods_sku_info` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT '购买商品信息', `refund_status` tinyint(1) NULL DEFAULT 0 COMMENT '是否退款', `after_sales_status` tinyint(1) NULL DEFAULT 0 COMMENT '售后状态 0--未售后 1--售后中 2--售后结束', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(1) NOT NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `INDEX_TENANT_ID`(`tenant_id`) USING BTREE, INDEX `INDEX_ORDER_ID`(`order_id`) USING BTREE, INDEX `INDEX_GOODS_SKU_ID`(`goods_sku_id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 15 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = DYNAMIC; -- ---------------------------- -- Table structure for undo_log -- ---------------------------- DROP TABLE IF EXISTS `undo_log`; CREATE TABLE `undo_log` ( `branch_id` bigint(20) NOT NULL COMMENT 'branch transaction id', `xid` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'global transaction id', `context` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'undo_log context,such as serialization', `rollback_info` longblob 
NOT NULL COMMENT 'rollback info', `log_status` int(11) NOT NULL COMMENT '0:normal status,1:defense status', `log_created` datetime(6) NOT NULL COMMENT 'create datetime', `log_modified` datetime(6) NOT NULL COMMENT 'modify datetime', UNIQUE INDEX `ux_undo_log`(`xid`, `branch_id`) USING BTREE ) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = 'AT transaction mode undo table' ROW_FORMAT = Dynamic; SET FOREIGN_KEY_CHECKS = 1;3、支付数据库表设计DROP TABLE IF EXISTS `t_mall_pay_record`; CREATE TABLE `t_mall_pay_record` ( `id` bigint(20) NOT NULL AUTO_INCREMENT, `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `user_id` bigint(20) NOT NULL COMMENT '用户id', `order_no` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL DEFAULT '0', `amount` decimal(9, 2) NOT NULL, `pay_status` tinyint(2) NOT NULL DEFAULT 0 COMMENT '支付状态:0=未支付,1=已支付, 2=已退款', `pay_type` tinyint(2) NOT NULL DEFAULT 3 COMMENT '支付方式:1=微信支付,2=货到付款,3=余额支付,4=支付宝支付, 5=银行卡支付', `title` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT '', `refund` decimal(9, 2) NOT NULL DEFAULT 0.00 COMMENT '已退款金额', `comments` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL DEFAULT '' COMMENT '备注', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `INDEX_TENANT_ID`(`tenant_id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 9 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = DYNAMIC; -- ---------------------------- -- Table structure for undo_log -- ---------------------------- DROP TABLE IF EXISTS `undo_log`; CREATE TABLE `undo_log` ( `branch_id` bigint(20) NOT NULL COMMENT 'branch transaction id', `xid` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci 
NOT NULL COMMENT 'global transaction id', `context` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'undo_log context,such as serialization', `rollback_info` longblob NOT NULL COMMENT 'rollback info', `log_status` int(11) NOT NULL COMMENT '0:normal status,1:defense status', `log_created` datetime(6) NOT NULL COMMENT 'create datetime', `log_modified` datetime(6) NOT NULL COMMENT 'modify datetime', UNIQUE INDEX `ux_undo_log`(`xid`, `branch_id`) USING BTREE ) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = 'AT transaction mode undo table' ROW_FORMAT = Dynamic; SET FOREIGN_KEY_CHECKS = 1;4、账户数据库表设计-- ---------------------------- -- Table structure for t_mall_user_account -- ---------------------------- DROP TABLE IF EXISTS `t_mall_user_account`; CREATE TABLE `t_mall_user_account` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `user_id` bigint(20) NOT NULL COMMENT '用户id', `integral` bigint(20) NOT NULL DEFAULT 0 COMMENT '积分', `balance` decimal(10, 2) NOT NULL DEFAULT 0.00 COMMENT '余额', `account_status` tinyint(2) NULL DEFAULT 1 COMMENT '账户状态 \'0\'禁用,\'1\' 启用', `comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '备注', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `INDEX_TENANT_ID`(`tenant_id`) USING BTREE, INDEX `INDEX_USER_ID`(`user_id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 2 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '用户账户表' ROW_FORMAT = DYNAMIC; -- ---------------------------- -- Table structure for t_mall_user_balance_record -- ---------------------------- DROP TABLE IF EXISTS `t_mall_user_balance_record`; CREATE 
TABLE `t_mall_user_balance_record` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键',
  `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id',
  `user_id` bigint(20) NOT NULL COMMENT '用户id',
  `type` tinyint(2) NOT NULL COMMENT '类型:1=收入,2=支出',
  `amount` decimal(10, 2) NOT NULL COMMENT '变动金额',
  `comments` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL DEFAULT '' COMMENT '备注',
  `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间',
  `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者',
  `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间',
  `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者',
  `del_flag` tinyint(2) NULL DEFAULT 0 COMMENT '1:删除 0:不删除',
  PRIMARY KEY (`id`) USING BTREE,
  INDEX `INDEX_TENANT_ID`(`tenant_id`) USING BTREE,
  INDEX `INDEX_USER_ID`(`user_id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 17 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = DYNAMIC;

-- ----------------------------
-- Table structure for undo_log
-- ----------------------------
DROP TABLE IF EXISTS `undo_log`;
CREATE TABLE `undo_log` (
  `branch_id` bigint(20) NOT NULL COMMENT 'branch transaction id',
  `xid` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'global transaction id',
  `context` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'undo_log context,such as serialization',
  `rollback_info` longblob NOT NULL COMMENT 'rollback info',
  `log_status` int(11) NOT NULL COMMENT '0:normal status,1:defense status',
  `log_created` datetime(6) NOT NULL COMMENT 'create datetime',
  `log_modified` datetime(6) NOT NULL COMMENT 'modify datetime',
  UNIQUE INDEX `ux_undo_log`(`xid`, `branch_id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = 'AT transaction mode undo table' ROW_FORMAT = Dynamic;
SET FOREIGN_KEY_CHECKS = 1;

5、上面的脚本中,每个数据库都需要刷入Seata分布式事务回滚所需的undo_log表脚本,该脚本在下载的Seata包的seata-1.4.1\seata-1.4.1\script\client\at\db路径下

-- ----------------------------
-- Table structure
for undo_log
-- ----------------------------
DROP TABLE IF EXISTS `undo_log`;
CREATE TABLE `undo_log` (
  `branch_id` bigint(20) NOT NULL COMMENT 'branch transaction id',
  `xid` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'global transaction id',
  `context` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'undo_log context,such as serialization',
  `rollback_info` longblob NOT NULL COMMENT 'rollback info',
  `log_status` int(11) NOT NULL COMMENT '0:normal status,1:defense status',
  `log_created` datetime(6) NOT NULL COMMENT 'create datetime',
  `log_modified` datetime(6) NOT NULL COMMENT 'modify datetime',
  UNIQUE INDEX `ux_undo_log`(`xid`, `branch_id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = 'AT transaction mode undo table' ROW_FORMAT = Dynamic;
SET FOREIGN_KEY_CHECKS = 1;

三、测试代码

  在GitEgg-Cloud工程下,新建gitegg-mall和gitegg-mall-client子工程,client子工程用于Feign调用。

1、订单服务

@DS("mall_order") // 每一层都需要使用多数据源注解切换所选择的数据库
@Transactional(propagation = Propagation.REQUIRES_NEW)
@GlobalTransactional // 重点:第一个开启事务的方法需要添加Seata全局事务注解
@Override
public void order(List<OrderSkuDTO> orderSkuList, Long userId) {
    // 获取商品的详细信息
    Result<Object> goodsSkuResult = mallGoodsFeign.queryByIds(orderSkuList.stream()
            .map(OrderSkuDTO::getGoodsSkuId)
            .collect(Collectors.toList()));
    List<Object> resultSkuList = (List<Object>) goodsSkuResult.getData();
    List<GoodsSkuDTO> goodsSkuList = new ArrayList<>();
    if (CollectionUtils.isEmpty(resultSkuList) || resultSkuList.size() != orderSkuList.size()) {
        throw new BusinessException("商品不存在");
    } else {
        resultSkuList.stream().forEach(goodsSku -> {
            GoodsSkuDTO goodsSkuDTO = BeanUtil.fillBeanWithMap((Map<?, ?>) goodsSku, new GoodsSkuDTO(), false);
            goodsSkuList.add(goodsSkuDTO);
        });
    }
    // 扣商品库存
    List<ReduceStockDTO> reduceStockDtoList = orderSkuList.stream()
            .map(t -> new ReduceStockDTO(t.getGoodsSkuId(), t.getGoodsSkuNumber()))
            .collect(Collectors.toList());
    mallGoodsFeign.reduceStock(reduceStockDtoList);
    // 支付
    BigDecimal totalMoney = new BigDecimal(0.0d);
    for (OrderSkuDTO orderSkuDTO : orderSkuList) {
        for (GoodsSkuDTO goodsSkuDTO : goodsSkuList) {
            if (orderSkuDTO.getGoodsSkuId().equals(goodsSkuDTO.getId())) {
                BigDecimal skuNumber = new BigDecimal(orderSkuDTO.getGoodsSkuNumber());
                totalMoney = totalMoney.add(goodsSkuDTO.getPrice().multiply(skuNumber));
                break;
            }
        }
    }
    mallPayFeign.pay(userId, totalMoney);
    // 主订单表插入数据
    Order order = new Order();
    order.setTotalPrice(totalMoney);
    order.setTotalPayPrice(totalMoney);
    order.setExpressOriginalPrice(totalMoney);
    order.setStatus(1);
    order.setUserId(userId);
    this.save(order);
    // 子订单表插入数据
    ArrayList<OrderSku> orderSkus = new ArrayList<>();
    orderSkuList.forEach(payOrderReq -> {
        OrderSku orderSku = new OrderSku();
        orderSku.setOrderId(order.getId());
        orderSku.setGoodsSkuNumber(payOrderReq.getGoodsSkuNumber());
        orderSku.setGoodsSkuId(payOrderReq.getGoodsSkuId());
        for (GoodsSkuDTO goodsSkuDTO : goodsSkuList) {
            if (payOrderReq.getGoodsSkuId().equals(goodsSkuDTO.getId())) {
                orderSku.setGoodsSkuPrice(goodsSkuDTO.getPrice());
                break;
            }
        }
        orderSkus.add(orderSku);
    });
    orderSkuService.saveBatch(orderSkus);
}

2、商品服务

@DS("mall_goods")
@Override
public List<GoodsSku> queryGoodsByIds(List<Long> idList) {
    return goodsSkuMapper.queryGoodsByIds(idList);
}

/**
 * 事务传播特性设置为 REQUIRES_NEW 开启新的事务,重要!!!!一定要使用REQUIRES_NEW
 */
@DS("mall_goods")
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Override
public void reduceStock(List<ReduceStockDTO> reduceStockReqList) {
    reduceStockReqList.forEach(sku -> {
        Integer line = goodsSkuMapper.reduceStock(sku.getNumber(), sku.getSkuId());
        if (line == null || line == 0) {
            throw new BusinessException("商品不存在或库存不足");
        }
    });
}

3、支付服务

/**
 * 事务传播特性设置为 REQUIRES_NEW 开启新的事务,重要!!!!一定要使用REQUIRES_NEW
 */
@DS("mall_pay")
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Override
public Long pay(Long userId, BigDecimal payMoney) {
    // 调用gitegg-mall-user的账户扣除余额接口
    mallUserFeign.accountDeduction(userId, payMoney);
    // 插入支付记录表
    PayRecord payRecord = new PayRecord();
    payRecord.setUserId(userId);
    payRecord.setAmount(payMoney);
    payRecord.setPayStatus(GitEggConstant.Number.ONE);
    payRecord.setPayType(GitEggConstant.Number.FIVE);
    payRecordService.save(payRecord);
    return payRecord.getId();
}

4、账户服务

/**
 * 事务传播特性设置为 REQUIRES_NEW 开启新的事务,重要!!!!一定要使用REQUIRES_NEW
 */
@DS("mall_user")
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Override
public void deduction(Long userId, BigDecimal amountOfMoney) {
    // 查看账户余额是否满足扣款
    QueryUserAccountDTO queryUserAccountDTO = new QueryUserAccountDTO();
    queryUserAccountDTO.setUserId(userId);
    UserAccountDTO userAccount = this.queryUserAccount(queryUserAccountDTO);
    if (userAccount == null) {
        throw new BusinessException("用户未开通个人账户");
    }
    if (amountOfMoney.compareTo(userAccount.getBalance()) > GitEggConstant.Number.ZERO) {
        throw new BusinessException("账户余额不足");
    }
    // 执行扣款
    userAccountMapper.deductionById(userAccount.getId(), amountOfMoney);
    // 加入账户变动记录
    UserBalanceRecord userBalanceRecord = new UserBalanceRecord();
    userBalanceRecord.setUserId(userId);
    userBalanceRecord.setAmount(amountOfMoney);
    userBalanceRecord.setType(GitEggConstant.Number.TWO);
    userBalanceRecordService.save(userBalanceRecord);
}

5、使用Postman测试:发送请求,然后查看数据库是否都增加了数据。正常情况下,几个数据库的表都有新增或更新。

请求头
请求参数

6、测试异常情况:在代码中抛出异常,然后进行debug,查看在异常之前数据库数据是否入库,异常之后,入库数据是否已回滚,同时可观察undo_log表的数据情况。

# 在订单服务中添加
throw new BusinessException("测试异常回滚");

四、整合数据库分库分表

  首先在我们整合dynamic-datasource和shardingsphere-jdbc之前,需要了解它们的异同点:dynamic-datasource从字面意思可以看出,它是动态多数据源,其主要功能是支持多数据源及数据源动态切换,但不支持数据分片;shardingsphere-jdbc的主要功能是数据分片、读写分离,当然也支持多数据源,但是到目前为止,如果要支持多数据源动态切换,需要自己实现。所以,这里结合两者的优势,使用dynamic-datasource的动态多数据源切换 + shardingsphere-jdbc的数据分片、读写分离。

1、在gitegg-platform-bom和gitegg-platform-db中引入shardingsphere-jdbc的依赖,重新install。(注意这里使用了5.0.0-alpha版本,正式环境请使用最新发布版。)

<!-- shardingsphere-jdbc -->
<shardingsphere.jdbc.version>5.0.0-alpha</shardingsphere.jdbc.version>

<!-- Shardingsphere-jdbc -->
<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>shardingsphere-jdbc-core-spring-boot-starter</artifactId>
<version>${shardingsphere.jdbc.version}</version> </dependency> <dependency> <groupId>org.apache.shardingsphere</groupId> <artifactId>shardingsphere-jdbc-core-spring-namespace</artifactId> <version>${shardingsphere.jdbc.version}</version> </dependency>2、在gitegg-platform-db中,新建DynamicDataSourceProviderConfig类,自定义DynamicDataSourceProvider完成与shardingsphere的集成/** * @author GitEgg * @date 2021-04-23 19:06:51 @Configuration @AutoConfigureBefore(DynamicDataSourceAutoConfiguration.class) public class DynamicDataSourceProviderConfig { @Resource private DynamicDataSourceProperties properties; * shardingSphereDataSource @Lazy @Resource(name = "shardingSphereDataSource") private DataSource shardingSphereDataSource; @Bean public DynamicDataSourceProvider dynamicDataSourceProvider() { Map<String, DataSourceProperty> datasourceMap = properties.getDatasource(); return new AbstractDataSourceProvider() { @Override public Map<String, DataSource> loadDataSources() { Map<String, DataSource> dataSourceMap = createDataSourceMap(datasourceMap); dataSourceMap.put("sharding", shardingSphereDataSource); return dataSourceMap; * 将动态数据源设置为首选的 * 当spring存在多个数据源时, 自动注入的是首选的对象 * 设置为主要的数据源之后,就可以支持shardingsphere-jdbc原生的配置方式了 @Primary @Bean public DataSource dataSource(DynamicDataSourceProvider dynamicDataSourceProvider) { DynamicRoutingDataSource dataSource = new DynamicRoutingDataSource(); dataSource.setPrimary(properties.getPrimary()); dataSource.setStrict(properties.getStrict()); dataSource.setStrategy(properties.getStrategy()); dataSource.setProvider(dynamicDataSourceProvider); dataSource.setP6spy(properties.getP6spy()); dataSource.setSeata(properties.getSeata()); return dataSource; }3、新建用来分库的数据库表gitegg_cloud_mall_order0和gitegg_cloud_mall_order1,复制gitegg_cloud_mall_order中的表结构。4、在Nacos中分别配置shardingsphere-jdbc和多数据源# shardingsphere 配置 shardingsphere: props: show: true datasource: common: type: com.alibaba.druid.pool.DruidDataSource validationQuery: SELECT 1 FROM DUAL names: 
shardingorder0,shardingorder1
    shardingorder0:
      type: com.alibaba.druid.pool.DruidDataSource
      url: jdbc:mysql://127.0.0.1/gitegg_cloud_mall_order0?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
      username: root
      password: root
    shardingorder1:
      type: com.alibaba.druid.pool.DruidDataSource
      url: jdbc:mysql://127.0.0.1/gitegg_cloud_mall_order1?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true&serverTimezone=Asia/Shanghai
      username: root
      password: root
  rules:
    sharding:
      tables:
        t_mall_order:
          actual-data-nodes: shardingorder$->{0..1}.t_mall_order$->{0..1}
          # 配置分库策略
          database-strategy:
            standard:
              sharding-column: id
              sharding-algorithm-name: database-inline
          table-strategy:
            standard:
              sharding-column: id
              sharding-algorithm-name: table-inline-order
          key-generate-strategy:
            column: id
            key-generator-name: snowflake
        t_mall_order_sku:
          actual-data-nodes: shardingorder$->{0..1}.t_mall_order_sku$->{0..1}
          # 配置分库策略
          database-strategy:
            standard:
              sharding-column: id
              sharding-algorithm-name: database-inline
          table-strategy:
            standard:
              sharding-column: id
              sharding-algorithm-name: table-inline-order-sku
          key-generate-strategy:
            column: id
            key-generator-name: snowflake
      sharding-algorithms:
        database-inline:
          type: INLINE
          props:
            algorithm-expression: shardingorder$->{id % 2}
        table-inline-order:
          type: INLINE
          props:
            algorithm-expression: t_mall_order$->{id % 2}
        table-inline-order-sku:
          type: INLINE
          props:
            algorithm-expression: t_mall_order_sku$->{id % 2}
      key-generators:
        snowflake:
          type: SNOWFLAKE
          props:
            worker-id: 123

5、修改OrderServiceImpl.java的下单方法order注解,数据源选择sharding

@DS("sharding") // 每一层都需要使用多数据源注解切换所选择的数据库
@Transactional(propagation = Propagation.REQUIRES_NEW)
@GlobalTransactional // 重点:第一个开启事务的方法需要添加Seata全局事务注解
@Override
public void order(List<OrderSkuDTO> orderSkuList, Long userId) {
    ......
}

6、postman模拟测试调用下单接口,观察数据库gitegg_cloud_mall_order0和gitegg_cloud_mall_order1里面的order表数据变化。我们发现,数据记录根据id取余存放到对应的库和表。这里的配置使用的是order表的id,在实际生产环境中,需要根据实际情况来选择合适的分库分表策略。

订单数据根据分库分表策略存储

7、测试引入shardingsphere-jdbc后分布式事务是否正常:在OrderServiceImpl.java的下单方法order的最后主动抛出异常,并在saveBatch之后打断点,使用postman模拟测试调用下单接口。到达断点时,查看数据是否入库;放开断点,抛出异常后,再查看数据是否被回滚。

orderSkuService.saveBatch(orderSkus);
throw new BusinessException("测试异常");

到达断点时,gitegg-mall-pay最新数据记录已入库;抛出异常后,gitegg-mall-pay入库数据被回滚。

备注:
1、sharding-jdbc启动时报错java.sql.SQLFeatureNotSupportedException: isValid
解决:这个是4.x版本的问题,官方会在5.x解决这个问题,目前的解决方案是关闭sql健康检查。

本文源码在 https://gitee.com/wmz1930/GitEgg 的chapter-27(未使用shardingsphere-jdbc分库分表)和chapter-27-shardingsphere-jdbc(使用shardingsphere-jdbc分库分表)分支。
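上文配置的INLINE分片表达式决定了每条订单记录落在哪个库、哪张表:库和表的下标都来自 id % 2。下面用一段可独立运行的Java示意代码(仅为演示取模路由的思路,类名和方法为演示用假设,并非ShardingSphere的实际API)模拟这一路由规则:

```java
// 模拟上文配置中的 INLINE 分片算法:
// 分库:shardingorder$->{id % 2},分表:t_mall_order$->{id % 2}
public class ShardingRouteSketch {

    // 根据订单 id 计算目标数据源和物理表(示意,非 ShardingSphere API)
    static String route(long id) {
        long suffix = id % 2;
        return "shardingorder" + suffix + ".t_mall_order" + suffix;
    }

    public static void main(String[] args) {
        // 偶数 id 落在 0 号库表,奇数 id 落在 1 号库表
        System.out.println(route(10L)); // shardingorder0.t_mall_order0
        System.out.println(route(11L)); // shardingorder1.t_mall_order1
    }
}
```

注意上文key-generator使用的是雪花算法,生成的id并不连续,但取模路由同样适用,只是数据在两个库之间的分布取决于生成id的奇偶分布。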

SpringCloud微服务实战——搭建企业级开发框架(二十六):自定义扩展OAuth2实现短信验证码登录

我们系统集成了短信通知服务,这里我们进行OAuth2的扩展,使系统支持短信验证码登录。1、在gitegg-oauth中新增SmsCaptchaTokenGranter 自定义短信验证码令牌授权处理类/** * 短信验证码模式 public class SmsCaptchaTokenGranter extends AbstractTokenGranter { private static final String GRANT_TYPE = "sms_captcha"; private final AuthenticationManager authenticationManager; private UserDetailsService userDetailsService; private IUserFeign userFeign; private ISmsFeign smsFeign; private RedisTemplate redisTemplate; private CaptchaService captchaService; private String captchaType; public SmsCaptchaTokenGranter(AuthenticationManager authenticationManager, AuthorizationServerTokenServices tokenServices, ClientDetailsService clientDetailsService, OAuth2RequestFactory requestFactory, RedisTemplate redisTemplate, IUserFeign userFeign, ISmsFeign smsFeign, CaptchaService captchaService, UserDetailsService userDetailsService, String captchaType) { this(authenticationManager, tokenServices, clientDetailsService, requestFactory, GRANT_TYPE); this.redisTemplate = redisTemplate; this.captchaService = captchaService; this.captchaType = captchaType; this.smsFeign = smsFeign; this.userFeign = userFeign; this.userDetailsService = userDetailsService; protected SmsCaptchaTokenGranter(AuthenticationManager authenticationManager, AuthorizationServerTokenServices tokenServices, ClientDetailsService clientDetailsService, OAuth2RequestFactory requestFactory, String grantType) { super(tokenServices, clientDetailsService, requestFactory, grantType); this.authenticationManager = authenticationManager; @Override protected OAuth2Authentication getOAuth2Authentication(ClientDetails client, TokenRequest tokenRequest) { Map<String, String> parameters = new LinkedHashMap<>(tokenRequest.getRequestParameters()); // 获取验证码类型 String captchaType = parameters.get(CaptchaConstant.CAPTCHA_TYPE); // 判断传入的验证码类型和系统配置的是否一致 if (!StringUtils.isEmpty(captchaType) && !captchaType.equals(this.captchaType)) { throw new UserDeniedAuthorizationException(ResultCodeEnum.INVALID_CAPTCHA_TYPE.getMsg()); 
if (CaptchaConstant.IMAGE_CAPTCHA.equalsIgnoreCase(captchaType)) { // 图片验证码验证 String captchaKey = parameters.get(CaptchaConstant.CAPTCHA_KEY); String captchaCode = parameters.get(CaptchaConstant.CAPTCHA_CODE); // 获取验证码 String redisCode = (String)redisTemplate.opsForValue().get(CaptchaConstant.IMAGE_CAPTCHA_KEY + captchaKey); // 判断验证码 if (captchaCode == null || !captchaCode.equalsIgnoreCase(redisCode)) { throw new UserDeniedAuthorizationException(ResultCodeEnum.INVALID_CAPTCHA.getMsg()); } else { // 滑动验证码验证 String captchaVerification = parameters.get(CaptchaConstant.CAPTCHA_VERIFICATION); CaptchaVO captchaVO = new CaptchaVO(); captchaVO.setCaptchaVerification(captchaVerification); ResponseModel responseModel = captchaService.verification(captchaVO); if (null == responseModel || !RepCodeEnum.SUCCESS.getCode().equals(responseModel.getRepCode())) { throw new UserDeniedAuthorizationException(ResultCodeEnum.INVALID_CAPTCHA.getMsg()); String phoneNumber = parameters.get(TokenConstant.PHONE_NUMBER); String smsCode = parameters.get(TokenConstant.SMS_CODE); String code = parameters.get(TokenConstant.CODE); // Protect from downstream leaks of password parameters.remove(TokenConstant.CODE); Result<Boolean> checkResult = smsFeign.checkSmsVerificationCode(smsCode, phoneNumber, code); if (null == checkResult || !checkResult.getData()) { throw new InvalidGrantException(("Could not authenticate user: " + phoneNumber)); UserDetails userDetails = this.userDetailsService.loadUserByUsername(phoneNumber); Authentication userAuth = new UsernamePasswordAuthenticationToken(userDetails, null, userDetails.getAuthorities()); ((AbstractAuthenticationToken)userAuth).setDetails(parameters); OAuth2Request storedOAuth2Request = getRequestFactory().createOAuth2Request(client, tokenRequest); return new OAuth2Authentication(storedOAuth2Request, userAuth); }2、自定义GitEggTokenGranter,支持多种token模式/** * 自定义token public class GitEggTokenGranter { * 自定义tokenGranter public static TokenGranter 
getTokenGranter(final AuthenticationManager authenticationManager, final AuthorizationServerEndpointsConfigurer endpoints, RedisTemplate redisTemplate, IUserFeign userFeign, ISmsFeign smsFeign, CaptchaService captchaService, UserDetailsService userDetailsService, String captchaType) { // 默认tokenGranter集合 List<TokenGranter> granters = new ArrayList<>(Collections.singletonList(endpoints.getTokenGranter())); // 增加验证码模式 granters.add(new CaptchaTokenGranter(authenticationManager, endpoints.getTokenServices(), endpoints.getClientDetailsService(), endpoints.getOAuth2RequestFactory(), redisTemplate, captchaService, captchaType)); // 增加短信验证码模式 granters.add(new SmsCaptchaTokenGranter(authenticationManager, endpoints.getTokenServices(), endpoints.getClientDetailsService(), endpoints.getOAuth2RequestFactory(), redisTemplate, userFeign, smsFeign, captchaService, userDetailsService, captchaType)); // 组合tokenGranter集合 return new CompositeTokenGranter(granters); }3、GitEggOAuthController中增加获取短信验证码的方法@ApiOperation("发送短信验证码") @PostMapping("/sms/captcha/send") public Result sendSmsCaptcha(@RequestBody SmsVerificationDTO smsVerificationDTO) { Result<Object> sendResult = smsFeign.sendSmsVerificationCode(smsVerificationDTO.getSmsCode(), smsVerificationDTO.getPhoneNumber()); return sendResult; }4、前端页面增加短信验证码登录方式<a-tab-pane key="phone_account" :tab="$t('user.login.tab-login-mobile')" class="color:#1890ff;"> <a-form-item> <a-input size="large" type="text" :placeholder="$t('user.login.mobile.placeholder')" v-decorator="['phoneNumber', {rules: [{ required: true, pattern: /^1[34578]\d{9}$/, message: $t('user.phone-number.required') }], validateTrigger: 'change'}]"> <a-icon slot="prefix" type="mobile" :style="{ color: '#1890ff' }" /> </a-input> </a-form-item> <a-row :gutter="16"> <a-col class="gutter-row" :span="16"> <a-form-item> <a-input size="large" type="text" :placeholder="$t('user.login.mobile.verification-code.placeholder')" v-decorator="['captcha', {rules: [{ required: true, message: 
$t('user.verification-code.required') }], validateTrigger: 'blur'}]"> <a-icon slot="prefix" type="mail" :style="{ color: '#1890ff' }" /> </a-input> </a-form-item> </a-col> <a-col class="gutter-row" :span="8"> <a-button class="getCaptcha" tabindex="-1" :disabled="state.smsSendBtn" @click.stop.prevent="getCaptcha" v-text="!state.smsSendBtn && $t('user.register.get-verification-code') || (state.time+' s')"></a-button> </a-col> </a-row> </a-tab-pane>getCaptcha (e) { e.preventDefault() const { form: { validateFields }, state } = this validateFields(['phoneNumber'], { force: true }, (err, values) => { if (!err) { state.smsSendBtn = true const interval = window.setInterval(() => { if (state.time-- <= 0) { state.time = 60 state.smsSendBtn = false window.clearInterval(interval) }, 1000) const hide = this.$message.loading('验证码发送中..', 0) getSmsCaptcha({ phoneNumber: values.phoneNumber, smsCode: 'aliLoginCode' }).then(res => { setTimeout(hide, 2500) this.$notification['success']({ message: '提示', description: '验证码获取成功,您的验证码为:' + res.result.captcha, duration: 8 }).catch(err => { setTimeout(hide, 1) clearInterval(interval) state.time = 60 state.smsSendBtn = false this.requestFailed(err) stepCaptchaSuccess () { this.loginSuccess() stepCaptchaCancel () { this.Logout().then(() => { this.loginBtn = false this.stepCaptchaVisible = false },5、通过短信验证码登录界面短信验证码登录界面
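上面SmsCaptchaTokenGranter的核心流程是:先校验图形/滑动验证码,再用Redis中缓存的短信验证码校验用户输入,最后加载用户信息完成登录。下面用一段可独立运行的Java示意代码(用内存Map代替Redis,类名与key结构为演示用假设,并非工程中的真实实现)梳理短信验证码"下发缓存 + 登录校验"这两步的核心逻辑:

```java
import java.util.HashMap;
import java.util.Map;

// 短信验证码缓存与校验的简化示意(内存 Map 代替 Redis,过期时间省略)
public class SmsCodeStoreSketch {

    private final Map<String, String> store = new HashMap<>();

    // 下发验证码后,以 smsCode + phoneNumber 为 key 缓存验证码
    public void issue(String smsCode, String phoneNumber, String code) {
        store.put(smsCode + phoneNumber, code);
    }

    // 登录时校验:忽略大小写比较,与 checkSmsVerificationCode 的判断一致
    public boolean verify(String smsCode, String phoneNumber, String input) {
        String cached = store.get(smsCode + phoneNumber);
        return cached != null && input != null && input.equalsIgnoreCase(cached);
    }

    public static void main(String[] args) {
        SmsCodeStoreSketch sketch = new SmsCodeStoreSketch();
        sketch.issue("aliLoginCode", "13711112222", "123456");
        System.out.println(sketch.verify("aliLoginCode", "13711112222", "123456")); // true
    }
}
```

真实实现中还需要为缓存设置过期时间,并在校验通过后删除验证码,防止同一验证码被重复使用。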

SpringCloud微服务实战——搭建企业级开发框架(二十五):集成短信通知服务

目前系统集成短信似乎是必不可少的部分,由于各种云平台都提供了不同的短信通道,这里我们增加多租户多通道的短信验证码,并增加配置项,使系统可以支持多家云平台提供的短信服务。这里以阿里云和腾讯云为例,集成短信通知服务。1、在GitEgg-Platform中新建gitegg-platform-sms基础工程,定义抽象方法和配置类SmsSendService发送短信抽象接口:/** * 短信发送接口 public interface SmsSendService { * 发送单个短信 * @param smsData * @param phoneNumber * @return default SmsResponse sendSms(SmsData smsData, String phoneNumber){ if (StrUtil.isEmpty(phoneNumber)) { return new SmsResponse(); return this.sendSms(smsData, Collections.singletonList(phoneNumber)); * 群发发送短信 * @param smsData * @param phoneNumbers * @return SmsResponse sendSms(SmsData smsData, Collection<String> phoneNumbers); }SmsResultCodeEnum定义短信发送结果/** * @ClassName: ResultCodeEnum * @Description: 自定义返回码枚举 * @author GitEgg * @date 2020年09月19日 下午11:49:45 @Getter @AllArgsConstructor public enum SmsResultCodeEnum { SUCCESS(200, "操作成功"), * 系统繁忙,请稍后重试 ERROR(429, "短信发送失败,请稍后重试"), * 系统错误 PHONE_NUMBER_ERROR(500, "手机号错误"); public int code; public String msg; }2、新建gitegg-platform-sms-aliyun工程,实现阿里云短信发送接口AliyunSmsProperties配置类@Data @Component @ConfigurationProperties(prefix = "sms.aliyun") public class AliyunSmsProperties { * product private String product = "Dysmsapi"; * domain private String domain = "dysmsapi.aliyuncs.com"; * regionId private String regionId = "cn-hangzhou"; * accessKeyId private String accessKeyId; * accessKeySecret private String accessKeySecret; * 短信签名 private String signName; }AliyunSmsSendServiceImpl阿里云短信发送接口实现类/** * 阿里云短信发送 @Slf4j @AllArgsConstructor public class AliyunSmsSendServiceImpl implements SmsSendService { private static final String successCode = "OK"; private final AliyunSmsProperties properties; private final IAcsClient acsClient; @Override public SmsResponse sendSms(SmsData smsData, Collection<String> phoneNumbers) { SmsResponse smsResponse = new SmsResponse(); SendSmsRequest request = new SendSmsRequest(); request.setSysMethod(MethodType.POST); request.setPhoneNumbers(StrUtil.join(",", phoneNumbers)); request.setSignName(properties.getSignName()); 
request.setTemplateCode(smsData.getTemplateId()); request.setTemplateParam(JsonUtils.mapToJson(smsData.getParams())); try { SendSmsResponse sendSmsResponse = acsClient.getAcsResponse(request); if (null != sendSmsResponse && !StringUtils.isEmpty(sendSmsResponse.getCode())) { if (this.successCode.equals(sendSmsResponse.getCode())) { smsResponse.setSuccess(true); } else { log.error("Send Aliyun Sms Fail: [code={}, message={}]", sendSmsResponse.getCode(), sendSmsResponse.getMessage()); smsResponse.setCode(sendSmsResponse.getCode()); smsResponse.setMessage(sendSmsResponse.getMessage()); } catch (Exception e) { e.printStackTrace(); log.error("Send Aliyun Sms Fail: {}", e); smsResponse.setMessage("Send Aliyun Sms Fail!"); return smsResponse; }3、新建gitegg-platform-sms-tencent工程,实现腾讯云短信发送接口TencentSmsProperties配置类@Data @Component @ConfigurationProperties(prefix = "sms.tencent") public class TencentSmsProperties { /* 填充请求参数,这里 request 对象的成员变量即对应接口的入参 * 您可以通过官网接口文档或跳转到 request 对象的定义处查看请求参数的定义 * 基本类型的设置: * 帮助链接: * 短信控制台:https://console.cloud.tencent.com/smsv2 * sms helper:https://cloud.tencent.com/document/product/382/3773 */ /* 短信应用 ID: 在 [短信控制台] 添加应用后生成的实际 SDKAppID,例如1400006666 */ private String SmsSdkAppId; /* 国际/港澳台短信 senderid: 国内短信填空,默认未开通,如需开通请联系 [sms helper] */ private String senderId; /* 短信码号扩展号: 默认未开通,如需开通请联系 [sms helper] */ private String extendCode; * 短信签名 private String signName; }TencentSmsSendServiceImpl腾讯云短信发送接口实现类/** * 腾讯云短信发送 @Slf4j @AllArgsConstructor public class TencentSmsSendServiceImpl implements SmsSendService { private static final String successCode = "Ok"; private final TencentSmsProperties properties; private final SmsClient client; @Override public SmsResponse sendSms(SmsData smsData, Collection<String> phoneNumbers) { SmsResponse smsResponse = new SmsResponse(); SendSmsRequest request = new SendSmsRequest(); request.setSmsSdkAppid(properties.getSmsSdkAppId()); /* 短信签名内容: 使用 UTF-8 编码,必须填写已审核通过的签名,可登录 [短信控制台] 查看签名信息 */ 
request.setSign(properties.getSignName()); /* 国际/港澳台短信 senderid: 国内短信填空,默认未开通,如需开通请联系 [sms helper] */ if (!StringUtils.isEmpty(properties.getSenderId())) request.setSenderId(properties.getSenderId()); request.setTemplateID(smsData.getTemplateId()); /* 下发手机号码,采用 e.164 标准,+[国家或地区码][手机号] * 例如+8613711112222, 其中前面有一个+号 ,86为国家码,13711112222为手机号,最多不要超过200个手机号*/ String[] phoneNumbersArray = (String[]) phoneNumbers.toArray(); request.setPhoneNumberSet(phoneNumbersArray); /* 模板参数: 若无模板参数,则设置为空*/ String[] templateParams = new String[]{}; if (!CollectionUtils.isEmpty(smsData.getParams())) { templateParams = (String[]) smsData.getParams().values().toArray(); request.setTemplateParamSet(templateParams); try { /* 通过 client 对象调用 SendSms 方法发起请求。注意请求方法名与请求对象是对应的 * 返回的 res 是一个 SendSmsResponse 类的实例,与请求对象对应 */ SendSmsResponse sendSmsResponse = client.SendSms(request); //如果是批量发送,那么腾讯云短信会返回每条短信的发送状态,这里默认返回第一条短信的状态 if (null != sendSmsResponse && null != sendSmsResponse.getSendStatusSet()) { SendStatus sendStatus = sendSmsResponse.getSendStatusSet()[0]; if (this.successCode.equals(sendStatus.getCode())) smsResponse.setSuccess(true); smsResponse.setCode(sendStatus.getCode()); smsResponse.setMessage(sendStatus.getMessage()); } catch (Exception e) { e.printStackTrace(); log.error("Send Aliyun Sms Fail: {}", e); smsResponse.setMessage("Send Aliyun Sms Fail!"); return smsResponse; }4、在GitEgg-Cloud中新建业务调用方法,这里要考虑到不同租户调用不同的短信配置进行短信发送,所以新建SmsFactory短信接口实例化工厂,根据不同的租户实例化不同的短信发送接口,这里以实例化com.gitegg.service.extension.sms.factory.SmsAliyunFactory类为例,进行实例化操作,实际使用中,这里需要配置和租户的对应关系,从租户的短信配置中获取。@Component public class SmsFactory { private final ISmsTemplateService smsTemplateService; * SmsSendService 缓存 private final Map<Long, SmsSendService> SmsSendServiceMap = new ConcurrentHashMap<>(); public SmsFactory(ISmsTemplateService smsTemplateService) { this.smsTemplateService = smsTemplateService; * 获取 SmsSendService * @param smsTemplateDTO 短信模板 * @return SmsSendService public SmsSendService 
getSmsSendService(SmsTemplateDTO smsTemplateDTO) { //根据channelId获取对应的发送短信服务接口,channelId是唯一的,每个租户有其自有的channelId Long channelId = smsTemplateDTO.getChannelId(); SmsSendService smsSendService = SmsSendServiceMap.get(channelId); if (null == smsSendService) { Class cls = null; try { cls = Class.forName("com.gitegg.service.extension.sms.factory.SmsAliyunFactory"); Method staticMethod = cls.getDeclaredMethod("getSmsSendService", SmsTemplateDTO.class); smsSendService = (SmsSendService) staticMethod.invoke(cls,smsTemplateDTO); SmsSendServiceMap.put(channelId, smsSendService); } catch (ClassNotFoundException | NoSuchMethodException e) { e.printStackTrace(); } catch (IllegalAccessException e) { e.printStackTrace(); } catch (InvocationTargetException e) { e.printStackTrace(); return smsSendService; * 阿里云短信服务接口工厂类 public class SmsAliyunFactory { public static SmsSendService getSmsSendService(SmsTemplateDTO sms) { AliyunSmsProperties aliyunSmsProperties = new AliyunSmsProperties(); aliyunSmsProperties.setAccessKeyId(sms.getSecretId()); aliyunSmsProperties.setAccessKeySecret(sms.getSecretKey()); aliyunSmsProperties.setRegionId(sms.getRegionId()); aliyunSmsProperties.setSignName(sms.getSignName()); IClientProfile profile = DefaultProfile.getProfile(aliyunSmsProperties.getRegionId(), aliyunSmsProperties.getAccessKeyId(), aliyunSmsProperties.getAccessKeySecret()); IAcsClient acsClient = new DefaultAcsClient(profile); return new AliyunSmsSendServiceImpl(aliyunSmsProperties, acsClient); * 腾讯云短信服务接口工厂类 public class SmsTencentFactory { public static SmsSendService getSmsSendService(SmsTemplateDTO sms) { TencentSmsProperties tencentSmsProperties = new TencentSmsProperties(); tencentSmsProperties.setSmsSdkAppId(sms.getSecretId()); tencentSmsProperties.setExtendCode(sms.getSecretKey()); tencentSmsProperties.setSenderId(sms.getRegionId()); tencentSmsProperties.setSignName(sms.getSignName()); /* 必要步骤: * 实例化一个认证对象,入参需要传入腾讯云账户密钥对 secretId 和 secretKey * 本示例采用从环境变量读取的方式,需要预先在环境变量中设置这两个值 * 
您也可以直接在代码中写入密钥对,但需谨防泄露,不要将代码复制、上传或者分享给他人 * CAM 密钥查询:https://console.cloud.tencent.com/cam/capi Credential cred = new Credential(sms.getSecretId(), sms.getSecretKey()); // 实例化一个 http 选项,可选,无特殊需求时可以跳过 HttpProfile httpProfile = new HttpProfile(); // 设置代理 // httpProfile.setProxyHost("host"); // httpProfile.setProxyPort(port); /* SDK 默认使用 POST 方法。 * 如需使用 GET 方法,可以在此处设置,但 GET 方法无法处理较大的请求 */ httpProfile.setReqMethod("POST"); /* SDK 有默认的超时时间,非必要请不要进行调整 * 如有需要请在代码中查阅以获取最新的默认值 */ httpProfile.setConnTimeout(60); /* SDK 会自动指定域名,通常无需指定域名,但访问金融区的服务时必须手动指定域名 * 例如 SMS 的上海金融区域名为 sms.ap-shanghai-fsi.tencentcloudapi.com */ if (!StringUtils.isEmpty(sms.getRegionId())) httpProfile.setEndpoint(sms.getRegionId()); /* 非必要步骤: * 实例化一个客户端配置对象,可以指定超时时间等配置 */ ClientProfile clientProfile = new ClientProfile(); /* SDK 默认用 TC3-HMAC-SHA256 进行签名 * 非必要请不要修改该字段 */ clientProfile.setSignMethod("HmacSHA256"); clientProfile.setHttpProfile(httpProfile); /* 实例化 SMS 的 client 对象 * 第二个参数是地域信息,可以直接填写字符串 ap-guangzhou,或者引用预设的常量 */ SmsClient client = new SmsClient(cred, "",clientProfile); return new TencentSmsSendServiceImpl(tencentSmsProperties, client); }5、定义短信发送接口及实现类ISmsService业务短信发送接口定义/** * <p> * 短信发送接口定义 * </p> * @author GitEgg * @since 2021-01-25 public interface ISmsService { * 发送短信 * @param smsCode * @param smsData * @param phoneNumbers * @return SmsResponse sendSmsNormal(String smsCode, String smsData, String phoneNumbers); * 发送短信验证码 * @param smsCode * @param phoneNumber * @return SmsResponse sendSmsVerificationCode( String smsCode, String phoneNumber); * 校验短信验证码 * @param smsCode * @param phoneNumber * @return boolean checkSmsVerificationCode(String smsCode, String phoneNumber, String verificationCode); }SmsServiceImpl 短信发送接口实现类/** * <p> * 短信发送接口实现类 * </p> * @author GitEgg * @since 2021-01-25 @Slf4j @Service @RequiredArgsConstructor(onConstructor_ = @Autowired) public class SmsServiceImpl implements ISmsService { private final SmsFactory smsFactory; private final ISmsTemplateService smsTemplateService; 
private final RedisTemplate redisTemplate;

    @Override
    public SmsResponse sendSmsNormal(String smsCode, String smsData, String phoneNumbers) {
        SmsResponse smsResponse = new SmsResponse();
        try {
            QuerySmsTemplateDTO querySmsTemplateDTO = new QuerySmsTemplateDTO();
            querySmsTemplateDTO.setSmsCode(smsCode);
            //获取短信code的相关信息,租户信息会根据mybatis plus插件获取
            SmsTemplateDTO smsTemplateDTO = smsTemplateService.querySmsTemplate(querySmsTemplateDTO);
            ObjectMapper mapper = new ObjectMapper();
            Map smsDataMap = mapper.readValue(smsData, Map.class);
            List<String> phoneNumberList = JsonUtils.jsonToList(phoneNumbers, String.class);
            SmsData smsDataParam = new SmsData();
            smsDataParam.setTemplateId(smsTemplateDTO.getTemplateId());
            smsDataParam.setParams(smsDataMap);
            SmsSendService smsSendService = smsFactory.getSmsSendService(smsTemplateDTO);
            smsResponse = smsSendService.sendSms(smsDataParam, phoneNumberList);
        } catch (Exception e) {
            smsResponse.setMessage("短信发送失败");
            log.error("短信发送失败", e);
        }
        return smsResponse;
    }

    @Override
    public SmsResponse sendSmsVerificationCode(String smsCode, String phoneNumber) {
        String verificationCode = RandomUtil.randomNumbers(6);
        Map<String, String> smsDataMap = new HashMap<>();
        smsDataMap.put(SmsConstant.SMS_CAPTCHA_TEMPLATE_CODE, verificationCode);
        List<String> phoneNumbers = Arrays.asList(phoneNumber);
        SmsResponse smsResponse = this.sendSmsNormal(smsCode, JsonUtils.mapToJson(smsDataMap), JsonUtils.listToJson(phoneNumbers));
        if (null != smsResponse && smsResponse.isSuccess()) {
            // 将短信验证码存入redis并设置过期时间为5分钟
            redisTemplate.opsForValue().set(SmsConstant.SMS_CAPTCHA_KEY + smsCode + phoneNumber,
                    verificationCode, 5, TimeUnit.MINUTES);
        }
        return smsResponse;
    }

    @Override
    public boolean checkSmsVerificationCode(String smsCode, String phoneNumber, String verificationCode) {
        String verificationCodeRedis = (String) redisTemplate.opsForValue()
                .get(SmsConstant.SMS_CAPTCHA_KEY + smsCode + phoneNumber);
        // 两个值都不为空才进行比较,避免提交的验证码为null时抛出空指针
        return StrUtil.isAllNotEmpty(verificationCodeRedis, verificationCode)
                && verificationCode.equalsIgnoreCase(verificationCodeRedis);
    }
}

6、新建SmsFeign类,供其他微服务调用发送短信

/**
 * @ClassName: SmsFeign
 * @Description: SmsFeign前端控制器
 * @author gitegg
 * @date 2019年5月18日 下午4:03:58
 */
@RestController
@RequestMapping(value = "/feign/sms")
@RequiredArgsConstructor(onConstructor_ = @Autowired)
@Api(value = "SmsFeign|提供微服务调用接口")
@RefreshScope
public class SmsFeign {

    private final ISmsService smsService;

    @GetMapping(value = "/send/normal")
    @ApiOperation(value = "发送普通短信", notes = "发送普通短信")
    public Result<Object> sendSmsNormal(@RequestParam("smsCode") String smsCode,
            @RequestParam("smsData") String smsData,
            @RequestParam("phoneNumbers") String phoneNumbers) {
        SmsResponse smsResponse = smsService.sendSmsNormal(smsCode, smsData, phoneNumbers);
        return Result.data(smsResponse);
    }

    @GetMapping(value = "/send/verification/code")
    @ApiOperation(value = "发送短信验证码", notes = "发送短信验证码")
    public Result<Object> sendSmsVerificationCode(@RequestParam("smsCode") String smsCode,
            @RequestParam("phoneNumber") String phoneNumber) {
        SmsResponse smsResponse = smsService.sendSmsVerificationCode(smsCode, phoneNumber);
        return Result.data(smsResponse);
    }

    @GetMapping(value = "/check/verification/code")
    @ApiOperation(value = "校验短信验证码", notes = "校验短信验证码")
    public Result<Boolean> checkSmsVerificationCode(@RequestParam("smsCode") String smsCode,
            @RequestParam("phoneNumber") String phoneNumber,
            @RequestParam("verificationCode") String verificationCode) {
        boolean checkResult = smsService.checkSmsVerificationCode(smsCode, phoneNumber, verificationCode);
        return Result.data(checkResult);
    }
}
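The store-then-verify cycle above — cache the code in Redis under a key built from smsCode + phoneNumber with a TTL, then compare the submitted value case-insensitively — can be sketched without Spring or Redis. The class below is a hypothetical in-memory stand-in (not GitEgg code); the current time is passed in explicitly so expiry behaviour is deterministic:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// In-memory stand-in for the Redis operations used by SmsServiceImpl.
public class SmsCodeStore {

    private static final class Entry {
        final String code;
        final long expiresAt;
        Entry(String code, long expiresAt) { this.code = code; this.expiresAt = expiresAt; }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    // Mirrors redisTemplate.opsForValue().set(key, code, ttl, unit):
    // the key is smsCode + phone, and the entry dies ttlMillis after "now".
    public void put(String smsCode, String phone, String code, long ttlMillis, long now) {
        store.put(smsCode + phone, new Entry(code, now + ttlMillis));
    }

    // Mirrors checkSmsVerificationCode: missing, expired, or null codes fail;
    // otherwise the comparison is case-insensitive.
    public boolean verify(String smsCode, String phone, String submitted, long now) {
        Entry e = store.get(smsCode + phone);
        if (e == null || now > e.expiresAt || submitted == null) {
            return false;
        }
        return submitted.equalsIgnoreCase(e.code);
    }
}
```

Swapping the map for Redis keeps the same contract; the TTL also removes any need to delete used codes by hand.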

SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (24): Integrating Behavioral and Image Captchas into the Login Flow

随着近几年技术的发展,人们对于系统安全性和用户体验的要求越来越高,大多数网站系统都逐渐采用行为验证码来代替图片验证码。GitEgg-Cloud集成了开源行为验证码组件和图片验证码,并在系统中添加可配置项来选择具体使用哪种验证码。AJ-Captcha:行为验证码EasyCaptcha: 图片验证码1、在我们的gitegg-platform-bom工程中增加验证码的包依赖<!-- AJ-Captcha滑动验证码 --> <captcha.version>1.2.7</captcha.version> <!-- Easy-Captcha图形验证码 --> <easy.captcha.version>1.6.2</easy.captcha.version> <!-- captcha 滑动验证码--> <dependency> <groupId>com.github.anji-plus</groupId> <artifactId>captcha-spring-boot-starter</artifactId> <version>${captcha.version}</version> </dependency> <!-- easy-captcha 图形验证码--> <dependency> <groupId>com.github.whvcse</groupId> <artifactId>easy-captcha</artifactId> <version>${easy.captcha.version}</version> </dependency>2、新建gitegg-platform-captcha工程,用于配置及自定义方法,行为验证码用到缓存是需要自定义实现CaptchaCacheService,自定义类CaptchaCacheServiceRedisImpl:public class CaptchaCacheServiceRedisImpl implements CaptchaCacheService { @Override public String type() { return "redis"; @Autowired private StringRedisTemplate stringRedisTemplate; @Override public void set(String key, String value, long expiresInSeconds) { stringRedisTemplate.opsForValue().set(key, value, expiresInSeconds, TimeUnit.SECONDS); @Override public boolean exists(String key) { return stringRedisTemplate.hasKey(key); @Override public void delete(String key) { stringRedisTemplate.delete(key); @Override public String get(String key) { return stringRedisTemplate.opsForValue().get(key); }3、在gitegg-platform-captcha的resources目录新建META-INF.services文件夹,参考resource/META-INF/services中的写法。com.gitegg.platform.captcha.service.impl.CaptchaCacheServiceRedisImpl4、在GitEgg-Cloud下的gitegg-oauth中增加CaptchaTokenGranter自定义验证码令牌授权处理类/** * 验证码模式 public class CaptchaTokenGranter extends AbstractTokenGranter { private static final String GRANT_TYPE = "captcha"; private final AuthenticationManager authenticationManager; private RedisTemplate redisTemplate; private CaptchaService captchaService; private String captchaType; public CaptchaTokenGranter(AuthenticationManager authenticationManager, 
AuthorizationServerTokenServices tokenServices, ClientDetailsService clientDetailsService, OAuth2RequestFactory requestFactory, RedisTemplate redisTemplate, CaptchaService captchaService, String captchaType) { this(authenticationManager, tokenServices, clientDetailsService, requestFactory, GRANT_TYPE); this.redisTemplate = redisTemplate; this.captchaService = captchaService; this.captchaType = captchaType; protected CaptchaTokenGranter(AuthenticationManager authenticationManager, AuthorizationServerTokenServices tokenServices, ClientDetailsService clientDetailsService, OAuth2RequestFactory requestFactory, String grantType) { super(tokenServices, clientDetailsService, requestFactory, grantType); this.authenticationManager = authenticationManager; @Override protected OAuth2Authentication getOAuth2Authentication(ClientDetails client, TokenRequest tokenRequest) { Map<String, String> parameters = new LinkedHashMap<>(tokenRequest.getRequestParameters()); // 获取验证码类型 String captchaType = parameters.get(CaptchaConstant.CAPTCHA_TYPE); // 判断传入的验证码类型和系统配置的是否一致 if (!StringUtils.isEmpty(captchaType) && !captchaType.equals(this.captchaType)) { throw new UserDeniedAuthorizationException(ResultCodeEnum.INVALID_CAPTCHA_TYPE.getMsg()); if (CaptchaConstant.IMAGE_CAPTCHA.equalsIgnoreCase(captchaType)) { // 图片验证码验证 String captchaKey = parameters.get(CaptchaConstant.CAPTCHA_KEY); String captchaCode = parameters.get(CaptchaConstant.CAPTCHA_CODE); // 获取验证码 String redisCode = (String)redisTemplate.opsForValue().get(CaptchaConstant.IMAGE_CAPTCHA_KEY + captchaKey); // 判断验证码 if (captchaCode == null || !captchaCode.equalsIgnoreCase(redisCode)) { throw new UserDeniedAuthorizationException(ResultCodeEnum.INVALID_CAPTCHA.getMsg()); } else { // 滑动验证码验证 String captchaVerification = parameters.get(CaptchaConstant.CAPTCHA_VERIFICATION); String slidingCaptchaType = parameters.get(CaptchaConstant.SLIDING_CAPTCHA_TYPE); CaptchaVO captchaVO = new CaptchaVO(); 
captchaVO.setCaptchaVerification(captchaVerification); captchaVO.setCaptchaType(slidingCaptchaType); ResponseModel responseModel = captchaService.verification(captchaVO); if (null == responseModel || !RepCodeEnum.SUCCESS.getCode().equals(responseModel.getRepCode())) { throw new UserDeniedAuthorizationException(ResultCodeEnum.INVALID_CAPTCHA.getMsg()); String username = parameters.get(TokenConstant.USER_NAME); String password = parameters.get(TokenConstant.PASSWORD); // Protect from downstream leaks of password parameters.remove(TokenConstant.PASSWORD); Authentication userAuth = new UsernamePasswordAuthenticationToken(username, password); ((AbstractAuthenticationToken)userAuth).setDetails(parameters); try { userAuth = authenticationManager.authenticate(userAuth); } catch (AccountStatusException | BadCredentialsException ase) { // covers expired, locked, disabled cases (mentioned in section 5.2, draft 31) throw new InvalidGrantException(ase.getMessage()); // If the username/password are wrong the spec says we should send 400/invalid grant if (userAuth == null || !userAuth.isAuthenticated()) { throw new InvalidGrantException("Could not authenticate user: " + username); OAuth2Request storedOAuth2Request = getRequestFactory().createOAuth2Request(client, tokenRequest); return new OAuth2Authentication(storedOAuth2Request, userAuth); }5、gitegg-oauth中GitEggOAuthController新增获取验证码的方法@Value("${captcha.type}") private String captchaType; @ApiOperation("获取系统配置的验证码类型") @GetMapping("/captcha/type") public Result captchaType() { return Result.data(captchaType); @ApiOperation("生成滑动验证码") @PostMapping("/captcha") public Result captcha(@RequestBody CaptchaVO captchaVO) { ResponseModel responseModel = captchaService.get(captchaVO); return Result.data(responseModel); @ApiOperation("滑动验证码验证") @PostMapping("/captcha/check") public Result captchaCheck(@RequestBody CaptchaVO captchaVO) { ResponseModel responseModel = captchaService.check(captchaVO); return Result.data(responseModel); 
@ApiOperation("生成图片验证码") @RequestMapping("/captcha/image") public Result captchaImage() { SpecCaptcha specCaptcha = new SpecCaptcha(130, 48, 5); String captchaCode = specCaptcha.text().toLowerCase(); String captchaKey = UUID.randomUUID().toString(); // 存入redis并设置过期时间为5分钟 redisTemplate.opsForValue().set(CaptchaConstant.IMAGE_CAPTCHA_KEY + captchaKey, captchaCode, GitEggConstant.Number.FIVE, TimeUnit.MINUTES); ImageCaptcha imageCaptcha = new ImageCaptcha(); imageCaptcha.setCaptchaKey(captchaKey); imageCaptcha.setCaptchaImage(specCaptcha.toBase64()); // 将key和base64返回给前端 return Result.data(imageCaptcha); }6、将滑动验证码提供的前端页面verifition目录copy到我们前端工程的compoonents目录,修改Login.vue,增加验证码<a-row :gutter="0" v-if="loginCaptchaType === 'image' && grantType !== 'password'"> <a-col :span="14"> <a-form-item> <a-input v-decorator="['captchaCode', validatorRules.captchaCode]" size="large" type="text" :placeholder="$t('user.verification-code.required')"> <a-icon v-if="inputCodeContent == verifiedCode" slot="prefix" type="safety-certificate" :style="{ fontSize: '20px', color: '#1890ff' }" /> <a-icon v-else slot="prefix" type="safety-certificate" :style="{ fontSize: '20px', color: '#1890ff' }" /> </a-input> </a-form-item> </a-col> <a-col :span="10"> <img :src="captchaImage" class="v-code-img" @click="refreshImageCode"> </a-col> </a-row><Verify @success="verifySuccess" :mode="'pop'" :captchaType="slidingCaptchaType" :imgSize="{ width: '330px', height: '155px' }" ref="verify"></Verify>grantType: 'password', loginCaptchaType: 'sliding', slidingCaptchaType: 'blockPuzzle', loginErrorMsg: '用户名或密码错误', captchaKey: '', captchaCode: '', captchaImage: 'data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAEALAAAAAABAAEAAAICRAEAOw==', inputCodeContent: '', inputCodeNull: truemethods: { ...mapActions(['Login', 'Logout']), // handler handleUsernameOrEmail (rule, value, callback) { const { state } = this const regex = /^([a-zA-Z0-9_-])+@([a-zA-Z0-9_-])+((\.[a-zA-Z0-9_-]{2,3}){1,2})$/ if (regex.test(value)) 
{ state.loginType = 0 } else { state.loginType = 1 callback() // 滑动验证码二次校验并提交登录 verifySuccess (params) { // params 返回的二次验证参数, 和登录参数一起回传给登录接口,方便后台进行二次验证 const { form: { validateFields }, state, customActiveKey, Login } = this state.loginBtn = true const validateFieldsKey = customActiveKey === 'tab_account' ? ['username', 'password', 'captchaCode', 'captchaKey'] : ['phoneNumber', 'captcha', 'captchaCode', 'captchaKey'] validateFields(validateFieldsKey, { force: true }, (err, values) => { if (!err) { const loginParams = { ...values } delete loginParams.username loginParams[!state.loginType ? 'email' : 'username'] = values.username loginParams.client_id = process.env.VUE_APP_CLIENT_ID loginParams.client_secret = process.env.VUE_APP_CLIENT_SECRET if (this.grantType === 'password' && customActiveKey === 'tab_account') { loginParams.grant_type = 'password' loginParams.password = values.password } else { if (customActiveKey === 'tab_account') { loginParams.grant_type = 'captcha' loginParams.password = values.password } else { loginParams.grant_type = 'sms_captcha' loginParams.phone_number = values.phoneNumber loginParams.code = values.captcha loginParams.smsCode = 'aliLoginCode' // loginParams.password = md5(values.password) // 判断是图片验证码还是滑动验证码 if (this.loginCaptchaType === 'sliding') { loginParams.captcha_type = 'sliding' loginParams.sliding_type = this.slidingCaptchaType loginParams.captcha_verification = params.captchaVerification } else if (this.loginCaptchaType === 'image') { loginParams.captcha_type = 'image' loginParams.captcha_key = this.captchaKey loginParams.captcha_code = values.captchaCode Login(loginParams) .then((res) => this.loginSuccess(res)) .catch(err => this.requestFailed(err)) .finally(() => { state.loginBtn = false } else { setTimeout(() => { state.loginBtn = false }, 600) // 滑动验证码校验 captchaVerify (e) { e.preventDefault() const { form: { validateFields }, state, customActiveKey } = this state.loginBtn = true const validateFieldsKey = customActiveKey === 
'tab_account' ? ['username', 'password', 'vcode', 'verkey'] : ['phoneNumber', 'captcha', 'vcode', 'verkey'] validateFields(validateFieldsKey, { force: true }, (err, values) => { if (!err) { if (this.grantType === 'password') { this.verifySuccess() } else { if (this.loginCaptchaType === 'sliding') { this.$refs.verify.show() } else { this.verifySuccess() } else { setTimeout(() => { state.loginBtn = false }, 600) queryCaptchaType () { getCaptchaType().then(res => { this.loginCaptchaType = res.data if (this.loginCaptchaType === 'image') { this.refreshImageCode() refreshImageCode () { getImageCaptcha().then(res => { const data = res.data this.captchaKey = data.captchaKey this.captchaImage = data.captchaImage handleTabClick (key) { this.customActiveKey = key // this.form.resetFields() handleSubmit (e) { e.preventDefault() getCaptcha (e) { e.preventDefault() const { form: { validateFields }, state } = this validateFields(['phoneNumber'], { force: true }, (err, values) => { if (!err) { state.smsSendBtn = true const interval = window.setInterval(() => { if (state.time-- <= 0) { state.time = 60 state.smsSendBtn = false window.clearInterval(interval) }, 1000) const hide = this.$message.loading('验证码发送中..', 0) getSmsCaptcha({ phoneNumber: values.phoneNumber, smsCode: 'aliLoginCode' }).then(res => { setTimeout(hide, 2500) this.$notification['success']({ message: '提示', description: '验证码获取成功,您的验证码为:' + res.result.captcha, duration: 8 }).catch(err => { setTimeout(hide, 1) clearInterval(interval) state.time = 60 state.smsSendBtn = false this.requestFailed(err) stepCaptchaSuccess () { this.loginSuccess() stepCaptchaCancel () { this.Logout().then(() => { this.loginBtn = false this.stepCaptchaVisible = false loginSuccess (res) { // 判断是否记住密码 const rememberMe = this.form.getFieldValue('rememberMe') const username = this.form.getFieldValue('username') const password = this.form.getFieldValue('password') if (rememberMe && username !== '' && password !== '') { 
storage.set(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-username', username, 60 * 60 * 24 * 7 * 1000)
        storage.set(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-password', password, 60 * 60 * 24 * 7 * 1000)
        storage.set(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-rememberMe', true, 60 * 60 * 24 * 7 * 1000)
      } else {
        storage.remove(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-username')
        storage.remove(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-password')
        storage.remove(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-rememberMe')
      }
      this.$router.push({ path: '/' })
      // 延迟 1 秒显示欢迎信息
      setTimeout(() => {
        this.$notification.success({
          message: '欢迎',
          description: `${timeFix()},欢迎回来`
        })
      }, 1000)
      this.isLoginError = false
    },
    requestFailed (err) {
      this.isLoginError = true
      if (err && err.code === 427) {
        // 密码错误次数超过最大限值,请选择验证码模式登录
        if (this.customActiveKey === 'tab_account') {
          this.grantType = 'captcha'
        } else {
          this.grantType = 'sms_captcha'
        }
        this.loginErrorMsg = err.msg
        if (this.loginCaptchaType === 'sliding') {
          this.$refs.verify.show()
        }
      } else if (err) {
        this.loginErrorMsg = err.msg
      }
    }

7、在Nacos中增加配置项,默认使用行为验证码

#验证码配置
captcha:
  #验证码的类型 sliding: 滑动验证码 image: 图片验证码
  type: sliding

8、登录效果(截图:登录页、安全验证、尝试次数过多导致账号被锁)
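The image-captcha handshake described in step 5 boils down to three small operations: generate a short random lowercase code, hand the browser an opaque UUID key next to the base64 image, and later compare the submitted code case-insensitively against the cached copy. A minimal sketch of those pieces (all names are illustrative, not part of GitEgg):

```java
import java.security.SecureRandom;
import java.util.UUID;

// Helper sketch mirroring the captchaImage() endpoint and the granter's check.
public class ImageCaptchaSketch {

    private static final String ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789";
    private static final SecureRandom RANDOM = new SecureRandom();

    // Random lowercase code, like specCaptcha.text().toLowerCase() with length 5.
    public static String randomCode(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }

    // Opaque key returned to the browser together with the base64 image;
    // the server stores code under this key with a short TTL.
    public static String newCaptchaKey() {
        return UUID.randomUUID().toString();
    }

    // The token granter's comparison: submitted code must match the stored
    // one, ignoring case; a missing submission always fails.
    public static boolean matches(String submitted, String stored) {
        return submitted != null && submitted.equalsIgnoreCase(stored);
    }
}
```

Because the key, not the code, travels to the browser, the answer never leaves the server until the user submits it.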

SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (23): Unified Microservice Authentication and Authorization with Gateway + OAuth2 + JWT

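As a primer for this article: the whole design rests on asymmetric signing. The authorization server signs tokens with the private key held in its keystore, and anyone holding only the public key can verify them offline, with no call back to the auth server. A minimal sketch of that property with plain java.security — no Spring involved, all names illustrative, and exceptions wrapped so the helpers stay unchecked:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Sign with the private key, verify with only the public key -- the property
// that lets a resource server validate JWTs without contacting the issuer.
public class RsaSignSketch {

    public static KeyPair newKeyPair() {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048); // same algorithm family as a keytool-generated RSA keystore
            return gen.generateKeyPair();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static byte[] sign(String payload, PrivateKey key) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA"); // "RS256" in JWT terms
            s.initSign(key);
            s.update(payload.getBytes(StandardCharsets.UTF_8));
            return s.sign();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static boolean verify(String payload, byte[] signature, PublicKey key) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initVerify(key);
            s.update(payload.getBytes(StandardCharsets.UTF_8));
            return s.verify(signature);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Any tampering with the payload invalidates the signature, which is exactly why the gateway below only needs to fetch the public key once.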
OAuth2是一个关于授权的开放标准,核心思路是通过各类认证手段(具体什么手段OAuth2不关心)认证用户身份,并颁发token(令牌),使得第三方应用可以使用该token(令牌)在限定时间、限定范围访问指定资源。  OAuth2中使用token验证用户登录合法性,但token最大的问题是不携带用户信息,资源服务器无法在本地进行验证,每次对于资源的访问,资源服务器都需要向认证服务器发起请求,一是验证token的有效性,二是获取token对应的用户信息。如果有大量的此类请求,无疑处理效率是很低,且认证服务器会变成一个中心节点,这在分布式架构下很影响性能。如果认证服务器颁发的是jwt格式的token,那么资源服务器就可以直接自己验证token的有效性并绑定用户,这无疑大大提升了处理效率且减少了单点隐患。  SpringCloud认证授权解决思路:认证服务负责认证,网关负责校验认证和鉴权,其他API服务负责处理自己的业务逻辑。安全相关的逻辑只存在于认证服务和网关服务中,其他服务只是单纯地提供服务而没有任何安全相关逻辑。微服务鉴权功能划分:gitegg-oauth:Oauth2用户认证和单点登录gitegg-gateway:请求转发和统一鉴权gitegg-system: 读取系统配置的RBAC权限配置并存放到缓存一、鉴权配置1、GitEgg-Platform工程下新建gitegg-platform-oauth2工程,用于统一管理OAuth2版本,及统一配置<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <parent> <artifactId>GitEgg-Platform</artifactId> <groupId>com.gitegg.platform</groupId> <version>1.0-SNAPSHOT</version> </parent> <modelVersion>4.0.0</modelVersion> <artifactId>gitegg-platform-oauth2</artifactId> <name>${project.artifactId}</name> <packaging>jar</packaging> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-configuration-processor</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-oauth2</artifactId> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-oauth2-jose</artifactId> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-oauth2-resource-server</artifactId> </dependency> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-swagger</artifactId> <optional>true</optional> </dependency> </dependencies> </project>2、在gitegg-oauth工程中引入需要的库<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <parent> <artifactId>GitEgg-Cloud</artifactId> <groupId>com.gitegg.cloud</groupId> <version>1.0-SNAPSHOT</version> </parent> <modelVersion>4.0.0</modelVersion> <artifactId>gitegg-oauth</artifactId> <name>${project.artifactId}</name> <packaging>jar</packaging> <dependencies> <!-- gitegg-platform-boot --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-boot</artifactId> <version>${gitegg.project.version}</version> </dependency> <!-- gitegg-platform-cloud --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-cloud</artifactId> <version>${gitegg.project.version}</version> </dependency> <!-- gitegg-platform-oauth2 --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-oauth2</artifactId> <version>${gitegg.project.version}</version> </dependency> <!-- gitegg数据库驱动及连接池 --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-db</artifactId> </dependency> <!-- gitegg mybatis-plus --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-mybatis</artifactId> </dependency> <!-- 验证码 --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-captcha</artifactId> </dependency> <!-- gitegg-service-system 的fegin公共调用方法 --> <dependency> <groupId>com.gitegg.cloud</groupId> <artifactId>gitegg-service-system-api</artifactId> <version>${gitegg.project.version}</version> </dependency> <dependency> <groupId>org.apache.tomcat.embed</groupId> <artifactId>tomcat-embed-core</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-redis</artifactId> </dependency> </dependencies> </project>3、JWT可以使用HMAC算法或使用RSA的公钥/私钥对来签名,防止被篡改。首先我们使用keytool生成RSA证书gitegg.jks,复制到gitegg-oauth工程的resource目录下,CMD命令行进入到JDK安装目录的bin目录下, 
使用keytool命令生成gitegg.jks证书keytool -genkey -alias gitegg -keyalg RSA -keystore gitegg.jks4、新建GitEggUserDetailsServiceImpl.java实现SpringSecurity获取用户信息接口,用于SpringSecurity鉴权时获取用户信息package com.gitegg.oauth.service; import javax.servlet.http.HttpServletRequest; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.security.core.authority.AuthorityUtils; import org.springframework.security.core.userdetails.UserDetailsService; import org.springframework.security.core.userdetails.UsernameNotFoundException; import org.springframework.security.oauth2.common.exceptions.UserDeniedAuthorizationException; import org.springframework.stereotype.Service; import org.springframework.util.CollectionUtils; import org.springframework.util.StringUtils; import com.gitegg.oauth.enums.AuthEnum; import com.gitegg.platform.base.constant.AuthConstant; import com.gitegg.platform.base.domain.GitEggUser; import com.gitegg.platform.base.enums.ResultCodeEnum; import com.gitegg.platform.base.result.Result; import com.gitegg.service.system.api.feign.IUserFeign; import cn.hutool.core.bean.BeanUtil; import lombok.RequiredArgsConstructor; * 实现SpringSecurity获取用户信息接口 * @author gitegg @Service @RequiredArgsConstructor(onConstructor = @__(@Autowired)) public class GitEggUserDetailsServiceImpl implements UserDetailsService { private final IUserFeign userFeign; private final HttpServletRequest request; @Override public GitEggUserDetails loadUserByUsername(String username) { // 获取登录类型,密码,二维码,验证码 String authLoginType = request.getParameter(AuthConstant.AUTH_TYPE); // 获取客户端id String clientId = request.getParameter(AuthConstant.AUTH_CLIENT_ID); // 远程调用返回数据 Result<Object> result; // 通过手机号码登录 if (!StringUtils.isEmpty(authLoginType) && AuthEnum.PHONE.code.equals(authLoginType)) String phone = request.getParameter(AuthConstant.PHONE_NUMBER); result = userFeign.queryUserByPhone(phone); // 通过账号密码登录 else if(!StringUtils.isEmpty(authLoginType) && AuthEnum.QR.code.equals(authLoginType)) 
result = userFeign.queryUserByAccount(username); result = userFeign.queryUserByAccount(username); // 判断返回信息 if (null != result && result.isSuccess()) { GitEggUser gitEggUser = new GitEggUser(); BeanUtil.copyProperties(result.getData(), gitEggUser, false); if (gitEggUser == null || gitEggUser.getId() == null) { throw new UsernameNotFoundException(ResultCodeEnum.INVALID_USERNAME.msg); if (CollectionUtils.isEmpty(gitEggUser.getRoleIdList())) { throw new UserDeniedAuthorizationException(ResultCodeEnum.INVALID_ROLE.msg); return new GitEggUserDetails(gitEggUser.getId(), gitEggUser.getTenantId(), gitEggUser.getOauthId(), gitEggUser.getNickname(), gitEggUser.getRealName(), gitEggUser.getOrganizationId(), gitEggUser.getOrganizationName(), gitEggUser.getOrganizationIds(), gitEggUser.getOrganizationNames(), gitEggUser.getRoleId(), gitEggUser.getRoleIds(), gitEggUser.getRoleName(), gitEggUser.getRoleNames(), gitEggUser.getRoleIdList(), gitEggUser.getRoleKeyList(), gitEggUser.getResourceKeyList(), gitEggUser.getDataPermission(), gitEggUser.getAvatar(), gitEggUser.getAccount(), gitEggUser.getPassword(), true, true, true, true, AuthorityUtils.createAuthorityList(gitEggUser.getRoleIdList().toArray(new String[gitEggUser.getRoleIdList().size()]))); } else { throw new UsernameNotFoundException(result.getMsg()); }5、新建AuthorizationServerConfig.java用于认证服务相关配置,正式环境请一定记得修改gitegg.jks配置的密码,这里默认为123456。TokenEnhancer 为登录用户的扩展信息,可以自己定义。package com.gitegg.oauth.config; import java.security.KeyPair; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; import javax.sql.DataSource; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.beans.factory.annotation.Value; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.core.io.ClassPathResource; import org.springframework.data.redis.core.RedisTemplate; import 
org.springframework.security.authentication.AuthenticationManager; import org.springframework.security.core.userdetails.UserDetailsService; import org.springframework.security.oauth2.common.DefaultOAuth2AccessToken; import org.springframework.security.oauth2.config.annotation.configurers.ClientDetailsServiceConfigurer; import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter; import org.springframework.security.oauth2.config.annotation.web.configuration.EnableAuthorizationServer; import org.springframework.security.oauth2.config.annotation.web.configurers.AuthorizationServerEndpointsConfigurer; import org.springframework.security.oauth2.config.annotation.web.configurers.AuthorizationServerSecurityConfigurer; import org.springframework.security.oauth2.provider.TokenGranter; import org.springframework.security.oauth2.provider.token.TokenEnhancer; import org.springframework.security.oauth2.provider.token.TokenEnhancerChain; import org.springframework.security.oauth2.provider.token.store.JwtAccessTokenConverter; import org.springframework.security.oauth2.provider.token.store.KeyStoreKeyFactory; import com.anji.captcha.service.CaptchaService; import com.gitegg.oauth.granter.GitEggTokenGranter; import com.gitegg.oauth.service.GitEggClientDetailsServiceImpl; import com.gitegg.oauth.service.GitEggUserDetails; import com.gitegg.platform.base.constant.AuthConstant; import com.gitegg.platform.base.constant.TokenConstant; import com.gitegg.service.system.api.feign.IUserFeign; import lombok.RequiredArgsConstructor; import lombok.SneakyThrows; * 认证服务配置 @Configuration @EnableAuthorizationServer @RequiredArgsConstructor(onConstructor_ = @Autowired) public class AuthorizationServerConfig extends AuthorizationServerConfigurerAdapter { private final DataSource dataSource; private final AuthenticationManager authenticationManager; private final UserDetailsService userDetailsService; private final IUserFeign userFeign; private 
final RedisTemplate redisTemplate; private final CaptchaService captchaService; @Value("${captcha.type}") private String captchaType; * 客户端信息配置 @Override @SneakyThrows public void configure(ClientDetailsServiceConfigurer clients) { GitEggClientDetailsServiceImpl jdbcClientDetailsService = new GitEggClientDetailsServiceImpl(dataSource); jdbcClientDetailsService.setFindClientDetailsSql(AuthConstant.FIND_CLIENT_DETAILS_SQL); jdbcClientDetailsService.setSelectClientDetailsSql(AuthConstant.SELECT_CLIENT_DETAILS_SQL); clients.withClientDetails(jdbcClientDetailsService); * 配置授权(authorization)以及令牌(token)的访问端点和令牌服务(token services) @Override public void configure(AuthorizationServerEndpointsConfigurer endpoints) { TokenEnhancerChain tokenEnhancerChain = new TokenEnhancerChain(); List<TokenEnhancer> tokenEnhancers = new ArrayList<>(); tokenEnhancers.add(tokenEnhancer()); tokenEnhancers.add(jwtAccessTokenConverter()); tokenEnhancerChain.setTokenEnhancers(tokenEnhancers); // 获取自定义tokenGranter TokenGranter tokenGranter = GitEggTokenGranter.getTokenGranter(authenticationManager, endpoints, redisTemplate, userFeign, captchaService, captchaType); endpoints.authenticationManager(authenticationManager) .accessTokenConverter(jwtAccessTokenConverter()) .tokenEnhancer(tokenEnhancerChain) .userDetailsService(userDetailsService) .tokenGranter(tokenGranter) * refresh_token有两种使用方式:重复使用(true)、非重复使用(false),默认为true * 1.重复使用:access_token过期刷新时, refresh token过期时间未改变,仍以初次生成的时间为准 * 2.非重复使用:access_token过期刷新时, refresh_token过期时间延续,在refresh_token有效期内刷新而无需失效再次登录 .reuseRefreshTokens(false); * 允许表单认证 @Override public void configure(AuthorizationServerSecurityConfigurer security) { security.allowFormAuthenticationForClients() .tokenKeyAccess("permitAll()") .checkTokenAccess("isAuthenticated()"); * 使用非对称加密算法对token签名 @Bean public JwtAccessTokenConverter jwtAccessTokenConverter() { JwtAccessTokenConverter converter = new JwtAccessTokenConverter(); converter.setKeyPair(keyPair()); return converter; * 
```java
/**
 * Load the key pair (public key + private key) from the keystore on the classpath.
 */
@Bean
public KeyPair keyPair() {
    KeyStoreKeyFactory factory = new KeyStoreKeyFactory(
            new ClassPathResource("gitegg.jks"), "123456".toCharArray());
    return factory.getKeyPair("gitegg", "123456".toCharArray());
}

/**
 * JWT content enhancer.
 */
@Bean
public TokenEnhancer tokenEnhancer() {
    return (accessToken, authentication) -> {
        Map<String, Object> map = new HashMap<>(2);
        GitEggUserDetails user = (GitEggUserDetails) authentication.getUserAuthentication().getPrincipal();
        map.put(TokenConstant.TENANT_ID, user.getTenantId());
        map.put(TokenConstant.OAUTH_ID, user.getOauthId());
        map.put(TokenConstant.USER_ID, user.getId());
        map.put(TokenConstant.ORGANIZATION_ID, user.getOrganizationId());
        map.put(TokenConstant.ORGANIZATION_NAME, user.getOrganizationName());
        map.put(TokenConstant.ORGANIZATION_IDS, user.getOrganizationIds());
        map.put(TokenConstant.ORGANIZATION_NAMES, user.getOrganizationNames());
        map.put(TokenConstant.ROLE_ID, user.getRoleId());
        map.put(TokenConstant.ROLE_NAME, user.getRoleName());
        map.put(TokenConstant.ROLE_IDS, user.getRoleIds());
        map.put(TokenConstant.ROLE_NAMES, user.getRoleNames());
        map.put(TokenConstant.ACCOUNT, user.getAccount());
        map.put(TokenConstant.REAL_NAME, user.getRealName());
        map.put(TokenConstant.NICK_NAME, user.getNickname());
        map.put(TokenConstant.ROLE_ID_LIST, user.getRoleIdList());
        map.put(TokenConstant.ROLE_KEY_LIST, user.getRoleKeyList());
        // Do not put the permission/menu list into the JWT: with many menus, the JWT length becomes unbounded.
        // map.put(TokenConstant.RESOURCE_KEY_LIST, user.getResourceKeyList());
        map.put(TokenConstant.DATA_PERMISSION, user.getDataPermission());
        map.put(TokenConstant.AVATAR, user.getAvatar());
        ((DefaultOAuth2AccessToken) accessToken).setAdditionalInformation(map);
        return accessToken;
    };
}
```

6. During authentication and authorization the Gateway needs the RSA public key to verify token signatures, so add a getKey endpoint to GitEggOAuthController for the Gateway to fetch the public key:

```java
@GetMapping("/public_key")
public Map<String, Object> getKey() {
    RSAPublicKey publicKey = (RSAPublicKey) keyPair.getPublic();
    RSAKey key = new RSAKey.Builder(publicKey).build();
    return new JWKSet(key).toJSONObject();
}
```

7. Create the resource server configuration ResourceServerConfig.java and open read access to public_key:

```java
@Override
@SneakyThrows
public void configure(HttpSecurity http) {
    http.headers().frameOptions().disable();
    http.formLogin()
        .and()
        .authorizeRequests().requestMatchers(EndpointRequest.toAnyEndpoint()).permitAll()
        .and()
        .authorizeRequests()
        .antMatchers("/oauth/public_key").permitAll()
        .anyRequest().authenticated()
        .and()
        .csrf().disable();
}
```

8. In gitegg-service-system, create InitResourceRolesCacheRunner.java implementing the CommandLineRunner interface, which loads the RBAC permission configuration into the cache at system startup:

```java
package com.gitegg.service.system.component;

import java.util.*;
import java.util.stream.Collectors;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

import com.gitegg.platform.base.constant.AuthConstant;
import com.gitegg.service.system.entity.Resource;
import com.gitegg.service.system.service.IResourceService;

import cn.hutool.core.collection.CollectionUtil;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;

/**
 * Load resource/role permission data into the cache once the container has started.
 */
@Slf4j
@RequiredArgsConstructor(onConstructor_ = @Autowired)
@Component
public class InitResourceRolesCacheRunner implements CommandLineRunner {

    private final RedisTemplate redisTemplate;

    private final IResourceService resourceService;

    /**
     * Whether tenant mode is enabled.
     */
    @Value("${tenant.enable}")
    private Boolean enable;

    @Override
    public void run(String... args) {
        log.info("InitResourceRolesCacheRunner running");
        // Query the system's role-to-resource relations
        List<Resource> resourceList = resourceService.queryResourceRoleIds();
        // If tenant mode is enabled, role permissions must be stored grouped by tenant
        if (enable) {
            Map<Long, List<Resource>> resourceListMap =
                    resourceList.stream().collect(Collectors.groupingBy(Resource::getTenantId));
            resourceListMap.forEach((key, value) -> {
                String redisKey = AuthConstant.TENANT_RESOURCE_ROLES_KEY + key;
                redisTemplate.delete(redisKey);
                addRoleResource(redisKey, value);
            });
        } else {
            redisTemplate.delete(AuthConstant.RESOURCE_ROLES_KEY);
            addRoleResource(AuthConstant.RESOURCE_ROLES_KEY, resourceList);
        }
    }

    private void addRoleResource(String key, List<Resource> resourceList) {
        Map<String, List<String>> resourceRolesMap = new TreeMap<>();
        Optional.ofNullable(resourceList).orElse(new ArrayList<>()).forEach(resource -> {
            // roleId -> ROLE_{roleId}
            List<String> roles = Optional.ofNullable(resource.getRoleIds()).orElse(new ArrayList<>()).stream()
                    .map(roleId -> AuthConstant.AUTHORITY_PREFIX + roleId).collect(Collectors.toList());
            if (CollectionUtil.isNotEmpty(roles)) {
                resourceRolesMap.put(resource.getResourceUrl(), roles);
            }
        });
        redisTemplate.opsForHash().putAll(key, resourceRolesMap);
    }
}
```

9. Create the gateway service gitegg-gateway, used as the OAuth2 resource server and client: it forwards requests to the microservices and performs unified authentication and authorization. Add the dependencies:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>GitEgg-Cloud</artifactId>
        <groupId>com.gitegg.cloud</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>gitegg-gateway</artifactId>
    <dependencies>
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-base</artifactId>
            <version>${gitegg.project.version}</version>
        </dependency>
        <!-- Nacos service registration and discovery -->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
        </dependency>
        <!-- Nacos distributed configuration -->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
        </dependency>
        <!-- OpenFeign for inter-service calls -->
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-openfeign</artifactId>
        </dependency>
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-oauth2</artifactId>
            <version>${gitegg.project.version}</version>
        </dependency>
        <!-- GitEgg custom cache extension -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-cache</artifactId>
            <version>${gitegg.project.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-gateway</artifactId>
        </dependency>
        <dependency>
            <groupId>io.springfox</groupId>
            <artifactId>springfox-swagger2</artifactId>
        </dependency>
        <dependency>
            <groupId>com.github.xiaoymin</groupId>
            <artifactId>knife4j-spring-ui</artifactId>
        </dependency>
    </dependencies>
</project>
```

10. Create AuthResourceServerConfig.java with the security configuration for the gateway service. It must use @EnableWebFluxSecurity rather than @EnableWebSecurity, because Spring Cloud Gateway is based on WebFlux:

```java
package com.gitegg.gateway.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.convert.converter.Converter;
import org.springframework.security.authentication.AbstractAuthenticationToken;
import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity;
import org.springframework.security.config.web.server.SecurityWebFiltersOrder;
import org.springframework.security.config.web.server.ServerHttpSecurity;
import org.springframework.security.oauth2.jwt.Jwt;
import org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationConverter;
import org.springframework.security.oauth2.server.resource.authentication.JwtGrantedAuthoritiesConverter;
import org.springframework.security.oauth2.server.resource.authentication.ReactiveJwtAuthenticationConverterAdapter;
import org.springframework.security.web.server.SecurityWebFilterChain;

import com.gitegg.gateway.auth.AuthorizationManager;
import com.gitegg.gateway.filter.WhiteListRemoveJwtFilter;
import com.gitegg.gateway.handler.AuthServerAccessDeniedHandler;
import com.gitegg.gateway.handler.AuthServerAuthenticationEntryPoint;
import com.gitegg.gateway.props.AuthUrlWhiteListProperties;
import com.gitegg.platform.base.constant.AuthConstant;

import cn.hutool.core.util.ArrayUtil;
import lombok.AllArgsConstructor;
import reactor.core.publisher.Mono;

/**
 * Resource server configuration.
 */
@AllArgsConstructor
@Configuration
// Must be @EnableWebFluxSecurity rather than @EnableWebSecurity: Spring Cloud Gateway is based on WebFlux
@EnableWebFluxSecurity
public class AuthResourceServerConfig {

    private final AuthorizationManager authorizationManager;

    private final AuthServerAccessDeniedHandler authServerAccessDeniedHandler;

    private final AuthServerAuthenticationEntryPoint authServerAuthenticationEntryPoint;

    private final AuthUrlWhiteListProperties authUrlWhiteListProperties;

    private final WhiteListRemoveJwtFilter whiteListRemoveJwtFilter;

    @Bean
    public SecurityWebFilterChain securityWebFilterChain(ServerHttpSecurity http) {
        http.oauth2ResourceServer().jwt()
                .jwtAuthenticationConverter(jwtAuthenticationConverter());
        // Custom handling of expired JWT headers or bad signatures
        http.oauth2ResourceServer().authenticationEntryPoint(authServerAuthenticationEntryPoint);
        // For whitelisted paths, remove the JWT header; otherwise the backend would still validate the JWT
        http.addFilterBefore(whiteListRemoveJwtFilter, SecurityWebFiltersOrder.AUTHENTICATION);
        http.authorizeExchange()
                .pathMatchers(ArrayUtil.toArray(authUrlWhiteListProperties.getUrls(), String.class)).permitAll()
                .anyExchange().access(authorizationManager)
                .and()
                .exceptionHandling()
                .accessDeniedHandler(authServerAccessDeniedHandler)           // handles "unauthorized"
                .authenticationEntryPoint(authServerAuthenticationEntryPoint) // handles "unauthenticated"
                .and()
                .cors()
                .and().csrf().disable();
        return http.build();
    }

    /**
     * ServerHttpSecurity does not treat the authorities claim in the JWT payload as part of the
     * Authentication, so the authorities from the JWT's claims must be added explicitly.
     * Solution: redefine the ReactiveAuthenticationManager, using the default
     * JwtGrantedAuthoritiesConverter.
     */
    @Bean
    public Converter<Jwt, ? extends Mono<? extends AbstractAuthenticationToken>> jwtAuthenticationConverter() {
        JwtGrantedAuthoritiesConverter jwtGrantedAuthoritiesConverter = new JwtGrantedAuthoritiesConverter();
        jwtGrantedAuthoritiesConverter.setAuthorityPrefix(AuthConstant.AUTHORITY_PREFIX);
        jwtGrantedAuthoritiesConverter.setAuthoritiesClaimName(AuthConstant.AUTHORITY_CLAIM_NAME);
        JwtAuthenticationConverter jwtAuthenticationConverter = new JwtAuthenticationConverter();
        jwtAuthenticationConverter.setJwtGrantedAuthoritiesConverter(jwtGrantedAuthoritiesConverter);
        return new ReactiveJwtAuthenticationConverterAdapter(jwtAuthenticationConverter);
    }
}
```

11. Create AuthorizationManager.java implementing the ReactiveAuthorizationManager interface, for custom permission checking:

```java
package com.gitegg.gateway.auth;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.http.HttpMethod;
import org.springframework.http.server.reactive.ServerHttpRequest;
import org.springframework.security.authorization.AuthorizationDecision;
import org.springframework.security.authorization.ReactiveAuthorizationManager;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.web.server.authorization.AuthorizationContext;
import org.springframework.stereotype.Component;
import org.springframework.util.AntPathMatcher;
import org.springframework.util.PathMatcher;
import org.springframework.util.StringUtils;

import com.gitegg.platform.base.constant.AuthConstant;

import cn.hutool.core.convert.Convert;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import reactor.core.publisher.Mono;

/**
 * Gateway authorization manager.
 */
@Slf4j
@Component
@RequiredArgsConstructor(onConstructor_ = @Autowired)
public class AuthorizationManager implements ReactiveAuthorizationManager<AuthorizationContext> {

    private final RedisTemplate redisTemplate;

    /**
     * Whether tenant mode is enabled.
     */
    @Value("${tenant.enable}")
    private Boolean enable;

    @Override
    public Mono<AuthorizationDecision> check(Mono<Authentication> mono, AuthorizationContext authorizationContext) {
        ServerHttpRequest request = authorizationContext.getExchange().getRequest();
        String path = request.getURI().getPath();
        PathMatcher pathMatcher = new AntPathMatcher();
        // Let CORS preflight requests straight through
        if (request.getMethod() == HttpMethod.OPTIONS) {
            return Mono.just(new AuthorizationDecision(true));
        }
        // Deny access when the token is empty
        String token = request.getHeaders().getFirst(AuthConstant.JWT_TOKEN_HEADER);
        if (StringUtils.isEmpty(token)) {
            return Mono.just(new AuthorizationDecision(false));
        }
        // If tenant mode is enabled but the request carries no tenant header, deny access
        String tenantId = request.getHeaders().getFirst(AuthConstant.TENANT_ID);
        if (enable && StringUtils.isEmpty(tenantId)) {
            return Mono.just(new AuthorizationDecision(false));
        }
        String redisRoleKey = AuthConstant.TENANT_RESOURCE_ROLES_KEY;
        // If tenant mode is enabled, read the role permissions grouped by tenant
        if (enable) {
            redisRoleKey += tenantId;
        } else {
            redisRoleKey = AuthConstant.RESOURCE_ROLES_KEY;
        }
        // Read the resource-to-roles relations from the cache
        Map<Object, Object> resourceRolesMap = redisTemplate.opsForHash().entries(redisRoleKey);
        Iterator<Object> iterator = resourceRolesMap.keySet().iterator();
        // Collect the roles (authorities) required by the resources matching the request path
        List<String> authorities = new ArrayList<>();
        while (iterator.hasNext()) {
            String pattern = (String) iterator.next();
            if (pathMatcher.match(pattern, path)) {
                authorities.addAll(Convert.toList(String.class, resourceRolesMap.get(pattern)));
            }
        }
        Mono<AuthorizationDecision> authorizationDecisionMono = mono
                .filter(Authentication::isAuthenticated)
                .flatMapIterable(Authentication::getAuthorities)
                .map(GrantedAuthority::getAuthority)
                .any(roleId -> {
                    // roleId is one of the requesting user's roles (format: ROLE_{roleId});
                    // authorities is the set of roles the requested resource requires
                    log.info("request path: {}", path);
                    log.info("user role roleId: {}", roleId);
                    log.info("required authorities: {}", authorities);
                    return authorities.contains(roleId);
                })
                .map(AuthorizationDecision::new)
                .defaultIfEmpty(new AuthorizationDecision(false));
        return authorizationDecisionMono;
    }
}
```

12. Create the global filter AuthGlobalFilter.java. It parses the user information from the request and puts the user and tenant information into the request headers, so downstream services can read them directly instead of parsing the JWT again:

```java
package com.gitegg.gateway.filter;

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.text.ParseException;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.gateway.filter.GatewayFilterChain;
import org.springframework.cloud.gateway.filter.GlobalFilter;
import org.springframework.core.Ordered;
import org.springframework.http.HttpHeaders;
import org.springframework.http.server.reactive.ServerHttpRequest;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;

import com.gitegg.platform.base.constant.AuthConstant;
import com.nimbusds.jose.JWSObject;

import cn.hutool.core.util.StrUtil;
import lombok.extern.slf4j.Slf4j;
import reactor.core.publisher.Mono;

/**
 * Global filter that turns the logged-in user's JWT into user information headers.
 */
@Slf4j
@Component
public class AuthGlobalFilter implements GlobalFilter, Ordered {

    /**
     * Whether tenant mode is enabled.
     */
    @Value("${tenant.enable}")
    private Boolean enable;

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        String tenantId = exchange.getRequest().getHeaders().getFirst(AuthConstant.TENANT_ID);
        String token = exchange.getRequest().getHeaders().getFirst(AuthConstant.JWT_TOKEN_HEADER);
        if (StrUtil.isEmpty(tenantId) && StrUtil.isEmpty(token)) {
            return chain.filter(exchange);
        }
        Map<String, String> addHeaders = new HashMap<>();
        // If tenant mode is enabled, set the tenantId header
        if (enable && StrUtil.isEmpty(tenantId)) {
            addHeaders.put(AuthConstant.TENANT_ID, tenantId);
        }
        if (!StrUtil.isEmpty(token)) {
            try {
                // Parse the user information out of the token and put it into the headers
                String realToken = token.replace("Bearer ", "");
                JWSObject jwsObject = JWSObject.parse(realToken);
                String userStr = jwsObject.getPayload().toString();
                log.info("AuthGlobalFilter.filter() User:{}", userStr);
                addHeaders.put(AuthConstant.HEADER_USER, URLEncoder.encode(userStr, "UTF-8"));
            } catch (ParseException | UnsupportedEncodingException e) {
                e.printStackTrace();
            }
        }
        Consumer<HttpHeaders> httpHeaders = httpHeader -> {
            addHeaders.forEach(httpHeader::set);
        };
        ServerHttpRequest request = exchange.getRequest().mutate().headers(httpHeaders).build();
        exchange = exchange.mutate().request(request).build();
        return chain.filter(exchange);
    }

    @Override
    public int getOrder() {
        return 0;
    }
}
```

13. Add the permission-related configuration in Nacos:

```yaml
spring:
  jackson:
    time-zone: Asia/Shanghai
    date-format: yyyy-MM-dd HH:mm:ss
  security:
    oauth2:
      resourceserver:
        jwk-set-uri: 'http://127.0.0.1/gitegg-oauth/oauth/public_key'
# Multi-tenant configuration
tenant:
  # whether tenant mode is enabled
  enable: true
  # tables excluded from tenant isolation
  exclusionTable:
    - "t_sys_district"
    - "t_sys_tenant"
    - "t_sys_role"
    - "t_sys_resource"
    - "t_sys_role_resource"
  # tenant column name
  column: tenant_id
# gateway whitelist paths
white-list:
  urls:
    - "/gitegg-oauth/oauth/public_key"
```

II. Invalidating the JWT on logout

Because a JWT is stateless and not stored on the server, the server cannot invalidate it when the user logs out. There are two ways to reject a JWT after logout:

- JWT whitelist: on every successful login, store the JWT in the cache with the same TTL as the JWT itself; on logout, remove it from the cache. On every request, the Gateway first checks whether the JWT is in the whitelist: if so, validation continues; if not, access is denied.
- JWT blacklist: on every logout, store the JWT in the cache, parse its expiry time, and give the cache entry the same TTL. On every request, the Gateway first checks whether the JWT is in the blacklist: if so, access is denied; if not, validation continues.
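The blacklist entry's lifetime is the key detail: a revoked token's `jti` only needs to stay in the cache until the token itself would have expired. A minimal sketch of that TTL computation (class and method names are illustrative, not from GitEgg):

```java
public class BlacklistTtl {

    // How long (in seconds) a revoked token's jti should stay blacklisted:
    // exactly until the token's own exp claim, and never negative.
    public static long remainingTtlSeconds(long expEpochSeconds, long nowEpochSeconds) {
        return Math.max(0L, expEpochSeconds - nowEpochSeconds);
    }

    public static void main(String[] args) {
        // A token that expires one hour from "now" is blacklisted for 3600 seconds.
        System.out.println(remainingTtlSeconds(1_700_003_600L, 1_700_000_000L));
    }
}
```

Clamping to zero matters: if the token has already expired, there is nothing to blacklist, which is why the logout code below only writes to Redis when `exp - currentTimeSeconds > 0`.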
Whichever list is used, the principle is the same: store the JWT in the cache first, then judge its validity by its status. Pros and cons of the two approaches:

- Blacklist: stores far less data than the whitelist approach, but gives no picture of how many JWTs have been issued or how many users are currently logged in.
- Whitelist: makes it possible to approximate the number of logged-in users and to extend the system with a "kick user out" feature, but stores much more data, and keeping a large number of tokens in the cache means a breach could leak a lot of information.

All things considered, the blacklist approach is used for logout; real-time online-user statistics and kicking users out are left as extension features, so the login/logout flow stays free of extra business logic and the system stays loosely coupled. To keep the JWT state as accurate as possible, the logout API should be called not only from the logout button but also when the page or browser is closed. The token and refresh_token have different expiry times, each carried in the exp field of its own payload. Since we use the blacklist model, when the user logs out, the refresh_token is blacklisted as well, and the refresh flow must be customized to check whether the refresh_token has been blacklisted.

1. The logout endpoint blacklists both the token and the refresh_token:

```java
/**
 * Some thoughts on whether logout should require a valid login:
 * 1. If it does not, the caller still passes the token, but the system does not check its
 *    validity. If attacked with a flood of random tokens, Redis would eventually fill up.
 * 2. If it does, the system validates the token, and the refresh_token is passed as a
 *    parameter so it can be blacklisted too.
 * Conclusion: the logout endpoint requires a valid login.
 *
 * @param request
 * @return
 */
@PostMapping("/logout")
public Result logout(HttpServletRequest request) {
    String token = request.getHeader(AuthConstant.JWT_TOKEN_HEADER);
    String refreshToken = request.getParameter(AuthConstant.REFRESH_TOKEN);
    long currentTimeSeconds = System.currentTimeMillis() / GitEggConstant.Number.THOUSAND;
    // Add the token and the refresh_token to the blacklist together
    String[] tokenArray = new String[GitEggConstant.Number.TWO];
    tokenArray[GitEggConstant.Number.ZERO] = token.replace("Bearer ", "");
    tokenArray[GitEggConstant.Number.ONE] = refreshToken;
    for (int i = GitEggConstant.Number.ZERO; i < tokenArray.length; i++) {
        String realToken = tokenArray[i];
        JSONObject jsonObject = JwtUtils.decodeJwt(realToken);
        String jti = jsonObject.getAsString("jti");
        Long exp = Long.parseLong(jsonObject.getAsString("exp"));
        if (exp - currentTimeSeconds > GitEggConstant.Number.ZERO) {
            redisTemplate.opsForValue().set(AuthConstant.TOKEN_BLACKLIST + jti, jti,
                    (exp - currentTimeSeconds), TimeUnit.SECONDS);
        }
    }
    return Result.success();
}
```

2. In the Gateway's AuthorizationManager, add a check for blacklisted tokens:

```java
// If the token is on the blacklist, logout has been performed: deny access
String realToken = token.replace("Bearer ", "");
try {
    JWSObject jwsObject = JWSObject.parse(realToken);
    Payload payload = jwsObject.getPayload();
    JSONObject jsonObject = payload.toJSONObject();
    String jti = jsonObject.getAsString("jti");
    String blackListToken = (String) redisTemplate.opsForValue().get(AuthConstant.TOKEN_BLACKLIST + jti);
    if (!StringUtils.isEmpty(blackListToken)) {
        return Mono.just(new AuthorizationDecision(false));
    }
} catch (ParseException e) {
    e.printStackTrace();
}
```

3. Customize DefaultTokenServices to check whether the refresh_token has been blacklisted:

```java
@Slf4j
public class GitEggTokenServices extends DefaultTokenServices {

    private final RedisTemplate redisTemplate;

    public GitEggTokenServices(RedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    @Transactional(noRollbackFor = {InvalidTokenException.class, InvalidGrantException.class})
    @Override
    public OAuth2AccessToken refreshAccessToken(String refreshTokenValue, TokenRequest tokenRequest)
            throws AuthenticationException {
        JSONObject jsonObject = null;
        String jti = null;
        // If the refresh token is on the blacklist, logout has been performed: deny access
        try {
            JWSObject jwsObject = JWSObject.parse(refreshTokenValue);
            Payload payload = jwsObject.getPayload();
            jsonObject = payload.toJSONObject();
            jti = jsonObject.getAsString(TokenConstant.JTI);
            String blackListToken = (String) redisTemplate.opsForValue().get(AuthConstant.TOKEN_BLACKLIST + jti);
            if (!StringUtils.isEmpty(blackListToken)) {
                throw new InvalidTokenException("Invalid refresh token (blackList): " + refreshTokenValue);
            }
        } catch (ParseException e) {
            log.error("Error while checking the refresh token blacklist: {}", e);
        }
        OAuth2AccessToken oAuth2AccessToken = super.refreshAccessToken(refreshTokenValue, tokenRequest);
        // A refresh token must not be reused: once used (i.e. validated above), blacklist it.
        // The Redis write happens only after the refresh has completed.
        if (null != jsonObject && !StringUtils.isEmpty(jti)) {
            long currentTimeSeconds = System.currentTimeMillis() / GitEggConstant.Number.THOUSAND;
            Long exp = Long.parseLong(jsonObject.getAsString(TokenConstant.EXP));
            if (exp - currentTimeSeconds > GitEggConstant.Number.ZERO) {
                redisTemplate.opsForValue().set(AuthConstant.TOKEN_BLACKLIST + jti, jti,
                        (exp - currentTimeSeconds), TimeUnit.SECONDS);
            }
        }
        return oAuth2AccessToken;
    }
}
```

Testing:

1. Obtain a token with the password grant, adding the TenantId: 0 header.
2. Refresh the token with refresh_token.
3. Refresh with the same refresh_token again: because it has already been used once, the request is rejected with "refresh_token expired".

III. Automatically refreshing the token with refresh_token on the frontend

1. Use the axios-auth-refresh library to refresh the token whenever the backend returns a 401:

```javascript
import axios from 'axios'
import createAuthRefreshInterceptor from 'axios-auth-refresh'
import store from '@/store'
import storage from 'store'
import { serialize } from '@/utils/util'
import notification from 'ant-design-vue/es/notification'
import modal from 'ant-design-vue/es/modal'
import { VueAxios } from './axios'
import { ACCESS_TOKEN, REFRESH_ACCESS_TOKEN } from '@/store/mutation-types'

// Create the axios instance
const request = axios.create({
  // default prefix for API requests
  baseURL: process.env.VUE_APP_API_BASE_URL,
  timeout: 30000 // request timeout
})

// Called to refresh the token when the current one has expired
const refreshAuthLogic = failedRequest => axios.post(
  process.env.VUE_APP_API_BASE_URL + '/gitegg-oauth/oauth/token',
  serialize({
    client_id: process.env.VUE_APP_CLIENT_ID,
    client_secret: process.env.VUE_APP_CLIENT_SECRET,
    grant_type: 'refresh_token',
    refresh_token: storage.get(REFRESH_ACCESS_TOKEN)
  }),
  {
    headers: {
      'TenantId': process.env.VUE_APP_TENANT_ID,
      'Content-Type': 'application/x-www-form-urlencoded'
    }
  }
).then(tokenRefreshResponse => {
  if (tokenRefreshResponse.status === 200 && tokenRefreshResponse.data && tokenRefreshResponse.data.success) {
    const result = tokenRefreshResponse.data.data
    storage.set(ACCESS_TOKEN, result.tokenHead + result.token, result.expiresIn * 1000)
    storage.set(REFRESH_ACCESS_TOKEN, result.refreshToken, result.refreshExpiresIn * 1000)
    failedRequest.response.config.headers['Authorization'] = result.tokenHead + result.token
  }
  return Promise.resolve()
})

// Install the token-refresh interceptor
createAuthRefreshInterceptor(request, refreshAuthLogic, {
  pauseInstanceWhileRefreshing: true // pause other requests while the token is being refreshed
})

// Error handler
const errorHandler = (error) => {
  if (error.response) {
    const data = error.response.data
    if (error.response.status === 403) {
      notification.error({ message: 'Forbidden', description: data.message })
    } else if (error.response.status === 401 && !(data.result && data.result.isLogin)) {
      // Refreshing the token also timed out: send the user back to the login page
      modal.warn({
        title: 'Session expired',
        content: 'You have been inactive for too long. For security reasons, please log in again.',
        okText: 'Log in again',
        onOk () {
          store.dispatch('Timeout').then(() => {
            window.location.reload()
          })
        }
      })
    }
  }
  return Promise.reject(error)
}

// request interceptor
request.interceptors.request.use(config => {
  const token = storage.get(ACCESS_TOKEN)
  // If a token exists, attach it to every request; adjust as needed
  if (token) {
    config.headers['Authorization'] = token
    config.headers['TenantId'] = process.env.VUE_APP_TENANT_ID
  }
  return config
}, errorHandler)

// response interceptor
request.interceptors.response.use((response) => {
  const res = response.data
  if (res.code) {
    if (res.code !== 200) {
      notification.error({ message: 'Operation failed', description: res.msg })
      return Promise.reject(new Error(res.msg || 'Error'))
    } else {
      return response.data
    }
  } else {
    return response
  }
}, errorHandler)

const installer = {
  vm: {},
  install (Vue) {
    Vue.use(VueAxios, request)
  }
}

export default request
export { installer as VueAxios, request as axios }
```

IV. Remember-password feature

On a trusted computer, a remember-password feature can be offered. In a frontend/backend-separated project this only requires storing the password in localStorage and filling it in automatically whenever the login page is opened. Plain text is used here for simplicity; in a production system the password must be stored encrypted and the backend must validate the encrypted password.

1. Read the remembered credentials in created:

```javascript
created () {
  this.queryCaptchaType()
  this.$nextTick(() => {
    const rememberMe = storage.get(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-rememberMe')
    if (rememberMe) {
      const username = storage.get(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-username')
      const password = storage.get(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-password')
      if (username !== '' && password !== '') {
        this.form.setFieldsValue({ 'username': username })
        this.form.setFieldsValue({ 'password': password })
        this.form.setFieldsValue({ 'rememberMe': true })
      }
    }
  })
},
```

2. After each successful login, store or clear the credentials depending on whether "remember me" is ticked:

```javascript
// Check whether "remember me" is ticked
const rememberMe = this.form.getFieldValue('rememberMe')
const username = this.form.getFieldValue('username')
const password = this.form.getFieldValue('password')
if (rememberMe && username !== '' && password !== '') {
  storage.set(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-username', username, 60 * 60 * 24 * 7 * 1000)
  storage.set(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-password', password, 60 * 60 * 24 * 7 * 1000)
  storage.set(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-rememberMe', true, 60 * 60 * 24 * 7 * 1000)
} else {
  storage.remove(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-username')
  storage.remove(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-password')
  storage.remove(process.env.VUE_APP_TENANT_ID + '-' + process.env.VUE_APP_CLIENT_ID + '-rememberMe')
}
```

V. Locking the account after too many password attempts

For security, the system must guard against brute-forcing of user accounts. Most captchas can nowadays be cracked automatically, which makes brute-forcing easier, so the system needs the ability to lock an account after too many failed password attempts.

Spring Security's UserDetails interface defines the isAccountNonLocked method for checking whether an account is locked:

```java
public interface UserDetails extends Serializable {
    Collection<? extends GrantedAuthority> getAuthorities();
    String getPassword();
    String getUsername();
    boolean isAccountNonExpired();
    boolean isAccountNonLocked();
    boolean isCredentialsNonExpired();
    boolean isEnabled();
}
```

1. Create a LoginFailureListener that listens for Spring Security's AuthenticationFailureBadCredentialsEvent and uses a Redis counter to record the number of wrong passwords per account:

```java
/**
 * Invoked on login failure; locks the account after too many wrong passwords.
 *
 * @author GitEgg
 * @date 2021-03-12 17:57:05
 */
@Slf4j
@Component
@RequiredArgsConstructor(onConstructor_ = @Autowired)
public class LoginFailureListener implements ApplicationListener<AuthenticationFailureBadCredentialsEvent> {

    private final UserDetailsService userDetailsService;

    private final RedisTemplate redisTemplate;

    @Value("${system.maxTryTimes}")
    private int maxTryTimes;

    @Override
    public void onApplicationEvent(AuthenticationFailureBadCredentialsEvent event) {
        if (event.getException().getClass().equals(UsernameNotFoundException.class)) {
            return;
        }
        String userName = event.getAuthentication().getName();
        GitEggUserDetails user = (GitEggUserDetails) userDetailsService.loadUserByUsername(userName);
        if (null != user) {
            Object lockTimes = redisTemplate.boundValueOps(AuthConstant.LOCK_ACCOUNT_PREFIX + user.getId()).get();
            if (null == lockTimes || (int) lockTimes <= maxTryTimes) {
                redisTemplate.boundValueOps(AuthConstant.LOCK_ACCOUNT_PREFIX + user.getId())
                        .increment(GitEggConstant.Number.ONE);
            }
        }
    }
}
```

2. In GitEggUserDetailsServiceImpl, read the lock counter from Redis:

```java
// Check whether the account is locked (account/credential expiry checks could be added here too)
Object lockTimes = redisTemplate.boundValueOps(AuthConstant.LOCK_ACCOUNT_PREFIX + gitEggUser.getId()).get();
boolean accountNotLocked = true;
if (null != lockTimes && (int) lockTimes >= maxTryTimes) {
    accountNotLocked = false;
}
```

VI. Requiring a captcha on login

Captcha policy: the first few logins (configurable, default three) do not require a captcha; once the number of failed password attempts exceeds the threshold, a captcha is required. One approach to the login flow: on the login page, the user may choose their login mode. The OAuth setup supports three modes by default:

- username + password
- username + password + captcha
- mobile number + SMS code

The system defaults to username + password. When the number of wrong passwords in that mode exceeds the configured maximum (default: one), captcha login becomes mandatory; when the captcha attempts also exceed their limit (default: five), the account is locked for two hours before login can be attempted again. Since not every deployment uses SMS codes, this is an extension point: if needed, the system can force SMS-code-only login after too many wrong passwords, and locking after too many failures must always be configured.

1. Add the account checks in the custom GitEggUserDetailsServiceImpl:

```java
// Read the number of failed password attempts from Redis
Object lockTimes = redisTemplate.boundValueOps(AuthConstant.LOCK_ACCOUNT_PREFIX + gitEggUser.getId()).get();
// After too many failures, the grant type must be captcha or sms_captcha
if (null != lockTimes && (int) lockTimes >= maxNonCaptchaTimes
        && (StringUtils.isEmpty(authGrantType)
            || (!StringUtils.isEmpty(authGrantType)
                && !AuthEnum.SMS_CAPTCHA.code.equals(authGrantType)
                && !AuthEnum.CAPTCHA.code.equals(authGrantType)))) {
    throw new GitEggOAuth2Exception(ResultCodeEnum.INVALID_PASSWORD_CAPTCHA.msg);
}
// Check whether the account is locked (account/credential expiry checks could be added here too)
if (null != lockTimes && (int) lockTimes >= maxTryTimes) {
    throw new LockedException(ResultCodeEnum.PASSWORD_TRY_MAX_ERROR.msg);
}
// Check whether the account is disabled
String userStatus = gitEggUser.getStatus();
if (String.valueOf(GitEggConstant.DISABLE).equals(userStatus)) {
    throw new DisabledException(ResultCodeEnum.DISABLED_ACCOUNT.msg);
}
```

2. Intercept and uniformly handle OAuth2 exceptions:

```java
/**
 * Custom OAuth exception handler.
 */
@Slf4j
@RestControllerAdvice
public class GitEggOAuth2ExceptionHandler {

    @ExceptionHandler(InvalidTokenException.class)
    public Result handleInvalidTokenException(InvalidTokenException e) {
        return Result.error(ResultCodeEnum.UNAUTHORIZED);
    }

    @ExceptionHandler({UsernameNotFoundException.class})
    public Result handleUsernameNotFoundException(UsernameNotFoundException e) {
        return Result.error(ResultCodeEnum.INVALID_USERNAME_PASSWORD);
    }

    @ExceptionHandler({InvalidGrantException.class})
    public Result handleInvalidGrantException(InvalidGrantException e) {
        return Result.error(ResultCodeEnum.INVALID_USERNAME_PASSWORD);
    }

    @ExceptionHandler(InternalAuthenticationServiceException.class)
    public Result handleInvalidGrantException(InternalAuthenticationServiceException e) {
        Result result = Result.error(ResultCodeEnum.INVALID_USERNAME_PASSWORD);
        if (null != e) {
            String errorMsg = e.getMessage();
            if (ResultCodeEnum.INVALID_PASSWORD_CAPTCHA.getMsg().equals(errorMsg)) {
                // a captcha is required
                result = Result.error(ResultCodeEnum.INVALID_PASSWORD_CAPTCHA);
            } else if (ResultCodeEnum.PASSWORD_TRY_MAX_ERROR.getMsg().equals(errorMsg)) {
                // the account is locked
                result = Result.error(ResultCodeEnum.PASSWORD_TRY_MAX_ERROR);
            } else if (ResultCodeEnum.DISABLED_ACCOUNT.getMsg().equals(errorMsg)) {
                // the account is disabled
                result = Result.error(ResultCodeEnum.DISABLED_ACCOUNT);
            }
        }
        return result;
    }
}
```

3. On the login page, default to password login and switch to captcha login after too many failures:

```javascript
requestFailed (err) {
  this.isLoginError = true
  if (err && err.code === 427) {
    // Too many failed password attempts: switch to a captcha-based login mode
    if (this.customActiveKey === 'tab_account') {
      this.grantType = 'captcha'
    } else {
      this.grantType = 'sms_captcha'
    }
    this.loginErrorMsg = err.msg
    if (this.loginCaptchaType === 'sliding') {
      this.$refs.verify.show()
    }
  } else if (err) {
    this.loginErrorMsg = err.msg
  }
}
```

Remarks:

1. If a POST to /auth/token returns 401, HTTP Basic authentication was missing. What is HTTP Basic authentication? It is an authentication scheme of the HTTP protocol: concatenate the client id and client secret as "clientId:clientSecret", base64-encode the result, and send it in the request header, e.g. `Authorization: Basic ASDLKFALDSFAJSLDFKLASD=`, where `ASDLKFALDSFAJSLDFKLASD=` is the base64 encoding of clientId:clientSecret.
2. If the JWT never expires: in the custom TokenEnhancer, milliseconds were used when computing the expiry time, while OAuth2 interprets the value as seconds during validation, so the generated expiry timestamp lies far in the future and the token never expires.
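The Basic header construction from the first remark can be sketched in a few lines. The client id and secret below are placeholder values, not GitEgg's real credentials:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {

    // Build the HTTP Basic header described above: "Basic " + base64("clientId:clientSecret").
    public static String basicAuth(String clientId, String clientSecret) {
        String raw = clientId + ":" + clientSecret;
        return "Basic " + Base64.getEncoder().encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // "web" / "secret" are placeholder credentials for illustration only.
        System.out.println(basicAuth("web", "secret"));
        // → Basic d2ViOnNlY3JldA==
    }
}
```

This is the value the frontend effectively sends when it posts client_id and client_secret to the token endpoint; tools like curl produce the same header via `curl -u clientId:clientSecret`.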

SpringCloud Microservices in Action — Building an Enterprise Development Framework (22): Multi-Tenancy with the MybatisPlus TenantLineInnerInterceptor Plugin

多租户技术的基本概念:  多租户技术(英语:multi-tenancy technology)或称多重租赁技术,是一种软件架构技术,它是在探讨与实现如何于多用户的环境下共用相同的系统或程序组件,并且仍可确保各用户间数据的隔离性。  在云计算的加持之下,多租户技术被广为运用于开发云各式服务,不论是IaaS,PaaS还是SaaS,都可以看到多租户技术的影子。  前面介绍过GitEgg框架与数据库交互使用了Mybatis增强工具Mybatis-Plus,Mybatis-Plus提供了TenantLineInnerInterceptor租户处理器来实现多租户功能,其原理就是Mybatis-Plus实现了自定义Mybatis拦截器(Interceptor),在需要执行的sql后面自动添加租户的查询条件,实际和分页插件,数据权限拦截器是同样的实现方式。简而言之多租户技术就是可以让一套系统通过配置给不同的客户提供服务,每个客户看到的数据都是属于自己的,就好像每个客户都拥有自己一套独立完善的系统。下面是在GitEgg系统的应用配置:1、在gitegg-platform-mybatis工程下新建多租户组件配置文件TenantProperties.java和TenantConfig.java,TenantProperties.java用于系统读取配置文件,这里会在Nacos配置中心设置多组户的具体配置信息,TenantConfig.java是插件需要读取的配置有三个配置项:TenantId租户ID、TenantIdColumn多租户的字段名、ignoreTable不需要多租户隔离的表。TenantProperties.java:package com.gitegg.platform.mybatis.props; import lombok.Data; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.context.annotation.Configuration; import java.util.List; * 白名单配置 @Data @Configuration @ConfigurationProperties(prefix = "tenant") public class TenantProperties { * 是否开启租户模式 private Boolean enable; * 多租户字段名称 private String column; * 需要排除的多租户的表 private List<String> exclusionTable; }TenantConfig.java:package com.gitegg.platform.mybatis.config; import com.baomidou.mybatisplus.extension.plugins.handler.TenantLineHandler; import com.baomidou.mybatisplus.extension.plugins.inner.TenantLineInnerInterceptor; import com.gitegg.platform.boot.util.GitEggAuthUtils; import com.gitegg.platform.mybatis.props.TenantProperties; import lombok.RequiredArgsConstructor; import net.sf.jsqlparser.expression.Expression; import net.sf.jsqlparser.expression.NullValue; import net.sf.jsqlparser.expression.StringValue; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.autoconfigure.AutoConfigureBefore; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; * 多租户配置中心 * @author GitEgg @Configuration 
@RequiredArgsConstructor(onConstructor_ = @Autowired) @AutoConfigureBefore(MybatisPlusConfig.class) public class TenantConfig { private final TenantProperties tenantProperties; * 新多租户插件配置,一缓和二缓遵循mybatis的规则, * 需要设置 MybatisConfiguration#useDeprecatedExecutor = false * 避免缓存万一出现问题 * @return TenantLineInnerInterceptor @Bean public TenantLineInnerInterceptor tenantLineInnerInterceptor() { return new TenantLineInnerInterceptor(new TenantLineHandler() { * 获取租户ID * @return Expression @Override public Expression getTenantId() { String tenant = GitEggAuthUtils.getTenantId(); if (tenant != null) { return new StringValue(GitEggAuthUtils.getTenantId()); return new NullValue(); * 获取多租户的字段名 * @return String @Override public String getTenantIdColumn() { return tenantProperties.getColumn(); * 过滤不需要根据租户隔离的表 * 这是 default 方法,默认返回 false 表示所有表都需要拼多租户条件 * @param tableName 表名 @Override public boolean ignoreTable(String tableName) { return tenantProperties.getExclusionTable().stream().anyMatch( (t) -> t.equalsIgnoreCase(tableName) }2、可在工程下新建application.yml,配置将来需要在Nacos上配置的信息:tenant: # 是否开启租户模式 enable: true # 需要排除的多租户的表 exclusionTable: - "t_sys_district" - "oauth_client_details" # 租户字段名称 column: tenant_id3、修改MybatisPlusConfig.java,把多租户过滤器加载进来使其生效:package com.gitegg.platform.mybatis.config; import com.baomidou.mybatisplus.annotation.DbType; import com.baomidou.mybatisplus.extension.plugins.MybatisPlusInterceptor; import com.baomidou.mybatisplus.extension.plugins.inner.BlockAttackInnerInterceptor; import com.baomidou.mybatisplus.extension.plugins.inner.OptimisticLockerInnerInterceptor; import com.baomidou.mybatisplus.extension.plugins.inner.PaginationInnerInterceptor; import com.baomidou.mybatisplus.extension.plugins.inner.TenantLineInnerInterceptor; import com.gitegg.platform.mybatis.props.TenantProperties; import lombok.RequiredArgsConstructor; import org.mybatis.spring.annotation.MapperScan; import org.springframework.beans.factory.annotation.Autowired; import 
org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; @Configuration @RequiredArgsConstructor(onConstructor_ = @Autowired) @MapperScan("com.gitegg.**.mapper.**") public class MybatisPlusConfig { private final TenantLineInnerInterceptor tenantLineInnerInterceptor; private final TenantProperties tenantProperties; * 新的分页插件,一缓和二缓遵循mybatis的规则,需要设置 MybatisConfiguration#useDeprecatedExecutor = false * 避免缓存出现问题(该属性会在旧插件移除后一同移除) @Bean public MybatisPlusInterceptor mybatisPlusInterceptor() { MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor(); //多租户插件 if (tenantProperties.getEnable()) { interceptor.addInnerInterceptor(tenantLineInnerInterceptor); //分页插件 interceptor.addInnerInterceptor(new PaginationInnerInterceptor(DbType.MYSQL)); //防止全表更新与删除插件: BlockAttackInnerInterceptor BlockAttackInnerInterceptor blockAttackInnerInterceptor = new BlockAttackInnerInterceptor(); interceptor.addInnerInterceptor(blockAttackInnerInterceptor); return interceptor; * 乐观锁插件 当要更新一条记录的时候,希望这条记录没有被别人更新 * https://mybatis.plus/guide/interceptor-optimistic-locker.html#optimisticlockerinnerinterceptor @Bean public OptimisticLockerInnerInterceptor optimisticLockerInterceptor() { return new OptimisticLockerInnerInterceptor(); }4、在GitEggAuthUtils方法中新增获取租户信息的公共方法,租户信息在Gateway进行转发时进行设置,后面会说明如何讲租户信息设置到Header中:package com.gitegg.platform.boot.util; import cn.hutool.json.JSONUtil; import com.gitegg.platform.base.constant.AuthConstant; import com.gitegg.platform.base.domain.GitEggUser; import org.springframework.util.StringUtils; import javax.servlet.http.HttpServletRequest; import java.io.UnsupportedEncodingException; import java.net.URLDecoder; public class GitEggAuthUtils { * 获取用户信息 * @return GitEggUser public static GitEggUser getCurrentUser() { HttpServletRequest request = GitEggWebUtils.getRequest(); if (request == null) { return null; try { String user = request.getHeader(AuthConstant.HEADER_USER); if (StringUtils.isEmpty(user)) return 
null; String userStr = URLDecoder.decode(user,"UTF-8"); GitEggUser gitEggUser = JSONUtil.toBean(userStr, GitEggUser.class); return gitEggUser; } catch (UnsupportedEncodingException e) { e.printStackTrace(); return null; * 获取租户Id * @return tenantId public static String getTenantId() { HttpServletRequest request = GitEggWebUtils.getRequest(); if (request == null) { return null; try { String tenantId = request.getHeader(AuthConstant.TENANT_ID); String user = request.getHeader(AuthConstant.HEADER_USER); //如果请求头中的tenantId为空,那么尝试是否能够从登陆用户中去获取租户id if (StringUtils.isEmpty(tenantId) && !StringUtils.isEmpty(user)) String userStr = URLDecoder.decode(user,"UTF-8"); GitEggUser gitEggUser = JSONUtil.toBean(userStr, GitEggUser.class); if (null != gitEggUser) tenantId = gitEggUser.getTenantId(); return tenantId; } catch (UnsupportedEncodingException e) { e.printStackTrace(); return null; }5、GitEgg-Cloud工程中gitegg-gateway子工程的AuthGlobalFilter增加设置TenantId的过滤方法String tenantId = exchange.getRequest().getHeaders().getFirst(AuthConstant.TENANT_ID); String token = exchange.getRequest().getHeaders().getFirst(AuthConstant.JWT_TOKEN_HEADER); if (StrUtil.isEmpty(tenantId) && StrUtil.isEmpty(token)) { return chain.filter(exchange); Map<String, String> addHeaders = new HashMap<>(); // 如果系统配置已开启租户模式,设置tenantId if (enable && StrUtil.isEmpty(tenantId)) { addHeaders.put(AuthConstant.TENANT_ID, tenantId); }6、以上为后台的多租户功能集成步骤,在实际项目开发过程中,我们需要考虑到前端页面在租户信息上的配置,实现思路,不用的租户拥有不同的域名,前端页面根据当前域名获取到对应的租户信息,并在公共请求方法设置TenantId参数,保证每次请求能够携带租户信息。// request interceptor request.interceptors.request.use(config => { const token = storage.get(ACCESS_TOKEN) // 如果 token 存在 // 让每个请求携带自定义 token 请根据实际情况自行修改 if (token) { config.headers['Authorization'] = token config.headers['TenantId'] = process.env.VUE_APP_TENANT_ID return config }, errorHandler)
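上文多租户插件的关键点有两处:ignoreTable 依据配置的排除表名(忽略大小写)决定某张表是否跳过租户条件;未被排除的表则由插件在 SQL 上追加 tenant_id 条件。下面用一段不依赖 MyBatis-Plus 的最小 Java 示意还原这两步逻辑(TenantMatcher 为示例自拟的类名,拼接 SQL 仅作演示,实际工程中由 TenantLineInnerInterceptor 在解析 SQL 后自动完成):

```java
import java.util.List;

// 示例:模拟 TenantLineHandler 中排除表匹配与追加租户条件的核心逻辑
public class TenantMatcher {
    private final List<String> exclusionTables;

    public TenantMatcher(List<String> exclusionTables) {
        this.exclusionTables = exclusionTables;
    }

    // 与上文 anyMatch((t) -> t.equalsIgnoreCase(tableName)) 等价:排除表不做租户隔离
    public boolean ignoreTable(String tableName) {
        return exclusionTables.stream().anyMatch(t -> t.equalsIgnoreCase(tableName));
    }

    // 示意:需要隔离的表在查询上追加 tenant_id 条件(真实插件是改写 SQL 语法树)
    public String appendTenantCondition(String tableName, String tenantId) {
        if (ignoreTable(tableName) || tenantId == null) {
            return "SELECT * FROM " + tableName;
        }
        return "SELECT * FROM " + tableName + " WHERE tenant_id = '" + tenantId + "'";
    }

    public static void main(String[] args) {
        TenantMatcher m = new TenantMatcher(List.of("t_sys_district", "oauth_client_details"));
        System.out.println(m.ignoreTable("T_SYS_DISTRICT")); // true,匹配忽略大小写
        System.out.println(m.appendTenantCondition("t_sys_user", "1001"));
    }
}
```

可以看到,只要把 Nacos 中 tenant.exclusionTable 配置的表名传入构造器,ignoreTable 的行为就与上文 TenantLineHandler 一致。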

SpringCloud微服务实战——搭建企业级开发框架(二十一):基于RBAC模型的系统权限设计

RBAC(基于角色的权限控制)模型的核心是在用户和权限之间引入了角色的概念。取消了用户和权限的直接关联,改为通过用户关联角色、角色关联权限的方法来间接地赋予用户权限,从而达到用户和权限解耦的目的,RBAC介绍原文链接。RABC的好处职能划分更谨慎。对于角色的权限调整不仅仅只影响单个用户,而是会影响关联此角色的所有用户,管理员下发/回收权限会更为谨慎;便于权限管理。对于批量的用户权限调整,只需调整用户关联的角色权限即可,无需对每一个用户都进行权限调整,既大幅提升权限调整的效率,又降低漏调权限的概率;在不断的发展过程中,RBAC也因不同的需求而演化出了不同的版本,目前主要有以下几个版本:RBAC0,这是RBAC的初始形态,也是最原始、最简单的RBAC版本;RBAC1,基于RBAC0的优化,增加了角色的分层(即:子角色),子角色可以继承父角色的所有权限;RBAC2,基于RBAC0的另一种优化,增加了对角色的一些限制:角色互斥、角色容量等;RBAC3,最复杂也是最全面的RBAC模型,它在RBAC0的基础上,将RBAC1和RBAC2中的优化部分进行了整合;RBAC权限基本功能模块:RBAC功能模块RBAC权限基础表:1、用户表:t_sys_userCREATE TABLE `t_sys_user` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `account` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '账号', `nickname` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '昵称', `real_name` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '真实姓名', `gender` char(1) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '2' COMMENT '1 : 男,0 : 女', `email` varchar(64) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '邮箱', `mobile` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '电话', `password` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '密码', `status` tinyint(1) NULL DEFAULT 1 COMMENT '\'0\'禁用,\'1\' 启用, \'2\' 密码过期或初次未修改', `avatar` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '头像', `country` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '国家', `province` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '省', `city` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '市', `area` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '区', `street` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL 
COMMENT '街道详细地址', `comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '备注', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(1) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `INDEX_USER_NAME`(`real_name`) USING BTREE, INDEX `INDEX_USER_PHONE`(`mobile`) USING BTREE, INDEX `INDEX_USER_EMAIL`(`email`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '用户表' ROW_FORMAT = Dynamic;2、角色表:t_sys_roleCREATE TABLE `t_sys_role` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `parent_id` bigint(20) NULL DEFAULT 0 COMMENT '父id', `role_name` varchar(40) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '角色名称', `role_key` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '角色标识', `role_level` int(11) NULL DEFAULT NULL COMMENT '角色级别', `role_status` tinyint(1) NULL DEFAULT 1 COMMENT '1有效,0禁用', `comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '描述', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(1) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `INDEX_ROLE_NAME`(`role_name`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '角色表' ROW_FORMAT = Dynamic;3、权限表(资源表):t_sys_resourceCREATE TABLE `t_sys_resource` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `parent_id` bigint(20) NULL DEFAULT NULL COMMENT '父id', `tenant_id` 
bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `ancestors` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '所有上级组织id的集合,便于机构查找', `resource_name` varchar(40) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '资源名称', `resource_key` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '资源标识', `resource_type` char(1) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '资源类型 1、模块 2、菜单 3、按钮 4、链接', `resource_icon` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '资源图标', `resource_path` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '资源路径', `resource_url` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '资料链接', `resource_level` int(11) NULL DEFAULT NULL COMMENT '资源级别', `resource_show` tinyint(1) NULL DEFAULT NULL COMMENT '是否显示', `resource_cache` tinyint(1) NULL DEFAULT NULL COMMENT '是否缓存', `resource_page_name` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '资源页面名称', `resource_status` tinyint(1) NULL DEFAULT 1 COMMENT '1有效,0禁用', `comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '备注', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(1) NOT NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `INDEX_PERM_NAME`(`resource_name`) USING BTREE, INDEX `INDEX_PERM_PID`(`parent_id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '权限表' ROW_FORMAT = Dynamic;4、组织机构表:t_sys_organizationCREATE TABLE `t_sys_organization` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `parent_id` 
bigint(20) NULL DEFAULT NULL COMMENT '父组织id', `ancestors` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '所有上级组织id的集合,便于机构查找', `organization_type` char(1) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '组织类型:1:事业部 2:机构 3:盐城', `organization_name` varchar(40) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '组织名称', `organization_key` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '组织编码', `organization_icon` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '组织图标', `organization_level` int(11) NULL DEFAULT NULL COMMENT '组织级别(排序)', `organization_status` tinyint(1) NULL DEFAULT 1 COMMENT '1有效,0禁用', `province` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '省', `city` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '市', `area` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '区', `street` varchar(120) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '街道', `comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '描述', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建日期', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新日期', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(1) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `INDEX_ORG_NAME`(`organization_name`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '组织表' ROW_FORMAT = Dynamic;5、用户和角色关联关系表:t_sys_user_role(多对多)CREATE TABLE `t_sys_user_role` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `user_id` bigint(20) NOT NULL COMMENT '用户id', `role_id` bigint(20) NOT NULL COMMENT '角色id', 
`create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建人', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新人', `del_flag` tinyint(1) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE, INDEX `INDEX_USER_ID`(`user_id`) USING BTREE, INDEX `INDEX_ROLE_ID`(`role_id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '用户和角色关联表' ROW_FORMAT = Dynamic;6、机构和用户关联关系表:t_sys_organization_user(一对多)CREATE TABLE `t_sys_organization_user` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `organization_id` bigint(20) NOT NULL COMMENT '机构id', `user_id` bigint(20) NOT NULL COMMENT '用户id', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(1) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;7、角色和权限(资源)关联关系表:t_sys_role_resource(多对多)CREATE TABLE `t_sys_role_resource` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `role_id` bigint(20) NOT NULL COMMENT '角色id', `resource_id` bigint(20) NOT NULL COMMENT '资源id', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(1) NOT NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '角色和权限关联表' ROW_FORMAT = 
Dynamic;权限扩展表:1、机构角色表:t_sys_organization_role(某机构下所有人员都具有某种角色的权限)CREATE TABLE `t_sys_organization_role` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `organization_id` bigint(20) NOT NULL COMMENT '组织机构id', `role_id` bigint(20) NOT NULL COMMENT '角色id', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(1) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '可以给组织权限,在该组织下的所有用户都有此权限' ROW_FORMAT = Dynamic;2、数据权限配置表:t_sys_data_permissionCREATE TABLE `t_sys_data_permission` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `user_id` bigint(20) NOT NULL COMMENT '用户id', `organization_id` bigint(20) NOT NULL COMMENT '机构id', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建者', `update_time` datetime(0) NULL DEFAULT NULL COMMENT '更新时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '更新者', `del_flag` tinyint(1) NULL DEFAULT 0 COMMENT '1:删除 0:不删除', PRIMARY KEY (`id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;3、用户信息扩展表:t_sys_user_info这个根据自己业务具体需求进行扩展 CREATE TABLE `t_sys_user_info` ( `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '主键', `tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT '租户id', `parent_id` bigint(20) NULL DEFAULT 0 COMMENT '上级ID', `user_id` bigint(20) NULL DEFAULT NULL COMMENT '系统用户表用户ID', `wechat_open_id` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT '小程序用户openid', `wechat_platform_open_id` varchar(64) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' 
COMMENT '公众号用户openid', `wechat_union_id` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT '微信用户union id', `telephone` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '固定电话', `wechat_number` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '微信号', `qq_number` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'QQ号', `user_type` smallint(1) NULL DEFAULT 1 COMMENT '用户类型1、普通用户', `member_points` bigint(20) NULL DEFAULT 60 COMMENT '会员积分', `work_unit` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '工作单位', `duties` varchar(50) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '职务', `education` varchar(10) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '学历', `card_type` varchar(1) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '证件类型', `card_number` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '证件号码', `card_front` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '正面照片', `card_reverse` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '反面照片', `graduated` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '毕业院校', `gender` int(1) NULL DEFAULT NULL COMMENT '性别', `birthday` datetime(0) NULL DEFAULT NULL COMMENT '出生日期', `graduated_date` date NULL DEFAULT NULL COMMENT '毕业时间', `register_time` datetime(0) NULL DEFAULT NULL COMMENT '注册日期', `register_ip` varchar(45) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '注册ip', `last_login_time` datetime(0) NULL DEFAULT NULL COMMENT '最后登录日期', `last_login_ip` varchar(45) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '最后登录ip', `create_time` datetime(0) NULL DEFAULT NULL COMMENT '创建时间', `creator` bigint(20) NULL DEFAULT NULL COMMENT '创建人', `update_time` 
datetime(0) NULL DEFAULT NULL COMMENT '最后修改时间', `operator` bigint(20) NULL DEFAULT NULL COMMENT '最后修改人', `del_flag` tinyint(1) NOT NULL DEFAULT 0 COMMENT '是否删除', PRIMARY KEY (`id`) USING BTREE ) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = '微信注册会员表' ROW_FORMAT = DYNAMIC;这些表的实体类和mapper方法可以使用mybatis-plus代码生成,这里不详细介绍,后面会单独介绍集成代码生成模块。 因为是系统权限相关功能,这些表数据的管理代码存放在gitegg-service-system子工程中。这里仅介绍权限的配置及设计,系统权限的具体使用会在介绍SpringCloud OAuth2和Gateway的使用中具体介绍。
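上述表结构中,用户与权限的解耦依赖 t_sys_user_role 和 t_sys_role_resource 两张关联表。下面用内存 Map 模拟这两张表,演示"用户 -> 角色 -> 资源"的间接授权查找,以及为什么调整一个角色的权限会影响关联该角色的所有用户(RbacDemo 为示例自拟,非框架代码):

```java
import java.util.*;

// 示例:用内存 Map 模拟 t_sys_user_role 和 t_sys_role_resource 两张关联表,
// 演示 RBAC 中用户通过角色间接获得权限的查找过程
public class RbacDemo {
    // userId -> roleIds(对应 t_sys_user_role)
    static final Map<Long, Set<Long>> userRoles = new HashMap<>();
    // roleId -> resourceKeys(对应 t_sys_role_resource 关联到 t_sys_resource.resource_key)
    static final Map<Long, Set<String>> roleResources = new HashMap<>();

    // 汇总某用户通过其所有角色间接获得的资源标识集合
    static Set<String> permissionsOf(long userId) {
        Set<String> result = new TreeSet<>();
        for (long roleId : userRoles.getOrDefault(userId, Set.of())) {
            result.addAll(roleResources.getOrDefault(roleId, Set.of()));
        }
        return result;
    }

    public static void main(String[] args) {
        userRoles.put(1L, Set.of(10L, 11L));
        roleResources.put(10L, Set.of("system:user:list"));
        roleResources.put(11L, Set.of("system:user:list", "system:role:edit"));
        // 调整角色 11 的权限会同时影响所有关联该角色的用户,无需逐个用户修改
        System.out.println(permissionsOf(1L)); // [system:role:edit, system:user:list]
    }
}
```

真实系统中 permissionsOf 对应的是三表(或五表,含组织角色)联查 SQL,这里仅示意关联关系。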

SpringCloud微服务实战——搭建企业级开发框架(二):环境准备【下】

三、安装Mysql    这里介绍在CentOS7上通过安装通用预编译包方式安装MySql数据库:
增加用户名和用户组:
#groupadd mysql
#useradd -r -g mysql -s /bin/false mysql ---新建mysql用户并禁止登录shell
下载、解压MySQL通用编译包:
#wget ftp://ftp.mirrorservice.org/sites/ftp.mysql.com/Downloads/MySQL-5.7/mysql-5.7.11-linux-glibc2.5-x86_64.tar.gz
#cd /usr/local/ ---切换到存放源码包所在目录(这里也是安装目录)
#tar -xvf mysql-5.7.11-linux-glibc2.5-x86_64.tar.gz ---在当前目录解压通用编译包
#ln -s /usr/local/mysql-5.7.11-linux-glibc2.5-x86_64 mysql ---对解压出的目录建立软链接mysql,方便操作
设置权限并初始化MySQL系统授权表:
#cd mysql ---进入软链接目录
#mkdir /usr/local/mysql/data ---新建数据目录
#chmod 770 /usr/local/mysql/data ---更改data目录权限为770
#chown -R mysql . ---更改所有者,注意mysql后面有空格和点
#chgrp -R mysql . ---更改所属组
#bin/mysqld --initialize --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data ---以root初始化操作时要加--user=mysql参数,会生成一个随机密码(保存好,登录时使用)
#chown -R root . ---更改所有者为root
#chown -R mysql /usr/local/mysql/data ---更改data目录所有者为mysql
创建配置文件并后台启动MySQL:
#mv /etc/my.cnf /etc/my.cnf.bak ---将默认my.cnf改名或删除(默认的my.cnf会影响mysql启动)
#cd /usr/local/mysql/support-files ---进入MySQL安装目录支持文件目录
#cp my-default.cnf /etc/my.cnf ---复制模板为新的配置文件,根据需要修改文件中配置选项,如不修改配置MySQL则按默认配置参数运行
#/usr/local/mysql/bin/mysqld_safe --user=mysql & ---后台启动mysql
配置MySQL自动启动:
#cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysql ---复制启动文件
#chmod 755 /etc/init.d/mysql ---增加执行权限
#chkconfig --add mysql ---加入自动启动项
#chkconfig --level 345 mysql on ---设置MySQL在345等级自动启动
***把服务文件放到/etc/init.d/目录下面相当于改为了rpm包安装的服务使用方式。
配置MySQL系统环境变量:
#vi /etc/profile ---编辑/etc/profile文件,在最后添加如下两行:
PATH=/usr/local/mysql/bin:$PATH
export PATH ---不加的话登录mysql时会报错“-bash: mysql: command not found”
#source /etc/profile ---使环境变量及时生效
启动MySQL服务:
#/usr/local/mysql/support-files/mysql.server start ---启动mysql服务
#/usr/local/mysql/support-files/mysql.server restart ---重启mysql
#/usr/local/mysql/support-files/mysql.server stop ---停止mysql服务
也可以用service mysql start或systemctl start mysql这样的rpm服务命令,还可以使用绝对路径/etc/init.d/mysql start来启动mysql,因为上面已经把启动方式改为了rpm服务启动方式。
访问MySQL数据库:
#mysql -u root -p ---连接mysql,输入初始化时生成的密码
mysql> alter user 'root'@'localhost' identified by '123456'; ---修改root新密码
mysql> quit; ---退出,也可用exit;
#mysql -u root -p ---提示输入密码时输入新设置的密码登录
mysql> use mysql; ---访问数据库mysql
mysql> GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY '密码' WITH GRANT OPTION; ---创建可以远程连接的用户
创建SSL/RSA文件:
#cd /usr/local/mysql/bin ---切换目录
#mysql_ssl_rsa_setup --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data ---创建新的SSL文件
MySQL默认区分大小写,需要修改配置文件使其不区分大小写:在/etc/my.cnf中的[mysqld]后加入下面一行,并重启MySQL
lower_case_table_names=1
11、常见问题及解决方式:
a、登录时报错
#mysql -u root -p
报错:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO) ---(不输入密码时)
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES) ---(输入密码时)
解决方式:
#/etc/init.d/mysql stop ---停止mysql服务
#mysqld_safe --skip-grant-tables --skip-networking & ---跳过权限表控制,跳过TCP/IP协议,仅在本机访问
#mysql -u root -p mysql ---提示输入密码时直接回车
mysql> update user set authentication_string=password('123456') where user='root'; ---修改密码,在MySQL 5.7.9中密码字段名是authentication_string而不是原来的password了
mysql> flush privileges; ---刷新MySQL的系统权限相关表使其生效
mysql> quit; ---退出mysql
#/etc/init.d/mysql restart ---重启mysql服务
b、访问数据库时报错
#mysql -u root -p ---提示输入密码时输入新设置的密码
mysql> use mysql;
报错:
ERROR 1820 (HY000): You must SET PASSWORD before executing this statement
解决方式:
mysql> alter user user() identified by '123456'; ---再重新设置一下密码,注意方法与之前5.6版本的“SET PASSWORD = PASSWORD('new_password')”不同
c、启动MySQL服务报错
#systemctl start mysql
报错:
Starting MySQL.. ERROR!
The server quit without updating PID file (/usr/local/mysql/data/localhost.localdomain.pid).
解决方式:初始化时没有指定路径参数造成的,加上参数即可
#cd /usr/local/mysql
#bin/mysqld --initialize --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data
d、使用druid作为数据库连接池时,密码加密(找到maven目录下的druid包)
java -cp druid-1.0.14.jar com.alibaba.druid.filter.config.ConfigTools you_password
四、安装Redis    下面是在CentOS7中安装Redis的操作步骤,在命令行执行以下命令:
下载并解压Redis安装包
wget http://download.redis.io/releases/redis-5.0.5.tar.gz
cd /opt/software/
tar zxf redis-5.0.5.tar.gz -C /usr/local/src
编译并安装Redis
cd /usr/local/src/redis-5.0.5
make && make install
ln -s /usr/local/src/redis-5.0.5 /usr/local/redis
修改Redis配置文件
vi /usr/local/redis/redis.conf
#修改内容如下:
daemonize yes #开启后台运行
timeout 120 #超时时间
bind 0.0.0.0 #任何地址IP都可以登录redis
requirepass 123456 #redis密码123456
启动Redis
cd /usr/local/redis/src
./redis-server /usr/local/redis/redis.conf
测试安装配置是否成功
redis-cli -h 127.0.0.1 -p 6379 -a 123456
127.0.0.1:6379> KEYS *
(empty list or set)
127.0.0.1:6379> set user ray
127.0.0.1:6379> KEYS *
1) "user"
常见问题:redis不能远程连接时,可能是防火墙的问题,关闭防火墙或者开放对应的redis端口即可
五、安装Nacos    Nacos是一个更易于构建云原生应用的动态服务发现、配置管理和服务管理平台,Nacos致力于帮助您发现、配置和管理微服务。Nacos提供了一组简单易用的特性集,帮助您快速实现动态服务发现、服务配置、服务元数据及流量管理。Nacos发布地址:https://github.com/alibaba/nacos/releases,从里面选择需要的版本,这里选择1.4.0版本,下载地址为:https://github.com/alibaba/nacos/releases/download/1.4.0/nacos-server-1.4.0.tar.gz。
下载完成后,上传到测试Linux服务器解压。(如果只想本地windows安装,可以下载nacos-server-1.4.0.zip,解压后使用方法基本一致)
[root@localhost soft_home]# cd nacos
[root@localhost nacos]# ls
nacos-server-1.4.0.tar.gz
[root@localhost nacos]# tar -zxvf nacos-server-1.4.0.tar.gz
nacos/LICENSE
nacos/NOTICE
nacos/target/nacos-server.jar
nacos/conf/
nacos/conf/schema.sql
nacos/conf/nacos-mysql.sql
nacos/conf/application.properties.example
nacos/conf/nacos-logback.xml
nacos/conf/cluster.conf.example
nacos/conf/application.properties
nacos/bin/startup.sh
nacos/bin/startup.cmd
nacos/bin/shutdown.sh
nacos/bin/shutdown.cmd
[root@localhost nacos]# ls
nacos
nacos-server-1.4.0.tar.gz [root@localhost nacos]# cd nacos [root@localhost nacos]# ls bin conf LICENSE NOTICE target [root@localhost nacos]# cd bin [root@localhost bin]# ls shutdown.cmd shutdown.sh startup.cmd startup.sh [root@localhost bin]# pwd /usr/local/nacos/nacos/bin [root@localhost bin]#修改配置文件的数据库连接,修改为自己实际的数据#*************** Config Module Related Configurations ***************# ### If use MySQL as datasource: spring.datasource.platform=mysql ### Count of DB: db.num=1 ### Connect URL of DB: db.url.0=jdbc:mysql://127.0.0.1:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC db.user=nacos db.password=nacos在数据库中刷入/nacos/conf目录下的nacos-mysql.sql数据库脚本,如果需要其他配置或者了解使用方式可以访问官网,官网地址:https://nacos.io/zh-cn/docs/quick-start.html。进入到bin目录下直接执行sh startup.sh -m standalone。[root@localhost bin]# sh startup.sh -m standalone /usr/java/jdk1.8.0_77/bin/java -server -Xms2g -Xmx2g -Xmn1g -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/nacos/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.member.list= -Djava.ext.dirs=/usr/java/jdk1.8.0_77/jre/lib/ext:/usr/java/jdk1.8.0_77/lib/ext -Xloggc:/usr/local/nacos/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dloader.path=/usr/local/nacos/nacos/plugins/health,/usr/local/nacos/nacos/plugins/cmdb -Dnacos.home=/usr/local/nacos/nacos -jar /usr/local/nacos/nacos/target/nacos-server.jar --spring.config.location=file:/usr/local/nacos/nacos/conf/,classpath:/,classpath:/config/,file:./,file:./config/ --logging.config=/usr/local/nacos/nacos/conf/nacos-logback.xml --server.max-http-header-size=524288 nacos is starting with cluster nacos is starting,you can check the 
/usr/local/nacos/nacos/logs/start.out6、服务启动之后,可以访问http://ip:8848/nacos访问管理后台,默认用户名密码:nacos/nacosNacos登录页Nacos首页六、安装Sentinel下载Sentinel发布版本,地址https://github.com/alibaba/Sentinel/releases将下载的jar包sentinel-dashboard-1.8.0.jar上传到CentOS7服务器,Sentinel 是一个标准的 Spring Boot 应用,以 Spring Boot 的方式运行 jar 包即可,执行启动命令nohup java -Dserver.port=8086 -Dproject.name=sentinel-dashboard -jar sentinel-dashboard-1.8.0.jar >/dev/null &3、在浏览器输入测试的http://ip:8086 即可访问登录界面,默认用户名密码为sentinel/sentinelimage.png4、至此,一个简单的Sentinel就部署成功了,其他更详细功能及使用方式请参考:https://github.com/alibaba/Sentinel/wiki/%E4%BB%8B%E7%BB%8D七、安装IntelliJ IDEA    后台Java代码我们使用目前最流行的IntelliJ IDEA进行开发下载需要的安装包,IntelliJ IDEA下载,双击安装,一直点击下一步,尽量修改一下安装目录,不要安装在C盘即可。安装第一步安装第二步安装第三步安装第四步想办法获取到注册码配置默认的Maven和JDK路径SettingsMaven配置IDEA默认会读取到系统配置的JDK环境变量,具体项目可通过File -> Project Structure进行配置。通过插件中心,安装Lombok,MybatisX, Save Actions, Eclipse Code Formatter插件,后面会详细介绍几款插件的用法:插件安装前端开发所需环境及软件安装步骤:一、安装Node.js    如果是刚接触Vue的话,建议学习一下vue-element-admin系列文章,介绍得很详细,虽然ElementUI已经不更新了,但是这位前端大神写的文档比AntDesignVue文档高好几个层级,AntDesignVue适合掌握一定Vue能力的人去使用学习。Node.js下载地址:https://nodejs.org/en/download/releases/Node.js下载页Node.js下载页双击安装包,一步步点击下一步即可安装1安装2安装3安装4安装5安装6检查是否安装成功运行 -> cmd命令窗口 ,在命令行中输入 node -v ,显示node版本信息表示安装成功查看版本npm切换阿里源命令行中执行如下命令npm config set registry https://registry.npm.taobao.org/安装cnpmnpm install -g cnpm --registry=https://registry.npm.taobao.org安装yarnnpm install -g yarn yarn config set registry `https://registry.npm.taobao.org -g`这里之所以cnpm和yarn都安装,是因为其各有优缺点,在使用的时候选择自己习惯的即可。二、安装VSCode    Visual Studio Code (简称 VSCode / VSC) 是一款免费开源的现代化轻量级代码编辑器,支持几乎所有主流的开发语言的语法高亮、智能代码补全、自定义热键、括号匹配、代码片段、代码对比 Diff、Git 等特性,支持插件扩展,并针对网页开发和云端应用开发做了优化。下载合适的VSCode安装包,下载地址VSCode下载页我们这里选择的是.zip解压版,下载解压后就可使用。安装插件,打开VSCode,点击左侧下面的扩展按钮,选择需要的插件进行安装汉化插件: Chinese (Simplified) Language Pack for Visual Studio Code代码检查/格式化工具: ESLintHtml/js/css进行格式化对齐显示: BeautifyVue开发工具 : Vetur配置ESLint自动检测格式化前端代码在我们使用的前端框架中,已经生成eslint 
相关的配置文件.eslintignore和.eslintrc.js,当我们编辑代码保存时,ESlint插件会将我们的代码自动按照配置好的格式进行格式化,这里介绍在VSCode中如何配置使用Eslint。修改VSCode配置,文件->首选项->设置,在设置页,点击右上角第一个按钮,打开setting.json,修改内容为:{ //保存自动格式化 "editor.formatOnSave": true, //autoFixedOnSave 设置已废弃,采用如下新的设置 "editor.codeActionsOnSave": { "source.fixAll.eslint": true //.vue文件template格式化支持,并使用js-beautify-html插件 "vetur.format.defaultFormatter.html": "js-beautify-html", // js-beautify-html格式化配置,属性强制换行 "vetur.format.defaultFormatterOptions": { "js-beautify-html": { "wrap_attributes": "force-aligned" //后面不添加逗号 "vetur.format.defaultFormatter.js": "vscode-typescript", //方法后面加空格 "javascript.format.insertSpaceBeforeFunctionParenthesis": true, "files.autoSave": "off", "eslint.format.enable": true, //autoFix默认开启,只需输入字符串数组即可 "eslint.validate": [ "javascript", "javascriptreact", "vue", "html", "vue-html" "eslint.run": "onSave" }    以上基本开发环境配置操作完成,接下来就可以进行编码开发了。

SpringCloud微服务实战——搭建企业级开发框架(二):环境准备【上】

这里简单说明一下在Windows系统下开发SpringCloud项目所需要的的基本环境,这里只说明开发过程中基础必须的软件,其他扩展功能(Docker,k8s,MinIO,XXL-JOB,EKL,Keepalived,Nginx,RabbitMQ,Kafka等)用到的软件会在具体使用时详细说明,本地开发的环境软件以Windows版本的安装配置为例,数据库等中间件以Linux(CentOS7)的安装配置为例,其他系统Mac/Linux可自行配置。    后端开发需要准备的环境及软件有:JDK 1.8+,Maven 3.6.3+,Mysql 5.7.11+,Redis 5.0+,Nacos 1.4.0+,Sentinel 1.8.0+,IntelliJ IDEA 2020.2.1    前端开发需要准备的环境及软件有:Node.js 15.9.0+,npm/cnpm/yarn,Visual Studio Code    实际上环境软件可以使用Docker安装,更加简单方便,如果说自己为了更详细地了解各项配置及安装原理,还是通过软件包一步步安装配置(这里暂不深入讨论数据库、消息中间件等应不应该使用Docker安装的问题),以下为详细安装操作步骤,不是小白请略过...后端开发所需环境及软件安装步骤:一、安装JDK    2019年4月16日,Oracle发布了Oracle JDK的8u211和8u 212两个版本(属于JDK8系列),并从这两个版本开始将JDK的授权许可从BCL换成了OTN,也就是从这两个版本开始商用收费。当然,个人开发和测试并不会收费,那么商用环境我们可以有两个选择: 下载之前的版本(2019年1月15日发布的Oracle JDK 8u201和8u202)进行使用或者选择使用OpenJDK。目前我们一般的做法是在本地开发环境使用Oracle JDK ,在测试环境和正式环境中使用OpenJDK。为了保持使用的特性一致,需选择合适的版本。我们这里在开发过程中选择使用Oracke JDK, Oracle JDK官网下载选择页面已标注好8u211后面的版本和8u202之前的版本方便下载,https://www.oracle.com/java/technologies/oracle-java-archive-downloads.htmlOracle JDK官网下载页选择JDK免费版进行下载,根据自己合适的Windows系统版本下载,我这里选择Windows x64版本,提前做好Oracke JDK网站的系统注册和登录,否则在下载过程中会提示登录,选择页面:Oracle JDK下载页双击下载的Oracle JDK进行安装,根据提示一步步地点击下一步即可:安装1安装2安装3安装4安装5安装6配置环境变量:在系统环境变量中添加JAVA_HOME和 CLASSPATH,并将JAVA的bin目录加入到path中环境变量1环境变量2验证是否安装配置成功:运行 -> cmd命令窗口,在命令行中输入:java -version,下面出现版本信息说明安装配置成功。Java版本信息二、安装Maven    我们的SpringCloud项目使用Maven进行构建和依赖管理,Maven 的 Snapshot 版本与 Release 版本:1、Snapshot 版本代表不稳定、尚处于开发中的版本;2、Release 版本则代表稳定的版本。Gradle 作为构建工具最近几年也比流行,和Maven比较各有优缺点吧,如果说哪一个比较好,这个仁者见仁智者见智,我们这里仍选择Maven进行项目构建。下载安装:Maven(http://maven.apache.org/download.cgi)需要JDK的支持,我们这里选择最新的Manven版本3.6.3,需要JDK1.7以上的支持,JDK的安装以及配置在上面我们已经完成。 下载Maven的zip包: apache-maven-3.6.3-bin.zipMaven下载页配置环境:在系统环境变量中添加 M2_HOME 和 MAVEN_HOME,最后在PATH中添加Maven的bin目录: %MAVEN_HOME%\binM2_HOME 和 MAVEN_HOMEpath验证是否安装配置成功:运行 -> cmd命令窗口,在命令行中输入:mvn -version 
,如下图所示,展示版本信息说明安装配置成功。Maven版本信息注册阿里云私服并获取私服仓库地址:我们可以选择安装Nexus作为Maven仓库管理器,也可以使用阿里云提供的Maven私服,配置方式都是一样的,这里我们选择使用阿里云的Maven私服,如果是企业使用,这里建议申请私有仓库:阿里Maven仓库私有仓库私有仓库地址5、配置Maven私服地址和本地仓库路径,请按下面的注释进行替换为自己的私有仓库信息。<?xml version="1.0" encoding="UTF-8"?> <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <!--请替换为自己本地的仓库地址--> <localRepository>D:\maven\repository</localRepository> <mirrors> <mirror> <id>mirror</id> <mirrorOf>!rdc-releases,!rdc-snapshots</mirrorOf> <name>mirror</name> <url>http://maven.aliyun.com/nexus/content/groups/public</url> </mirror> </mirrors> <servers> <server> <id>rdc-releases</id> <username>用户名/密码请替换为自己阿里云仓库的</username> <password>用户名/密码请替换为自己阿里云仓库的</password> </server> <server> <id>rdc-snapshots</id> <username>用户名/密码请替换为自己阿里云仓库的</username> <password>用户名/密码请替换为自己阿里云仓库的</password> </server> </servers> <profiles> <profile> <id>nexus</id> <repositories> <repository> <id>central</id> <url>http://maven.aliyun.com/nexus/content/groups/public</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>snapshots</id> <url>http://maven.aliyun.com/nexus/content/groups/public</url> <releases> <enabled>false</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </repository> <repository> <id>rdc-releases</id> <!--下面的url替换为自己的阿里云私服地址--> <url>替换为自己的阿里云私服地址</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>rdc-snapshots</id> <url>替换为自己的阿里云私服地址</url> <releases> <enabled>false</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>central</id> <url>http://maven.aliyun.com/nexus/content/groups/public</url> <releases> <enabled>true</enabled> 
</releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>snapshots</id> <url>http://maven.aliyun.com/nexus/content/groups/public</url> <releases> <enabled>false</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>rdc-releases</id> <!--下面的url替换为自己的阿里云私服地址--> <url>替换为自己的阿里云私服地址</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>rdc-snapshots</id> <!--下面的url替换为自己的阿里云私服地址--> <url>替换为自己的阿里云私服地址</url> <releases> <enabled>false</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>nexus</activeProfile> </activeProfiles> </settings>
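上面的 settings.xml 中,各仓库通过 <releases>/<snapshots> 的 enabled 开关对正式版本与快照版本分流,而 Maven 判断一个版本是否为快照,约定就是版本号以 -SNAPSHOT 结尾。下面用一段最小的 Java 示意这一约定(MavenVersionDemo 为示例自拟):

```java
// 示例:Maven 按版本号是否以 -SNAPSHOT 结尾区分快照版本与正式(Release)版本
public class MavenVersionDemo {
    static boolean isSnapshot(String version) {
        return version != null && version.endsWith("-SNAPSHOT");
    }

    public static void main(String[] args) {
        System.out.println(isSnapshot("1.0.0-SNAPSHOT")); // true,走 snapshots 仓库
        System.out.println(isSnapshot("1.0.0"));          // false,走 releases 仓库
    }
}
```

这也解释了为什么配置中 rdc-snapshots 仓库只开启 snapshots、rdc-releases 仓库只开启 releases:同一个构件的快照与正式版本会被分别解析到不同仓库。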

【SpringCloud微服务实战】搭建企业级应用开发框架(一):架构说明

SpringCloud分布式应用微服务系统架构图:
springcloud微服务系统架构图
SpringCloud分布式应用微服务系统组件列表:
微服务框架组件:Spring Boot2 + SpringCloud Hoxton.SR8 + SpringCloud Alibaba
Spring Boot Admin: 管理和监控SpringBoot应用程序的微服务健康状态
数据持久化组件:MySql + Druid + MyBatis + MyBatis-Plus
Mycat: 中间件实现数据库读写分离
Seata: 分布式事务管理,跨服务的业务操作保持数据一致性
高性能的key-value缓存数据库:Redis + RedissonClient + RedisTemplate
API接口文档: Swagger2 + knife4j
接口参数校验:spring-boot-starter-validation
Nacos:一个更易于构建云原生应用的动态服务发现、配置管理和服务管理平台
Sentinel:把流量作为切入点,从流量控制、熔断降级、系统负载保护等多个维度保护服务的稳定性
OpenFeign: 微服务架构下服务之间的调用的解决方案 + Ribbon实现负载均衡/高可用重试机制
Gateway: 微服务路由转发 + 聚合knife4j微服务文档 + 【Gateway+OAuth2+JWT微服务统一认证授权】
Oauth2:SpringSecurity单点登录功能支持多终端认证授权 + RBAC权限框架
验证码:集成滑动验证码【AJ-Captcha】 + 图片验证码【EasyCaptcha】
多租户: 基于Mybatis-Plus【TenantLineInnerInterceptor】插件实现多租户功能
数据权限: 基于Mybatis-Plus【DataPermissionHandler】分页插件实现可配置的数据权限功能
对象存储服务(OSS):MinIO + 阿里云 + 七牛云 + 腾讯云 + 百度云 + 华为云
工作流:Flowable轻量级业务流程引擎
XXL-JOB:分布式任务调度平台,作业调度系统
Ant-design-vue + ElementUI (基础)优秀流行的前端开源框架整合
uni-app: 可发布到iOS、Android、Web(响应式)、以及各种小程序(微信/支付宝/百度/头条/QQ/钉钉/淘宝)、快应用等多个平台 (本框架中主要用于H5、小程序)
Flutter: 给开发者提供简单、高效的方式来构建和部署跨平台、高性能移动应用 (本框架中主要用于移动应用)
ELK: Elasticsearch + Logstash + Kibana分布式日志监控平台
代码生成器: 基于Mybatis-Plus代码生成插件开发的,便捷可配置的代码生成器
Keepalived + Nginx: 高可用 + 高性能的HTTP和反向代理web服务器
DevOps: kubernetes + docker + jenkins 实现持续集成(CI)和持续交付(CD)
数据报表:基于Ant-design-vue + Echarts实现的自定义数据可视化报表
GitEgg-Cloud是一款基于SpringCloud整合搭建的企业级微服务应用开发框架,开源项目地址:
Gitee: https://gitee.com/wmz1930/GitEgg
GitHub: https://github.com/wmz1930/GitEgg
欢迎感兴趣的小伙伴Star支持一下。

SpringCloud微服务实战——搭建企业级开发框架(二十):集成Redis缓存

这一章我们介绍在系统中引入redisson-spring-boot-starter依赖来实现Redis缓存管理。

1、在GitEgg-Platform中新建gitegg-platform-redis,用于管理工程中用到的Redis公共及通用方法。

<!-- redisson Redis客户端 -->
<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson-spring-boot-starter</artifactId>
</dependency>

2、在gitegg-platform-bom的pom.xml文件中添加gitegg-platform-redis

<!-- gitegg cache自定义扩展 -->
<dependency>
    <groupId>com.gitegg.platform</groupId>
    <artifactId>gitegg-platform-redis</artifactId>
    <version>${gitegg.project.version}</version>
</dependency>

3、GitEgg-Platform重新install,在GitEgg-Cloud子工程gitegg-service-system的SystemController.java中添加设置和获取缓存的测试方法

private final RedissonClient redisson;

private final RedisTemplate<String, String> template;

@ApiOperation(value = "缓存测试设置值")
@GetMapping(value = "redis/set")
public Result redisSet(@RequestParam("id") String id) {
    RMap<String, String> m = redisson.getMap("test", StringCodec.INSTANCE);
    m.put("1", id);
    return Result.success("设置成功");
}

@ApiOperation(value = "缓存测试获取值")
@GetMapping(value = "redis/get")
public Result redisGet() {
    BoundHashOperations<String, String, String> hash = template.boundHashOps("test");
    String t = hash.get("1");
    return Result.success(t);
}

4、gitegg-service-system中的GitEggSystemApplication.java添加@EnableCaching注解

package com.gitegg.service.system;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.ComponentScan;

/**
 * gitegg-system 启动类
 */
@EnableDiscoveryClient
@ComponentScan(basePackages = "com.gitegg")
@MapperScan("com.gitegg.*.*.mapper")
@SpringBootApplication
@EnableCaching
public class GitEggSystemApplication {

    public static void main(String[] args) {
        SpringApplication.run(GitEggSystemApplication.class, args);
    }
}

5、在Nacos配置文件中添加redis的相关配置,这里使用单机版redis,其他模式配置请参考官方文档

spring:
  redis:
    database: 1
    host: 127.0.0.1
    port: 6379
    password: root
    ssl: false
    timeout: 2000
    redisson:
      config: |
        singleServerConfig:
          idleConnectionTimeout: 10000
          connectTimeout: 10000
          timeout: 3000
          retryAttempts: 3
          retryInterval: 1500
          password: root
          subscriptionsPerConnection: 5
          clientName: null
          address: "redis://127.0.0.1:6379"
          subscriptionConnectionMinimumIdleSize: 1
          subscriptionConnectionPoolSize: 50
          connectionMinimumIdleSize: 32
          connectionPoolSize: 64
          database: 0
          dnsMonitoringInterval: 5000
        threads: 0
        nettyThreads: 0
        codec: !<org.redisson.codec.JsonJacksonCodec> {}
        transportMode: "NIO"

6、启动项目,使用swagger进行测试。通过以上设置的值和获取的结果可知,我们配置的缓存已生效。
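为了更直观地说明上面Redis缓存的读写套路,下面给出一段与框架无关的缓存旁路(cache-aside)示意代码:先查缓存,未命中时回源并写回缓存。这里用HashMap代替Redis、用计数器观察回源次数,类名CacheAsideSketch与回源函数均为本文示例假设,并非GitEgg的实际实现。

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// 缓存旁路读取示意:命中直接返回,未命中回源并写回缓存
public class CacheAsideSketch {
    private final Map<String, String> cache = new HashMap<>();
    private int dbHits = 0; // 回源次数,便于观察缓存是否生效

    public String get(String key, Function<String, String> loadFromDb) {
        String cached = cache.get(key);
        if (cached != null) {
            return cached; // 缓存命中,不再访问数据库
        }
        dbHits++;
        String value = loadFromDb.apply(key);
        cache.put(key, value); // 写回缓存,后续请求直接命中
        return value;
    }

    public int getDbHits() {
        return dbHits;
    }
}
```

第二次读取同一个key时不再回源,这正是@EnableCaching开启缓存后希望达到的效果。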

SpringCloud微服务实战——搭建企业级开发框架(十九):Gateway使用knife4j聚合微服务文档

本章介绍Spring Cloud Gateway网关如何集成knife4j,通过网关聚合所有的Swagger微服务文档1、gitegg-gateway中引入knife4j依赖,如果没有后端代码编写的话,仅仅引入一个swagger的前端ui模块就可以了<dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> </dependency> <dependency> <groupId>com.github.xiaoymin</groupId> <artifactId>knife4j-spring-ui</artifactId> </dependency>2、修改配置文件,增加knife4j、Swagger2的配置server: port: 80 spring: application: name: gitegg-service-gateway cloud: nacos: discovery: server-addr: 127.0.0.1:8848 config: server-addr: 127.0.0.1:8848 file-extension: yaml group: DEFAULT_GROUP enabled: true gateway: discovery: locator: enabled: true routes: - id: gitegg-service-system uri: lb://gitegg-service-system predicates: - Path=/gitegg-system/** filters: - SwaggerHeaderFilter - StripPrefix=1 - id: gitegg-service-base uri: lb://gitegg-service-base predicates: - Path=/gitegg-base/** filters: - SwaggerHeaderFilter - StripPrefix=1文档聚合业务编码在我们使用Spring Boot等单体架构集成swagger项目时,是通过对包路径进行业务分组,然后在前端进行不同模块的展示,而在微服务架构下,我们的一个服务就类似于原来我们写的一个业务组springfox-swagger提供的分组接口是swagger-resource,返回的是分组接口名称、地址等信息在Spring Cloud微服务架构下,我们需要重写该接口,主要是通过网关的注册中心动态发现所有的微服务文档,代码如下:package com.gitegg.gateway.config; import lombok.AllArgsConstructor; import lombok.extern.slf4j.Slf4j; import org.springframework.cloud.gateway.config.GatewayProperties; import org.springframework.cloud.gateway.route.RouteLocator; import org.springframework.cloud.gateway.support.NameUtils; import org.springframework.context.annotation.Primary; import org.springframework.stereotype.Component; import springfox.documentation.swagger.web.SwaggerResource; import springfox.documentation.swagger.web.SwaggerResourcesProvider; import java.util.ArrayList; import java.util.List; @Slf4j @Component @Primary @AllArgsConstructor public class SwaggerResourceConfig implements SwaggerResourcesProvider { private final RouteLocator routeLocator; private final GatewayProperties gatewayProperties; @Override public List<SwaggerResource> get() { List<SwaggerResource> resources = 
new ArrayList<>(); List<String> routes = new ArrayList<>(); routeLocator.getRoutes().subscribe(route -> routes.add(route.getId())); gatewayProperties.getRoutes().stream().filter(routeDefinition -> routes.contains(routeDefinition.getId())).forEach(route -> { route.getPredicates().stream() .filter(predicateDefinition -> ("Path").equalsIgnoreCase(predicateDefinition.getName())) .forEach(predicateDefinition -> resources.add(swaggerResource(route.getId(), predicateDefinition.getArgs().get(NameUtils.GENERATED_NAME_PREFIX + "0") .replace("**", "v2/api-docs?group=1.X版本")))); return resources; private SwaggerResource swaggerResource(String name, String location) { log.info("name:{},location:{}",name,location); SwaggerResource swaggerResource = new SwaggerResource(); swaggerResource.setName(name); swaggerResource.setLocation(location); swaggerResource.setSwaggerVersion("1.0.0"); return swaggerResource; }package com.gitegg.gateway.filter; import org.apache.commons.lang.StringUtils; import org.springframework.cloud.gateway.filter.GatewayFilter; import org.springframework.cloud.gateway.filter.factory.AbstractGatewayFilterFactory; import org.springframework.http.server.reactive.ServerHttpRequest; import org.springframework.stereotype.Component; import org.springframework.web.server.ServerWebExchange; @Component public class SwaggerHeaderFilter extends AbstractGatewayFilterFactory { private static final String HEADER_NAME = "X-Forwarded-Prefix"; private static final String URI = "/v2/api-docs"; @Override public GatewayFilter apply(Object config) { return (exchange, chain) -> { ServerHttpRequest request = exchange.getRequest(); String path = request.getURI().getPath(); if (!StringUtils.endsWithIgnoreCase(path,URI )) { return chain.filter(exchange); String basePath = path.substring(0, path.lastIndexOf(URI)); ServerHttpRequest newRequest = request.mutate().header(HEADER_NAME, basePath).build(); ServerWebExchange newExchange = exchange.mutate().request(newRequest).build(); return 
chain.filter(newExchange); }package com.gitegg.gateway.handler; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.http.HttpStatus; import org.springframework.http.ResponseEntity; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RestController; import reactor.core.publisher.Mono; import springfox.documentation.swagger.web.*; import java.util.Optional; @RestController public class SwaggerHandler { @Autowired(required = false) private SecurityConfiguration securityConfiguration; @Autowired(required = false) private UiConfiguration uiConfiguration; private final SwaggerResourcesProvider swaggerResources; @Autowired public SwaggerHandler(SwaggerResourcesProvider swaggerResources) { this.swaggerResources = swaggerResources; @GetMapping("/swagger-resources/configuration/security") public Mono<ResponseEntity<SecurityConfiguration>> securityConfiguration() { return Mono.just(new ResponseEntity<>( Optional.ofNullable(securityConfiguration).orElse(SecurityConfigurationBuilder.builder().build()), HttpStatus.OK)); @GetMapping("/swagger-resources/configuration/ui") public Mono<ResponseEntity<UiConfiguration>> uiConfiguration() { return Mono.just(new ResponseEntity<>( Optional.ofNullable(uiConfiguration).orElse(UiConfigurationBuilder.builder().build()), HttpStatus.OK)); @GetMapping("/swagger-resources") public Mono<ResponseEntity> swaggerResources() { return Mono.just((new ResponseEntity<>(swaggerResources.get(), HttpStatus.OK))); }3、访问gitegg-gateway服务地址http://127.0.0.1/doc.html,可以看到聚合后的文档image.png
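上面SwaggerResourceConfig和SwaggerHeaderFilter的核心其实是两个字符串改写:把路由断言的Path(如/gitegg-system/**)改写为该微服务的文档地址,以及从文档请求路径中截取服务前缀写入X-Forwarded-Prefix。下面用一段独立的Java示意这两步,分组名"1.X版本"沿用正文代码中的取值:

```java
// 网关聚合swagger文档的两个路径改写规则示意
public class SwaggerLocationSketch {

    // 路由Path断言 -> 微服务文档地址:/gitegg-system/** -> /gitegg-system/v2/api-docs?group=1.X版本
    public static String toApiDocsLocation(String pathPredicate) {
        return pathPredicate.replace("**", "v2/api-docs?group=1.X版本");
    }

    // SwaggerHeaderFilter的前缀计算:/gitegg-system/v2/api-docs -> /gitegg-system
    public static String basePath(String path) {
        int idx = path.lastIndexOf("/v2/api-docs");
        return idx < 0 ? "" : path.substring(0, idx);
    }
}
```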

SpringCloud微服务实战——搭建企业级开发框架(十八):集成Gateway实现微服务路由转发

在微服务架构里,服务的粒度被进一步细分,各个业务服务可以被独立的设计、开发、测试、部署和管理。这时,各个独立部署单元可以由不同的开发测试团队维护,可以使用不同的编程语言和技术平台进行设计,这就要求必须使用一种语言和平台无关的服务协议作为各个单元间的通讯方式。API 网关的定义:网关的角色是作为一个 API 架构,用来保护、增强和控制对于 API 服务的访问。API 网关是一个处于应用程序或服务(提供 REST API 接口服务)之前的系统,用来管理授权、访问控制和流量限制等,这样 REST API 接口服务就被 API 网关保护起来,对所有的调用者透明。因此,隐藏在 API 网关后面的业务系统就可以专注于创建和管理服务,而不用去处理这些策略性的基础设施。Gateway是什么:Spring Cloud Gateway是Spring官方基于Spring 5.0,Spring Boot 2.0和Project Reactor等技术开发的网关,Spring Cloud Gateway旨在为微服务架构提供一种简单而有效的统一的API路由管理方式。Spring Cloud Gateway作为Spring Cloud生态系中的网关,目标是替代ZUUL,其不仅提供统一的路由方式,并且基于Filter链的方式提供了网关基本的功能,例如:安全,监控/埋点,和限流等。为什么用Gateway:Spring Cloud Gateway 可以看做是一个 Zuul 1.x 的升级版和代替品,比 Zuul 2 更早的使用 Netty 实现异步 IO,从而实现了一个简单、比 Zuul 1.x 更高效的、与 Spring Cloud 紧密配合的 API 网关。Spring Cloud Gateway 里明确的区分了 Router 和 Filter,并且一个很大的特点是内置了非常多的开箱即用功能,并且都可以通过 SpringBoot 配置或者手工编码链式调用来使用。比如内置了 10 种 Router,使得我们可以直接配置一下就可以随心所欲的根据 Header、或者 Path、或者 Host、或者 Query 来做路由。比如区分了一般的 Filter 和全局 Filter,内置了 20 种 Filter 和 9 种全局 Filter,也都可以直接用。当然自定义 Filter 也非常方便。1、在GitEgg-Cloud工程的子工程gitegg-gateway中引入Nacos和Spring Cloud Gateway的依赖<dependencies> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-gateway</artifactId> </dependency> </dependencies>2、新建GitEggGatewayApplication.javapackage com.gitegg.gateway; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.cloud.client.discovery.EnableDiscoveryClient; @EnableDiscoveryClient @SpringBootApplication public class GitEggGatewayApplication { public static void main(String[] args) { SpringApplication.run(GitEggGatewayApplication.class,args); }3、新建bootstrap.yml配置文件server: port: 80 spring: application: name: gitegg-service-gateway cloud: nacos: discovery: server-addr: 127.0.0.1:8848 config: server-addr: 127.0.0.1:8848 file-extension: yaml group: 
DEFAULT_GROUP enabled: true gateway: discovery: locator: enabled: true routes: - id: gitegg-service-system uri: lb://gitegg-service-system predicates: - Path=/gitegg-system/** filters: - StripPrefix=14、在gitegg-cloud-system的SystemController.java添加测试方法:@ApiOperation(value = "Gateway路由转发测试") @GetMapping(value = "gateway/forward") public Result gatewayForward() { return Result.success("gitegg-service-system测试数据"); }5、启动gitegg-cloud-system和gitegg-gateway服务,在浏览器中访问gitegg-gateway的服务端口+ /gitegg-system/ + /system/gateway/forward,可以看到页面返回的数据是访问的gitegg-cloud-system方法image.png
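配置中StripPrefix=1的含义是:转发前去掉请求路径的第一段,于是/gitegg-system/system/gateway/forward到达gitegg-service-system时变为/system/gateway/forward。下面用一段独立的Java示意这个过滤器的路径处理逻辑(仅演示路径改写,不涉及真实转发):

```java
// 模拟Gateway的StripPrefix=n过滤器:去掉路径前n段后再转发
public class StripPrefixSketch {
    public static String stripPrefix(String path, int parts) {
        String p = path;
        for (int i = 0; i < parts; i++) {
            int idx = p.indexOf('/', 1); // 跳过开头的'/',找到下一段的起点
            p = idx < 0 ? "/" : p.substring(idx);
        }
        return p;
    }
}
```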

SpringCloud微服务实战——搭建企业级开发框架(十七):Sentinel+Nacos配置持久化

Sentinel Dashboard中添加的规则是存储在内存中的,我们的微服务或者Sentinel一重启,规则就丢失了。现在我们将Sentinel规则持久化配置到Nacos中,在Nacos中添加规则,然后同步到Sentinel Dashboard服务中。Sentinel支持以下几种规则:流量控制规则、熔断降级规则、系统保护规则、来源访问控制规则和热点参数规则,具体可查看官网 Sentinel 规则。我们以流控规则为例进行配置,其他规则可自行配置测试。

流量规则(FlowRule)的重要属性:
- resource:资源名,即规则的作用对象
- count:限流阈值
- grade:限流阈值类型,QPS模式(1)或并发线程数模式(0),默认QPS模式
- limitApp:流控针对的调用来源,默认default,代表不区分调用来源
- strategy:调用关系限流策略:直接、链路、关联,默认根据资源本身(直接)
- controlBehavior:流控效果(直接拒绝/WarmUp/匀速+排队等待,不支持按调用关系限流),默认直接拒绝
- clusterMode:是否集群限流,默认否

1、gitegg-platform-cloud的pom.xml中引入sentinel-datasource-nacos依赖

<!-- Sentinel 使用Nacos配置 -->
<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-datasource-nacos</artifactId>
</dependency>

2、gitegg-platform-cloud的配置文件application.yml中添加数据源配置Nacos的路径(这里面的配置,在实际应用过程中是配置在GitEgg-Cloud的Nacos配置中,会自动覆盖这些配置)

spring:
  cloud:
    sentinel:
      filter:
        enabled: true
      transport:
        port: 8719
        dashboard: 127.0.0.1:8086
      eager: true
      datasource:
        nacos:
          # 默认提供两种内置的值,分别是 json 和 xml(不填默认是json)
          data-type: json
          server-addr: 127.0.0.1:8848
          dataId: ${spring.application.name}-sentinel
          groupId: DEFAULT_GROUP
          # rule-type配置表示该数据源中的规则属于哪种类型的规则
          # (flow流控, degrade熔断降级, authority, system系统保护, param-flow热点参数限流, gw-flow, gw-api-group)
          rule-type: flow

#Ribbon配置
ribbon:
  #请求连接的超时时间
  ConnectTimeout: 5000
  #请求处理/响应的超时时间
  ReadTimeout: 5000
  #对所有操作请求都进行重试
  OkToRetryOnAllOperations: true
  #切换实例的重试次数
  MaxAutoRetriesNextServer: 1
  #当前实例的重试次数
  MaxAutoRetries: 1

#Sentinel端点配置
management:
  endpoints:
    web:
      exposure:
        include: '*'

3、打开Nacos控制台,新增gitegg-service-system-sentinel配置项(注意内容是一个规则对象数组)

[
    {
        "resource": "/system/sentinel/protected",
        "count": 5,
        "grade": 1,
        "limitApp": "default",
        "strategy": 0,
        "controlBehavior": 0,
        "clusterMode": false
    }
]

4、打开Sentinel控制台管理界面,点击流控规则菜单可以看到我们在Nacos中配置的限流信息,使用上一章节中使用的Jmeter进行测试,可以看到限流生效。
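上面规则里count=5、grade=1表示QPS模式下每秒最多放行5个请求。下面用一段独立的Java示意这条规则的判定思路:固定1秒窗口内计数,超过阈值即拒绝(Sentinel实际使用滑动窗口统计,这里仅作原理演示,类名为示例假设):

```java
// QPS限流(grade=1)的简化示意:固定1秒窗口计数,超过count阈值即拒绝
public class QpsRuleSketch {
    private final int count;      // 对应规则里的count阈值
    private long windowStart;     // 当前窗口起点(毫秒)
    private int passed;           // 当前窗口已放行的请求数

    public QpsRuleSketch(int count) {
        this.count = count;
    }

    // now为毫秒时间戳,由调用方传入便于测试
    public synchronized boolean tryPass(long now) {
        if (now - windowStart >= 1000) {
            windowStart = now;
            passed = 0; // 进入新窗口,计数清零
        }
        if (passed >= count) {
            return false; // 相当于请求被Sentinel拒绝
        }
        passed++;
        return true;
    }
}
```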

SpringCloud微服务实战——搭建企业级开发框架(十六):集成Sentinel高可用流量管理框架【自定义返回消息】

Sentinel限流之后,默认的响应消息为Blocked by Sentinel (flow limiting),对于系统整体功能提示来说并不统一,参考我们前面设置的统一响应及异常处理方式,返回相同格式的消息。

1、在自定义Sentinel返回消息之前,需要调整一下代码结构。因为这里要用到统一返回异常的格式,考虑到后期可能的使用问题,这里需要把gitegg-platform-boot工程里定义的统一响应及异常移到新建的gitegg-platform-base通用定义工程里面,同时在gitegg-platform-cloud中引入gitegg-platform-base和spring-boot-starter-web

<!-- 为了使用HttpServletRequest和HttpServletResponse -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <optional>true</optional>
</dependency>
<dependency>
    <groupId>com.gitegg.platform</groupId>
    <artifactId>gitegg-platform-base</artifactId>
</dependency>

2、在GitEgg-Platform子工程gitegg-platform-cloud中自定义Sentinel错误处理类GitEggBlockExceptionHandler.java:

package com.gitegg.platform.cloud.sentinel.handler;

import com.alibaba.csp.sentinel.adapter.spring.webmvc.callback.BlockExceptionHandler;
import com.alibaba.csp.sentinel.slots.block.BlockException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.gitegg.platform.base.enums.ResultCodeEnum;
import com.gitegg.platform.base.result.Result;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * 自定义限流异常处理器
 */
@Slf4j
@Component
public class GitEggBlockExceptionHandler implements BlockExceptionHandler {

    @Override
    public void handle(HttpServletRequest request, HttpServletResponse response, BlockException e) throws Exception {
        response.setStatus(429);
        response.setContentType("application/json;charset=utf-8");
        Result result = Result.error(ResultCodeEnum.SYSTEM_BUSY, ResultCodeEnum.SYSTEM_BUSY.getMsg());
        new ObjectMapper().writeValue(response.getWriter(), result);
    }
}

3、配置Sentinel控制台,配置容易出现限流的规则,打开Jmeter进行测试,可以看到返回消息已经是我们自定义的格式了。
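上面的处理器最终向客户端写出的是统一返回格式的JSON。下面用一小段独立的Java示意这个响应体的形状,其中code取606、msg取"系统繁忙"只是示例假设,实际取值以工程里的ResultCodeEnum.SYSTEM_BUSY为准:

```java
// 限流响应体示意:与统一返回格式保持一致的JSON字符串
public class BlockedBodySketch {
    public static String body(int code, String msg) {
        // 真实实现由ObjectMapper序列化Result对象完成,这里手工拼出等价形状
        return "{\"code\":" + code + ",\"msg\":\"" + msg + "\",\"data\":null}";
    }
}
```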

SpringCloud微服务实战——搭建企业级开发框架(十五):集成Sentinel高可用流量管理框架【熔断降级】

Sentinel除了流量控制以外,对调用链路中不稳定的资源进行熔断降级也是保障高可用的重要措施之一。由于调用关系的复杂性,如果调用链路中的某个资源不稳定,最终会导致请求发生堆积。Sentinel熔断降级会在调用链路中某个资源出现不稳定状态时(例如调用超时或异常比例升高),对这个资源的调用进行限制,让请求快速失败,避免影响到其它的资源而导致级联错误。当资源被降级后,在接下来的降级时间窗口之内,对该资源的调用都自动熔断。

Sentinel提供以下几种熔断策略:
- 慢调用比例 (SLOW_REQUEST_RATIO):选择以慢调用比例作为阈值,需要设置允许的慢调用RT(即最大的响应时间),请求的响应时间大于该值则统计为慢调用。当单位统计时长(statIntervalMs)内请求数目大于设置的最小请求数目,并且慢调用的比例大于阈值,则接下来的熔断时长内请求会自动被熔断。经过熔断时长后熔断器会进入探测恢复状态(HALF-OPEN状态),若接下来的一个请求响应时间小于设置的慢调用RT则结束熔断,若大于设置的慢调用RT则会再次被熔断。
- 异常比例 (ERROR_RATIO):当单位统计时长(statIntervalMs)内请求数目大于设置的最小请求数目,并且异常的比例大于阈值,则接下来的熔断时长内请求会自动被熔断。经过熔断时长后熔断器会进入探测恢复状态(HALF-OPEN状态),若接下来的一个请求成功完成(没有错误)则结束熔断,否则会再次被熔断。异常比率的阈值范围是[0.0, 1.0],代表0% - 100%。
- 异常数 (ERROR_COUNT):当单位统计时长内的异常数目超过阈值之后会自动进行熔断。经过熔断时长后熔断器会进入探测恢复状态(HALF-OPEN状态),若接下来的一个请求成功完成(没有错误)则结束熔断,否则会再次被熔断。

熔断降级规则(DegradeRule)包含下面几个重要的属性:
- resource:资源名,即规则的作用对象
- grade:熔断策略,支持慢调用比例/异常比例/异常数策略,默认慢调用比例
- count:慢调用比例模式下为慢调用临界RT(超出该值计为慢调用);异常比例/异常数模式下为对应的阈值
- timeWindow:熔断时长,单位为s
- minRequestAmount:熔断触发的最小请求数,请求数小于该值时即使异常比率超出阈值也不会熔断,默认5
- statIntervalMs:统计时长(单位为ms),如60*1000代表分钟级,默认1000ms
- slowRatioThreshold:慢调用比例阈值,仅慢调用比例模式有效

接下来我们对这三种熔断策略分别进行配置测试:

1、首先在SystemController.java里面添加需要熔断测试的接口

@ApiOperation(value = "慢调用比例熔断策略")
@GetMapping(value = "sentinel/slow/request/ratio")
public Result<String> sentinelRR() {
    try {
        double randomNumber = Math.random();
        if (randomNumber >= 0 && randomNumber <= 0.80) {
            Thread.sleep(300L);
        } else if (randomNumber > 0.80 && randomNumber <= 0.90) {
            Thread.sleep(10L);
        }
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return Result.success("慢调用比例熔断策略");
}

@ApiOperation(value = "异常比例熔断策略")
@GetMapping(value = "sentinel/error/ratio")
public Result sentinelRatio() {
    int i = 1 / 0;
    return Result.success("异常比例熔断策略");
}

@ApiOperation(value = "异常数熔断策略")
@GetMapping(value = "sentinel/error/count")
public Result sentinelCount() {
    int i = 1 / 0;
    return Result.success("异常数熔断策略");
}

2、浏览器打开Sentinel管理控制台,打开降级规则菜单,新增降级规则。首先测试“慢调用比例”,根据官方介绍,最大RT是指最大允许的响应时间,我们这里设置成200ms,比例阈值设置成0.8,熔断时长为10s,最小请求数为5,意思是指:在1s的统计时长内请求数目大于5,并且慢调用的比例大于80%,则接下来的熔断时长内请求会自动被熔断,熔断时长是10秒,10秒之后会进入探测恢复状态(HALF-OPEN状态),若接下来的一个请求响应时间小于200ms则结束熔断,若大于200ms则会再次被熔断。

3、打开Jmeter,点击新建->测试计划->线程组->HTTP请求->聚合报告。线程组设置为15,循环次数1000。

4、查看聚合报告中的测试结果(截图略),可以看到熔断生效。

5、异常比例和异常数参考上面的测试方法进行测试,这里不再赘述。只是测试之前需要把GitEgg-Platform里面GitEggControllerAdvice.java统一异常处理的代码注释掉,否则测试代码抛出的异常会被捕获,达不到预想的效果。
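为了说明上面CLOSED→OPEN→HALF-OPEN的流转,下面给出一段独立的Java状态机示意。阈值沿用正文配置(最大RT 200ms、比例阈值0.8、最小请求数5、熔断时长10s),但统计窗口简化为“累计满最小请求数即判定一次”,与Sentinel的滑动窗口实现并不等价,仅作原理演示:

```java
// 慢调用比例熔断的简化状态机示意
public class SlowRatioBreakerSketch {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private long openUntil;      // 熔断结束时间点(毫秒)
    private int total, slow;     // 统计的总调用数与慢调用数

    // 请求进入时判断是否放行,now为毫秒时间戳
    public boolean allow(long now) {
        if (state == State.OPEN) {
            if (now < openUntil) return false; // 熔断时长内直接拒绝
            state = State.HALF_OPEN;           // 到期后放行一个探测请求
        }
        return true;
    }

    // 请求完成后上报响应时间
    public void record(long now, long rtMillis) {
        boolean isSlow = rtMillis > 200;       // 超过最大RT计为慢调用
        if (state == State.HALF_OPEN) {        // 探测请求决定恢复还是再次熔断
            if (isSlow) open(now); else reset();
            return;
        }
        total++;
        if (isSlow) slow++;
        if (total >= 5 && slow > total * 0.8) open(now); // 比例大于阈值则熔断
    }

    private void open(long now) {
        state = State.OPEN;
        openUntil = now + 10_000; // 熔断时长10s
        total = slow = 0;
    }

    private void reset() {
        state = State.CLOSED;
        total = slow = 0;
    }
}
```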

SpringCloud微服务实战——搭建企业级开发框架(十四):集成Sentinel高可用流量管理框架【限流】

Sentinel 是面向分布式服务架构的高可用流量防护组件,主要以流量为切入点,从限流、流量整形、熔断降级、系统负载保护、热点防护等多个维度来帮助开发者保障微服务的稳定性。Sentinel 安装部署请参考:https://www.jianshu.com/p/9626b74aec1eSentinel 具有以下特性:丰富的应用场景:Sentinel 承接了阿里巴巴近 10 年的双十一大促流量的核心场景,例如秒杀(即突发流量控制在系统容量可以承受的范围)、消息削峰填谷、集群流量控制、实时熔断下游不可用应用等。完备的实时监控:Sentinel 同时提供实时的监控功能。您可以在控制台中看到接入应用的单台机器秒级数据,甚至 500 台以下规模的集群的汇总运行情况。广泛的开源生态:Sentinel 提供开箱即用的与其它开源框架/库的整合模块,例如与 Spring Cloud、Dubbo、gRPC 的整合。您只需要引入相应的依赖并进行简单的配置即可快速地接入 Sentinel。完善的 SPI 扩展点:Sentinel 提供简单易用、完善的 SPI 扩展接口。您可以通过实现扩展接口来快速地定制逻辑。例如定制规则管理、适配动态数据源等。1、在gitegg-platform-cloud中引入依赖<!-- Sentinel 高可用流量防护组件 --> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId> </dependency>2、在gitegg-platform-cloud的application.yml文件中加入暴露/actuator/sentinel端点的配置management: endpoints: exposure: include: '*'3、GitEgg-Platform重新install,GitEgg-Cloud更新导入的依赖,启动gitegg-service-system服务,在浏览器中打开http://127.0.0.1:8001/actuator/sentinel地址,可以看到返回的Json信息,说明项目已经整合好了Sentinel。image.png{ "blockPage": null, "appName": "gitegg-service-system", "consoleServer": [], "coldFactor": "3", "rules": { "systemRules": [], "authorityRule": [], "paramFlowRule": [], "flowRules": [], "degradeRules": [] "metricsFileCharset": "UTF-8", "filter": { "order": -2147483648, "urlPatterns": [ "/**" "enabled": true "totalMetricsFileCount": 6, "datasource": {}, "clientIp": "172.16.10.3", "clientPort": "8719", "logUsePid": false, "metricsFileSize": 52428800, "logDir": "", "heartbeatIntervalMs": 10000 }4、在配置文件中添加Sentinel服务地址,默认情况下 Sentinel 会在客户端首次调用的时候进行初始化,开始向控制台发送心跳包。也可以配置sentinel.eager=true ,取消Sentinel控制台懒加载。spring: cloud: sentinel: filter: enabled: true transport: port: 8719 #指定sentinel控制台的地址 dashboard: 127.0.0.1:8086 eager: true5、SystemController.java中添加限流的测试方法@ApiOperation(value = "限流测试") @GetMapping(value = "sentinel/protected") public Result<String> sentinelProtected() { return Result.data("访问的是限流测试接口"); 
}6、启动服务,通过浏览器访问刚刚新增的测试接口地址,http://127.0.0.1:8011/system/sentinel/protected,刷新几次,然后打开Sentinel控制台地址,可以看到当前服务的访问情况image.png7、以上是没有对接口进行限流的情况,现在我们设置规则,对接口进行限流,打开Sentinel控制台,点击左侧限流规则菜单,然后点击右上角“新增流控规则”按钮,在弹出的输入框中,资源名输入需要限流的接口,我们这里设置为:/system/sentinel/protected,阈值类型:QPS, 单机阈值:20,确定添加。image.png8、为了测试并发请求,我们这里借助压力测试工具Jmeter,具体使用方法https://jmeter.apache.org/,下载好Jmeter之后,点击新建->测试计划->线程组->HTTP请求-查看结果树。我们限流设置的单机阈值为20,我们这里线程组先设置为20,查看请求是否会被限流,然后再将线程组设置为100查看是否被限流。image.pngimage.pngimage.pngimage.pngimage.png从以上测试结果可以看到当设置为100时,出现访问失败,返回Blocked by Sentinel (flow limiting),说明限流已生效。9、Sentinel同时也支持热点参数限流和系统自适应限流,这里只需要在Sentinel控制台配置即可,所以这里不介绍具体操作及代码:热点参数限流:何为热点?热点即经常访问的数据。很多时候我们希望统计某个热点数据中访问频次最高的 Top K 数据,并对其访问进行限制。比如:商品 ID 为参数,统计一段时间内最常购买的商品 ID 并进行限制用户 ID 为参数,针对一段时间内频繁访问的用户 ID 进行限制热点参数限流会统计传入参数中的热点参数,并根据配置的限流阈值与模式,对包含热点参数的资源调用进行限流。热点参数限流可以看做是一种特殊的流量控制,仅对包含热点参数的资源调用生效。Sentinel 利用 LRU 策略统计最近最常访问的热点参数,结合令牌桶算法来进行参数级别的流控。热点参数限流支持集群模式,详细使用指南:https://github.com/alibaba/Sentinel/wiki/%E7%83%AD%E7%82%B9%E5%8F%82%E6%95%B0%E9%99%90%E6%B5%81系统自适应限流:Sentinel 系统自适应限流从整体维度对应用入口流量进行控制,结合应用的 Load、CPU 使用率、总体平均 RT、入口 QPS 和并发线程数等几个维度的监控指标,通过自适应的流控策略,让系统的入口流量和系统的负载达到一个平衡,让系统尽可能跑在最大吞吐量的同时保证系统整体的稳定性。,详细使用指南:https://github.com/alibaba/Sentinel/wiki/%E7%B3%BB%E7%BB%9F%E8%87%AA%E9%80%82%E5%BA%94%E9%99%90%E6%B5%81
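正文提到热点参数限流“利用LRU策略统计最近最常访问的热点参数”,下面用一段独立的Java示意这一思路:用访问序LinkedHashMap充当LRU保存参数计数,超过容量时淘汰最久未访问的参数,再按单参数阈值判定是否放行。类名与阈值均为示例假设,与Sentinel内部实现(LRU+令牌桶)并不等价:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// 热点参数限流的简化示意:LRU统计参数访问次数,按阈值放行或拒绝
public class HotParamSketch {
    private final int threshold;                  // 单个参数值的通过次数阈值
    private final Map<Object, Integer> counters;  // 访问序LRU计数器

    public HotParamSketch(int threshold, int capacity) {
        this.threshold = threshold;
        // accessOrder=true使其成为LRU;removeEldestEntry控制容量
        this.counters = new LinkedHashMap<Object, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Object, Integer> e) {
                return size() > capacity;
            }
        };
    }

    public synchronized boolean tryPass(Object paramValue) {
        int c = counters.getOrDefault(paramValue, 0);
        if (c >= threshold) return false; // 该热点参数已达阈值,拒绝
        counters.put(paramValue, c + 1);
        return true;
    }
}
```

例如以商品ID为参数时,被频繁访问的那个ID会先达到阈值被限流,而冷门ID不受影响。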

SpringCloud微服务实战——搭建企业级开发框架(十三):OpenFeign+Ribbon实现高可用重试机制

Spring Cloud OpenFeign 默认是使用Ribbon实现负载均衡和重试机制的,虽然Feign有自己的重试机制,但该功能在Spring Cloud OpenFeign中基本用不上,除非有特定的业务需求,这时可以实现自己的Retryer,然后全局注入或者针对特定的客户端使用特定的Retryer。

在SpringCloud体系项目中,引入的重试机制在保证高可用的同时,也会带来一些其它的问题,如幂等操作或一些没必要的重试,下面我们实际操作来测试Spring Cloud架构中的重试机制。

1、因为Ribbon默认是开启重试机制的,使用上一章节的代码即可测试重试机制。这里为了分辨是否执行了重试,我们把gitegg-platform-cloud下面配置的Ribbon负载均衡策略改为轮询。按照上一章节方式启动三个服务,然后页面快速点击测试,发现服务端口一直有规律地切换。然后,快速关闭其中一个gitegg-service-system服务,此时继续在页面快速点击测试,我们发现接口并没有报错,而是切换到其中一个存活服务的端口,这说明重试机制生效。

2、接下来,我们修改配置文件使重试机制失效,就可以看到服务关闭后因没有重试机制系统报错的结果。修改GitEgg-Platform工程下子工程gitegg-platform-cloud的代码,添加Ribbon相关配置,因为Ribbon默认是开启重试机制的,这里选择关闭:

ribbon:
  #请求连接的超时时间
  ConnectTimeout: 5000
  #请求处理/响应的超时时间
  ReadTimeout: 5000
  #对所有操作请求都进行重试
  OkToRetryOnAllOperations: false
  #切换实例的重试次数
  MaxAutoRetriesNextServer: 0
  #当前实例的重试次数
  MaxAutoRetries: 0

3、GitEgg-Platform重新install,GitEgg-Cloud项目重新导入依赖,然后重启三个服务。这时,快速点击测试接口的时候再关闭其中一个服务,发现接口在访问该服务时因为没有重试机制而报错。
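MaxAutoRetriesNextServer的语义可以用一段独立的Java示意:当前实例调用失败后,换下一个实例重试,直到重试次数用尽。这里用Supplier模拟服务实例(抛异常表示实例已关闭),仅演示“切换实例重试”的判定逻辑,并非Ribbon的实际实现(假定实例列表非空):

```java
import java.util.List;
import java.util.function.Supplier;

// Ribbon切换实例重试的简化示意
public class RetrySketch {
    public static String callWithFailover(List<Supplier<String>> servers,
                                          int maxRetriesNextServer) {
        RuntimeException last = null;
        // 本机1次 + 最多maxRetriesNextServer次切换,且不超过实例总数
        int attempts = Math.min(maxRetriesNextServer + 1, servers.size());
        for (int i = 0; i < attempts; i++) {
            try {
                return servers.get(i).get(); // 命中可用实例,直接返回
            } catch (RuntimeException e) {
                last = e; // 记录异常,切换到下一个实例
            }
        }
        throw last; // 重试次数用尽,向调用方抛出错误
    }
}
```

当maxRetriesNextServer=0时第一台实例失败即报错,对应正文中关闭重试后的现象。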

SpringCloud微服务实战——搭建企业级开发框架(十二):OpenFeign+Ribbon实现负载均衡

Ribbon是Netflix下的负载均衡项目,它主要实现中间层应用程序的负载均衡。为Ribbon配置服务提供者地址列表后,Ribbon就会基于某种负载均衡算法,自动帮助服务调用者去请求。Ribbon默认提供的负载均衡算法有多种,例如轮询、随即、加权轮训等,也可以为Ribbon实现自定义的负载均衡算法。Ribbon有以下特性:负载均衡器,可支持插拔式的负载均衡规则对多种协议提供支持,如HTTP、TCP、UDP集成了负载均衡功能的客户端Feign利用Ribbon实现负载均衡的过程:通过在启动类加@EnableFeignCleints注解开启FeignCleint根据Feign的规则实现接口,并加在接口定义处添加@FeignCleint注解服务启动后,扫描带有@ FeignCleint的注解的类,并将这些信息注入到ioc容器中当接口的方法被调用,通过jdk的代理,来生成具体的RequesTemplateRequesTemplate再生成RequestRequest交给Client去处理,其中Client可以是HttpUrlConnection、HttpClient也可以是Okhttp 最后Client被封装到LoadBalanceClient类,这个类结合类Ribbon做到了负载均衡。  OpenFeign 中使用 Ribbon 进行负载均衡,所以 OpenFeign 直接内置了 Ribbon。在导入OpenFeign 依赖后,无需再专门导入 Ribbon 依赖。接下来,我们把gitegg-service-base作为服务的调用方,启动两个不同端口的gitegg-service-system作为服务的被调用方,测试Ribbon的负载均衡。1、首先在gitegg-service-system工程中,新建被调用的controller方法,返回系统配置的端口号以区分是哪个服务被调用了。package com.gitegg.service.system.controller; import com.gitegg.platform.boot.common.base.Result; import com.gitegg.service.system.dto.SystemDTO; import com.gitegg.service.system.service.ISystemService; import io.swagger.annotations.Api; import io.swagger.annotations.ApiOperation; import lombok.RequiredArgsConstructor; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.beans.factory.annotation.Value; import org.springframework.cloud.context.config.annotation.RefreshScope; import org.springframework.web.bind.annotation.*; import javax.validation.Valid; @RestController @RequestMapping(value = "system") @RequiredArgsConstructor(onConstructor_ = @Autowired) @Api(tags = "gitegg-system") @RefreshScope public class SystemController { private final ISystemService systemService; @Value("${spring.datasource.maxActive}") private String nacosMaxActiveType; @Value("${server.port}") private Integer serverPort; @GetMapping(value = "list") @ApiOperation(value = "system list接口") public Object list() { return systemService.list(); @GetMapping(value = "page") @ApiOperation(value = "system page接口") public Object page() { return systemService.page(); 
@GetMapping(value = "exception") @ApiOperation(value = "自定义异常及返回测试接口") public Result<String> exception() { return Result.data(systemService.exception()); @PostMapping(value = "valid") @ApiOperation(value = "参数校验测试接口") public Result<SystemDTO> valid(@Valid @RequestBody SystemDTO systemDTO) { return Result.data(systemDTO); @PostMapping(value = "nacos") @ApiOperation(value = "Nacos读取配置文件测试接口") public Result<String> nacos() { return Result.data(nacosMaxActiveType); @GetMapping(value = "api/by/id") @ApiOperation(value = "Fegin Get调用测试接口") public Result<Object> feginById(@RequestParam("id") String id) { return Result.data(systemService.list()); @PostMapping(value = "api/by/dto") @ApiOperation(value = "Fegin Post调用测试接口") public Result<Object> feginByDto(@Valid @RequestBody SystemDTO systemDTO) { return Result.data(systemDTO); @GetMapping("/api/ribbon") @ApiOperation(value = "Ribbon调用测试接口") public Result<String> testRibbon() { return Result.data("现在访问的服务端口是:" + serverPort); }2、在gitegg-service-system-api工程中,编写使用OpenFeign调用testRibbon的公共方法package com.gitegg.service.system.api.feign; import com.gitegg.platform.boot.common.base.Result; import com.gitegg.service.system.api.dto.ApiSystemDTO; import org.springframework.cloud.openfeign.FeignClient; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.PostMapping; import org.springframework.web.bind.annotation.RequestBody; import org.springframework.web.bind.annotation.RequestParam; @FeignClient(name = "gitegg-service-system") public interface ISystemFeign { * OpenFeign测试Get * @param id * @return @GetMapping("/system/api/by/id") Result<Object> querySystemById(@RequestParam("id") Long id); * OpenFeign测试Post * @param apiSystemDTO * @return ApiSystemDTO @PostMapping("/system/api/by/dto") Result<ApiSystemDTO> querySystemByDto(@RequestBody ApiSystemDTO apiSystemDTO); * OpenFeign测试Ribbon负载均衡功能 * @return @GetMapping("/system/api/ribbon") Result<String> testRibbon(); 
}3、在gitegg-service-base中添加测试Ribbon负载均衡的Feign调用方法package com.gitegg.service.base.controller; import com.gitegg.platform.boot.common.base.Result; import com.gitegg.service.system.api.dto.ApiSystemDTO; import com.gitegg.service.system.api.feign.ISystemFeign; import io.swagger.annotations.Api; import io.swagger.annotations.ApiOperation; import lombok.RequiredArgsConstructor; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.cloud.context.config.annotation.RefreshScope; import org.springframework.web.bind.annotation.*; import javax.validation.Valid; @RestController @RequestMapping(value = "base") @RequiredArgsConstructor(onConstructor_ = @Autowired) @Api(tags = "gitegg-base") @RefreshScope public class BaseController { private final ISystemFeign systemFeign; @GetMapping(value = "api/by/id") @ApiOperation(value = "Fegin Get调用测试接口") public Result<Object> feginById(@RequestParam("id") Long id) { return Result.data(systemFeign.querySystemById(id)); @PostMapping(value = "api/by/dto") @ApiOperation(value = "Fegin Post调用测试接口") public Result<Object> feginByDto(@Valid @RequestBody ApiSystemDTO systemDTO) { return Result.data(systemFeign.querySystemByDto(systemDTO)); @PostMapping(value = "api/ribbon") @ApiOperation(value = "Ribbon调用测试接口") public Result<Object> testRibbon() { return Result.data(systemFeign.testRibbon()); }4、先启动gitegg-service-base服务,再启动gitegg-service-system服务,服务启动成功之后,将gitegg-service-system下bootstrap.yml里面server.port改为8011,然后再点击启动,这样就启动了两个gitegg-service-system服务(如果运行两个服务时提示:gitegg-service-system is not allowed to run in parallel. 
Would you like to stop the running one?,这时,在IDEA中点击Run-Edit configurations-勾选Allow parallel run即可),服务全部启动完毕之后,可以在Console窗口里面看到三个服务的Consoleimage.png三个服务:image.png5、打开浏览器访问:http://127.0.0.1:8001/doc.html,点击Ribbon调用测试接口菜单,进行测试,点击请求,我们可以看到每次返回的端口都是变化的,一会儿是8001一会儿是8011,因为Ribbon负载均衡默认是使用的轮询策略image.pngimage.png6、如果我们需要修改负载均衡策略或者自定义负载均衡策略,根据我们的架构设计,我们在GitEgg-Platform的子工程gitegg-platform-cloud中设置公共的负载均衡策略,然后每个微服务需要不同的策略的话,可以在自己的工程中添加配置文件。接下来,在gitegg-platform-cloud中新建Ribbon配置类package com.gitegg.platform.cloud.ribbon.config; import com.netflix.loadbalancer.IRule; import com.netflix.loadbalancer.RandomRule; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; * @Description Ribbon公共负载均衡策略配置 @Configuration public class RibbonConfig { * 负载均衡策略配置 * @return @Bean public IRule rule(){ //随机策略 从所有可用的提供者中随机选择一个 return new RandomRule(); }7、修改完成之后,GitEgg_Platform工程重新执行install,GitEgg_Cloud刷新导入的包,参照步骤5再执行测试,这时我们发现微服务返回的端口,不再是有规律的切换,而是随机不确定的出现。注意:这里RibbonConfig只用于测试负载均衡策略,请不要在生产环境中这样使用,否则会出现问题:在微服务A中调用微服务B和微服务C,然后再调用微服务B,这是RibbonLoadBalancerClient在获取微服务时,渠到的serviceId为null,就会获取到上次的微服务,进而导致404错误。因为OpenFeign默认使用的是Ribbon提供的负载均衡策略,我们在实际应用中可以选择Nacos提供的NacosRule策略,利用Nacos权重进行负载均衡:#负载均衡策略 NFLoadBalancerRuleClassName: com.alibaba.cloud.nacos.ribbon.NacosRule
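轮询与随机这两种策略本身的逻辑可以用一段独立的Java示意(接口与类名仿照Ribbon的IRule/RoundRobinRule/RandomRule命名,仅作原理演示,并非Ribbon源码):

```java
import java.util.List;
import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;

// 轮询与随机两种负载均衡策略的简化示意
public class LoadBalanceSketch {
    public interface Rule { String choose(List<String> servers); }

    // 轮询:按计数器依次选择实例,结果有规律地循环
    public static class RoundRobinRule implements Rule {
        private final AtomicInteger next = new AtomicInteger();
        public String choose(List<String> servers) {
            int idx = Math.floorMod(next.getAndIncrement(), servers.size());
            return servers.get(idx);
        }
    }

    // 随机:从可用实例中随机选择一个,结果不确定
    public static class RandomRule implements Rule {
        private final Random random = new Random();
        public String choose(List<String> servers) {
            return servers.get(random.nextInt(servers.size()));
        }
    }
}
```

这也解释了正文的测试现象:轮询策略下端口8001/8011有规律地交替,换成随机策略后端口不再有规律。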

SpringCloud微服务实战——搭建企业级开发框架(十一):集成OpenFeign用于微服务间调用

作为Spring Cloud的子项目之一,Spring Cloud OpenFeign以将OpenFeign集成到Spring Boot应用中的方式,为微服务架构下服务之间的调用提供了解决方案。首先,利用了OpenFeign的声明式方式定义Web服务客户端;其次还更进一步,通过集成Ribbon或Eureka实现负载均衡的HTTP客户端。  OpenFeign 可以使消费者将提供者提供的服务名伪装为接口进行消费,消费者只需使用“Service 接口+ 注解”的方式。即可直接调用 Service 接口方法,而无需再使用 RestTemplate 了。其实原理还是使用RestTemplate,而通过Feign(伪装)成我们熟悉的习惯。  GitEgg框架除了新建Fegin服务之外,还定义实现了消费者Fegin-api,在其他微服务调用的时候,只需要引入Fegin-api即可直接调用,不需要在自己重复开发消费者调用接口。1、在GitEgg-Platform工程的子工程gitegg-platform-cloud中引入spring-cloud-starter-openfeign依赖,重新install GitEgg-Platform工程,然后GitEgg-Cloud项目需要重新在IDEA中执行Reload All Maven Projects。<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <parent> <artifactId>GitEgg-Platform</artifactId> <groupId>com.gitegg.platform</groupId> <version>1.0-SNAPSHOT</version> </parent> <modelVersion>4.0.0</modelVersion> <artifactId>gitegg-platform-cloud</artifactId> <name>${project.artifactId}</name> <version>${project.parent.version}</version> <packaging>jar</packaging> <dependencies> <!-- Nacos 服务注册发现--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId> </dependency> <!-- Nacos 分布式配置--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId> </dependency> <!-- OpenFeign 微服务调用解决方案--> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-openfeign</artifactId> </dependency> </dependencies> </project>  
我们从系统架构设计方面考虑,GitEgg-Cloud下的gitegg-service作为业务逻辑处理模块,gitegg-service-api作为微服务统一对外提供接口的模块,这里在测试的时候需要用到两个微服务之间的调用,我们这里在gitegg-service下gitegg-service-base里面新建测试代码,和gitegg-service-system之间相互调用。注意,这里需要说明,gitegg-service-api并不是继承gitegg-service做业务扩展,而是对外提供接口的抽象,比如现在有A、B、C三个系统A、B都需要调用C的同一个方法,如果按照业务逻辑来罗列代码的话,那么就需要在A和B中写相同的调用方法来调用C,这里我们抽出来一个api模块,专门存放调用微服务C的调用方法,在使用时,A和B只需要引入C的jar包即可直接使用调用方法。2、在gitegg-service-system-api工程中,引入SpringBoot,SpringCloud,Swagger2的依赖,新建ISystemFeign.java和ApiSystemDTO.java,作为OpenFeign调用微服务的公共方法:<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <parent> <artifactId>GitEgg-Cloud</artifactId> <groupId>com.gitegg.cloud</groupId> <version>1.0-SNAPSHOT</version> </parent> <modelVersion>4.0.0</modelVersion> <artifactId>gitegg-service-api</artifactId> <name>${project.artifactId}</name> <version>${project.parent.version}</version> <packaging>pom</packaging> <modules> <module>gitegg-service-base-api</module> <module>gitegg-service-bigdata-api</module> <module>gitegg-service-system-api</module> </modules> <dependencies> <!-- gitegg Spring Boot自定义及扩展 --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-boot</artifactId> </dependency> <!-- gitegg Spring Cloud自定义及扩展 --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-cloud</artifactId> </dependency> <!-- gitegg swagger2-knife4j --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-swagger</artifactId> </dependency> </dependencies> </project>package com.gitegg.service.system.api.feign; import com.gitegg.platform.boot.common.base.Result; import com.gitegg.service.system.api.dto.ApiSystemDTO; import org.springframework.cloud.openfeign.FeignClient; import org.springframework.web.bind.annotation.GetMapping; import 
```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestParam;

@FeignClient(name = "gitegg-service-system")
public interface ISystemFeign {

    /**
     * OpenFeign GET test
     * @param id
     * @return Result
     */
    @GetMapping("/system/api/by/id")
    Result<Object> querySystemById(@RequestParam("id") Long id);

    /**
     * OpenFeign POST test
     * @param apiSystemDTO
     * @return ApiSystemDTO
     */
    @PostMapping("/system/api/by/dto")
    Result<ApiSystemDTO> querySystemByDto(@RequestBody ApiSystemDTO apiSystemDTO);
}
```

ApiSystemDTO.java:

```java
package com.gitegg.service.system.api.dto;

import lombok.Data;

import javax.validation.constraints.Max;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

@Data
public class ApiSystemDTO {

    @NotNull
    @Min(value = 10, message = "id必须大于10")
    @Max(value = 150, message = "id必须小于150")
    private Long id;

    @NotNull(message = "名称不能为空")
    @Size(min = 3, max = 20, message = "名称长度必须在3-20之间")
    private String name;
}
```

2、In the gitegg-service-system project, modify SystemController.java and add the methods that other microservices will call:

```java
package com.gitegg.service.system.controller;

import com.gitegg.platform.boot.common.base.Result;
import com.gitegg.service.system.dto.SystemDTO;
import com.gitegg.service.system.service.ISystemService;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import lombok.RequiredArgsConstructor;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.*;

import javax.validation.Valid;

@RestController
@RequestMapping(value = "system")
@RequiredArgsConstructor(onConstructor_ = @Autowired)
@Api(tags = "gitegg-system")
@RefreshScope
public class SystemController {

    private final ISystemService systemService;

    @Value("${spring.datasource.maxActive}")
    private String nacosMaxActiveType;

    @GetMapping(value = "list")
    @ApiOperation(value = "system list接口")
    public Object list() {
        return systemService.list();
    }

    @GetMapping(value = "page")
    @ApiOperation(value = "system page接口")
    public Object page() {
        return systemService.page();
    }

    @GetMapping(value = "exception")
    @ApiOperation(value = "自定义异常及返回测试接口")
    public Result<String> exception() {
        return Result.data(systemService.exception());
    }

    @PostMapping(value = "valid")
    @ApiOperation(value = "参数校验测试接口")
    public Result<SystemDTO> valid(@Valid @RequestBody SystemDTO systemDTO) {
        return Result.data(systemDTO);
    }

    @PostMapping(value = "nacos")
    @ApiOperation(value = "Nacos读取配置文件测试接口")
    public Result<String> nacos() {
        return Result.data(nacosMaxActiveType);
    }

    @GetMapping(value = "api/by/id")
    @ApiOperation(value = "Feign Get调用测试接口")
    public Result<Object> feginById(@RequestParam("id") String id) {
        return Result.data(systemService.list());
    }

    @PostMapping(value = "api/by/dto")
    @ApiOperation(value = "Feign Post调用测试接口")
    public Result<Object> feginByDto(@Valid @RequestBody SystemDTO systemDTO) {
        return Result.data(systemDTO);
    }
}
```

3、Following the pattern of gitegg-service-system, add the gitegg-service-system-api dependency to the gitegg-service-base project, and create BaseController.java, GitEggBaseApplication.java, and bootstrap.yml so that it can act as the service caller.

pom.xml:

```xml
<dependencies>
    <!-- gitegg-service-system 的fegin公共调用方法 -->
    <dependency>
        <groupId>com.gitegg.cloud</groupId>
        <artifactId>gitegg-service-system-api</artifactId>
        <version>${project.parent.version}</version>
    </dependency>
</dependencies>
```

BaseController.java:

```java
package com.gitegg.service.base.controller;

import com.gitegg.platform.boot.common.base.Result;
import com.gitegg.service.system.api.dto.ApiSystemDTO;
import com.gitegg.service.system.api.feign.ISystemFeign;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import lombok.RequiredArgsConstructor;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.*;

import javax.validation.Valid;

@RestController
@RequestMapping(value = "base")
@RequiredArgsConstructor(onConstructor_ = @Autowired)
@Api(tags = "gitegg-base")
@RefreshScope
public class BaseController {

    private final ISystemFeign systemFeign;

    @GetMapping(value = "api/by/id")
    @ApiOperation(value = "Feign Get调用测试接口")
    public Result<Object> feginById(@RequestParam("id") Long id) {
        return Result.data(systemFeign.querySystemById(id));
    }

    @PostMapping(value = "api/by/dto")
    @ApiOperation(value = "Feign Post调用测试接口")
    public Result<Object> feginByDto(@Valid @RequestBody ApiSystemDTO systemDTO) {
        return Result.data(systemFeign.querySystemByDto(systemDTO));
    }
}
```

GitEggBaseApplication.java:

```java
package com.gitegg.service.base;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;
import org.springframework.context.annotation.ComponentScan;

/**
 * gitegg-base 启动类
 */
@EnableDiscoveryClient
@EnableFeignClients(basePackages = "com.gitegg")
@ComponentScan(basePackages = "com.gitegg")
@MapperScan("com.gitegg.*.*.mapper")
@SpringBootApplication
public class GitEggBaseApplication {

    public static void main(String[] args) {
        SpringApplication.run(GitEggBaseApplication.class, args);
    }
}
```

bootstrap.yml:

```yaml
server:
  port: 8002
spring:
  application:
    name: gitegg-service-base
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
      config:
        server-addr: 127.0.0.1:8848
        file-extension: yaml
        prefix: gitegg-service-system
        group: DEFAULT_GROUP
        enabled: true
```

4、Start gitegg-service-base and gitegg-service-system, then open http://127.0.0.1:8002/doc.html in a browser (gitegg-service-base listens on port 8002, so the test goes through the gitegg-service-base service). In the left-hand menu, call the Feign GET test endpoint and the Feign POST test endpoint to confirm that the inter-service calls succeed.
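To make the mechanics concrete, here is a small dependency-free sketch (all names are ours, not project code) of what the declarative `querySystemById` call turns into at runtime: Feign resolves the `@FeignClient` name `gitegg-service-system` through the discovery client (stubbed here as a fixed host:port) and issues a plain HTTP GET with the `@RequestParam` appended to the query string.

```java
// Hypothetical sketch: how a declarative Feign GET maps onto a concrete HTTP URL.
// The registry lookup is stubbed with a fixed host:port; real Feign resolves the
// service name through the discovery client before building the request.
public class FeignUrlSketch {

    static String buildGetUrl(String hostPort, String path, String paramName, Object value) {
        // @GetMapping path plus the @RequestParam rendered as a query parameter
        return "http://" + hostPort + path + "?" + paramName + "=" + value;
    }

    public static void main(String[] args) {
        // querySystemById(5L) on ISystemFeign becomes roughly:
        System.out.println(buildGetUrl("127.0.0.1:8001", "/system/api/by/id", "id", 5L));
        // -> http://127.0.0.1:8001/system/api/by/id?id=5
    }
}
```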

十、Installing Sentinel on Linux (CentOS 7)

1、Download a Sentinel release from https://github.com/alibaba/Sentinel/releases.

2、Upload the downloaded sentinel-dashboard-1.8.0.jar to the Linux server. Sentinel is a standard Spring Boot application, so it runs the way any Spring Boot jar does:

```shell
nohup java -Dserver.port=8086 -Dproject.name=sentinel-dashboard -jar sentinel-dashboard-1.8.0.jar >/dev/null &
```

3、Open http://ip:8086 in a browser to reach the login page; the default username and password are sentinel/sentinel.

4、That completes a basic Sentinel deployment. For the remaining features and usage details, see https://github.com/alibaba/Sentinel/wiki/%E4%BB%8B%E7%BB%8D

九、Installing Alibaba Nacos on Linux (CentOS 7)

Nacos is a dynamic service discovery, configuration management, and service management platform that makes it easier to build cloud-native applications. It provides a simple, easy-to-use feature set for dynamic service discovery, service configuration, service metadata, and traffic management.

1、Nacos releases are published at https://github.com/alibaba/nacos/releases. Pick the version you need; this guide uses 1.4.0, downloaded from https://github.com/alibaba/nacos/releases/download/1.4.0/nacos-server-1.4.0.tar.gz.

2、Upload the archive to the test Linux server and extract it. (For a local Windows install, download nacos-server-1.4.0.zip instead; usage after extraction is essentially the same.)

```shell
[root@localhost soft_home]# cd nacos
[root@localhost nacos]# ls
nacos-server-1.4.0.tar.gz
[root@localhost nacos]# tar -zxvf nacos-server-1.4.0.tar.gz
nacos/LICENSE
nacos/NOTICE
nacos/target/nacos-server.jar
nacos/conf/
nacos/conf/schema.sql
nacos/conf/nacos-mysql.sql
nacos/conf/application.properties.example
nacos/conf/nacos-logback.xml
nacos/conf/cluster.conf.example
nacos/conf/application.properties
nacos/bin/startup.sh
nacos/bin/startup.cmd
nacos/bin/shutdown.sh
nacos/bin/shutdown.cmd
[root@localhost nacos]# ls
nacos  nacos-server-1.4.0.tar.gz
[root@localhost nacos]# cd nacos
[root@localhost nacos]# ls
bin  conf  LICENSE  NOTICE  target
[root@localhost nacos]# cd bin
[root@localhost bin]# ls
shutdown.cmd  shutdown.sh  startup.cmd  startup.sh
[root@localhost bin]# pwd
/bigdata/soft_home/nacos/nacos/bin
```

3、Edit the database connection settings in the configuration file (conf/application.properties) to match your environment:

```properties
#*************** Config Module Related Configurations ***************#
### If use MySQL as datasource:
spring.datasource.platform=mysql
### Count of DB:
db.num=1
### Connect URL of DB:
db.url.0=jdbc:mysql://127.0.0.1:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user=nacos
db.password=nacos
```

4、Run the nacos-mysql.sql script from the /nacos/conf directory against that database. For other configuration options and usage details, see the official quick start: https://nacos.io/zh-cn/docs/quick-start.html.

5、From the bin directory, run `sh startup.sh -m standalone`:

```shell
[root@localhost bin]# sh startup.sh -m standalone
/usr/java/jdk1.8.0_77/bin/java -server -Xms2g -Xmx2g -Xmn1g -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/bigdata/soft_home/nacos/nacos/logs/java_heapdump.hprof -XX:-UseLargePages -Dnacos.member.list= -Djava.ext.dirs=/usr/java/jdk1.8.0_77/jre/lib/ext:/usr/java/jdk1.8.0_77/lib/ext -Xloggc:/bigdata/soft_home/nacos/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dloader.path=/bigdata/soft_home/nacos/nacos/plugins/health,/bigdata/soft_home/nacos/nacos/plugins/cmdb -Dnacos.home=/bigdata/soft_home/nacos/nacos -jar /bigdata/soft_home/nacos/nacos/target/nacos-server.jar --spring.config.location=file:/bigdata/soft_home/nacos/nacos/conf/,classpath:/,classpath:/config/,file:./,file:./config/ --logging.config=/bigdata/soft_home/nacos/nacos/conf/nacos-logback.xml --server.max-http-header-size=524288
nacos is starting with cluster
nacos is starting,you can check the /bigdata/soft_home/nacos/nacos/logs/start.out
```

6、Once the service is up, open http://ip:8848/nacos to access the console; the default username and password are nacos/nacos.

SpringCloud Microservices in Action - Building an Enterprise-Grade Development Framework (10): Using the Nacos Distributed Configuration Center

As the business grows and the microservice architecture evolves, the number of services and configuration items keeps increasing (microservices, server addresses, parameters of every kind). Traditional configuration files and database-backed configuration no longer meet developers' configuration-management needs:

- Security: configuration lives alongside the source code in the repository, which makes leaks easy.
- Timeliness: changing a value requires a service restart to take effect.
- Flexibility: no support for dynamic adjustment, e.g. log switches or feature flags.

Hence the distributed configuration center. Before using Nacos, first understand the load order of Spring Boot's bootstrap and application files:

- bootstrap.yml (bootstrap.properties) is loaded first; application.yml (application.properties) is loaded afterwards.
- bootstrap.yml is used during the bootstrap phase of the application context and is loaded by the parent Spring ApplicationContext.

Nacos Config reads bootstrap.yml by default; if you put the Nacos Config settings in application.yml, the application will keep failing at startup.

1、Add the spring-cloud-starter-alibaba-nacos-config dependency to the gitegg-platform-cloud subproject of GitEgg-Platform, re-install the GitEgg-Platform project, then reload all Maven projects for GitEgg-Cloud in IDEA.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>GitEgg-Platform</artifactId>
        <groupId>com.gitegg.platform</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>gitegg-platform-cloud</artifactId>
    <name>${project.artifactId}</name>
    <version>${project.parent.version}</version>
    <packaging>jar</packaging>
    <dependencies>
        <!-- Nacos 服务注册发现-->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
        </dependency>
        <!-- Nacos 分布式配置-->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
        </dependency>
    </dependencies>
</project>
```

2、Because Nacos reads its configuration from bootstrap.yml by default, create a bootstrap.yml file in the gitegg-service-system project and put the Nacos Config settings there:

```yaml
server:
  port: 8001
spring:
  application:
    name: gitegg-service-system
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
      config:
        server-addr: 127.0.0.1:8848
        file-extension: yml
        group: DEFAULT_GROUP
        enabled: true
```

3、On the Nacos server, create a gitegg-service-system.yaml configuration, copy the contents of application.yml into it, then delete application.yml. In Nacos Spring Cloud, the full dataId format is:

${prefix}-${spring.profiles.active}.${file-extension}

- prefix defaults to the value of spring.application.name and can be overridden with spring.cloud.nacos.config.prefix.
- spring.profiles.active is the profile of the current environment; see the Spring Boot documentation. Note: when spring.profiles.active is empty, the `-` separator disappears and the dataId format becomes ${prefix}.${file-extension}.
- file-extension is the data format of the configuration content and is set with spring.cloud.nacos.config.file-extension. Currently only properties and yaml are supported.

```yaml
spring:
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    url: jdbc:mysql://127.0.0.1/gitegg_cloud?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true
    username: root
    password: root
    initialSize: 1
    minIdle: 3
    maxActive: 20
    # 配置获取连接等待超时的时间
    maxWait: 60000
    # 配置间隔多久才进行一次检测,检测需要关闭的空闲连接,单位是毫秒
    timeBetweenEvictionRunsMillis: 60000
    # 配置一个连接在池中最小生存的时间,单位是毫秒
    minEvictableIdleTimeMillis: 30000
    validationQuery: select 'x'
    testWhileIdle: true
    testOnBorrow: false
    testOnReturn: false
    # 打开PSCache,并且指定每个连接上PSCache的大小
    poolPreparedStatements: true
    maxPoolPreparedStatementPerConnectionSize: 20
    # 配置监控统计拦截的filters,去掉后监控界面sql无法统计,'wall'用于防火墙
    filters: config,stat,slf4j
    # 通过connectProperties属性来打开mergeSql功能;慢SQL记录
    connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000;
    # 合并多个DruidDataSource的监控数据
    useGlobalDataSourceStat: true
mybatis-plus:
  mapper-locations: classpath*:/com/gitegg/*/*/mapper/*Mapper.xml
  typeAliasesPackage: com.gitegg.*.*.entity
  global-config:
    # 主键类型 0:"数据库ID自增", 1:"用户输入ID", 2:"全局唯一ID (数字类型唯一ID)", 3:"全局唯一ID UUID"
    id-type: 2
    # 字段策略 0:"忽略判断", 1:"非NULL判断", 2:"非空判断"
    field-strategy: 2
    # 驼峰下划线转换
    db-column-underline: true
    # 刷新mapper 调试神器
    refresh-mapper: true
    # 数据库大写下划线转换
    #capital-mode: true
    # 逻辑删除配置
    logic-delete-value: 1
    logic-not-delete-value: 0
  configuration:
    map-underscore-to-camel-case: true
    cache-enabled: false
```

4、With this in place the configuration can be read. Add test code in SystemController.java that reads a single property; add @RefreshScope if the value should refresh in real time:

```java
package com.gitegg.service.system.controller;

import com.gitegg.platform.boot.common.base.Result;
import com.gitegg.platform.boot.common.exception.BusinessException;
import com.gitegg.service.system.dto.SystemDTO;
import com.gitegg.service.system.service.ISystemService;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import lombok.RequiredArgsConstructor;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.*;

import javax.validation.Valid;

@RestController
@RequestMapping(value = "system")
@RequiredArgsConstructor(onConstructor_ = @Autowired)
@Api(tags = "gitegg-system")
@RefreshScope
public class SystemController {

    private final ISystemService systemService;

    @Value("${spring.datasource.maxActive}")
    private String nacosMaxActiveType;

    @GetMapping(value = "list")
    @ApiOperation(value = "system list接口")
    public Object list() {
        return systemService.list();
    }

    @GetMapping(value = "page")
    @ApiOperation(value = "system page接口")
    public Object page() {
        return systemService.page();
    }

    @GetMapping(value = "exception")
    @ApiOperation(value = "自定义异常及返回测试接口")
    public Result<String> exception() {
        return Result.data(systemService.exception());
    }

    @PostMapping(value = "valid")
    @ApiOperation(value = "参数校验测试接口")
    public Result<SystemDTO> valid(@Valid @RequestBody SystemDTO systemDTO) {
        return Result.data(systemDTO);
    }

    @PostMapping(value = "nacos")
    @ApiOperation(value = "Nacos读取配置文件测试接口")
    public Result<String> nacos() {
        return Result.data(nacosMaxActiveType);
    }
}
```

5、Start the project, open http://127.0.0.1:8001/doc.html in a browser, and call the Nacos config test endpoint from the menu to see the value read from configuration. Because @RefreshScope is present, real-time refresh can be tested: change spring.datasource.maxActive in Nacos, call the endpoint again, and the returned value reflects the update.
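The dataId assembly rule described in step 3 is easy to get wrong when `spring.profiles.active` is empty, so here is a minimal sketch of the resolution logic (the class name is ours, not part of Nacos):

```java
// Hypothetical helper mirroring the documented Nacos dataId rule:
// ${prefix}-${spring.profiles.active}.${file-extension}, with the "-"
// separator dropped when the active profile is empty.
public class DataIdResolver {

    static String resolve(String prefix, String activeProfile, String fileExtension) {
        if (activeProfile == null || activeProfile.isEmpty()) {
            return prefix + "." + fileExtension;
        }
        return prefix + "-" + activeProfile + "." + fileExtension;
    }

    public static void main(String[] args) {
        System.out.println(resolve("gitegg-service-system", "", "yml"));    // gitegg-service-system.yml
        System.out.println(resolve("gitegg-service-system", "dev", "yml")); // gitegg-service-system-dev.yml
    }
}
```

This explains why the configuration created in step 3 is named gitegg-service-system.yaml: no profile is active, so the dataId is just the application name plus the file extension.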

SpringCloud Microservices in Action - Building an Enterprise-Grade Development Framework (9): Discovering, Configuring, and Managing Microservices with Nacos

Nacos is a dynamic service discovery, configuration management, and service management platform that makes it easier to build cloud-native applications, with a simple feature set for dynamic service discovery, service configuration, service metadata, and traffic management. For Nacos deployment, see the Nacos installation guide: https://www.jianshu.com/p/2e065c15d730

1、Just as with the earlier custom Spring Boot extensions, create a new gitegg-platform-cloud subproject in GitEgg_Platform; this project holds the customizations and extensions for Spring Cloud.

2、Add the Spring Cloud Alibaba dependency to the gitegg-platform-bom subproject of GitEgg_Platform:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.3.3.RELEASE</version>
        <relativePath />
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.gitegg.platform</groupId>
    <artifactId>gitegg-platform-bom</artifactId>
    <name>${project.artifactId}</name>
    <version>${gitegg.project.version}</version>
    <packaging>pom</packaging>

    <properties>
        <!-- jdk版本1.8 -->
        <java.version>1.8</java.version>
        <!-- maven-compiler-plugin插件版本,Java代码编译 -->
        <maven.plugin.version>3.8.1</maven.plugin.version>
        <!-- maven编译时指定编码UTF-8 -->
        <maven.compiler.encoding>UTF-8</maven.compiler.encoding>
        <!-- 项目统一字符集编码UTF-8 -->
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <!-- 项目统一字符集编码UTF-8 -->
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <!-- GitEgg项目统一设置版本号 -->
        <gitegg.project.version>1.0-SNAPSHOT</gitegg.project.version>
        <!-- mysql数据库驱动 -->
        <mysql.connector.version>8.0.17</mysql.connector.version>
        <!-- postgresql数据库驱动 -->
        <postgresql.connector.version>9.1-901.jdbc4</postgresql.connector.version>
        <!-- 数据库连接池Druid -->
        <druid.version>1.1.23</druid.version>
        <!-- Mybatis Plus增强工具 -->
        <mybatis.plus.version>3.4.0</mybatis.plus.version>
        <!-- Knife4j Swagger2文档 -->
        <knife4j.version>3.0.1</knife4j.version>
        <!-- Spring Cloud Alibaba -->
        <spring.cloud.alibaba>2.2.3.RELEASE</spring.cloud.alibaba>
    </properties>

    <dependencyManagement>
        <dependencies>
            <!-- gitegg数据库驱动及连接池 -->
            <dependency>
                <groupId>com.gitegg.platform</groupId>
                <artifactId>gitegg-platform-db</artifactId>
                <version>${gitegg.project.version}</version>
            </dependency>
            <!-- gitegg mybatis-plus -->
            <dependency>
                <groupId>com.gitegg.platform</groupId>
                <artifactId>gitegg-platform-mybatis</artifactId>
                <version>${gitegg.project.version}</version>
            </dependency>
            <!-- gitegg swagger2-knife4j -->
            <dependency>
                <groupId>com.gitegg.platform</groupId>
                <artifactId>gitegg-platform-swagger</artifactId>
                <version>${gitegg.project.version}</version>
            </dependency>
            <!-- gitegg boot自定义扩展 -->
            <dependency>
                <groupId>com.gitegg.platform</groupId>
                <artifactId>gitegg-platform-boot</artifactId>
                <version>${gitegg.project.version}</version>
            </dependency>
            <!-- gitegg cloud自定义扩展 -->
            <dependency>
                <groupId>com.gitegg.platform</groupId>
                <artifactId>gitegg-platform-cloud</artifactId>
                <version>${gitegg.project.version}</version>
            </dependency>
            <!-- mysql数据库驱动 -->
            <dependency>
                <groupId>mysql</groupId>
                <artifactId>mysql-connector-java</artifactId>
                <version>${mysql.connector.version}</version>
            </dependency>
            <!-- postgresql数据库驱动 -->
            <dependency>
                <groupId>postgresql</groupId>
                <artifactId>postgresql</artifactId>
                <version>${postgresql.connector.version}</version>
            </dependency>
            <!-- 数据库连接池 -->
            <dependency>
                <groupId>com.alibaba</groupId>
                <artifactId>druid-spring-boot-starter</artifactId>
                <version>${druid.version}</version>
            </dependency>
            <!-- Mybatis Plus增强工具 -->
            <dependency>
                <groupId>com.baomidou</groupId>
                <artifactId>mybatis-plus-boot-starter</artifactId>
                <version>${mybatis.plus.version}</version>
            </dependency>
            <!-- Swagger2 knife4j bom方式引入 -->
            <dependency>
                <groupId>com.github.xiaoymin</groupId>
                <artifactId>knife4j-dependencies</artifactId>
                <version>${knife4j.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
            <!-- Spring Cloud Alibaba -->
            <dependency>
                <groupId>com.alibaba.cloud</groupId>
                <artifactId>spring-cloud-alibaba-dependencies</artifactId>
                <version>${spring.cloud.alibaba}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>
```

3、Add spring-cloud-starter-alibaba-nacos-discovery to the gitegg-platform-cloud project:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>GitEgg-Platform</artifactId>
        <groupId>com.gitegg.platform</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>gitegg-platform-cloud</artifactId>
    <name>${project.artifactId}</name>
    <version>${project.parent.version}</version>
    <packaging>jar</packaging>
    <dependencies>
        <!-- Nacos 服务注册发现-->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
        </dependency>
    </dependencies>
</project>
```

4、Re-install the GitEgg_Platform project, then add gitegg-platform-cloud to the gitegg-service subproject of GitEgg_Cloud:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>GitEgg-Cloud</artifactId>
        <groupId>com.gitegg.cloud</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>gitegg-service</artifactId>
    <packaging>pom</packaging>
    <modules>
        <module>gitegg-service-base</module>
        <module>gitegg-service-bigdata</module>
        <module>gitegg-service-system</module>
    </modules>
    <dependencies>
        <!-- gitegg Spring Boot自定义及扩展 -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-boot</artifactId>
        </dependency>
        <!-- gitegg Spring Cloud自定义及扩展 -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-cloud</artifactId>
        </dependency>
        <!-- gitegg数据库驱动及连接池 -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-db</artifactId>
        </dependency>
        <!-- gitegg mybatis-plus -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-mybatis</artifactId>
        </dependency>
        <!-- gitegg swagger2-knife4j -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-swagger</artifactId>
        </dependency>
        <!-- spring boot web核心包 -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <!-- spring boot 健康监控 -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
    </dependencies>
</project>
```

5、Modify application.yml and add the nacos configuration:

```yaml
server:
  port: 8001
spring:
  application:
    name: gitegg-service-system
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    url: jdbc:mysql://127.0.0.1/gitegg_cloud?zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=utf8&allowMultiQueries=true
    username: root
    password: root
    initialSize: 1
    minIdle: 3
    maxActive: 20
    # 配置获取连接等待超时的时间
    maxWait: 60000
    # 配置间隔多久才进行一次检测,检测需要关闭的空闲连接,单位是毫秒
    timeBetweenEvictionRunsMillis: 60000
    # 配置一个连接在池中最小生存的时间,单位是毫秒
    minEvictableIdleTimeMillis: 30000
    validationQuery: select 'x'
    testWhileIdle: true
    testOnBorrow: false
    testOnReturn: false
    # 打开PSCache,并且指定每个连接上PSCache的大小
    poolPreparedStatements: true
    maxPoolPreparedStatementPerConnectionSize: 20
    # 配置监控统计拦截的filters,去掉后监控界面sql无法统计,'wall'用于防火墙
    filters: config,stat,slf4j
    # 通过connectProperties属性来打开mergeSql功能;慢SQL记录
    connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000;
    # 合并多个DruidDataSource的监控数据
    useGlobalDataSourceStat: true
mybatis-plus:
  mapper-locations: classpath*:/com/gitegg/*/*/mapper/*Mapper.xml
  typeAliasesPackage: com.gitegg.*.*.entity
  global-config:
    # 主键类型 0:"数据库ID自增", 1:"用户输入ID", 2:"全局唯一ID (数字类型唯一ID)", 3:"全局唯一ID UUID"
    id-type: 2
    # 字段策略 0:"忽略判断", 1:"非NULL判断", 2:"非空判断"
    field-strategy: 2
    # 驼峰下划线转换
    db-column-underline: true
    # 刷新mapper 调试神器
    refresh-mapper: true
    # 数据库大写下划线转换
    #capital-mode: true
    # 逻辑删除配置
    logic-delete-value: 1
    logic-not-delete-value: 0
  configuration:
    map-underscore-to-camel-case: true
    cache-enabled: false
```

6、Add the @EnableDiscoveryClient annotation to GitEggSystemApplication.java, then run GitEggSystemApplication:

```java
package com.gitegg.service.system;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.ComponentScan;

/**
 * gitegg-system 启动类
 */
@EnableDiscoveryClient
@ComponentScan(basePackages = "com.gitegg")
@MapperScan("com.gitegg.*.*.mapper")
@SpringBootApplication
public class GitEggSystemApplication {

    public static void main(String[] args) {
        SpringApplication.run(GitEggSystemApplication.class, args);
    }
}
```

7、Open the Nacos console in a browser and click Service List in the left menu; the service now appears as registered with Nacos.

SpringCloud Microservices in Action - Building an Enterprise-Grade Development Framework (8): Validating Microservice Request Parameters with Annotations

Parameter validation comes up constantly in day-to-day development. Writing the checks inline in business logic is redundant and inconvenient, cluttering the code with repetition, so here we validate with annotations instead. The validation annotations commonly used in Spring Boot:

- @AssertFalse — the annotated element must be a Boolean with value false
- @AssertTrue — the annotated element must be a Boolean with value true
- @DecimalMax — the annotated element must be a number whose value is less than or equal to the given maximum
- @DecimalMin — the annotated element must be a number whose value is greater than or equal to the given minimum
- @Digits — the annotated element must be a number with the specified number of digits
- @Future — the annotated element must be a date in the future
- @Max — the annotated element must be a number less than or equal to the given value
- @Min — the annotated element must be a number greater than or equal to the given value
- @Range — the annotated element must fall within the given range
- @NotNull — the annotated element must not be null
- @NotBlank — the annotated element must contain at least one non-whitespace character
- @Null — the annotated element must be null
- @Past — the annotated element must be a date in the past
- @PastOrPresent — the annotated element must be a date in the past or the present
- @Pattern — the annotated element must match the given regular expression
- @Size — the annotated element must be a String, collection, or array whose size falls within the given bounds
- @Email — the annotated element must be a well-formed email address

1、Add the spring-boot-starter-validation dependency to the gitegg-platform-boot subproject of GitEgg-Platform. Since Spring Boot 2.3.x, spring-boot-starter-web no longer pulls in the validation framework by default, so it must be added manually; pom.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>GitEgg-Platform</artifactId>
        <groupId>com.gitegg.platform</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>gitegg-platform-boot</artifactId>
    <name>${project.artifactId}</name>
    <version>${project.parent.version}</version>
    <packaging>jar</packaging>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-validation</artifactId>
        </dependency>
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-swagger</artifactId>
            <optional>true</optional>
        </dependency>
    </dependencies>
</project>
```

2、Re-install the GitEgg-Platform project, then create SystemDTO.java in the gitegg-service-system subproject of GitEgg-Cloud:

```java
package com.gitegg.service.system.dto;

import lombok.Data;

import javax.validation.constraints.*;

@Data
public class SystemDTO {

    @NotNull
    @Min(value = 10, message = "id必须大于10")
    @Max(value = 150, message = "id必须小于150")
    private Long id;

    @NotNull(message = "名称不能为空")
    @Size(min = 3, max = 20, message = "名称长度必须在3-20之间")
    private String name;
}
```

3、Add a parameter-validation test endpoint to SystemController.java:

```java
package com.gitegg.service.system.controller;

import com.gitegg.platform.boot.common.base.Result;
import com.gitegg.platform.boot.common.exception.BusinessException;
import com.gitegg.service.system.dto.SystemDTO;
import com.gitegg.service.system.service.ISystemService;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import lombok.AllArgsConstructor;
import org.springframework.web.bind.annotation.*;

import javax.validation.Valid;

@RestController
@RequestMapping(value = "system")
@AllArgsConstructor
@Api(tags = "gitegg-system")
public class SystemController {

    private final ISystemService systemService;

    @GetMapping(value = "list")
    @ApiOperation(value = "system list接口")
    public Object list() {
        return systemService.list();
    }

    @GetMapping(value = "page")
    @ApiOperation(value = "system page接口")
    public Object page() {
        return systemService.page();
    }

    @GetMapping(value = "exception")
    @ApiOperation(value = "自定义异常及返回测试接口")
    public Result<String> exception() {
        return Result.data(systemService.exception());
    }

    @PostMapping(value = "valid")
    @ApiOperation(value = "参数校验测试接口")
    public Result<SystemDTO> valid(@Valid @RequestBody SystemDTO systemDTO) {
        return Result.data(systemDTO);
    }
}
```

4、Run GitEggSystemApplication.java, open http://127.0.0.1:8001/doc.html in a browser, click the parameter-validation test endpoint on the left, and test it with Swagger2 to see the validation result.

5、The messages shown there go through the unified exception handling introduced in the previous chapter:

```java
/**
 * 非法请求-参数校验
 */
@ExceptionHandler(value = {MethodArgumentNotValidException.class})
public Result handlerMethodArgumentNotValidException(MethodArgumentNotValidException methodArgumentNotValidException) {
    // Collect each invalid field and its message into one string
    StringBuffer stringBuffer = new StringBuffer();
    methodArgumentNotValidException.getBindingResult().getFieldErrors().stream()
            .map(t -> t.getField() + t.getDefaultMessage() + ";")
            .forEach(e -> stringBuffer.append(e));
    String errorMessage = stringBuffer.toString();
    Result result = Result.error(ResultCodeEnum.PARAM_VALID_ERROR, errorMessage);
    return result;
}
```
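The concatenation performed by that handler can be reproduced in isolation. This is an illustrative, dependency-free stand-in (the class and the sample messages are ours) for how field errors are joined into the single message returned to the client: field name plus default message plus `;`, with no extra separator, in field order.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative only: joins field/message pairs the same way the advice method
// does, i.e. fieldName + defaultMessage + ";" concatenated in order.
public class FieldErrorJoiner {

    static String join(Map<String, String> fieldErrors) {
        return fieldErrors.entrySet().stream()
                .map(e -> e.getKey() + e.getValue() + ";")
                .collect(Collectors.joining());
    }

    public static void main(String[] args) {
        // LinkedHashMap preserves insertion order, like the binding result's field list
        Map<String, String> errors = new LinkedHashMap<>();
        errors.put("id", "must be >= 10");
        errors.put("name", "must not be empty");
        System.out.println(join(errors));
        // prints: idmust be >= 10;namemust not be empty;
    }
}
```

Note that the real handler produces the same back-to-back format, which is why the Swagger2 test in step 4 shows messages such as "id必须大于10;" with the field name fused to the message.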

SpringCloud Microservices in Action - Building an Enterprise-Grade Development Framework (7): Custom Common Response Messages and Unified Exception Handling

In everyday development we inevitably have to handle all kinds of exceptions, so we define a unified exception scheme in the common module. Spring Boot provides the @RestControllerAdvice annotation for unified exception handling. Create a new gitegg-platform-boot subproject in GitEgg_Platform; this project holds the customizations and extensions for Spring Boot.

1、Modify the pom.xml of gitegg-platform-boot, adding the spring-boot-starter-web and swagger dependencies with optional set to true so the dependencies do not propagate between projects:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <optional>true</optional>
</dependency>
<dependency>
    <groupId>com.gitegg.platform</groupId>
    <artifactId>gitegg-platform-swagger</artifactId>
    <optional>true</optional>
</dependency>
```

2、Define the common response classes: Result for plain responses and PageResult for paged responses.

Result.java:

```java
package com.gitegg.platform.boot.common.base;

import com.gitegg.platform.boot.common.enums.ResultCodeEnum;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.*;

/**
 * @ClassName: Result
 * @Description: 自定义通用响应类
 * @author GitEgg
 * @date 2020年09月19日 下午9:24:50
 */
@ApiModel(description = "通用响应类")
@Getter
@ToString
public class Result<T> {

    @ApiModelProperty(value = "是否成功", required = true)
    private boolean success;

    @ApiModelProperty(value = "响应代码", required = true)
    private int code;

    @ApiModelProperty(value = "提示信息", required = true)
    private String msg;

    @ApiModelProperty(value = "响应数据")
    private T data;

    private Result(int code, T data, String msg) {
        this.success = ResultCodeEnum.SUCCESS.code == code;
        this.code = code;
        this.msg = msg;
        this.data = data;
    }

    private Result(ResultCodeEnum resultCodeEnum) {
        this(resultCodeEnum.code, null, resultCodeEnum.msg);
    }

    private Result(ResultCodeEnum resultCodeEnum, String msg) {
        this(resultCodeEnum, null, msg);
    }

    private Result(ResultCodeEnum resultCodeEnum, T data) {
        this(resultCodeEnum, data, resultCodeEnum.msg);
    }

    private Result(ResultCodeEnum resultCodeEnum, T data, String msg) {
        this(resultCodeEnum.code, data, msg);
    }

    public static <T> Result<T> data(T data) {
        return data(data, ResultCodeEnum.SUCCESS.msg);
    }

    public static <T> Result<T> data(T data, String msg) {
        return data(ResultCodeEnum.SUCCESS.code, data, msg);
    }

    public static <T> Result<T> data(int code, T data, String msg) {
        return new Result<>(code, data, msg);
    }

    public static <T> Result<T> success() {
        return new Result<>(ResultCodeEnum.SUCCESS);
    }

    public static <T> Result<T> success(String msg) {
        return new Result<>(ResultCodeEnum.SUCCESS, msg);
    }

    public static <T> Result<T> success(ResultCodeEnum resultCodeEnum) {
        return new Result<>(resultCodeEnum);
    }

    public static <T> Result<T> success(ResultCodeEnum resultCodeEnum, String msg) {
        return new Result<>(resultCodeEnum, msg);
    }

    public static <T> Result<T> error() {
        return new Result<>(ResultCodeEnum.ERROR, ResultCodeEnum.ERROR.msg);
    }

    public static <T> Result<T> error(String msg) {
        return new Result<>(ResultCodeEnum.ERROR, msg);
    }

    public static <T> Result<T> error(int code, String msg) {
        return new Result<>(code, null, msg);
    }

    public static <T> Result<T> error(ResultCodeEnum resultCodeEnum) {
        return new Result<>(resultCodeEnum);
    }

    public static <T> Result<T> error(ResultCodeEnum resultCodeEnum, String msg) {
        return new Result<>(resultCodeEnum, msg);
    }

    public static <T> Result<T> result(boolean flag) {
        return flag ? Result.success("操作成功") : Result.error("操作失败");
    }
}
```

PageResult.java:

```java
package com.gitegg.platform.boot.common.base;

import java.util.List;

import com.gitegg.platform.boot.common.enums.ResultCodeEnum;
import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;
import lombok.Data;

/**
 * @ClassName: PageResult
 * @Description: 通用分页返回
 * @author GitEgg
 * @param <T>
 */
@Data
@ApiModel("通用分页响应类")
public class PageResult<T> {

    @ApiModelProperty(value = "是否成功", required = true)
    private boolean success;

    @ApiModelProperty(value = "响应代码", required = true)
    private int code;

    @ApiModelProperty(value = "提示信息", required = true)
    private String msg;

    @ApiModelProperty(value = "总数量", required = true)
    private long count;

    @ApiModelProperty(value = "分页数据")
    private List<T> data;

    public PageResult(long total, List<T> rows) {
        this.count = total;
        this.data = rows;
        this.code = ResultCodeEnum.SUCCESS.code;
        this.msg = ResultCodeEnum.SUCCESS.msg;
    }
}
```

3、Define the common response-code enum ResultCodeEnum:

```java
package com.gitegg.platform.boot.common.enums;

/**
 * @ClassName: ResultCodeEnum
 * @Description: 自定义返回码枚举
 * @author GitEgg
 * @date 2020年09月19日 下午11:49:45
 */
public enum ResultCodeEnum {

    SUCCESS(200, "操作成功"),

    /** 系统错误 */
    ERROR(500, "系统错误"),

    /** 操作失败 */
    FAILED(101, "操作失败"),

    /** 未登录/登录超时 */
    UNAUTHORIZED(102, "登录超时"),

    /** 参数错误 */
    PARAM_ERROR(103, "参数错误"),

    /** 参数错误-已存在 */
    INVALID_PARAM_EXIST(104, "请求参数已存在"),

    /** 参数错误 */
    INVALID_PARAM_EMPTY(105, "请求参数为空"),

    /** 参数错误 */
    PARAM_TYPE_MISMATCH(106, "参数类型不匹配"),

    /** 参数错误 */
    PARAM_VALID_ERROR(107, "参数校验失败"),

    /** 参数错误 */
    ILLEGAL_REQUEST(108, "非法请求"),

    /** 验证码错误 */
    INVALID_VCODE(204, "验证码错误"),

    /** 用户名或密码错误 */
    INVALID_USERNAME_PASSWORD(205, "账号或密码错误"),

    INVALID_RE_PASSWORD(206, "两次输入密码不一致"),

    /** 旧密码错误 */
    INVALID_OLD_PASSWORD(207, "旧密码错误"),

    /** 用户名重复 */
    USERNAME_ALREADY_IN(208, "用户名已存在"),

    /** 用户不存在 */
    INVALID_USERNAME(209, "用户名不存在"),

    /** 角色不存在 */
    INVALID_ROLE(210, "角色不存在"),

    /** 角色使用中 */
    ROLE_USED(211, "角色使用中,不可删除"),

    /** 没有权限 */
    NO_PERMISSION(403, "当前用户无该接口权限");

    public int code;

    public String msg;

    ResultCodeEnum(int code, String msg) {
        this.code = code;
        this.msg = msg;
    }

    public int getCode() {
        return code;
    }

    public void setCode(int code) {
        this.code = code;
    }

    public String getMsg() {
        return msg;
    }

    public void setMsg(String msg) {
        this.msg = msg;
    }
}
```

4、Define the custom exception classes BusinessException and SystemException:

```java
package com.gitegg.platform.boot.common.exception;

import com.gitegg.platform.boot.common.enums.ResultCodeEnum;
import lombok.Getter;
import lombok.Setter;

/**
 * @ClassName: BusinessException
 * @Description: 业务处理异常
 * @author GitEgg
 */
@Getter
@Setter
public class BusinessException extends RuntimeException {

    private int code;

    private String msg;

    public BusinessException() {
        this.code = ResultCodeEnum.FAILED.code;
        this.msg = ResultCodeEnum.FAILED.msg;
    }

    public BusinessException(String message) {
        this.code = ResultCodeEnum.FAILED.code;
        this.msg = message;
    }

    public BusinessException(int code, String msg) {
        this.code = code;
        this.msg = msg;
    }

    public BusinessException(Throwable cause) {
        super(cause);
    }

    public BusinessException(String message, Throwable cause) {
        super(message, cause);
    }
}
```

```java
package com.gitegg.platform.boot.common.exception;

import com.gitegg.platform.boot.common.enums.ResultCodeEnum;
import lombok.Getter;

/**
 * @ClassName: SystemException
 * @Description: 系统处理异常
 * @author GitEgg
 */
@Getter
public class SystemException extends RuntimeException {

    private int code;

    private String msg;

    public SystemException() {
        this.code = ResultCodeEnum.ERROR.code;
        this.msg = ResultCodeEnum.ERROR.msg;
    }

    public SystemException(String message) {
        this.code = ResultCodeEnum.ERROR.code;
        this.msg = message;
    }

    public SystemException(int code, String msg) {
        this.code = code;
        this.msg = msg;
    }

    public SystemException(Throwable cause) {
        super(cause);
    }

    public SystemException(String message, Throwable cause) {
        super(message, cause);
    }
}
```

5、Define the unified exception handler GitEggControllerAdvice.java:

```java
package com.gitegg.platform.boot.common.advice;

import com.gitegg.platform.boot.common.base.Result;
```
import com.gitegg.platform.boot.common.enums.ResultCodeEnum; import com.gitegg.platform.boot.common.exception.BusinessException; import com.gitegg.platform.boot.common.exception.SystemException; import lombok.extern.slf4j.Slf4j; import org.springframework.beans.factory.annotation.Value; import org.springframework.http.converter.HttpMessageNotReadableException; import org.springframework.ui.Model; import org.springframework.web.HttpMediaTypeNotAcceptableException; import org.springframework.web.HttpMediaTypeNotSupportedException; import org.springframework.web.HttpRequestMethodNotSupportedException; import org.springframework.web.bind.MethodArgumentNotValidException; import org.springframework.web.bind.MissingPathVariableException; import org.springframework.web.bind.MissingServletRequestParameterException; import org.springframework.web.bind.WebDataBinder; import org.springframework.web.bind.annotation.ExceptionHandler; import org.springframework.web.bind.annotation.InitBinder; import org.springframework.web.bind.annotation.ModelAttribute; import org.springframework.web.bind.annotation.RestControllerAdvice; import org.springframework.web.method.annotation.MethodArgumentTypeMismatchException; import org.springframework.web.servlet.NoHandlerFoundException; import javax.annotation.PostConstruct; import javax.servlet.http.HttpServletRequest; import javax.validation.ConstraintViolationException; @Slf4j @RestControllerAdvice public class GitEggControllerAdvice { * 服务名 @Value("${spring.application.name}") private String serverName; * 微服务系统标识 private String errorSystem; @PostConstruct public void init() { this.errorSystem = new StringBuffer() .append(this.serverName) .append(": ").toString(); * 应用到所有@RequestMapping注解方法,在其执行之前初始化数据绑定器 @InitBinder public void initBinder(WebDataBinder binder) { * 把值绑定到Model中,使全局@RequestMapping可以获取到该值 @ModelAttribute public void addAttributes(Model model) { * 全局异常捕捉处理 @ExceptionHandler(value = {Exception.class}) public Result 
handlerException(Exception exception, HttpServletRequest request) { log.error("请求路径uri={},系统内部出现异常:{}", request.getRequestURI(), exception); Result result = Result.error(ResultCodeEnum.ERROR, errorSystem + exception.toString()); return result; * 非法请求异常 @ExceptionHandler(value = { HttpMediaTypeNotAcceptableException.class, HttpMediaTypeNotSupportedException.class, HttpRequestMethodNotSupportedException.class, MissingServletRequestParameterException.class, NoHandlerFoundException.class, MissingPathVariableException.class, HttpMessageNotReadableException.class public Result handlerSpringAOPException(Exception exception) { Result result = Result.error(ResultCodeEnum.ILLEGAL_REQUEST, errorSystem + exception.getMessage()); return result; * 非法请求异常-参数类型不匹配 @ExceptionHandler(value = MethodArgumentTypeMismatchException.class) public Result handlerSpringAOPException(MethodArgumentTypeMismatchException exception) { Result result = Result.error(ResultCodeEnum.PARAM_TYPE_MISMATCH, errorSystem + exception.getMessage()); return result; * 非法请求-参数校验 @ExceptionHandler(value = {MethodArgumentNotValidException.class}) public Result handlerMethodArgumentNotValidException(MethodArgumentNotValidException methodArgumentNotValidException) { //获取异常字段及对应的异常信息 StringBuffer stringBuffer = new StringBuffer(); methodArgumentNotValidException.getBindingResult().getFieldErrors().stream() .map(t -> t.getField()+"=>"+t.getDefaultMessage()+" ") .forEach(e -> stringBuffer.append(e)); String errorMessage = stringBuffer.toString(); Result result = Result.error(ResultCodeEnum.PARAM_VALID_ERROR, errorSystem + errorMessage); return result; * 非法请求异常-参数校验 @ExceptionHandler(value = {ConstraintViolationException.class}) public Result handlerConstraintViolationException(ConstraintViolationException constraintViolationException) { String errorMessage = constraintViolationException.getLocalizedMessage(); Result result = Result.error(ResultCodeEnum.PARAM_VALID_ERROR, errorSystem + errorMessage); return result; * 
自定义业务异常-BusinessException @ExceptionHandler(value = {BusinessException.class}) public Result handlerCustomException(BusinessException exception) { String errorMessage = exception.getMsg(); Result result = Result.error(exception.getCode(), errorSystem + errorMessage); return result; * 自定义系统异常-SystemException @ExceptionHandler(value = {SystemException.class}) public Result handlerCustomException(SystemException exception) { String errorMessage = exception.getMsg(); Result result = Result.error(exception.getCode(), errorSystem + errorMessage); return result; }6、重新将GitEgg-Platform进行install,在GitEgg-Cloud中的gitegg-service引入gitegg-platform-boot<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <parent> <artifactId>GitEgg-Cloud</artifactId> <groupId>com.gitegg.cloud</groupId> <version>1.0-SNAPSHOT</version> </parent> <modelVersion>4.0.0</modelVersion> <artifactId>gitegg-service</artifactId> <packaging>pom</packaging> <modules> <module>gitegg-service-base</module> <module>gitegg-service-bigdata</module> <module>gitegg-service-system</module> </modules> <dependencies> <!-- gitegg Spring Boot自定义及扩展 --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-boot</artifactId> </dependency> <!-- gitegg数据库驱动及连接池 --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-db</artifactId> </dependency> <!-- gitegg mybatis-plus --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-mybatis</artifactId> </dependency> <!-- gitegg swagger2-knife4j --> <dependency> <groupId>com.gitegg.platform</groupId> <artifactId>gitegg-platform-swagger</artifactId> </dependency> <!-- spring boot web核心包 --> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <!-- spring 
boot 健康监控 --> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> </dependencies> </project>7、修改SystemController.java、ISystemService.java和SystemServiceImpl.java增加异常处理的测试代码SystemController.java:package com.gitegg.service.system.controller; import com.gitegg.platform.boot.common.base.Result; import com.gitegg.platform.boot.common.exception.BusinessException; import com.gitegg.service.system.service.ISystemService; import io.swagger.annotations.Api; import io.swagger.annotations.ApiOperation; import lombok.AllArgsConstructor; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RestController; @RestController @RequestMapping(value = "system") @AllArgsConstructor @Api(tags = "gitegg-system") public class SystemController { private final ISystemService systemService; @GetMapping(value = "list") @ApiOperation(value = "system list接口") public Object list() { return systemService.list(); @GetMapping(value = "page") @ApiOperation(value = "system page接口") public Object page() { return systemService.page(); @GetMapping(value = "exception") @ApiOperation(value = "自定义异常及返回测试接口") public Result<String> exception() { return Result.data(systemService.exception()); }ISystemService.java:package com.gitegg.service.system.service; import com.baomidou.mybatisplus.extension.plugins.pagination.Page; import com.gitegg.service.system.entity.SystemTable; import java.util.List; public interface ISystemService { List<SystemTable> list(); Page<SystemTable> page(); String exception(); }SystemServiceImpl.java:package com.gitegg.service.system.service.impl; import com.baomidou.mybatisplus.extension.plugins.pagination.Page; import com.gitegg.platform.boot.common.exception.BusinessException; import com.gitegg.service.system.entity.SystemTable; import com.gitegg.service.system.mapper.SystemTableMapper; import 
com.gitegg.service.system.service.ISystemService; import lombok.AllArgsConstructor; import org.springframework.stereotype.Service; import java.util.List; @Service @AllArgsConstructor public class SystemServiceImpl implements ISystemService { private final SystemTableMapper systemTableMapper; @Override public List<SystemTable> list() { return systemTableMapper.list(); @Override public Page<SystemTable> page() { Page<SystemTable> page = new Page<>(1, 10); List<SystemTable> records = systemTableMapper.page(page); page.setRecords(records); return page; @Override public String exception() { throw new BusinessException("自定义异常"); // return "成功获得数据"; }8、运行GitEggSystemApplication,打开浏览器访问:http://127.0.0.1:8001/doc.html,然后点击左侧的异常处理接口,使用Swagger2进行测试,即可看到结果 image.png
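To make the flow above concrete, here is a minimal, framework-free sketch of how the advice layer turns a thrown `BusinessException` into the unified `Result` envelope. This is a standalone illustration, not the project code: the class names mirror the article's `Result` and `BusinessException`, but a plain try/catch stands in for Spring's `@RestControllerAdvice` + `@ExceptionHandler` dispatch, and the `"gitegg-service-system: "` prefix is a hypothetical stand-in for the `errorSystem` value.

```java
// Standalone sketch: how a thrown BusinessException becomes a unified Result.
// The try/catch below plays the role of GitEggControllerAdvice.
public class ResultAdviceDemo {

    static final class Result<T> {
        final int code; final T data; final String msg;
        Result(int code, T data, String msg) { this.code = code; this.data = data; this.msg = msg; }
        static <T> Result<T> data(T d) { return new Result<>(200, d, "操作成功"); }
        static <T> Result<T> error(int code, String msg) { return new Result<>(code, null, msg); }
    }

    static class BusinessException extends RuntimeException {
        final int code; final String msg;
        BusinessException(int code, String msg) { this.code = code; this.msg = msg; }
    }

    // Stand-in for the advice layer: run a "controller" call and translate
    // any exception into the uniform error envelope.
    static Result<String> handle(java.util.concurrent.Callable<String> controller) {
        try {
            return Result.data(controller.call());
        } catch (BusinessException e) {
            // mirrors handlerCustomException(BusinessException)
            return Result.error(e.code, "gitegg-service-system: " + e.msg);
        } catch (Exception e) {
            // mirrors the global catch-all handler
            return Result.error(500, "系统错误");
        }
    }

    public static void main(String[] args) {
        Result<String> ok = handle(() -> "成功获得数据");
        Result<String> err = handle(() -> { throw new BusinessException(101, "自定义异常"); });
        System.out.println(ok.code + "," + err.code); // prints 200,101
    }
}
```

The point of the pattern is that controllers only ever return `Result<T>` or throw; the advice layer guarantees that every error, expected or not, reaches the client in the same JSON shape.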

SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (6): Integrating Swagger2 API Documentation with knife4j

knife4j is an enhanced solution for generating API documentation from Swagger. It separates the back-end Java code from the front-end UI module, which makes it more flexible in a microservice architecture, and it focuses on enhancing Swagger itself rather than merely restyling the front-end UI. We therefore use knife4j in place of swagger-ui as our documentation tool.

1. In the GitEgg-Platform project, create a new `gitegg-platform-swagger` sub-project. In the `gitegg-platform-bom` sub-project of GitEgg-Platform, modify pom.xml to import knife4j as a Maven BOM:

```xml
<!-- Swagger2 knife4j, imported as a BOM -->
<dependency>
    <groupId>com.github.xiaoymin</groupId>
    <artifactId>knife4j-dependencies</artifactId>
    <version>${knife4j.version}</version>
    <type>pom</type>
    <scope>import</scope>
</dependency>
```

2. Add the knife4j dependency to the pom.xml of the `gitegg-platform-swagger` sub-project:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>GitEgg-Platform</artifactId>
        <groupId>com.gitegg.platform</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>gitegg-platform-swagger</artifactId>
    <name>${project.artifactId}</name>
    <version>${project.parent.version}</version>
    <packaging>jar</packaging>

    <dependencies>
        <dependency>
            <groupId>com.github.xiaoymin</groupId>
            <artifactId>knife4j-spring-boot-starter</artifactId>
        </dependency>
    </dependencies>
</project>
```

3. Create SwaggerConfig.java in the `gitegg-platform-swagger` sub-project:

```java
package com.gitegg.platform.swagger.config;

import com.github.xiaoymin.knife4j.spring.annotations.EnableKnife4j;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import springfox.bean.validators.configuration.BeanValidatorPluginsConfiguration;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
@EnableKnife4j
@Import(BeanValidatorPluginsConfiguration.class)
public class SwaggerConfig {

    @Bean(value = "GitEggApi")
    public Docket GitEggApi() {
        Docket docket = new Docket(DocumentationType.SWAGGER_2)
                .apiInfo(apiInfo())
                // Group name
                .groupName("2.X版本")
                .select()
                // Controller packages to scan
                .apis(RequestHandlerSelectors.basePackage("com.gitegg.*.*.controller"))
                .paths(PathSelectors.any())
                .build();
        return docket;
    }

    private ApiInfo apiInfo() {
        return new ApiInfoBuilder().version("1.0.0")
                .title("Spring Cloud Swagger2 文档")
                .description("Spring Cloud Swagger2 文档")
                .termsOfServiceUrl("www.gitegg.com")
                .build();
    }
}
```

4. Import `gitegg-platform-swagger` in the `gitegg-service` project:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>GitEgg-Cloud</artifactId>
        <groupId>com.gitegg.cloud</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>gitegg-service</artifactId>
    <packaging>pom</packaging>

    <modules>
        <module>gitegg-service-base</module>
        <module>gitegg-service-bigdata</module>
        <module>gitegg-service-system</module>
    </modules>

    <dependencies>
        <!-- gitegg database driver and connection pool -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-db</artifactId>
        </dependency>
        <!-- gitegg mybatis-plus -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-mybatis</artifactId>
        </dependency>
        <!-- gitegg swagger2-knife4j -->
        <dependency>
            <groupId>com.gitegg.platform</groupId>
            <artifactId>gitegg-platform-swagger</artifactId>
        </dependency>
        <!-- spring boot web -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <!-- spring boot actuator (health monitoring) -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
    </dependencies>
</project>
```

5. Add the Swagger2 annotations to SystemController.java in the `gitegg-service-system` project:

```java
package com.gitegg.service.system.controller;

import com.gitegg.service.system.service.ISystemService;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import lombok.AllArgsConstructor;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping(value = "system")
@AllArgsConstructor
@Api(tags = "gitegg-system")
public class SystemController {

    private final ISystemService systemService;

    @GetMapping(value = "list")
    @ApiOperation(value = "system list接口")
    public Object list() {
        return systemService.list();
    }

    @GetMapping(value = "page")
    @ApiOperation(value = "system page接口")
    public Object page() {
        return systemService.page();
    }
}
```

6. Add the component-scan annotation to GitEggSystemApplication.java so Spring loads the Swagger2 configuration at startup:

```java
package com.gitegg.service.system;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;

/**
 * gitegg-system bootstrap class.
 */
@ComponentScan(basePackages = "com.gitegg")
@MapperScan("com.gitegg.*.*.mapper")
@SpringBootApplication
public class GitEggSystemApplication {

    public static void main(String[] args) {
        SpringApplication.run(GitEggSystemApplication.class, args);
    }
}
```

7. Run `gitegg-service-system` and open http://127.0.0.1:8001/doc.html in a browser to see the Swagger2 documentation UI.

SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (44): [Microservice Monitoring and Alerting, Approach 1] A Simple Monitoring and Alerting System with Actuator + Spring Boot Admin
Keeping a business system running stably is critical. As one of Spring Boot's four core modules, Actuator lets you inspect the runtime state of a Spring Boot service at any time and is an essential component for keeping the system healthy. spring-boot-starter-actuator exposes a series of HTTP or JMX monitoring endpoints through which we can read runtime statistics; we can selectively enable the endpoints we need, or define custom ones. Since Actuator exposes its monitoring data as JSON, a UI is needed to present it; the most common choices are Spring Boot Admin or Prometheus + Grafana.
Using an IoT Mini-Program to Display Central Air-Conditioning Telemetry and Real-Time Running Status
The IoT mini-program framework provides excellent cross-platform support (AliOS Things, Ubuntu, Linux, MacOS, Windows, etc.) and offers multiple ways to update and upgrade applications, which can be chosen flexibly during business development. Through JSAPI it exposes the ability to call underlying system facilities, and it also supports wrapping custom JSAPI extensions, which is enough for business development to satisfy special requirements in a custom way. The framework adapts and optimizes its front-end, application, and graphics capabilities. Next, we set up the development environment according to the official steps and then build an IoT mini-program for the real-world scenario of central air-conditioning data collection and status display.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (41): Extending JustAuth + SpringSecurity + Vue for Third-Party Logins (WeChat QR Code, DingTalk QR Code, etc.) in a Multi-Tenant System
If our own system needs third-party login, we must implement a single sign-on client and then debug the login SDK of each platform we integrate with. JustAuth is a utility library for third-party OAuth login that wraps the SDKs of dozens of domestic and international providers; to implement third-party login we only need to include JustAuth and configure it, saving us from integrating each SDK separately. JustAuth's official getting-started guides make integration very convenient, but to fit our own framework's business requirements some adaptation is still needed, in two areas: supporting multi-tenancy, and matching third-party accounts to our own system's users.
4. Flutter Development: Importing and Upgrading the flutter-go Example
Because of Flutter upgrades, FlutterGo is no longer maintained, and the imported project only works on an old version. To adapt to the new versions of Flutter and Dart, we create a new project, upgrade flutter-go, and document what we learn. 1. Following the earlier chapters, create a new flutter_go Flutter project and modify its build.gradle file.
1. Install InfluxDB: wget https://dl.influxdata.com/influxdb/releases/influxdb-1.8.0.x86_64.rpm sudo yum localinstall influxdb-1.8.0.x86_64.rpm systemctl enable influxdb systemctl start influxdb
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (38): Building an ELK Log Collection and Analysis System
A good log analysis system records how the system runs in detail and helps us locate performance bottlenecks and diagnose problems. The previous article covered the various logging scenarios and how logs are recorded; once logs are captured, they need to be processed and analyzed, and a stack built on E(lasticsearch) L(ogstash) K(ibana) is currently the default choice at most companies. • Elasticsearch: a distributed, RESTful search and analytics engine that can quickly store, search, and analyze huge volumes of data; in ELK it stores all the log data.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (36): Using Spring Cloud Stream for a Configurable Message-Middleware Layer
In the past we used message queues by integrating a specific middleware's open-source client package, and there are many middleware implementations — currently the mainstream ones include ActiveMQ, RocketMQ, RabbitMQ, and Kafka — each with its own strengths and weaknesses. When designing the framework we considered whether, as with the SMS sending and distributed storage features implemented earlier, we could abstract a unified messaging interface that hides the underlying implementation, so that business code uses one interface and the middleware can be chosen per business need.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (35): Packaging and Deploying a Microservice Cluster with SpringCloud + Docker + k8s — Cluster Environment Deployment (Part 2)
• Default sonarqube credentials: admin/admin • Uninstall command: docker-compose -f jenkins-compose.yml down -v 6. Jenkins automated build and deployment configuration. There are many ways to deploy a project, from running a runnable jar directly on a JDK, to running the jar inside a Docker container, to the now-popular approach of running the jar and Docker inside a k8s pod. Each new deployment method improves on the previous one. Rather than weighing all the pros and cons here, we simply note why we use Kubernetes: it mainly provides elastic scaling, service discovery, and self-healing,
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (35): Packaging and Deploying a Microservice Cluster with SpringCloud + Docker + k8s — Cluster Environment Deployment (Part 1)
1. Cluster environment planning. Do not use one master with multiple workers in production; use multiple masters with multiple workers. Here we test with three hosts: one Master (172.16.20.111) and two Nodes (172.16.20.112 and 172.16.20.113). 1. Set the hostnames. After installing CentOS 7, configure a static IP identically on all three hosts: vi /etc/sysconfig/network-scripts/ifcfg-ens33 # at the bottom set ONBOOT to yes and add a static address IPADDR: 172.16.20.111, 172.16.20.112, 172.16.20.113 ONBOOT=yes IPADDR=172.16.20.111
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (33): Integrating Skywalking for Distributed Tracing
Skywalking was open-sourced by Chinese developer Wu Sheng (formerly an OneAPM engineer) and submitted to the Apache incubator. It draws on the design ideas of Zipkin/Pinpoint/CAT, supports non-intrusive instrumentation, and is an application performance monitoring system based on distributed tracing. The community has also formed an organization called OpenTracing to promote specifications and standards for call-chain monitoring. 1. Download Skywalking from https://skywalking.apache.org/downloads/#download-the-latest-versions and choose the release you need; here we pick the latest release, v8.4.0 for H2/MySQL/TiDB.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (31): A Custom MybatisPlus Code Generator for Front-End and Back-End Code
Ideally, code generation saves a lot of repetitive, low-skill work and produces code that follows a uniform standard and format, which is a great help in day-to-day development. But it has limits: when complex business logic is involved, simple code generation cannot solve the problem. There are plenty of code generators on the market; most work from existing code-logic templates and generate CRUD code according to fixed rules, while more complex code generation is still being explored in the AI field.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (30): Integrating EasyExcel for Spreadsheet Import and Export
Batch data import and statistical-analysis export have become all but mandatory system features. For performance and ease of use we integrate EasyExcel, a simple, memory-efficient Java library for reading and writing Excel that can handle files of hundreds of MB while using as little memory as possible. The well-known Java frameworks for parsing and generating Excel, Apache POI and JXL, share a serious problem: they are very memory-hungry. POI has a SAX-mode API that mitigates some out-of-memory issues, but it still has shortcomings — for example, decompressing an '07-format Excel file, and storing the decompressed data, both happen entirely in memory. EasyExcel rewrites POI's parsing of the '07 format,
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (28): Extending the MybatisPlus Plugin DataPermissionInterceptor for Data-Permission Control
A complete permission system needs to support both functional permissions and data permissions. Earlier we implemented functional permission control with an RBAC model; here we implement data-permission control by extending Mybatis-Plus's DataPermissionInterceptor plugin. Briefly: functional permissions, as the name suggests, control which operations a user may perform in the system, while data permissions control which data a user may access; data permissions are further divided into row-level and column-level permissions.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (27): Integrating Multiple Data Sources + Seata Distributed Transactions + Read/Write Splitting + Sharding
Read/write splitting: to keep the database product stable, many databases offer hot-standby replication — the first database server handles the create/update/delete traffic in production, while the second mainly serves reads. There are several ways to implement read/write splitting: a database middleware such as Mycat, which is deployed as a separate service and configured without intruding into business code; or a library such as dynamic-datasource / shardingsphere-jdbc, which is pulled into the business code as a jar for development.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (26): Extending OAuth2 for SMS Verification-Code Login
Our system has integrated an SMS notification service; here we extend OAuth2 so the system supports login by SMS verification code. 1. In gitegg-oauth, add SmsCaptchaTokenGranter, a custom token granter for SMS verification codes. 2. Add a custom GitEggTokenGranter that supports multiple token modes.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (25): Integrating an SMS Notification Service
SMS integration is practically indispensable in today's systems. Since the various cloud platforms offer different SMS channels, we add multi-tenant, multi-channel SMS verification codes plus configuration options so the system can work with SMS services from multiple cloud vendors. Using Alibaba Cloud and Tencent Cloud as examples, we integrate the SMS notification service. 1. In GitEgg-Platform, create the gitegg-platform-sms base project to define the abstract methods and configuration classes. SmsSendService is the abstract SMS-sending interface.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (24): Integrating Behavioral and Image Captchas for Login
As technology has progressed in recent years, expectations for system security and user experience have risen, and most websites have gradually adopted behavioral captchas in place of image captchas. GitEgg-Cloud integrates an open-source behavioral captcha component and an image captcha, with a configuration option to choose which one to use. • AJ-Captcha: behavioral captcha • EasyCaptcha: image captcha
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (23): Unified Authentication and Authorization with Gateway + OAuth2 + JWT
OAuth2 is an open standard for authorization. Its core idea is to authenticate the user's identity by whatever means (OAuth2 does not care which) and issue a token that third-party applications can use to access specified resources within a limited time and scope. OAuth2 uses the token to verify login validity, but the token's biggest drawback is that it carries no user information, so the resource server cannot validate it locally: every resource access requires a request to the authorization server, both to validate the token and to fetch the user information associated with it. With many such requests, processing is inefficient and the authorization server becomes a central bottleneck
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (22): Multi-Tenancy with the MybatisPlus Plugin TenantLineInnerInterceptor
The basic concept of multi-tenancy: multi-tenancy technology is a software architecture technique that explores how to share the same system or program components among multiple users while still guaranteeing data isolation between them. With the backing of cloud computing, multi-tenancy is widely used in developing cloud services of every kind; whether IaaS, PaaS, or SaaS, you can find multi-tenancy at work.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (21): System Permission Design Based on the RBAC Model
The core of the RBAC (role-based access control) model is the role concept introduced between users and permissions. Instead of linking users directly to permissions, users are linked to roles and roles to permissions, granting users permissions indirectly and thus decoupling users from permissions (see the original RBAC introduction for details). Benefits of RBAC:
Component list of the SpringCloud distributed microservice system: • Microservice framework: Spring Boot2 + SpringCloud Hoxton.SR8 + SpringCloud Alibaba • Spring Boot Admin: manages and monitors the health of SpringBoot application microservices • Data persistence: MySql + Druid + MyBatis + MyBatis-Plus
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (20): Integrating Redis Cache
This chapter introduces the redisson-spring-boot-starter dependency to implement Redis cache management. 1. In GitEgg-Platform, create gitegg-platform-redis to manage the common and shared Redis methods used across the project.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (19): Aggregating Microservice API Docs in the Gateway with knife4j
This chapter shows how Spring Cloud Gateway integrates knife4j to aggregate all the microservices' Swagger documentation at the gateway. 1. Add the knife4j dependency to gitegg-gateway; if no back-end code needs to be written there, importing just the Swagger front-end UI module is enough.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (17): Persisting Sentinel Configuration to Nacos
Rules added in the Sentinel Dashboard are stored in memory, so they are lost whenever a microservice or Sentinel restarts. Here we persist Sentinel rules to Nacos: rules are added in Nacos and then synchronized to the Sentinel Dashboard service. Sentinel supports the following rule types: flow-control rules, circuit-breaking (degrade) rules, system-protection rules, origin access-control rules, and hot-parameter rules; see the official Sentinel rules documentation for details.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (16): Integrating the Sentinel High-Availability Flow-Control Framework [Custom Response Messages]
After Sentinel throttles a request, the default response is "Blocked by Sentinel (flow limiting)", which is inconsistent with the system's overall messaging; following the unified response and exception-handling approach set up earlier, we return messages in the same format. 1. Before customizing Sentinel's response message, the code structure needs some adjustment, because the unified exception format is used here; considering possible later usage,
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (14): Integrating the Sentinel High-Availability Flow-Control Framework [Rate Limiting]
Sentinel is a high-availability flow-protection component for distributed service architectures. Taking traffic as its entry point, it helps developers keep microservices stable across multiple dimensions: rate limiting, traffic shaping, circuit breaking and degradation, system-load protection, and hot-spot protection. For Sentinel installation and deployment see https://www.jianshu.com/p/9626b74aec1e. Sentinel has the following characteristics: • Rich application scenarios: Sentinel has absorbed the core scenarios of nearly 10 years of Alibaba's Double-11 traffic, such as flash sales (keeping sudden traffic within system capacity), message peak shaving, cluster flow control, and real-time circuit breaking of unavailable downstream applications.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (13): A High-Availability Retry Mechanism with OpenFeign + Ribbon
Spring Cloud OpenFeign uses Ribbon by default for load balancing and retries. Feign has its own retry mechanism, but it is rarely needed with Spring Cloud OpenFeign; only specific business requirements call for implementing your own Retryer, injected globally or applied to particular clients. In a SpringCloud project, the retry mechanism brings high availability but also other problems, such as idempotency concerns and unnecessary retries; below we test the retry mechanism of the Spring Cloud architecture hands-on.
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (12): Load Balancing with OpenFeign + Ribbon
Ribbon is a load-balancing project under Netflix focused on load balancing for mid-tier applications. Once Ribbon is configured with a list of service-provider addresses, it automatically issues requests on behalf of callers using some load-balancing algorithm. Ribbon ships with several algorithms by default, such as round-robin, random, and weighted round-robin, and custom load-balancing algorithms can also be implemented for it. Ribbon has the following characteristics:
10. Installing Sentinel on Linux (CentOS 7)
1. Download a Sentinel release from https://github.com/alibaba/Sentinel/releases 2. Upload the downloaded jar sentinel-dashboard-1.8.0.jar to the Linux server. Sentinel is a standard Spring Boot application, so run the jar the Spring Boot way with the startup command: nohup java -Dserver.port=8086 -Dproject.name=sentinel-dashboard -jar sentinel-dashboard-1.8.0.jar >/dev/null &
SpringCloud Microservices in Action — Building an Enterprise-Grade Development Framework (7): Custom Unified Response Messages and Global Exception Handling
In day-to-day development we inevitably have to handle all kinds of exceptions, so we define unified exceptions in the common module. Spring Boot provides the @RestControllerAdvice annotation for unified exception handling. In GitEgg_Platform we create the gitegg-platform-boot sub-project, which is dedicated to Spring Boot customizations and extensions. 1. Modify gitegg-platform-boot's pom.xml to add the spring-boot-starter-web and swagger dependencies, setting optional to true so these packages are not propagated transitively between projects.
 
Recommended articles