{"meta":{"title":"龙儿之家","subtitle":"hexo.huangge1199.cn","description":"千里之行,始于足下","author":"轩辕龙儿","url":"https://hexo.huangge1199.cn","root":"/"},"pages":[{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{}],"posts":[{"title":"element-plus选择器自定义筛选方法(拼音首字母搜索)","slug":"element-plusxuan-ze-qi-zi-ding-yi-shai-xuan-fang-fa-pin-yin-shou-zi-mu-sou-suo","date":"2024-07-05T06:05:35.000Z","updated":"2024-07-05T06:54:48.906Z","comments":true,"path":"/post/element-plusxuan-ze-qi-zi-ding-yi-shai-xuan-fang-fa-pin-yin-shou-zi-mu-sou-suo/","link":"","excerpt":"","content":"

引言

最近,来了个需求,需要在下拉列表中做筛选。下拉列表显示的是中文,但筛选时可能会输入中文的拼音首字母。因此,需要实现一个筛选功能,能够根据拼音首字母筛选出匹配的选项。

\n

自定义筛选方法

前端使用的是 vue3 + element-plus,我使用的组件是 Select 选择器 | Element Plus。为 el-select 添加 filterable 属性即可启用搜索功能。默认情况下,Select 会找出所有 label 属性包含输入值的选项。但这里需要按拼音首字母进行搜索,因此要通过传入一个 filter-method 来实现。filter-method 是一个函数,它会在输入值发生变化时调用,参数为当前的输入值。

\n

下面是这部分的简短代码:

\n

vue部分:

\n
<el-select\n  v-model="queryParams.word"\n  filterable\n  placeholder="请输入"\n  clearable\n  :filter-method="filterMethod"\n  >\n  <el-option\n    v-for="word in words"\n    :key="word"\n    :label="word"\n    :value="word"\n  />\n</el-select>
\n

JS部分:

\n
const showSearch = ref(true);\n// option选项\nconst words = ref(['你好', '世界', '中国', '中国最棒'])\n// 保留原始的option选项\nconst wordsOld = ref(['你好', '世界', '中国', '中国最棒'])\n\nconst data = reactive({\n  queryParams: {\n    word: null,\n  },\n});\n\nconst { queryParams } = toRefs(data);\n\n// 多选框选中数据\nfunction filterMethod(val){\n  // 如果有输入值,根据输入内容进行筛选\n  if(val){\n    // 先将option中的选项words清空\n    words.value = []\n    // 从原始的option选项中遍历筛选\n    wordsOld.value.forEach(word => {\n      // 添加筛选逻辑,满足的word添加到words中显示\n      words.value.push(word)\n    })\n  } else {\n    // 如果没有输入值,恢复成原始的option选项\n    words.value = wordsOld.value\n  }\n}
\n

拼音首字母匹配

可以使用 pinyin-pro 来实现拼音匹配,例子如下:

\n
import { pinyin } from 'pinyin-pro';\n\nconst word = ref('中国最棒')\n// queryLower 为转成小写后的用户输入,例如 'zg'\nconst queryLower = 'zg'\n\nconst firstLetterPinyin = pinyin(word.value, { pattern: 'first' }).replace(/\\s+/g, '');\nif(firstLetterPinyin.includes(queryLower)){\n  words.value.push(word.value)\n}
\n

pinyin 方法来自 pinyin-pro 库,用于将汉字转换为指定样式的拼音字符串。其中 'first' 样式表示获取每一个字的小写拼音首字母,结果以空格分隔;而我匹配时不需要空格,因此在给 firstLetterPinyin 赋值时追加了 replace(/\\s+/g, '') 将转换后的字符串去除空格。

\n
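
例如,'中国最棒' 经过上述转换后的结果大致如下(输出内容为笔者根据 pinyin-pro 的用法整理,仅作示意):

\n
import { pinyin } from 'pinyin-pro';\n\n// pattern: 'first':取每个字的拼音首字母,默认以空格分隔\npinyin('中国最棒', { pattern: 'first' });                      // "z g z b"\n// 去掉空格后,就可以直接用 includes 与输入的首字母串做匹配\npinyin('中国最棒', { pattern: 'first' }).replace(/\\s+/g, ''); // "zgzb"
\n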

完整的vue代码

<template>\n  <div class="app-container">\n    <el-form :model="queryParams" ref="queryRef" :inline="true" v-show="showSearch">\n      <el-form-item label="中文拼音匹配选择器">\n        <el-select\n          v-model="queryParams.word"\n          filterable\n          placeholder="请输入"\n          clearable\n          :filter-method="filterMethod"\n          >\n          <el-option\n            v-for="word in words"\n            :key="word"\n            :label="word"\n            :value="word"\n          />\n        </el-select>\n      </el-form-item>\n    </el-form>\n  </div>\n</template>\n\n<script setup name="word">\nimport { pinyin } from 'pinyin-pro';\n\nconst showSearch = ref(true);\nconst words = ref(['你好', '世界', '中国', '中国最棒'])\nconst wordsOld = ref(['你好', '世界', '中国', '中国最棒'])\n\nconst data = reactive({\n  queryParams: {\n    word: null,\n  },\n});\n\nconst { queryParams } = toRefs(data);\n\n// 多选框选中数据\nfunction filterMethod(val){\n  if(val){\n    const queryLower = val.toLowerCase();\n    words.value = []\n\n    // 筛选拼音首字母包含输入内容的选项\n    wordsOld.value.forEach(word => {\n      const firstLetterPinyin = convertToPinyin(word, 'first').replace(/\\s+/g, '');\n      if(firstLetterPinyin.includes(queryLower)){\n        words.value.push(word)\n      }\n    })\n    // 筛选选包含输入内容的选项\n    wordsOld.value.forEach(word => {\n      if(word.includes(val)){\n        words.value.push(word)\n      }\n    })\n  } else {\n    words.value = wordsOld.value\n  }\n}\n\nfunction convertToPinyin(word, type) {\n  return pinyin(word, { pattern: type });\n}\n</script>
\n

代码解析:

\n
  1. 模板部分:el-select 开启 filterable,并通过 :filter-method 绑定自定义筛选方法 filterMethod,el-option 由 words 渲染。

  2. 脚本部分:filterMethod 中先按拼音首字母匹配,再按选项文本匹配,命中的选项放入 words;输入为空时将 words 恢复为原始的 wordsOld。
\n

以上代码实现了一个带拼音首字母匹配功能的下拉选择器,能有效地根据用户输入进行筛选。

\n","categories":[{"name":"前端","slug":"前端","permalink":"https://hexo.huangge1199.cn/categories/%E5%89%8D%E7%AB%AF/"},{"name":"vue","slug":"前端/vue","permalink":"https://hexo.huangge1199.cn/categories/%E5%89%8D%E7%AB%AF/vue/"}],"tags":[{"name":"前端","slug":"前端","permalink":"https://hexo.huangge1199.cn/tags/%E5%89%8D%E7%AB%AF/"},{"name":"vue","slug":"vue","permalink":"https://hexo.huangge1199.cn/tags/vue/"}]},{"title":"Java实现RS485串口通信","slug":"javashi-xian-rs485chuan-kou-tong-xin","date":"2024-06-24T10:10:11.000Z","updated":"2024-06-24T05:42:44.020Z","comments":true,"path":"/post/javashi-xian-rs485chuan-kou-tong-xin/","link":"","excerpt":"","content":"

近期,我接到了一个任务,将报警器接入到Java项目中,而接入的方式就是通过RS485接入,本人之前可以说是对此毫无所知。不过要感谢现在的互联网,通过网络我查到了我想要知道的一切,这里记录下本次学习的情况,供大家参考

\n

一、RS485简单介绍

RS485是一种常用的串行通信标准,广泛应用于工业自动化和嵌入式系统。它采用差分信号传输,具有抗干扰能力强、传输距离远等优点。以下是关于RS485串口的一些关键点:

\n

1、硬件连接

\n

2、通信方式

\n

3、数据发送和接收

\n

二、电脑需要做的准备

Windows系统还好,需要一个USB转RS485的转换器就可以了,基本不需要额外安装什么其他的。Linux系统可能就麻烦些,除了一个USB转RS485的转换器外,可能还需要下载相应的驱动(Linux这部分本人未实际操作,全凭网上的资料)。当然,如果你的电脑或者是设备本身就带RS485串口那就方便了,直接接上就好。

\n

接线方面,A接T+,B接T-

\n

三、代码方面

本人使用的是Springboot项目,通过网上的查询,可以使用 jSerialComm 或 RXTX 库来实现串口通信。

\n

1、jSerialComm

本人觉得使用这个库相对简单些,直接在pom文件引入依赖就可以了,依赖代码如下:

\n
<dependency>\n    <groupId>com.fazecast</groupId>\n    <artifactId>jSerialComm</artifactId>\n    <version>2.9.2</version>\n</dependency>
\n

2、RXTX

这个呢,个人觉得相对复杂些。首先,要先去下载RXTX的jar包(rxtx-2.1-7-bins-r2);在使用时,除了在pom文件中引入压缩包内的RXTXcomm.jar包外,还需要在系统%JAVA_HOME%/jre/bin目录下放入对应的文件,比如说Windows下需要放入rxtxParallel.dll和rxtxSerial.dll两个文件。

\n

压缩包内容:

\n

\"\"

\n

3、代码例子

我这边用的jSerialComm的方式,引入jar包后,Java的测试代码如下:

\n
import com.fazecast.jSerialComm.SerialPort;\n\npublic class RS485Communication {\n    public static void main(String[] args) {\n        //SerialPort[] commPorts = SerialPort.getCommPorts();\n        //if (commPorts.length == 0) {\n        //    log.error("设备未插入!");\n        //}\n\t\t//SerialPort serialPort = null;\n        //if(SystemUtils.isLinux()){\n        //    for (SerialPort commPort : commPorts) {\n        //        log.info(JSON.toJSONString(commPort));\n        //        if(commPort.getSystemPortName().equals("ttyS2")){\n        //            serialPort = commPort;\n        //        }\n        //    }\n        //} else {\n        //    for (SerialPort commPort : commPorts) {\n        //        log.info(JSON.toJSONString(commPort));\n        //        if(commPort.getSystemPortName().contains("COM")){\n        //            serialPort = commPort;\n        //        }\n        //    }\n        //}\n\t    // 这里取的是第一个,如果你有多个,可以通过注释的代码来确定你使用的是哪一个\n        SerialPort serialPort = SerialPort.getCommPorts()[0];\n        serialPort.setBaudRate(9600);\n        serialPort.setNumDataBits(8);\n        serialPort.setNumStopBits(SerialPort.ONE_STOP_BIT);\n        serialPort.setParity(SerialPort.NO_PARITY);\n\n        if (serialPort.openPort()) {\n            System.out.println("Port opened successfully.");\n        } else {\n            System.out.println("Failed to open port.");\n            return;\n        }\n\n        // 发送数据\n        byte[] dataToSend = {0x00, 0x10}; // 示例16位数据\n        serialPort.writeBytes(dataToSend, dataToSend.length);\n\n        // 接收数据\n        byte[] readBuffer = new byte[2];\n        serialPort.readBytes(readBuffer, readBuffer.length);\n        System.out.println("Received: " + bytesToHex(readBuffer));\n\n        serialPort.closePort();\n    }\n\n    private static String bytesToHex(byte[] bytes) {\n        StringBuilder sb = new StringBuilder();\n        for (byte b : bytes) {\n            sb.append(String.format("%02X ", b));\n        }\n        return sb.toString().trim();\n    }\n}\n\n
\n","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"}],"tags":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/tags/java/"}]},{"title":"20240512盖章整理","slug":"20240512gai-zhang-zheng-li","date":"2024-05-13T11:01:02.000Z","updated":"2024-05-13T03:05:09.323Z","comments":true,"path":"/post/20240512gai-zhang-zheng-li/","link":"","excerpt":"","content":"

盖章整理

","categories":[{"name":"盖章","slug":"盖章","permalink":"https://hexo.huangge1199.cn/categories/%E7%9B%96%E7%AB%A0/"}],"tags":[{"name":"盖章","slug":"盖章","permalink":"https://hexo.huangge1199.cn/tags/%E7%9B%96%E7%AB%A0/"}]},{"title":"ts(typescript)看这篇就够了","slug":"ts(typescript)-kan-zhe-pian-jiu-gou-le","date":"2024-04-23T07:50:10.000Z","updated":"2024-04-30T02:19:42.332Z","comments":true,"path":"/post/ts(typescript)-kan-zhe-pian-jiu-gou-le/","link":"","excerpt":"","content":"

Typescript 简介

TypeScript是用于应用程序规模开发的JavaScript。

\n

TypeScript是强类型,面向对象的编译语言。它是由微软的Anders Hejlsberg(C#的设计者)设计的。

\n

TypeScript既是一种语言又是一组工具。TypeScript是JavaScript的一个超集。换句话说,TypeScript是JavaScript加上一些额外的功能。

\n

TypeScript 扩展了 JavaScript 的语法,所以任何现有的 JavaScript 程序可以不加改变的在 TypeScript 下工作。TypeScript 是为大型应用之开发而设计,而编译时它产生 JavaScript 以确保兼容性。

\n

TypeScript 可以编译出纯净、 简洁的 JavaScript 代码,并且可以运行在任何浏览器上、Node.js 环境中和任何支持 ECMAScript 3(或更高版本)的 JavaScript 引擎中。

\n

TypeScript 的优势

TypeScript相对于纯粹的JavaScript具有许多优势,特别是在开发大型应用程序时。以下是一些TypeScript的优势:

\n

静态类型系统

TypeScript引入了静态类型系统,允许开发者在声明变量、函数参数、返回值等时指定类型。这种静态类型检查可以帮助捕获常见的编程错误,例如类型不匹配、未定义的属性或方法等,提供更好的代码质量和可靠性。

\n
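
举个简单的例子(变量名为笔者虚构,仅作演示):类型不匹配在编译阶段就会被指出,而不必等到运行时才暴露。

\n
let count: number = 10;\ncount = "ten";        // 错误:不能将类型 "string" 分配给类型 "number"\n\nfunction square(n: number): number {\n    return n * n;\n}\nsquare("5");          // 错误:实参类型 "string" 与形参类型 "number" 不匹配
\n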

更好的代码智能感知

因为TypeScript了解代码中的类型信息,因此编辑器可以提供更准确和强大的代码智能感知和自动补全功能。这可以显著提高开发效率,并减少常见的编码错误。

\n

更易于重构和维护

静态类型和面向对象特性使得代码更模块化、更结构化,从而更易于重构和维护。IDE可以更好地支持重构操作,并能够更好地理解代码的结构和依赖关系。

\n

更丰富的面向对象特性

TypeScript支持类、接口、继承、多态等面向对象编程的特性,使得代码组织更清晰、更易于理解。这对于构建大型应用程序非常有用。

\n

更好的工具支持:

TypeScript配合现代的集成开发环境(如VS Code、WebStorm等),可以提供强大的代码导航、重构、调试和代码分析工具。此外,TypeScript还能够与许多流行的前端框架(如Angular、React等)良好集成。

\n

增强的语言功能:

TypeScript不仅仅是JavaScript的超集,它还引入了一些新的语言功能,如箭头函数、可选参数、默认参数、模板字符串等,使得代码更简洁和易读。

\n
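
下面用一小段代码把这些特性放在一起演示(示例中的名称仅为演示用的假设):

\n
// 箭头函数 + 默认参数 + 可选参数 + 模板字符串\nconst greet = (name: string, greeting: string = "Hello", punctuation?: string) => {\n    return `${greeting}, ${name}${punctuation || "!"}`;\n};\n\ngreet("Loen");              // "Hello, Loen!"\ngreet("Loen", "Hi", "~");   // "Hi, Loen~"
\n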

更好的生态系统:

TypeScript拥有庞大的社区支持,许多常用的JavaScript库和框架都提供了类型定义文件,可以轻松地与TypeScript集成。这使得使用第三方库时具有更好的类型安全性和开发体验。

\n

基础类型

TypeScript支持与JavaScript几乎相同的数据类型:数字、字符串、结构体、布尔值等,此外还提供了实用的枚举类型方便我们使用。

\n

布尔值

最基本的数据类型就是简单的 true/false 值,在 JavaScript 和 TypeScript 里叫做 boolean(其它语言中也一样)。 我们来定义一个布尔类型的变量:

\n
let isDone: boolean = false;
\n

在TypeScript中, 在参数名称后面使用冒号:来指定参数的类型

\n
let 变量名: 数据类型
\n

数字

和 JavaScript 一样,TypeScript 里的所有数字都是浮点数。 这些浮点数的类型是 number。 除了支持十进制和十六进制字面量,TypeScript 还支持 ECMAScript 2015 中引入的二进制和八进制字面量。

\n
let decLiteral: number = 6;\nlet hexLiteral: number = 0xf00d;\nlet binaryLiteral: number = 0b1010;\nlet octalLiteral: number = 0o744;
\n

字符串

字符串新特性

JavaScript 程序的另一项基本操作是处理网页或服务器端的文本数据。 像其它语言里一样,我们使用 string 表示文本数据类型。 和 JavaScript 一样,可以使用双引号 “或单引号’表示字符串。

\n
let name: string = "bob";\nname = "loen";
\n

以上字符串不支持换行.

\n

多行字符串

在Typescript中你可以使用反引号 ` 表示多行字符串.

\n
let hello: string = `Welcome to \nW3cschool`;
\n

内嵌表达式

你还可以使用模版字符串,也就是在反引号中使用 ${ expr }这种形式嵌入表达式

\n
let name: string = `Loen`;\nlet age: number = 37;\nlet sentence: string = `Hello, my name is ${ name }.\n\n\nI'll be ${ age + 1 } years old next month.`;
\n

这与下面定义sentence的方式效果相同:

\n
let sentence: string = "Hello, my name is " + name + ".\\n\\n" +\n    "I'll be " + (age + 1) + " years old next month.";
\n
\n

我们可以看到Typescript定义的字符串更加清晰简单.

\n
\n

自动拆分字符串

我们可以用字符串模板去调用一个方法

\n
function userinfo(params,name,age){\n    console.log(params);\n    console.log(name);\n    console.log(age);\n}\n\n\nlet myname = "Loen Wang";\nlet getAge = function(){\n    return 18;\n}\n// 调用\nuserinfo`hello my name is ${myname}, i'm ${getAge()}`
\n

结果:
\"\"

\n
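
按照标签模板字符串的拆分规则,上面的调用大致会在控制台输出如下内容(具体显示格式因浏览器而异):

\n
["hello my name is ", ", i'm ", ""]   // params:按插值表达式拆分出的字符串数组\nLoen Wang                             // name:第一个插值表达式 ${myname} 的值\n18                                    // age:第二个插值表达式 ${getAge()} 的值
\n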

数组

TypeScript 有两种方式可以定义数组。

\n

第一种, 是在元素类型后面接上 [],表示由此类型元素组成的一个数组:

\n
let list: number[] = [1, 2, 3];
\n

第二种方式是使用数组泛型,Array<元素类型>:

\n
let list: Array<number> = [1, 2, 3];
\n

元组 Tuple

元组类型允许表示一个已知元素数量和类型的数组,各元素的类型不必相同。 比如,你可以定义一对值分别为 stringnumber 类型的元组。

\n
// 声明一个元组类型\nlet x: [string, number];\n// 初始化元组\nx = ['hello', 10]; \nx = [10, 'hello']; // 这里会报错,类型错误
\n

枚举

enum 类型是对 JavaScript 标准数据类型的一个补充。 像 C# 等其它语言一样,使用枚举类型可以为一组数值赋予友好的名字。

\n
enum Color {Red, Green, Blue}\nlet c: Color = Color.Green;
\n

默认情况下,从0开始为元素编号。 你也可以手动的指定成员的数值。 例如,我们将上面的例子改成从 1开始编号:

\n
enum Color {Red = 1, Green, Blue}\nlet c: Color = Color.Green;
\n

或者,全部都采用手动赋值:

\n
enum Color {Red = 1, Green = 2, Blue = 4}\nlet c: Color = Color.Green;
\n

枚举类型提供的一个便利是你可以由枚举的值得到它的名字。 例如,我们知道数值为2,但是不确定它映射到Color里的哪个名字,我们可以查找相应的名字:

\n
enum Color {Red = 1, Green, Blue}\nlet colorName: string = Color[2];\n\n\nalert(colorName);  // 显示'Green'因为上面代码里它的值是2
\n

Any

如果不希望类型检查器对值进行检查,直接通过编译阶段的检查。 那么我们可以使用 any类型来标记这些变量:

\n
let notSure: any = 4;\nnotSure = "这是一个字符串";\nnotSure = false; // 现在我们又可以将其改成布尔类型
\n

在对现有代码进行改写的时候,any类型是十分有用的,它允许你在编译时可选择地包含或移除类型检查。 你可能认为 Object有相似的作用,就像它在其它语言中那样。 但是 Object类型的变量只是允许你给它赋任意值 - 但是却不能够在它上面调用任意的方法,即便它真的有这些方法:

\n
let notSure: any = 4;\nnotSure.ifItExists();// 存在这个方法\nnotSure.toFixed(); // 存在这个方法\n\n\nlet prettySure: Object = 4;\nprettySure.toFixed(); // 错误:对象类型上不存在 toFixed 属性
\n

当你只知道一部分数据的类型时,any类型也是有用的。 比如,你有一个数组,它包含了不同的类型的数据:

\n
let list: any[] = [1, true, "free"];\n\n\nlist[1] = 100;
\n

Void

某种程度上来说,void类型像是与any类型相反,它表示没有任何类型。 当一个函数没有返回值时,你通常会见到其返回值类型是 void

\n
function warnUser(): void {\n    alert("This is my warning message");\n}
\n

声明一个void类型的变量没有什么大用,因为你只能为它赋予undefinednull

\n
let unusable: void = undefined;
\n

Null 和 Undefined

TypeScript 里,undefinednull 两者各自有自己的类型分别叫做 undefinednull。 和 void 相似,它们的本身的类型用处不是很大:

\n
// 我们无法给这些变量赋值\nlet u: undefined = undefined;\nlet n: null = null;
\n

默认情况下 null 和 undefined 是所有类型的子类型

\n

就是说你可以把 null 和 undefined 赋值给 number 类型的变量。

\n

然而,当你编译时指定了 --strictNullChecks 标记,null 和 undefined 只能赋值给 void 和它们自己。

\n
\n

注意:我们鼓励尽可能地使用--strictNullChecks,但在本教程里我们假设这个标记是关闭的。

\n
\n

Never

never 类型表示的是那些永不存在的值的类型。

\n

例如, never 类型是那些总是会抛出异常或根本就不会有返回值的函数表达式或箭头函数表达式的返回值类型;

\n

never 类型是任何类型的子类型,也可以赋值给任何类型; 然而,没有类型是 never 的子类型或可以赋值给 never 类型(除了 never 本身之外)。 即使 any 也不可以赋值给 never 。

\n
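
可以用下面几行代码体会这条赋值规则(示例为笔者补充,仅作演示):

\n
declare let n: never;\ndeclare let a: any;\n\nlet num: number = n;   // OK:never 是任何类型的子类型,可以赋值给 number\nn = a;                 // 错误:即使是 any 也不能赋值给 never
\n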

下面是一些返回 never 类型的函数:

\n
// 返回never的函数必须存在无法达到的终点\nfunction error(message: string): never {\n    throw new Error(message);\n}\n\n\n// 推断的返回值类型为never\nfunction fail() {\n    return error("Something failed");\n}\n\n\n// 返回never的函数必须存在无法达到的终点\nfunction infiniteLoop(): never {\n    while (true) {\n    }\n}
\n
\n

箭头表达式将再后面的课程中学习到。

\n
\n

类型断言

通过类型断言这种方式可以告诉编译器,“相信我,我知道自己在干什么”。 类型断言好比其它语言里的类型转换,但是不进行特殊的数据检查和解构。 它没有运行时的影响,只是在编译阶段起作用。

\n

TypeScript 会假设你(程序员)已经进行了必要的检查。

\n

类型断言有两种形式。 其一是尖括号语法:

\n
let someValue: any = "this is a string";\n\n\nlet strLength: number = (<string>someValue).length;
\n

另一个为as语法:

\n
let someValue: any = "this is a string";\n\n\nlet strLength: number = (someValue as string).length;
\n

两种形式是等价的。 至于使用哪个大多数情况下是凭个人喜好;

\n

然而,当你在 TypeScript 里使用 JSX 时,只有 as语法断言是被允许的。

\n

符号介绍

自ECMAScript 2015起,symbol成为了一种新的原生类型,就像number和string一样。

\n

symbol类型的值是通过Symbol构造函数创建的。

\n
let sym1 = Symbol();\nlet sym2 = Symbol("key"); // 可选的字符串key
\n

Symbols是不可改变且唯一的。

\n
let sym2 = Symbol("key");\nlet sym3 = Symbol("key");\n\n\nsym2 === sym3; // false
\n

symbols是唯一的。像字符串一样,symbols也可以被用做对象属性的键。

\n
let sym = Symbol();\nlet obj = {\n    [sym]: "value"\n};\n\n\nconsole.log(obj[sym]); // "value"
\n

Symbols也可以与计算出的属性名声明相结合来声明对象的属性和类成员。

\n
const getClassNameSymbol = Symbol();\n\n\nclass C {\n    [getClassNameSymbol](){\n       return "C";\n    }\n}\n\n\nlet c = new C();\nlet className = c[getClassNameSymbol](); // "C"
\n

变量声明

let和const

letconst是JavaScript里相对较新的变量声明方式。 像我们之前提到过的, let在很多方面与var是相似的,但是可以帮助大家避免在JavaScript里常见一些问题。 const只能一次赋值, 再次赋值会报错。

\n\n

因为 TypeScript 是 JavaScript 的超集,所以它本身就支持let和const。 下面我们会详细说明这些新的声明方式以及为什么推荐使用它们来代替 var

\n

var 声明

一直以来我们都是通过var关键字定义 JavaScript 变量。

\n
var a = 10;
\n

大家都能理解,这里定义了一个名为a值为10的变量。

\n

我们也可以在函数内部定义变量:

\n
function f() {\n    var message = "Hello, world!";\n\n\n    return message;\n}
\n

并且我们也可以在其它函数内部访问相同的变量。

\n
function f() {\n    var a = 10;\n    return function g() {\n        var b = a + 1;\n        return b;\n    }\n}\n\n\nvar g = f();\ng(); // returns 11;
\n

上面的例子里,g 可以获取到 f 函数里定义的 a 变量。 每当 g 被调用时,它都可以访问到 f 里的 a 变量。 即使当 g 在 f 已经执行完后才被调用,它仍然可以访问及修改 a 。

\n
function f() {\n    var a = 1;\n\n\n    a = 2;\n    var b = g();\n    a = 3;\n\n\n    return b;\n\n\n    function g() {\n        return a;\n    }\n}\n\n\nf(); // returns 2
\n

作用域规则

对于熟悉其它语言的人来说,var声明有些奇怪的作用域规则。 看下面的例子:

\n
function f(shouldInitialize: boolean) {\n    if (shouldInitialize) {\n        var x = 10;\n    }\n\n\n    return x;\n}\n\n\nf(true);  // returns '10'\nf(false); // returns 'undefined'
\n

变量 x 是定义在 if 语句里面 ,但是我们却可以在语句的外面访问它。 这是因为 var声明可以在包含它的函数,模块,命名空间或全局作用域内部任何位置被访问,包含它的代码块对此没有什么影响。

\n

这些作用域规则可能会引发一些错误。 其中之一就是,多次声明同一个变量并不会报错:

\n
function sumMatrix(matrix: number[][]) {\n    var sum = 0;\n    for (var i = 0; i < matrix.length; i++) {\n        var currentRow = matrix[i];\n        for (var i = 0; i < currentRow.length; i++) {\n            sum += currentRow[i];\n        }\n    }\n\n\n    return sum;\n}
\n

这里很容易看出一些问题,里层的 for 循环会覆盖变量 i,因为所有 i 都引用相同的函数作用域内的变量。 这很容易引发无穷的麻烦。

\n

let 声明

现在你已经知道了var存在一些问题,这恰好说明了为什么用let语句来声明变量。

\n
let hello = "Hello!";
\n

块作用域

当用 let 声明一个变量,它使用的是词法作用域或块作用域。 不同于使用 var 声明的变量那样可以在包含它们的函数外访问,块作用域变量在包含它们的块或 for 循环之外是不能访问的。

\n
function f(input: boolean) {\n    let a = 100;\n\n\n    if (input) {\n        // Still okay to reference 'a'\n        let b = a + 1;\n        return b;\n    }\n\n\n    // Error: 'b' doesn't exist here\n    return b;\n}
\n

这里我们定义了2个变量 a 和 b 。 a 的作用域是 f 函数体内,而 b 的作用域是 if 语句块里。

\n

catch语句里声明的变量也具有同样的作用域规则。

\n
try {\n    throw "oh no!";\n}\ncatch (e) {\n    console.log("Oh well.");\n}\n\n\n// Error: 'e' doesn't exist here\nconsole.log(e);
\n

拥有块级作用域的变量的另一个特点是,它们不能在被声明之前读或写。

\n

虽然这些变量始终“存在”于它们的作用域里,但在直到声明它的代码之前的区域都属于 暂时性死区。 它只是用来说明我们不能在 let语句之前访问它们,幸运的是 TypeScript 可以告诉我们这些信息。

\n
a++; // illegal to use 'a' before it's declared;\nlet a;
\n

注意: 我们仍然可以在一个拥有块作用域变量被声明前获取它。 只是我们不能在变量声明前去调用那个函数。 如果生成代码目标为ES2015,现代的运行时会抛出一个错误;然而,现今 TypeScript 是不会报错的。

\n
function foo() {\n    // okay to capture 'a'\n    return a;\n}\n\n\n// 不能在'a'被声明前调用'foo'\n// 运行时应该抛出错误\nfoo();\n\n\nlet a;
\n

重定义及屏蔽

我们提过使用 var 声明时,它不在乎你声明多少次;你只会得到1个。

\n
function f(x) {\n    var x;\n    var x;\n\n\n    if (true) {\n        var x;\n    }\n}
\n

在上面的例子里,所有x的声明实际上都引用一个相同的x,并且这是完全有效的代码。 这经常会成为bug的来源。 好的是, let声明就不会这么宽松了。

\n
let x = 10;\nlet x = 20; // 错误,不能在1个作用域里多次声明`x`
\n

并不是要求两个均是块级作用域的声明 TypeScript 才会给出一个错误的警告。

\n
function f(x) {\n    let x = 100; // error: interferes with parameter declaration\n}\n\n\nfunction g() {\n    let x = 100;\n    var x = 100; // 错误:不能同时声明'x'\n}
\n

并不是说块级作用域变量不能用函数作用域变量来声明。 而是块级作用域变量需要在明显不同的块里声明。

\n
function f(condition, x) {\n    if (condition) {\n        let x = 100;\n        return x;\n    }\n\n\n    return x;\n}\n\n\nf(false, 0); // returns 0\nf(true, 0);  // returns 100
\n

在一个嵌套作用域里引入一个新名字的行为称做屏蔽。 它是一把双刃剑,它可能会不小心地引入新问题,同时也可能会解决一些错误。 例如,假设我们现在用 let重写之前的sumMatrix函数。

\n
function sumMatrix(matrix: number[][]) {\n    let sum = 0;\n    for (let i = 0; i < matrix.length; i++) {\n        var currentRow = matrix[i];\n        for (let i = 0; i < currentRow.length; i++)      {\n            sum += currentRow[i];\n        }\n    }\n\n\n    return sum;\n}
\n

这个版本的循环能得到正确的结果,因为内层循环的i可以屏蔽掉外层循环的i

\n

通常来讲应该避免使用这种屏蔽,因为我们需要写出清晰的代码。

\n

块级作用域变量的获取

let声明每次迭代都会创建一个新作用域。 这就是我们在使用立即执行的函数表达式时做的事,所以在 setTimeout 例子里我们仅使用 let 声明就可以了。

\n
for (let i = 0; i < 10 ; i++) {\n    setTimeout(function() {\n        console.log(i); \n    }, 100 * i);\n}
\n

会输出与预料一致的结果:

\n
0\n1\n2\n3\n4\n5\n6\n7\n8\n9
\n

const 声明

const 声明是声明变量的另一种方式。

\n
const numLivesForCat = 9;
\n

const声明的变量只允许一次赋值,之后不能再重新赋值(即变量的引用不可变)。

\n
const numLivesForCat = 9;\nconst kitty = {\n    name: "Aurora",\n    numLives: numLivesForCat,\n}\n\n\n// 重新赋值一个类会报错\nkitty = {\n    name: "Loen",\n    numLives: numLivesForCat\n};\n\n\n// 属性修改是允许的\nkitty.name = "Rory";\nkitty.name = "Kitty";\nkitty.name = "Cat";\nkitty.numLives--;
\n

除非你使用特殊的方法去避免,实际上const变量的内部状态是可修改的。 幸运的是,TypeScript允许你将对象的成员设置成只读的。

\n

解构

解构数组

最简单的解构莫过于数组的解构赋值了:

\n
let input = [1, 2];\nlet [first, second] = input;\nconsole.log(first); // outputs 1\nconsole.log(second); // outputs 2
\n

这创建了2个命名变量 firstsecond。 相当于使用了索引,但更为方便:

\n
first = input[0];\nsecond = input[1];
\n

解构作用于已声明的变量会更好:

\n
// 对换变量的值\n[first, second] = [second, first];
\n

作用于函数参数:

\n
function f([first, second]: [number, number]) {\n    console.log(first);\n    console.log(second);\n}\nf(input);
\n

你可以在数组里使用...语法创建剩余变量:

\n
let [first, ...rest] = [1, 2, 3, 4];\nconsole.log(first); // outputs 1\nconsole.log(rest); // outputs [ 2, 3, 4 ]
\n

当然,由于是 JavaScript, 你可以忽略你不关心的尾随元素:

\n
let [first] = [1, 2, 3, 4];\nconsole.log(first); // outputs 1
\n

或其它元素:

\n
let [, second, , fourth] = [1, 2, 3, 4];
\n

对象解构

你也可以解构对象:

\n
let o = {\n    a: "foo",\n    b: 12,\n    c: "bar"\n};\nlet { a, b } = o;
\n

这通过 o.a and o.b 创建了 ab 。 注意,如果你不需要 c 你可以忽略它。

\n

就像数组解构,你可以用没有声明的赋值:

\n
({ a, b } = { a: "baz", b: 101 });
\n

注意:我们需要用括号将它括起来,因为Javascript通常会将以 { 起始的语句解析为一个块。

\n

你可以在对象里使用…语法创建剩余变量:

\n
let { a, ...passthrough } = o;\nlet total = passthrough.b + passthrough.c.length;
\n

属性重命名

你也可以给属性以不同的名字:

\n
let { a: newName1, b: newName2 } = o;
\n

这里的语法开始变得混乱。 你可以将 a: newName1 读做 a 作为 newName1。 方向是从左到右,好像你写成了以下样子:

\n
let newName1 = o.a;\nlet newName2 = o.b;
\n

令人困惑的是,这里的冒号不是指示类型的。 如果你想指定它的类型, 仍然需要在其后写上完整的模式。

\n
let {a, b}: {a: string, b: number} = o;
\n

默认值

默认值可以让你在属性为 undefined 时使用缺省值:

\n
function keepWholeObject(wholeObject: { a: string, b?: number }) \n{\n    let { a, b = 1001 } = wholeObject;\n}
\n

现在,即使 b 为 undefined , keepWholeObject 函数的变量 wholeObject 的属性 a 和 b 都会有值。

\n

函数声明

解构也能用于函数声明。 看以下简单的情况:

\n
type C = { a: string, b?: number }\nfunction f({ a, b }: C): void {\n    // ...\n}
\n

通常情况下更多的是指定默认值,解构默认值有些棘手。 首先,你需要在默认值之前设置其格式。

\n
function f({ a, b } = { a: "", b: 0 }): void {\n    // ...\n}\nf(); // 默认 { a: "", b: 0 }
\n

你需要知道在解构属性上给予一个默认或可选的属性用来替换主初始化列表。 要知道 C 的定义有一个 b 可选属性:

\n
function f({ a, b = 0 } = { a: "" }): void {\n    // ...\n}\nf({ a: "yes" }); // 默认 b = 0\nf(); // 默认 {a: ""},  b = 0\nf({}); // 错误, 如果您提供参数,则需要'a'
\n

从前面的例子可以看出, 要小心使用解构。就算是最简单的解构表达式也是难以理解的。 尤其当存在深层嵌套解构的时候,就算这时没有堆叠在一起的重命名,默认值和类型注解,也是令人难以理解的。

\n
\n

解构表达式要尽量保持小而简单。

\n
\n

展开

展开操作符正与解构相反。 它允许你将一个数组展开为另一个数组,或将一个对象展开为另一个对象。 例如:

\n
let first = [1, 2];\nlet second = [3, 4];\nlet bothPlus = [0, ...first, ...second, 5];
\n

这会令bothPlus的值为 [0, 1, 2, 3, 4, 5] 。 展开操作创建了 first 和 second 的一份浅拷贝。 它们不会被展开操作所改变。

\n

你还可以展开对象:

\n
let defaults = { food: "spicy", price: "$", ambiance: "noisy" };\nlet search = { ...defaults, food: "rich" };
\n

search的值为 { food: “rich”, price: “$”, ambiance: “noisy” } 。 对象的展开比数组的展开要复杂的多。 像数组展开一样,它是从左至右进行处理,但结果仍为对象。 这就意味着出现在展开对象后面的属性会覆盖前面的属性。 因此,如果我们修改上面的例子,在结尾处进行展开的话:

\n
let defaults = { food: "spicy", price: "$", ambiance: "noisy" };\nlet search = { food: "rich", ...defaults };
\n

那么, defaults 里的 food 属性会重写 food: “rich” ,在这里这并不是我们想要的结果。

\n

对象展开还有其它一些意想不到的限制。 首先,它仅包含对象 自身的可枚举属性。 大体上是说当你展开一个对象实例时,你会丢失其方法:

\n
class C {\n  p = 12;\n  m() {\n  }\n}\nlet c = new C();\nlet clone = { ...c };\nclone.p; // 没问题\nclone.m(); // 错误
\n

函数

函数介绍

函数是JavaScript应用程序的基础。 它帮助你实现抽象层,模拟类,信息隐藏和模块。 在TypeScript里,虽然已经支持类,命名空间和模块,但函数仍然是主要的定义 行为的地方。 TypeScript为JavaScript函数添加了额外的功能,让我们可以更容易地使用。

\n

Typescript 函数

和JavaScript一样,TypeScript函数可以创建有名字的函数和匿名函数。 你可以随意选择适合应用程序的方式,不论是定义一系列API函数还是只使用一次的函数。

\n

通过下面的例子可以迅速回想起这两种JavaScript中的函数:

\n
// 命名函数\nfunction add(x, y) {\n    return x + y;\n}\n\n\n// 匿名函数\nlet myAdd = function(x, y) { return x + y; };
\n

函数类型

为函数定义类型:我们可以为函数的参数和返回值添加类型。

\n
function 函数名(参数): 返回值类型 { }
\n

我们给函数添加类型:

\n
function add(x: number, y: number): number {\n    return x + y;\n}\n\n\nlet myAdd = function(x: number, y: number): number { return x + y; };
\n

TypeScript能够根据返回语句自动推断出返回值类型,因此我们通常省略它。

\n

函数参数

TypeScript里的每个函数参数都是必需的。 传递给一个函数的参数个数必须与函数期望的参数个数一致。

\n
function buildName(firstName: string, lastName: string) {\n    return firstName + " " + lastName;\n}\n// error, too few parameters\nlet result1 = buildName("Bob");\n\n\n// error, too many parameters\nlet result2 = buildName("Bob", "Adams", "Sr.");\n\n\n// 这种方式是正确的\nlet result3 = buildName("Bob", "Adams");
\n

可选参数

在TypeScript里我们可以在参数名旁使用 ? 实现可选参数的功能。 比如,我们想让last name是可选的:

\n
function buildName(firstName: string, lastName?: string) {\n    if (lastName)\n        return firstName + " " + lastName;\n    else\n        return firstName;\n}\n// 现在这样也可以\nlet result1 = buildName("Bob"); \n\n\n// error, too many parameters\nlet result2 = buildName("Bob", "Adams", "Sr.");\n\n\n// 这种方式是正确的\nlet result3 = buildName("Bob", "Adams");
\n

注意: 可选参数必须跟在必需参数后面。 如果上例我们想让first name是可选的,那么就必须调整它们的位置,把first name放在后面。

\n

默认参数

在TypeScript里,我们也可以为参数提供一个默认值。让我们修改上例,把last name的默认值设置为”Smith”。

\n
function buildName(firstName: string, lastName = "Smith") {\n    return firstName + " " + lastName;\n}\n\n\n// 这样是可以工作的 返回 "Bob Smith"\nlet result1 = buildName("Bob");\n\n\n// 这样也可以工作返回 "Bob Smith"\nlet result2 = buildName("Bob", undefined);\n\n\n// error, too many parameters\nlet result3 = buildName("Bob", "Adams", "Sr.");\n// 这是正确的返回 "Bob Adams"\nlet result4 = buildName("Bob", "Adams");
\n

剩余参数

当你想同时操作多个参数,而你并不知道会有多少参数传递进来。 在JavaScript里,你可以使用 arguments来访问所有传入的参数。 而在TypeScript里,你可以使用 …变量名 把所有参数收集到一个变量里:

\n
function buildName(firstName: string, ...restOfName: string[]) {\n  return firstName + " " + restOfName.join(" ");\n}\n\n\nlet employeeName = buildName("Joseph", "Samuel", "Lucas", "MacKinzie");
\n

剩余参数会被当做个数不限的可选参数。 可以一个都没有,同样也可以有任意个。

\n

这个省略号也会在带有剩余参数的函数类型定义上使用到:

\n
function buildName(firstName: string, ...restOfName: string[]) {\n  return firstName + " " + restOfName.join(" ");\n}
\n

箭头函数

表现形式

基本语法:ES6 允许使用“箭头”(=>)定义函数。箭头函数相当于匿名函数,并且简化了函数定义。表现形式一:只包含一个表达式时,连 { … } 和 return 都可以省略掉。

\n
 x => x * x\n//等同于\nfunction (x) {\n  return x*x;\n};
\n

表现形式二:包含多条语句,这时候就不能省略 { … } 和 return。

\n
x => {\nif (x > 0) {\n    return x * x;\n}\nelse {\n    return - x * x;\n}\n}
\n

this

箭头函数的引入有两个方面的作用:更简短的函数写法,以及不绑定 this。

\n\n

普通函数:this 指向调用它的那个对象。箭头函数:不绑定 this,会捕获其所在上下文的 this 值作为自己的 this 值,并且任何方法都改变不了其指向,如 call()、bind()、apply()。

\n
var obj = {\n  a: 10,\n  b: () => {\n    console.log('b this.a:', this.a); // undefined\n    console.log('b this:', this); // Window\n  },\n  c: function() {\n    console.log('c this.a:', this.a); // 10\n    console.log('c this:', this); // {a: 10, b: ƒ, c: ƒ}\n  }\n}\nobj.b();\nobj.c();
\n

执行结果: \"\"

\n

函数重载

所谓函数重载就是同一个函数,根据传递的参数不同,会有不同的表现形式。

\n

JavaScript本身是没有重载这个概念,不过可以模拟实现。 JavaScript 代码实例如下:

\n
function func(){ \n  if(arguments.length==0){ \n    alert("欢迎来到w3cschool");  \n  } \n  else if(arguments.length==1){ \n    alert(arguments[0]) \n  } \n} \nfunc(); \nfunc(2);
\n

上面代码利用arguments对象来判断传递参数的数量,然后执行不同的代码。

\n

TypeScript 函数重载

TypeScript提供了重载功能,TypeScript的函数重载只有一个函数体,也就是说无论声明多少个同名且不同签名的函数,它们共享一个函数体,在调用时会根据传递实参类型的不同,利用流程控制语句控制代码的执行。

\n

TypeScript代码实例如下:

\n
function func(x:string):string;\nfunction func(x:number):number;\nfunction func(x:any):any{\n  if(typeof x=="string"){\n    return "欢迎来到w3cschool"\n  }else if(typeof x=="number"){\n    return 5\n  }\n}
\n

function func(x:any):any不是函数重载列表一部分,所以上述代码只定义两个重载。

\n

重载函数的共用函数体部分如下:

\n
function func(x:any):any{\n  if(typeof x=="string"){\n    return "欢迎来到w3cschool"\n  }else if(typeof x=="number"){\n    return 5\n  }\n}
\n

重载函数编译后的JavaScript代码:

\n
function func(x) {\n  if (typeof x == "string") {\n    return "欢迎来到w3cschool";\n  }\n  else if (typeof x == "number") {\n    return 5;\n  }\n}
\n

由于JavaScript本身不支持重载,所以TypeScript的重载实质上是为了方便调用者正确地调用函数(由编译器根据重载列表做类型检查和提示)。

\n
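
站在调用者的角度,上面的重载声明大致会带来如下效果(示例为笔者补充,仅作演示):

\n
let s = func("abc");   // s 被推断为 string\nlet n = func(123);     // n 被推断为 number\nfunc(true);            // 错误:没有与 boolean 参数匹配的重载
\n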

接口

接口介绍

在TypeScript里,接口的作用就是为类型命名,并为你的代码或第三方代码定义契约。

\n

接口初探

下面通过一个简单示例来观察接口是如何工作的:

\n
function printLabel(labelledObj: { label: string }) {\n  console.log(labelledObj.label);\n}\n\n\nlet myObj = { size: 10, label: "Size 10 Object" };\nprintLabel(myObj);
\n

类型检查器会查看 printLabel 的调用。 printLabel 有一个参数,并要求这个对象参数有一个名为 label 类型为 string 的属性。

\n

需要注意的是,我们传入的对象参数实际上会包含很多属性,但是编译器只会检查那些必需的属性是否存在,并且其类型是否匹配。

\n

下面我们重写上面的例子,这次使用接口来描述:必须包含一个 label 属性且类型为 string :

\n
interface LabelledValue {\n  label: string;\n}\n\n\nfunction printLabel(labelledObj: LabelledValue) {\n  console.log(labelledObj.label);\n}\n\n\nlet myObj = {size: 10, label: "Size 10 Object"};\nprintLabel(myObj);
\n

LabelledValue接口就好比一个名字,用来描述上面例子里的要求。 它代表了有一个 label 属性且类型为 string 的对象。

\n

只要传入的对象满足上面提到的必要条件,那么它就是被允许的。

\n

类型检查器不会去检查属性的顺序,只要相应的属性存在并且类型也是对的就可以。

\n

可选属性

接口里的属性不全都是必需的。 有些是只在某些条件下存在,或者根本不存在。 可选属性在应用“option bags”模式时很常用,即给函数传入的参数对象中只有部分属性赋值了。

\n

下面是应用了“option bags”的例子:

\n
interface SquareConfig {\n  color?: string;\n  width?: number;\n}\n\n\nfunction createSquare(config: SquareConfig): {color: string; area: number} {\n  let newSquare = {color: "white", area: 100};\n  if (config.color) {\n    newSquare.color = config.color;\n  }\n  if (config.width) {\n    newSquare.area = config.width * config.width;\n  }\n  return newSquare;\n}\n\n\nlet mySquare = createSquare({color: "black"});
\n

带有可选属性的接口与普通的接口定义差不多,只是在可选属性名字定义的后面加一个?符号。

\n

可选属性的好处之一是可以对可能存在的属性进行预定义,好处之二是可以捕获引用了不存在的属性时的错误。 比如,我们故意将 createSquare里的color属性名拼错,就会得到一个错误提示:

\n
interface SquareConfig {\n  color?: string;\n  width?: number;\n}\n\n\nfunction createSquare(config: SquareConfig): { color: string; area: number } {\n  let newSquare = {color: "white", area: 100};\n  if (config.color) {\n    // Error: Property 'clor' does not exist on type 'SquareConfig'\n    newSquare.color = config.clor;\n  }\n  if (config.width) {\n    newSquare.area = config.width * config.width;\n  }\n  return newSquare;\n}\n\n\nlet mySquare = createSquare({color: "black"});
\n

只读属性

可以在属性名前用 readonly来指定只读属性:

\n
interface Point {\n    readonly x: number;\n    readonly y: number;\n}
\n

可以通过赋值一个对象字面量来构造一个Point。 赋值后, x 和 y 再也不能被改变了。

\n
let p1: Point = { x: 10, y: 20 };\np1.x = 5; // error!
\n

TypeScript 具有 ReadonlyArray 类型,它与 Array 相似,只是把所有可变方法去掉了,因此可以确保数组创建后再也不能被修改:

\n
let a: number[] = [1, 2, 3, 4];\nlet ro: ReadonlyArray<number> = a;\nro[0] = 12; // error!\nro.push(5); // error!\nro.length = 100; // error!\na = ro; // error!
\n

上面代码的最后一行,可以看到就算把整个ReadonlyArray赋值到一个普通数组也是不可以的。 但是你可以用类型断言重写:

\n
a = ro as number[];
\n

readonly, const使用时机

作为变量使用的话用 const,若作为属性则使用 readonly。

\n
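
一个简单的对比示例(Point2 等名称仅为演示用的假设):

\n
const origin = { x: 0, y: 0 };   // 作为变量:用 const,origin 不能再指向别的对象\n\ninterface Point2 {\n    readonly x: number;          // 作为属性:用 readonly,赋值后不能再修改\n    readonly y: number;\n}\n\nlet p: Point2 = { x: 1, y: 2 };\np.x = 3; // error!
\n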

函数类型

接口能够描述JavaScript中对象拥有的各种各样的外形。 除了描述带有属性的普通对象外,接口也可以描述函数类型。

\n

为了使用接口表示函数类型,我们需要给接口定义一个调用签名。 它就像是一个只有参数列表和返回值类型的函数定义。参数列表里的每个参数都需要名字和类型。

\n
interface SearchFunc {\n  (source: string, subString: string): boolean;\n}
\n

这样定义后,我们可以像使用其它接口一样使用这个函数类型的接口。 下例展示了如何创建一个函数类型的变量,并将一个同类型的函数赋值给这个变量。

\n
let mySearch: SearchFunc;\nmySearch = function(source: string, subString: string) {\n  let result = source.search(subString);\n  return result > -1;\n}
\n

对于函数类型的类型检查来说,函数的参数名不需要与接口里定义的名字相匹配。 比如,我们使用下面的代码重写上面的例子:

\n
let mySearch: SearchFunc;\nmySearch = function(src: string, sub: string): boolean {\n  let result = src.search(sub);\n  return result > -1;\n}
\n

函数的参数会逐个进行检查,要求对应位置上的参数类型是兼容的。 如果你不想指定类型,TypeScript的类型系统会推断出参数类型,因为函数直接赋值给了 SearchFunc类型变量。 函数的返回值类型是通过其返回值推断出来的(此例是 false 和 true)。 如果让这个函数返回数字或字符串,类型检查器会警告我们函数的返回值类型与SearchFunc接口中的定义不匹配。

\n
let mySearch: SearchFunc;\nmySearch = function(src, sub) {\n    let result = src.search(sub);\n    return result > -1;\n}
\n

实现接口

与C#或Java里接口的基本作用一样,TypeScript也能够用它来明确的强制一个类去符合某种契约。

\n
interface ClockInterface {\n    currentTime: Date;\n}\n\n\nclass Clock implements ClockInterface {\n    currentTime: Date;\n    constructor(h: number, m: number) { }\n}
\n

你也可以在接口中描述一个方法,在类里实现它,如同下面的setTime方法一样:

\n
interface ClockInterface {\n    currentTime: Date;\n    setTime(d: Date);\n}\n\n\nclass Clock implements ClockInterface {\n    currentTime: Date;\n    setTime(d: Date) {\n        this.currentTime = d;\n    }\n    constructor(h: number, m: number) { }\n}
\n

接口描述了类的公共部分,而不是公共和私有两部分。 它不会帮你检查类是否具有某些私有成员。

\n

继承接口

和类一样,接口也可以相互继承。 这让我们能够从一个接口里复制成员到另一个接口里,可以更灵活地将接口分割到可重用的模块里。

\n
interface Shape {\n    color: string;\n}\n\n\ninterface Square extends Shape {\n    sideLength: number;\n}\n\n\nlet square = <Square>{};\nsquare.color = "blue";\nsquare.sideLength = 10;
\n

一个接口可以继承多个接口,创建出多个接口的合成接口。

\n
interface Shape {\n    color: string;\n}\n\n\ninterface PenStroke {\n    penWidth: number;\n}\n\n\ninterface Square extends Shape, PenStroke {\n    sideLength: number;\n}\n\n\nlet square = <Square>{};\nsquare.color = "blue";\nsquare.sideLength = 10;\nsquare.penWidth = 5.0;
\n

类介绍

传统的JavaScript程序使用函数和基于原型的继承来创建可重用的组件,但对于熟悉使用面向对象方式的程序员来讲就有些棘手,因为他们用的是基于类的继承并且对象是由类构建出来的。 从ECMAScript 2015,也就是ECMAScript 6开始,JavaScript程序员将能够使用基于类的面向对象的方式。

\n

使用TypeScript,我们允许开发者现在就使用这些特性,并且编译后的JavaScript可以在所有主流浏览器和平台上运行,而不需要等到下个JavaScript版本。

\n

下面看一个使用类的例子:

\n
class Greeter {\n    greeting: string;\n    constructor(message: string) {\n        this.greeting = message;\n    }\n    greet() {\n        return "Hello, " + this.greeting;\n    }\n}\n\n\nlet greeter = new Greeter("world");
\n

如果你使用过C#或Java,你会对这种语法非常熟悉。 我们声明一个 Greeter类。这个类有3个成员:一个叫做 greeting的属性,一个构造函数和一个 greet方法。

\n

你会注意到,我们在引用任何一个类成员的时候都用了 this。 它表示我们访问的是类的成员。

\n

最后一行,我们使用 new 构造了 Greeter类的一个实例。 它会调用之前定义的构造函数,创建一个Greeter类型的新对象,并执行构造函数初始化它。

\n

继承

在TypeScript里,我们可以使用常用的面向对象模式。 基于类的程序设计中一种最基本的模式是允许使用继承来扩展现有的类。

\n

看下面的例子:

\n
class Animal {\n    move(distanceInMeters: number = 0) {\n        console.log(`Animal moved ${distanceInMeters}m.`);\n    }\n}\n\n\nclass Dog extends Animal {\n    bark() {\n        console.log('Woof! Woof!');\n    }\n}\n\n\nconst dog = new Dog();\ndog.bark();\ndog.move(10);\ndog.bark();
\n

这个例子展示了最基本的继承:类从基类中继承了属性和方法。 这里, Dog是一个 派生类,它派生自Animal 基类,通过 extends关键字。 派生类通常被称作 子类,基类通常被称作 超类

\n

因为 Dog继承了 Animal的功能,因此我们可以创建一个 Dog的实例,它能够 bark() 和 move()

\n

下面我们来看个更加复杂的例子。

\n
class Animal {\n    name: string;\n    constructor(theName: string) { this.name = theName; }\n    move(distanceInMeters: number = 0) {\n        console.log(`${this.name} moved ${distanceInMeters}m.`);\n    }\n}\n\n\nclass Snake extends Animal {\n    constructor(name: string) { super(name); }\n    move(distanceInMeters = 5) {\n        console.log("Slithering...");\n        super.move(distanceInMeters);\n    }\n}\n\n\nclass Horse extends Animal {\n    constructor(name: string) { super(name); }\n    move(distanceInMeters = 45) {\n        console.log("Galloping...");\n        super.move(distanceInMeters);\n    }\n}\n\n\nlet sam = new Snake("Sammy the Python");\nlet tom: Animal = new Horse("Tommy the Palomino");\n\n\nsam.move();\ntom.move(34);
\n

这个例子展示了一些上面没有提到的特性。 这一次,我们使用 extends关键字创建了 Animal的两个子类: Horse 和 Snake

\n

与前一个例子的不同点是,派生类包含了一个构造函数,它必须调用 super(),它会执行基类的构造函数。 而且,在构造函数里访问 this的属性之前,我们 一定要调用 super()。 这个是TypeScript强制执行的一条重要规则。

\n

这个例子演示了如何在子类里可以重写父类的方法。 Snake类和 Horse类都创建了 move方法,它们重写了从 Animal继承来的 move方法,使得 move方法根据不同的类而具有不同的功能。 注意,即使tom被声明为 Animal类型,但因为它的值是 Horse,调用 tom.move(34)时,它会调用 Horse里重写的方法:

\n
Slithering...\nSammy the Python moved 5m.\nGalloping...\nTommy the Palomino moved 34m.
\n

公共,私有与受保护的修饰符

public

在TypeScript里,成员都默认为 public

\n

你也可以明确的将一个成员标记成 public。 我们可以用下面的方式来重写 Animal类:

\n
class Animal {\n    public name: string;\n    public constructor(theName: string) { this.name = theName; }\n    public move(distanceInMeters: number) {\n        console.log(`${this.name} moved ${distanceInMeters}m.`);\n    }\n}
\n

private

当成员被标记成 private时,它就不能在声明它的类的外部访问。比如:

\n
class Animal {\n    private name: string;\n    constructor(theName: string) { this.name = theName; }\n}\n\n\nnew Animal("Cat").name; // 错误: 'name' 是私有的.
\n

protected

protected修饰符与 private修饰符的行为很相似,但有一点不同, protected成员在派生类中仍然可以访问。例如:

\n
class Person {\n    protected name: string;\n    constructor(name: string) { this.name = name; }\n}\n\n\nclass Employee extends Person {\n    private department: string;\n\n\n    constructor(name: string, department: string) {\n        super(name)\n        this.department = department;\n    }\n\n\n    public getElevatorPitch() {\n        return `Hello, my name is ${this.name} and I work in ${this.department}.`;\n    }\n}\n\n\nlet howard = new Employee("Howard", "Sales");\nconsole.log(howard.getElevatorPitch());\nconsole.log(howard.name); // 错误
\n

注意,我们不能在 Person类外使用 name,但是我们仍然可以通过 Employee类的实例方法访问,因为 Employee是由 Person派生而来的。

\n

构造函数也可以被标记成 protected。 这意味着这个类不能在包含它的类外被实例化,但是能被继承。比如,

\n
class Person {\n    protected name: string;\n    protected constructor(theName: string) { this.name = theName; }\n}\n\n\n// Employee 能够继承 Person\nclass Employee extends Person {\n    private department: string;\n\n\n    constructor(name: string, department: string) {\n        super(name);\n        this.department = department;\n    }\n\n\n    public getElevatorPitch() {\n        return `Hello, my name is ${this.name} and I work in ${this.department}.`;\n    }\n}\n\n\nlet howard = new Employee("Howard", "Sales");\nlet john = new Person("John"); // 错误: 'Person' 的构造函数是被保护的.
\n

readonly修饰符

你可以使用 readonly关键字将属性设置为只读的。 只读属性必须在声明时或构造函数里被初始化。

\n
class Octopus {\n    readonly name: string;\n    readonly numberOfLegs: number = 8;\n    constructor (theName: string) {\n        this.name = theName;\n    }\n}\nlet dad = new Octopus("Man with the 8 strong legs");\ndad.name = "Man with the 3-piece suit"; // 错误! name 是只读的.
\n

参数属性

在上面的例子中,我们不得不在 Person 类里定义一个受保护的成员 name 和一个构造函数参数 theName,并且立刻把 theName 赋值给 name。 这种情况经常会遇到。 参数属性可以方便地让我们在一个地方定义并初始化一个成员。 下面的例子是对之前 Animal类的修改版,使用了参数属性:

\n
class Animal {\n    constructor(private name: string) { }\n    move(distanceInMeters: number) {\n        console.log(`${this.name} moved ${distanceInMeters}m.`);\n    }\n}
\n

注意看我们是如何舍弃了 theName,仅在构造函数里使用 private name: string参数来创建和初始化 name成员。 我们把声明和赋值合并至一处。

\n

参数属性通过给构造函数参数添加一个访问限定符来声明。 使用 private限定一个参数属性会声明并初始化一个私有成员;对于 public和 protected来说也是一样。

\n

存取器

TypeScript支持通过getters/setters来截取对对象成员的访问。 它能帮助你有效的控制对对象成员的访问。

\n

下面来看如何把一个简单的类改写成使用 get和 set。 首先,我们从一个没有使用存取器的例子开始。

\n
class Employee {\n    fullName: string;\n}\n\nlet employee = new Employee();\nemployee.fullName = "Bob Smith";\nif (employee.fullName) {\n    console.log(employee.fullName);\n}
\n

我们可以随意的设置 fullName,这是非常方便的,但是这也可能会带来麻烦。

\n

下面这个版本里,我们先检查用户密码是否正确,然后再允许其修改员工信息。 我们把对 fullName的直接访问改成了可以检查密码的 set方法。 我们也加了一个 get方法,让上面的例子仍然可以工作。

\n
let passcode = "secret passcode";\n\nclass Employee {\n    private _fullName: string;\n\n    get fullName(): string {\n        return this._fullName;\n    }\n\n    set fullName(newName: string) {\n        if (passcode && passcode == "secret passcode") {\n            this._fullName = newName;\n        }\n        else {\n            console.log("Error: Unauthorized update of employee!");\n        }\n    }\n}\n\nlet employee = new Employee();\nemployee.fullName = "Bob Smith";\nif (employee.fullName) {\n    alert(employee.fullName);\n}
\n

我们可以修改一下密码,来验证一下存取器是否是工作的。当密码不对时,会提示我们没有权限去修改员工。

\n

对于存取器有下面几点需要注意的:

\n

首先,存取器要求你将编译器设置为输出ECMAScript 5或更高。 不支持降级到ECMAScript 3。 其次,只带有 get不带有 set的存取器自动被推断为 readonly。 这在从代码生成 .d.ts文件时是有帮助的,因为利用这个属性的用户会看到不能够改变它的值。

\n
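
例如,下面这个只定义了 get 的属性,对外就表现为只读(示例为笔者补充,仅作演示):

\n
class Circle {\n    constructor(private _radius: number) { }\n\n    get radius(): number {\n        return this._radius;\n    }\n}\n\nlet circle = new Circle(5);\nconsole.log(circle.radius); // 5\ncircle.radius = 10; // 错误:radius 只有 getter,不能被赋值
\n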

静态属性

到目前为止,我们只讨论了类的实例成员,那些仅当类被实例化的时候才会被初始化的属性。 我们也可以创建类的静态成员,这些属性存在于类本身上面而不是类的实例上。 在这个例子里,我们使用static定义 origin,因为它是所有网格都会用到的属性。 每个实例想要访问这个属性的时候,都要在 origin前面加上类名。 如同在实例属性上使用 this.前缀来访问属性一样,这里我们使用 Grid.来访问静态属性。

\n
class Grid {\n    static origin = {x: 0, y: 0};\n    calculateDistanceFromOrigin(point: {x: number; y: number;}) {\n        let xDist = (point.x - Grid.origin.x);\n        let yDist = (point.y - Grid.origin.y);\n        return Math.sqrt(xDist * xDist + yDist * yDist) / this.scale;\n    }\n    constructor (public scale: number) { }\n}\n\nlet grid1 = new Grid(1.0);  // 1x scale\nlet grid2 = new Grid(5.0);  // 5x scale\n\nconsole.log(grid1.calculateDistanceFromOrigin({x: 10, y: 10}));\nconsole.log(grid2.calculateDistanceFromOrigin({x: 10, y: 10}));
\n

抽象类

抽象类做为其它派生类的基类使用。 它们一般不会直接被实例化。 不同于接口,抽象类可以包含成员的实现细节。 abstract关键字是用于定义抽象类和在抽象类内部定义抽象方法。

\n
abstract class Animal {\n    abstract makeSound(): void;\n    move(): void {\n        console.log('roaming the earth...');\n    }\n}
\n

抽象类中的抽象方法不包含具体实现并且必须在派生类中实现。 抽象方法的语法与接口方法相似。 两者都是定义方法签名但不包含方法体。 然而,抽象方法必须包含 abstract关键字并且可以包含访问修饰符。

\n
abstract class Department {\n\n    constructor(public name: string) {\n    }\n\n    printName(): void {\n        console.log('Department name: ' + this.name);\n    }\n\n    abstract printMeeting(): void; // 必须在派生类中实现\n}\n\nclass AccountingDepartment extends Department {\n\n    constructor() {\n        super('Accounting and Auditing'); // 在派生类的构造函数中必须调用 super()\n    }\n\n    printMeeting(): void {\n        console.log('The Accounting Department meets each Monday at 10am.');\n    }\n\n    generateReports(): void {\n        console.log('Generating accounting reports...');\n    }\n}\n\nlet department: Department; // 允许创建一个对抽象类型的引用\ndepartment = new Department(); // 错误: 不能创建一个抽象类的实例\ndepartment = new AccountingDepartment(); // 允许对一个抽象子类进行实例化和赋值\ndepartment.printName();\ndepartment.printMeeting();\ndepartment.generateReports(); // 错误: 方法在声明的抽象类中不存在
\n

泛型

泛型介绍

软件工程中,我们不仅要创建一致的定义良好的API,同时也要考虑可重用性。 组件不仅能够支持当前的数据类型,同时也能支持未来的数据类型,这在创建大型系统时为你提供了十分灵活的功能。

\n

在像C#和Java这样的语言中,可以使用泛型来创建可重用的组件,一个组件可以支持多种类型的数据。 这样用户就可以以自己的数据类型来使用组件。

\n

非泛型例子

下面来创建 identity函数。 这个函数会返回任何传入它的值。 你可以把这个函数当成是 echo命令。

\n

非泛型例子1:

\n
function identity(arg: number): number {\n    return arg;\n}
\n

非泛型例子2: 使用any类型来定义函数

\n
function identity(arg: any): any {\n    return arg;\n}
\n

使用any类型会导致这个函数可以接收任何类型的arg参数,这样就丢失了一些信息:传入的类型与返回的类型应该是相同的。如果我们传入一个数字,我们只知道任何类型的值都有可能被返回。

\n

泛型的例子

我们需要一种方法使返回值的类型与传入参数的类型是相同的。 这里,我们使用了 类型变量,它是一种特殊的变量,只用于表示类型而不是值。

\n
function identity<T>(arg: T): T {\n    return arg;\n}
\n

我们给identity添加了类型变量T。 T帮助我们捕获用户传入的类型(比如:number),之后我们就可以使用这个类型。 之后我们再次使用了 T当做返回值类型。现在我们可以知道参数类型与返回值类型是相同的了。 这允许我们跟踪函数里使用的类型的信息。

\n

我们把这个版本的identity函数叫做泛型,因为它可以适用于多个类型。 不同于使用 any,它不会丢失信息,像第一个例子那样保持准确性,传入数值类型并返回数值类型。

\n
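
定义好泛型函数后,常见的调用方式有两种(示例为笔者补充,仅作演示):

\n
// 方式一:显式指定类型参数 T\nlet output1 = identity<string>("myString");  // output1 的类型为 string\n\n// 方式二:利用类型参数推断,由编译器根据传入的值自动确定 T\nlet output2 = identity("myString");          // output2 的类型同样为 string
\n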

泛型类

泛型类看上去与泛型接口差不多。 泛型类使用尖括号(<>)括起泛型类型,跟在类名后面。

\n
class GenericNumber<T> {\n    zeroValue: T;\n    add: (x: T, y: T) => T;\n}\n\n\nlet myGenericNumber = new GenericNumber<number>();\nmyGenericNumber.zeroValue = 0;\nmyGenericNumber.add = function(x, y) { return x + y; };
\n

GenericNumber类的使用是十分直观的,并且你可能已经注意到了,没有什么去限制它只能使用number类型。 也可以使用字符串或其它更复杂的类型。

\n
let stringNumeric = new GenericNumber<string>();\nstringNumeric.zeroValue = "";\nstringNumeric.add = function(x, y) { return x + y; };\n\n\nconsole.log(stringNumeric.add(stringNumeric.zeroValue, "test"));
\n

与接口一样,直接把泛型类型放在类后面,可以帮助我们确认类的所有属性都在使用相同的类型。

\n

我们在类那节说过,类有两部分:静态部分和实例部分。 泛型类指的是实例部分的类型,所以类的静态属性不能使用这个泛型类型。

\n
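
也就是说,类似下面这样在静态成员上使用类型参数是不允许的(GenericNumber2 仅为演示用的名称):

\n
class GenericNumber2<T> {\n    static defaultValue: T;   // 错误:静态成员不能引用类的类型参数 T\n    zeroValue: T;             // 实例成员可以正常使用 T\n}
\n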

枚举

默认情况下,枚举是基于 0 的,也就是说第一个值是 0,后面的值依次递增。不要担心,当中的每一个值都可以显式指定,只要不出现重复即可,没有被显式指定的值,都会在前一个值的基础上递增。

\n
enum Color {Red, Green, Blue}\nlet c: Color = Color.Green;  // 1
\n

或者

\n
enum Color {Red = 1, Green, Blue = 4}\nlet c: Color = Color.Green;  // 2
\n

枚举有一个很方便的特性,就是您也可以向枚举传递一个数值,然后获取它对应的名称值。举个例子,如果我们有一个值 2,但是不清楚在 Color 枚举中与之对应的名称是什么,我们就可以通过以下的方式来进行检索:

\n
enum Color {Red = 1, Green, Blue}\nlet colorName: string = Color[2];  // 'Green'
\n

但是像上面的这种写法不是太好,因为如果您给定的数值没有与之对应的枚举项,那么结果就是 undefined。所以,如果您想要得到指定枚举项的字符串名称,可以使用类似这样的写法:

\n
let colorName: string = Color[Color.Green];  // 'Green'
\n

命名空间

TypeScript里使用命名空间(之前叫做“内部模块”)来组织你的代码。任何使用 module关键字来声明一个内部模块的地方都应该使用namespace关键字来替换。 这就避免了让新的使用者被相似的名称所迷惑。

\n

命名空间介绍

下面的例子里,把所有与验证器相关的类型都放到一个叫做Validation的命名空间里。 因为我们想让这些接口和类在命名空间之外也是可访问的,所以需要使用 export。 相反的,变量 lettersRegexp和numberRegexp是实现的细节,不需要导出,因此它们在命名空间外是不能访问的。 在文件末尾的测试代码里,由于是在命名空间之外访问,因此需要限定类型的名称,比如 Validation.LettersOnlyValidator。

\n
namespace Validation {\n    export interface StringValidator {\n        isAcceptable(s: string): boolean;\n    }\n\n\n    const lettersRegexp = /^[A-Za-z]+$/;\n    const numberRegexp = /^[0-9]+$/;\n\n\n    export class LettersOnlyValidator implements StringValidator {\n        isAcceptable(s: string) {\n            return lettersRegexp.test(s);\n        }\n    }\n\n\n    export class ZipCodeValidator implements StringValidator {\n        isAcceptable(s: string) {\n            return s.length === 5 && numberRegexp.test(s);\n        }\n    }\n}\n\n\n// Some samples to try\nlet strings = ["Hello", "98052", "101"];\n\n\n// Validators to use\nlet validators: { [s: string]: Validation.StringValidator; } = {};\nvalidators["ZIP code"] = new Validation.ZipCodeValidator();\nvalidators["Letters only"] = new Validation.LettersOnlyValidator();\n\n\n// Show whether each string passed each validator\nfor (let s of strings) {\n    for (let name in validators) {\n        console.log(`"${ s }" - ${ validators[name].isAcceptable(s) ? "matches" : "does not match" } ${ name }`);\n    }\n}
","categories":[{"name":"前端","slug":"前端","permalink":"https://hexo.huangge1199.cn/categories/%E5%89%8D%E7%AB%AF/"}],"tags":[{"name":"前端","slug":"前端","permalink":"https://hexo.huangge1199.cn/tags/%E5%89%8D%E7%AB%AF/"}]},{"title":"win电脑安装绿色版MySQL8","slug":"win-dian-nao-an-zhuang-lv-se-ban-mysql8","date":"2024-03-19T05:17:52.000Z","updated":"2024-03-19T05:52:31.395Z","comments":true,"path":"/post/win-dian-nao-an-zhuang-lv-se-ban-mysql8/","link":"","excerpt":"","content":"

一、下载压缩包

下载mysql server的zip文件,地址:Windows (x86, 64-bit), ZIP Archive

\n

解压后:

\n

\"\"

\n

二、创建配置文件(可忽略)

配置文件可存放位置及名称:

\n\n

三、初始化数据库

以管理员身份运行cmd,进入到bin目录,运行下面的命令创建mysql默认的数据库,并创建一个root账号,空密码

\n
mysqld --initialize-insecure
\n

四、启动MySQL服务

我使用的是安装到服务的方式,执行下面的命令

\n
mysqld --install-manual
\n

\"\"

\n

默认创建的服务名称为MySQL,然后在服务中启动

\n

\"\"

\n

也可以直接运行以下命令

\n
mysqld
\n

这是最简单的方式了,但是无法安装到服务中,其他详细的可参看帮助说明

\n
mysqld --verbose --help | more
\n

\"\"

\n

红色框内的是安装相关的命令,蓝色框内是移除服务相关的命令

\n

五、测试

运行命令确认MySQL能够使用

\n
mysql -uroot
\n

\"\"

\n

六、修改root密码

由于前面的初始化方式创建的root用户没有密码,需要为其添加一个密码。进入MySQL后,执行下面的命令给root设置密码:

\n
use mysql;\nALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '新密码';\nFLUSH PRIVILEGES;
\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"Git操作指南:子模块、用户名修改和Subtree","slug":"gitcao-zuo-zhi-nan-zi-mo-kuai-yong-hu-ming-xiu-gai-he-subtree","date":"2024-03-13T08:22:06.000Z","updated":"2024-03-13T08:53:05.816Z","comments":true,"path":"/post/gitcao-zuo-zhi-nan-zi-mo-kuai-yong-hu-ming-xiu-gai-he-subtree/","link":"","excerpt":"","content":"

引言

在软件开发中,版本控制是一个至关重要的环节。Git 作为目前最流行的版本控制工具之一,提供了丰富的功能和灵活的操作方式。本文将介绍一些常用的 Git 操作,包括管理子模块、修改用户名、使用 Git Subtree 合并项目以及其他一些常见操作。

\n

一、引用子模块

git submodule 用于将其他 Git 仓库作为子模块嵌入到一个主仓库中。这样做可以使主仓库包含这些仓库的内容,并能够管理它们的版本和更新。以下是将两个其他仓库添加为子模块到主仓库的基本步骤:

\n

1、初始化主仓库

mkdir main_project\ncd main_project\ngit init
\n

2、添加子模块

使用 git submodule add 命令将其他仓库添加为子模块到主仓库中。

\n
git submodule add <URL_of_repository1> repository1_folder\ngit submodule add <URL_of_repository2> repository2_folder
\n

3、提交更改

git commit -m "Add submodules repository1 and repository2"
\n

现在,主仓库包含了两个子模块,它们的内容在 repository1_folderrepository2_folder 中。

\n

当你克隆主仓库时,子模块的内容并不会自动下载。你需要执行以下命令来初始化和更新子模块:

\n
git submodule update --init --recursive
\n

这会初始化并拉取子模块的内容。之后,你可以像管理普通的 Git 仓库一样来管理这些子模块,例如切换到不同的分支或提交更改。

\n

需要注意的是,子模块在主仓库中只是一个指向子仓库的引用,它不会把子仓库的内容直接嵌入到主仓库中。这意味着你可以独立地管理每个子仓库的版本和更新。

\n

在主仓库中,如果需要查看子模块的提交记录,可以使用下面的命令:

\n
git log --recurse-submodules
\n

二、删除引用的子模块

如果需要删除子模块,你需要执行以下步骤:

\n

1、移除子模块的配置

使用 git submodule deinit 命令来从 .gitmodules 文件中移除子模块的配置信息,并删除 .git/modules/<submodule_folder> 文件夹中的子模块内容。例如,假设子模块的文件夹名为 submodule_folder

\n
git submodule deinit -f <submodule_folder>
\n

2、 删除子模块的文件夹

删除主项目中包含子模块内容的文件夹。在上面的例子中,删除名为 <submodule_folder> 的文件夹:

\n
git rm -f <submodule_folder>
\n

3、提交更改

git commit -m "Remove submodule <submodule_folder>"\ngit push
\n

三、修改用户名

要修改 Git 中的用户名,你需要执行以下步骤:

\n

1、全局修改用户名

使用以下命令设置全局用户名:

\n
git config --global user.name "Your New Username"
\n

替换 "Your New Username" 为你想要设置的新用户名。

\n

2、针对单个仓库修改用户名(可选)

如果你只想在特定的仓库中修改用户名,而不是全局修改,可以在该仓库中执行以下命令:

\n
git config user.name "Your New Username"
\n

3、验证修改是否成功

你可以运行以下命令来验证修改是否成功:

\n
git config user.name
\n

这会显示当前配置的用户名,确保它已经更新为你想要的新用户名。

\n

通过执行上述步骤,你就可以修改 Git 中的用户名了。

\n

四、整合子模块

Git Subtree 是一个用于合并不同 Git 仓库的工具,它允许将一个仓库的部分历史合并到另一个仓库中,而且可以保留提交记录。

\n

以下是将子模块项目转移到主项目中并保存子模块项目的提交记录的基本步骤:

\n

1、添加子模块内容到主项目中

git subtree add --prefix=<submodule_folder> <submodule_repo_url> <submodule_branch> --squash
\n

这个命令将子模块的内容合并到主项目中的指定文件夹 <submodule_folder> 中。--squash 选项用于将子模块的历史压缩成一个新的提交。

\n

2、提交更改到主项目

git commit -m "Merge submodule repository into main project"
\n

这个提交将包含所有合并的子模块内容。

\n

3、在以后的更新中同步子模块内容(可选)

如果子模块的内容在原始仓库中发生了变化,你可能想要将这些变化同步到主项目中。你可以使用以下命令:

\n
git subtree pull --prefix=<submodule_folder> <submodule_repo_url> <submodule_branch> --squash
\n

这会将子模块的最新更改合并到主项目中。

\n

使用 git subtree 的主要优点是它可以保留子模块项目的提交历史,并将其合并到主项目的提交历史中。这样可以更清晰地追踪子模块项目的变化,并且可以保持主项目的整洁性。

\n

五、其他常见操作

除了上述操作之外,还有一些其他常见的 Git 操作:

\n\n

结语

本文介绍了一些常见的 Git 操作,包括管理子模块、修改用户名、使用 Git Subtree 合并项目以及其他一些常用操作。通过熟练掌握这些操作,你将能够更加高效地使用 Git 进行版本控制,并且更好地管理你的项目代码。

\n","categories":[{"name":"git","slug":"git","permalink":"https://hexo.huangge1199.cn/categories/git/"}],"tags":[{"name":"git","slug":"git","permalink":"https://hexo.huangge1199.cn/tags/git/"}]},{"title":"docker-compose部署单机版nacos","slug":"docker-composebu-shu-dan-ji-ban-nacos","date":"2024-03-08T08:47:43.000Z","updated":"2024-03-08T08:53:09.144Z","comments":true,"path":"/post/docker-composebu-shu-dan-ji-ban-nacos/","link":"","excerpt":"","content":"

nacos数据库建表语句

/*\n * Copyright 1999-2018 Alibaba Group Holding Ltd.\n *\n * Licensed under the Apache License, Version 2.0 (the "License");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n *      http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an "AS IS" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n/******************************************/\n/*   数据库全名 = nacos_config   */\n/*   表名称 = config_info   */\n/******************************************/\nCREATE TABLE `config_info` (\n                               `id` BIGINT(20) NOT NULL AUTO_INCREMENT COMMENT 'id',\n                               `data_id` VARCHAR(255) NOT NULL COMMENT 'data_id',\n                               `group_id` VARCHAR(255) DEFAULT NULL,\n                               `content` LONGTEXT NOT NULL COMMENT 'content',\n                               `md5` VARCHAR(32) DEFAULT NULL COMMENT 'md5',\n                               `gmt_create` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',\n                               `gmt_modified` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',\n                               `src_user` TEXT COMMENT 'source user',\n                               `src_ip` VARCHAR(50) DEFAULT NULL COMMENT 'source ip',\n                               `app_name` VARCHAR(128) DEFAULT NULL,\n                               `tenant_id` VARCHAR(128) DEFAULT '' COMMENT '租户字段',\n                               `c_desc` VARCHAR(256) DEFAULT NULL,\n                               `c_use` VARCHAR(64) DEFAULT NULL,\n                               `effect` VARCHAR(64) DEFAULT NULL,\n                               `type` VARCHAR(64) DEFAULT NULL,\n                               `c_schema` TEXT,\n                               `encrypted_data_key` TEXT NOT NULL COMMENT '秘钥',\n                               PRIMARY KEY (`id`),\n                               UNIQUE KEY `uk_configinfo_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)\n) ENGINE=INNODB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info';\n\n/******************************************/\n/*   数据库全名 = nacos_config   */\n/*   表名称 = config_info_aggr   */\n/******************************************/\nCREATE TABLE `config_info_aggr` (\n                                    `id` BIGINT(20) NOT NULL AUTO_INCREMENT COMMENT 'id',\n                                    `data_id` VARCHAR(255) NOT NULL COMMENT 'data_id',\n                                    `group_id` VARCHAR(255) NOT NULL COMMENT 'group_id',\n                                    `datum_id` VARCHAR(255) NOT NULL COMMENT 'datum_id',\n                                    `content` LONGTEXT NOT NULL COMMENT '内容',\n                                    `gmt_modified` DATETIME NOT NULL COMMENT '修改时间',\n                                    `app_name` VARCHAR(128) DEFAULT NULL,\n                                    `tenant_id` VARCHAR(128) DEFAULT '' COMMENT '租户字段',\n                                    PRIMARY KEY (`id`),\n                                    UNIQUE KEY `uk_configinfoaggr_datagrouptenantdatum` (`data_id`,`group_id`,`tenant_id`,`datum_id`)\n) ENGINE=INNODB DEFAULT CHARSET=utf8 COLLATE=utf8_bin 
COMMENT='增加租户字段';\n\n\n/******************************************/\n/*   数据库全名 = nacos_config   */\n/*   表名称 = config_info_beta   */\n/******************************************/\nCREATE TABLE `config_info_beta` (\n                                    `id` BIGINT(20) NOT NULL AUTO_INCREMENT COMMENT 'id',\n                                    `data_id` VARCHAR(255) NOT NULL COMMENT 'data_id',\n                                    `group_id` VARCHAR(128) NOT NULL COMMENT 'group_id',\n                                    `app_name` VARCHAR(128) DEFAULT NULL COMMENT 'app_name',\n                                    `content` LONGTEXT NOT NULL COMMENT 'content',\n                                    `beta_ips` VARCHAR(1024) DEFAULT NULL COMMENT 'betaIps',\n                                    `md5` VARCHAR(32) DEFAULT NULL COMMENT 'md5',\n                                    `gmt_create` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',\n                                    `gmt_modified` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',\n                                    `src_user` TEXT COMMENT 'source user',\n                                    `src_ip` VARCHAR(50) DEFAULT NULL COMMENT 'source ip',\n                                    `tenant_id` VARCHAR(128) DEFAULT '' COMMENT '租户字段',\n                                    `encrypted_data_key` TEXT NOT NULL COMMENT '秘钥',\n                                    PRIMARY KEY (`id`),\n                                    UNIQUE KEY `uk_configinfobeta_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)\n) ENGINE=INNODB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_beta';\n\n/******************************************/\n/*   数据库全名 = nacos_config   */\n/*   表名称 = config_info_tag   */\n/******************************************/\nCREATE TABLE `config_info_tag` (\n                                   `id` BIGINT(20) NOT NULL AUTO_INCREMENT COMMENT 'id',\n                                   `data_id` VARCHAR(255) NOT NULL COMMENT 'data_id',\n                                   `group_id` VARCHAR(128) NOT NULL COMMENT 'group_id',\n                                   `tenant_id` VARCHAR(128) DEFAULT '' COMMENT 'tenant_id',\n                                   `tag_id` VARCHAR(128) NOT NULL COMMENT 'tag_id',\n                                   `app_name` VARCHAR(128) DEFAULT NULL COMMENT 'app_name',\n                                   `content` LONGTEXT NOT NULL COMMENT 'content',\n                                   `md5` VARCHAR(32) DEFAULT NULL COMMENT 'md5',\n                                   `gmt_create` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',\n                                   `gmt_modified` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',\n                                   `src_user` TEXT COMMENT 'source user',\n                                   `src_ip` VARCHAR(50) DEFAULT NULL COMMENT 'source ip',\n                                   PRIMARY KEY (`id`),\n                                   UNIQUE KEY `uk_configinfotag_datagrouptenanttag` (`data_id`,`group_id`,`tenant_id`,`tag_id`)\n) ENGINE=INNODB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_tag';\n\n/******************************************/\n/*   数据库全名 = nacos_config   */\n/*   表名称 = config_tags_relation   */\n/******************************************/\nCREATE TABLE `config_tags_relation` (\n                                        `id` BIGINT(20) NOT NULL COMMENT 'id',\n                                        `tag_name` 
VARCHAR(128) NOT NULL COMMENT 'tag_name',\n                                        `tag_type` VARCHAR(64) DEFAULT NULL COMMENT 'tag_type',\n                                        `data_id` VARCHAR(255) NOT NULL COMMENT 'data_id',\n                                        `group_id` VARCHAR(128) NOT NULL COMMENT 'group_id',\n                                        `tenant_id` VARCHAR(128) DEFAULT '' COMMENT 'tenant_id',\n                                        `nid` BIGINT(20) NOT NULL AUTO_INCREMENT,\n                                        PRIMARY KEY (`nid`),\n                                        UNIQUE KEY `uk_configtagrelation_configidtag` (`id`,`tag_name`,`tag_type`),\n                                        KEY `idx_tenant_id` (`tenant_id`)\n) ENGINE=INNODB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_tag_relation';\n\n/******************************************/\n/*   数据库全名 = nacos_config   */\n/*   表名称 = group_capacity   */\n/******************************************/\nCREATE TABLE `group_capacity` (\n                                  `id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT COMMENT '主键ID',\n                                  `group_id` VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'Group ID,空字符表示整个集群',\n                                  `quota` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',\n                                  `usage` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT '使用量',\n                                  `max_size` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',\n                                  `max_aggr_count` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数,,0表示使用默认值',\n                                  `max_aggr_size` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',\n                                  `max_history_count` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',\n                                  `gmt_create` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',\n                                  `gmt_modified` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',\n                                  PRIMARY KEY (`id`),\n                                  UNIQUE KEY `uk_group_id` (`group_id`)\n) ENGINE=INNODB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='集群、各Group容量信息表';\n\n/******************************************/\n/*   数据库全名 = nacos_config   */\n/*   表名称 = his_config_info   */\n/******************************************/\nCREATE TABLE `his_config_info` (\n                                   `id` BIGINT(64) UNSIGNED NOT NULL,\n                                   `nid` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,\n                                   `data_id` VARCHAR(255) NOT NULL,\n                                   `group_id` VARCHAR(128) NOT NULL,\n                                   `app_name` VARCHAR(128) DEFAULT NULL COMMENT 'app_name',\n                                   `content` LONGTEXT NOT NULL,\n                                   `md5` VARCHAR(32) DEFAULT NULL,\n                                   `gmt_create` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,\n                                   `gmt_modified` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,\n                                   `src_user` TEXT,\n                                   `src_ip` VARCHAR(50) DEFAULT NULL,\n                                   `op_type` CHAR(10) DEFAULT NULL,\n                                   `tenant_id` VARCHAR(128) DEFAULT '' COMMENT '租户字段',\n           
                        `encrypted_data_key` TEXT NOT NULL COMMENT '秘钥',\n                                   PRIMARY KEY (`nid`),\n                                   KEY `idx_gmt_create` (`gmt_create`),\n                                   KEY `idx_gmt_modified` (`gmt_modified`),\n                                   KEY `idx_did` (`data_id`)\n) ENGINE=INNODB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='多租户改造';\n\n\n/******************************************/\n/*   数据库全名 = nacos_config   */\n/*   表名称 = tenant_capacity   */\n/******************************************/\nCREATE TABLE `tenant_capacity` (\n                                   `id` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT COMMENT '主键ID',\n                                   `tenant_id` VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID',\n                                   `quota` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',\n                                   `usage` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT '使用量',\n                                   `max_size` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',\n                                   `max_aggr_count` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数',\n                                   `max_aggr_size` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',\n                                   `max_history_count` INT(10) UNSIGNED NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',\n                                   `gmt_create` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',\n                                   `gmt_modified` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',\n                                   PRIMARY KEY (`id`),\n                                   UNIQUE KEY `uk_tenant_id` (`tenant_id`)\n) ENGINE=INNODB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='租户容量信息表';\n\n\nCREATE TABLE `tenant_info` (\n                               `id` BIGINT(20) NOT NULL AUTO_INCREMENT COMMENT 'id',\n                               `kp` VARCHAR(128) NOT NULL COMMENT 'kp',\n                               `tenant_id` VARCHAR(128) DEFAULT '' COMMENT 'tenant_id',\n                               `tenant_name` VARCHAR(128) DEFAULT '' COMMENT 'tenant_name',\n                               `tenant_desc` VARCHAR(256) DEFAULT NULL COMMENT 'tenant_desc',\n                               `create_source` VARCHAR(32) DEFAULT NULL COMMENT 'create_source',\n                               `gmt_create` BIGINT(20) NOT NULL COMMENT '创建时间',\n                               `gmt_modified` BIGINT(20) NOT NULL COMMENT '修改时间',\n                               PRIMARY KEY (`id`),\n                               UNIQUE KEY `uk_tenant_info_kptenantid` (`kp`,`tenant_id`),\n                               KEY `idx_tenant_id` (`tenant_id`)\n) ENGINE=INNODB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant_info';\n\nCREATE TABLE `users` (\n                         `username` VARCHAR(50) NOT NULL PRIMARY KEY,\n                         `password` VARCHAR(500) NOT NULL,\n                         `enabled` BOOLEAN NOT NULL\n);\n\nCREATE TABLE `roles` (\n                         `username` VARCHAR(50) NOT NULL,\n                         `role` VARCHAR(50) NOT NULL,\n                         UNIQUE INDEX `idx_user_role` (`username` ASC, `role` ASC) USING BTREE\n);\n\nCREATE TABLE `permissions` (\n                               `role` VARCHAR(50) NOT NULL,\n                               `resource` VARCHAR(255) NOT NULL,\n     
                          `action` VARCHAR(8) NOT NULL,\n                               UNIQUE INDEX `uk_role_permission` (`role`,`resource`,`action`) USING BTREE\n);\n\nINSERT INTO users (username, PASSWORD, enabled) VALUES ('nacos', '$2a$10$EuWPZHzz32dJN7jexM34MOeYirDdFAZm2kuWj7VEOJhhZkDrxfvUu', TRUE);\n\nINSERT INTO roles (username, role) VALUES ('nacos', 'ROLE_ADMIN');
\n

docker-compose.yaml内容如下:

version: "3.0"\nservices:\n  nacos:\n    image: nacos/nacos-server:2.0.3\n    container_name: nacos\n    volumes:\n      - ./logs/:/home/nacos/logs\n    ports:\n      - "8848:8848"\n      - "9848:9848"\n    environment:\n      MODE: standalone\n      PREFER_HOST_MODE: hostname\n      SPRING_DATASOURCE_PLATFORM: mysql\n      MYSQL_SERVICE_HOST: 数据库IP地址(例:127.0.0.1)\n      MYSQL_SERVICE_DB_NAME: 数据库名称\n      MYSQL_SERVICE_PORT: 数据库端口号\n      MYSQL_SERVICE_USER: 数据库连接用户名\n      MYSQL_SERVICE_PASSWORD: 数据库连接密码\n    restart: always
\n

web登录页

http://IP:8848/nacos
默认的用户名和密码都是 nacos
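部署完成后,除了直接在浏览器里打开上面的地址,也可以用下面这段简单的 Java 代码做个连通性检查(其中 127.0.0.1 只是示例地址,请换成实际部署机器的 IP;这只是我随手写的小示例,并不是 Nacos 官方提供的校验方式):

import java.net.HttpURLConnection;\nimport java.net.URL;\n\npublic class NacosCheck {\n    public static void main(String[] args) throws Exception {\n        // 示例地址,请替换为实际部署机器的 IP\n        URL url = new URL("http://127.0.0.1:8848/nacos/");\n        HttpURLConnection conn = (HttpURLConnection) url.openConnection();\n        conn.setConnectTimeout(3000);\n        conn.setReadTimeout(3000);\n        // 返回 200 说明 Nacos 控制台已经启动并可访问\n        System.out.println("HTTP status: " + conn.getResponseCode());\n        conn.disconnect();\n    }\n}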

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"Map类方法整理(jdk8)","slug":"maplei-xiang-jie-jdk8","date":"2024-02-19T03:09:55.000Z","updated":"2024-02-19T06:04:13.167Z","comments":true,"path":"/post/maplei-xiang-jie-jdk8/","link":"","excerpt":"","content":"

前言

今天在查看力扣第 385 场周赛的题解时,发现了几个我平时没注意到的 Map 方法。看了 JDK 相关的源码之后,感觉设计很巧妙,可以帮我省下不少代码,于是顺带把整个 Map 类的方法都过了一遍,下面是我看后整理的内容。

\n

Map类中包括了以下方法:

\n\n

方法详解

clear()

源码:

\n
/**\n * Removes all of the mappings from this map (optional operation).\n * The map will be empty after this call returns.\n *\n * @throws UnsupportedOperationException if the <tt>clear</tt> operation\n *         is not supported by this map\n */\nvoid clear();
\n

功能:移除Map中所有的键值对。

\n

compute(K,BiFunction)

源码:

\n
/**\n * Attempts to compute a mapping for the specified key and its current\n * mapped value (or {@code null} if there is no current mapping). For\n * example, to either create or append a {@code String} msg to a value\n * mapping:\n *\n * <pre> {@code\n * map.compute(key, (k, v) -> (v == null) ? msg : v.concat(msg))}</pre>\n * (Method {@link #merge merge()} is often simpler to use for such purposes.)\n *\n * <p>If the function returns {@code null}, the mapping is removed (or\n * remains absent if initially absent).  If the function itself throws an\n * (unchecked) exception, the exception is rethrown, and the current mapping\n * is left unchanged.\n *\n * @implSpec\n * The default implementation is equivalent to performing the following\n * steps for this {@code map}, then returning the current value or\n * {@code null} if absent:\n *\n * <pre> {@code\n * V oldValue = map.get(key);\n * V newValue = remappingFunction.apply(key, oldValue);\n * if (oldValue != null ) {\n *    if (newValue != null)\n *       map.put(key, newValue);\n *    else\n *       map.remove(key);\n * } else {\n *    if (newValue != null)\n *       map.put(key, newValue);\n *    else\n *       return null;\n * }\n * }</pre>\n *\n * <p>The default implementation makes no guarantees about synchronization\n * or atomicity properties of this method. Any implementation providing\n * atomicity guarantees must override this method and document its\n * concurrency properties. In particular, all implementations of\n * subinterface {@link java.util.concurrent.ConcurrentMap} must document\n * whether the function is applied once atomically only if the value is not\n * present.\n *\n * @param key key with which the specified value is to be associated\n * @param remappingFunction the function to compute a value\n * @return the new value associated with the specified key, or null if none\n * @throws NullPointerException if the specified key is null and\n *         this map does not support null keys, or the\n *         remappingFunction is null\n * @throws UnsupportedOperationException if the {@code put} operation\n *         is not supported by this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</\n * @throws ClassCastException if the class of the specified key or value\n *         prevents it from being stored in this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</\n * @since 1.8\n */\ndefault V compute(K key,\n        BiFunction<? super K, ? super V, ? extends V> remappingFunction) {\n    Objects.requireNonNull(remappingFunction);\n    V oldValue = get(key);\n    V newValue = remappingFunction.apply(key, oldValue);\n    if (newValue == null) {\n        // delete mapping\n        if (oldValue != null || containsKey(key)) {\n            // something to remove\n            remove(key);\n            return null;\n        } else {\n            // nothing to do. Leave things as they were.\n            return null;\n        }\n    } else {\n        // add or replace old mapping\n        put(key, newValue);\n        return newValue;\n    }\n}
\n

功能:根据指定的键及其当前映射的值(不存在映射时为 null)计算一个新值:如果计算结果不为 null,就写入(或替换)该映射并返回新值;如果计算结果为 null,则移除该映射(原本不存在则保持不存在)并返回 null。

\n\n
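下面给一个简单的使用片段(示例数据为虚构,仅作演示,假设已 import java.util.*):

Map<String, Integer> cnt = new HashMap<>();\ncnt.put("a", 1);\ncnt.compute("a", (k, v) -> v + 10);                  // "a" 已存在:1 -> 11\ncnt.compute("b", (k, v) -> v == null ? 1 : v + 1);   // "b" 不存在:v 为 null,放入 1\ncnt.compute("a", (k, v) -> null);                    // 函数返回 null:移除 "a"\nSystem.out.println(cnt);                             // {b=1}

\n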

computeIfAbsent(K,Function)

源码:

\n
/**\n * If the specified key is not already associated with a value (or is mapped\n * to {@code null}), attempts to compute its value using the given mapping\n * function and enters it into this map unless {@code null}.\n *\n * <p>If the function returns {@code null} no mapping is recorded. If\n * the function itself throws an (unchecked) exception, the\n * exception is rethrown, and no mapping is recorded.  The most\n * common usage is to construct a new object serving as an initial\n * mapped value or memoized result, as in:\n *\n * <pre> {@code\n * map.computeIfAbsent(key, k -> new Value(f(k)));\n * }</pre>\n *\n * <p>Or to implement a multi-value map, {@code Map<K,Collection<V>>},\n * supporting multiple values per key:\n *\n * <pre> {@code\n * map.computeIfAbsent(key, k -> new HashSet<V>()).add(v);\n * }</pre>\n *\n *\n * @implSpec\n * The default implementation is equivalent to the following steps for this\n * {@code map}, then returning the current value or {@code null} if now\n * absent:\n *\n * <pre> {@code\n * if (map.get(key) == null) {\n *     V newValue = mappingFunction.apply(key);\n *     if (newValue != null)\n *         map.put(key, newValue);\n * }\n * }</pre>\n *\n * <p>The default implementation makes no guarantees about synchronization\n * or atomicity properties of this method. Any implementation providing\n * atomicity guarantees must override this method and document its\n * concurrency properties. In particular, all implementations of\n * subinterface {@link java.util.concurrent.ConcurrentMap} must document\n * whether the function is applied once atomically only if the value is not\n * present.\n *\n * @param key key with which the specified value is to be associated\n * @param mappingFunction the function to compute a value\n * @return the current (existing or computed) value associated with\n *         the specified key, or null if the computed value is null\n * @throws NullPointerException if the specified key is null and\n *         this map does not support null keys, or the mappingFunction\n *         is null\n * @throws UnsupportedOperationException if the {@code put} operation\n *         is not supported by this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws ClassCastException if the class of the specified key or value\n *         prevents it from being stored in this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @since 1.8\n */\ndefault V computeIfAbsent(K key,\n        Function<? super K, ? extends V> mappingFunction) {\n    Objects.requireNonNull(mappingFunction);\n    V v;\n    if ((v = get(key)) == null) {\n        V newValue;\n        if ((newValue = mappingFunction.apply(key)) != null) {\n            put(key, newValue);\n            return newValue;\n        }\n    }\n    return v;\n}
\n

功能:当指定的键还没有映射值(或映射为 null)时,用给定的函数根据键计算一个值;计算结果不为 null 时放入 Map 并返回该值,为 null 时不记录任何映射。常用于惰性初始化,比如实现一键多值的 Map<K, Collection<V>>。

\n\n
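结合上面 javadoc 中提到的"一键多值"场景,给一个简单的使用片段(示例数据为虚构,假设已 import java.util.*):

Map<String, List<Integer>> group = new HashMap<>();\ngroup.computeIfAbsent("odd", k -> new ArrayList<>()).add(1);    // 第一次:先创建空 List 再 add\ngroup.computeIfAbsent("odd", k -> new ArrayList<>()).add(3);    // 第二次:直接复用已有的 List\ngroup.computeIfAbsent("even", k -> new ArrayList<>()).add(2);\nSystem.out.println(group);   // 类似 {even=[2], odd=[1, 3]}(HashMap 不保证顺序)

\n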

computeIfPresent(K,BiFunction)

源码:

\n
/**\n * If the value for the specified key is present and non-null, attempts to\n * compute a new mapping given the key and its current mapped value.\n *\n * <p>If the function returns {@code null}, the mapping is removed.  If the\n * function itself throws an (unchecked) exception, the exception is\n * rethrown, and the current mapping is left unchanged.\n*\n * @implSpec\n * The default implementation is equivalent to performing the following\n * steps for this {@code map}, then returning the current value or\n * {@code null} if now absent:\n *\n * <pre> {@code\n * if (map.get(key) != null) {\n *     V oldValue = map.get(key);\n *     V newValue = remappingFunction.apply(key, oldValue);\n *     if (newValue != null)\n *         map.put(key, newValue);\n *     else\n *         map.remove(key);\n * }\n * }</pre>\n *\n * <p>The default implementation makes no guarantees about synchronization\n * or atomicity properties of this method. Any implementation providing\n * atomicity guarantees must override this method and document its\n * concurrency properties. In particular, all implementations of\n * subinterface {@link java.util.concurrent.ConcurrentMap} must document\n * whether the function is applied once atomically only if the value is not\n * present.\n *\n * @param key key with which the specified value is to be associated\n * @param remappingFunction the function to compute a value\n * @return the new value associated with the specified key, or null if none\n * @throws NullPointerException if the specified key is null and\n *         this map does not support null keys, or the\n *         remappingFunction is null\n * @throws UnsupportedOperationException if the {@code put} operation\n *         is not supported by this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws ClassCastException if the class of the specified key or value\n *         prevents it from being stored in this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @since 1.8\n */\ndefault V computeIfPresent(K key,\n        BiFunction<? super K, ? super V, ? extends V> remappingFunction) {\n    Objects.requireNonNull(remappingFunction);\n    V oldValue;\n    if ((oldValue = get(key)) != null) {\n        V newValue = remappingFunction.apply(key, oldValue);\n        if (newValue != null) {\n            put(key, newValue);\n            return newValue;\n        } else {\n            remove(key);\n            return null;\n        }\n    } else {\n        return null;\n    }\n}
\n

功能:仅当指定的键已存在且映射值非 null 时,才根据键和当前值计算新值:新值不为 null 时替换原值并返回,新值为 null 时移除该映射并返回 null;键不存在时什么都不做。

\n\n
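下面是一个简单的使用片段(示例数据为虚构,假设已 import java.util.*):

Map<String, Integer> stock = new HashMap<>();\nstock.put("apple", 3);\nstock.computeIfPresent("apple", (k, v) -> v - 1);                      // 键存在:3 -> 2\nstock.computeIfPresent("banana", (k, v) -> v - 1);                     // 键不存在:什么都不做\nstock.computeIfPresent("apple", (k, v) -> v - 2 == 0 ? null : v - 2);  // 结果为 null:移除 "apple"\nSystem.out.println(stock);                                             // {}

\n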

containsKey(Object)

源码:

\n
/**\n * Returns <tt>true</tt> if this map contains a mapping for the specified\n * key.  More formally, returns <tt>true</tt> if and only if\n * this map contains a mapping for a key <tt>k</tt> such that\n * <tt>(key==null ? k==null : key.equals(k))</tt>.  (There can be\n * at most one such mapping.)\n *\n * @param key key whose presence in this map is to be tested\n * @return <tt>true</tt> if this map contains a mapping for the specified\n *         key\n * @throws ClassCastException if the key is of an inappropriate type for\n *         this map\n * (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws NullPointerException if the specified key is null and this map\n *         does not permit null keys\n * (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n */\nboolean containsKey(Object key);
\n

功能:判断Map中是否包含指定的键。

\n

containsValue(Object)

源码:

\n
/**\n * Returns <tt>true</tt> if this map maps one or more keys to the\n * specified value.  More formally, returns <tt>true</tt> if and only if\n * this map contains at least one mapping to a value <tt>v</tt> such that\n * <tt>(value==null ? v==null : value.equals(v))</tt>.  This operation\n * will probably require time linear in the map size for most\n * implementations of the <tt>Map</tt> interface.\n *\n * @param value value whose presence in this map is to be tested\n * @return <tt>true</tt> if this map maps one or more keys to the\n *         specified value\n * @throws ClassCastException if the value is of an inappropriate type for\n *         this map\n * (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws NullPointerException if the specified value is null and this\n *         map does not permit null values\n * (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n */\nboolean containsValue(Object value);
\n

功能:判断Map中是否包含指定的值。

\n

entrySet()

源码:

\n
/**\n * Returns a {@link Set} view of the mappings contained in this map.\n * The set is backed by the map, so changes to the map are\n * reflected in the set, and vice-versa.  If the map is modified\n * while an iteration over the set is in progress (except through\n * the iterator's own <tt>remove</tt> operation, or through the\n * <tt>setValue</tt> operation on a map entry returned by the\n * iterator) the results of the iteration are undefined.  The set\n * supports element removal, which removes the corresponding\n * mapping from the map, via the <tt>Iterator.remove</tt>,\n * <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt> and\n * <tt>clear</tt> operations.  It does not support the\n * <tt>add</tt> or <tt>addAll</tt> operations.\n *\n * @return a set view of the mappings contained in this map\n */\nSet<Map.Entry<K, V>> entrySet();
\n

功能:返回一个包含Map中所有键值对的Set集合。

\n
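下面是一个遍历 entrySet 的简单片段,同时演示"视图"的含义——通过 entrySet 删除元素会同步反映到原 map 上(示例数据为虚构,假设已 import java.util.*):

Map<String, Integer> map = new HashMap<>();\nmap.put("a", 1);\nmap.put("b", 2);\nfor (Map.Entry<String, Integer> e : map.entrySet()) {\n    System.out.println(e.getKey() + " = " + e.getValue());\n}\nmap.entrySet().removeIf(e -> e.getValue() > 1);   // 通过视图删除,原 map 同步变化\nSystem.out.println(map);                          // {a=1}

\n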

equals(Object)

源码:

\n
/**\n * Compares the specified object with this map for equality.  Returns\n * <tt>true</tt> if the given object is also a map and the two maps\n * represent the same mappings.  More formally, two maps <tt>m1</tt> and\n * <tt>m2</tt> represent the same mappings if\n * <tt>m1.entrySet().equals(m2.entrySet())</tt>.  This ensures that the\n * <tt>equals</tt> method works properly across different implementations\n * of the <tt>Map</tt> interface.\n *\n * @param o object to be compared for equality with this map\n * @return <tt>true</tt> if the specified object is equal to this map\n */\nboolean equals(Object o);
\n

功能:比较两个 Map 对象是否相等。两个 Map 相等的条件是:

\n
    \n
1. 两个 Map 对象具有相同的键值对数量;

2. 对于每个键,两个 Map 对象中对应的值必须相等(等价于 m1.entrySet().equals(m2.entrySet()))。
\n

forEach(BiConsumer)

源码:

\n
/**\n * Performs the given action for each entry in this map until all entries\n * have been processed or the action throws an exception.   Unless\n * otherwise specified by the implementing class, actions are performed in\n * the order of entry set iteration (if an iteration order is specified.)\n * Exceptions thrown by the action are relayed to the caller.\n *\n * @implSpec\n * The default implementation is equivalent to, for this {@code map}:\n * <pre> {@code\n * for (Map.Entry<K, V> entry : map.entrySet())\n *     action.accept(entry.getKey(), entry.getValue());\n * }</pre>\n *\n * The default implementation makes no guarantees about synchronization\n * or atomicity properties of this method. Any implementation providing\n * atomicity guarantees must override this method and document its\n * concurrency properties.\n *\n * @param action The action to be performed for each entry\n * @throws NullPointerException if the specified action is null\n * @throws ConcurrentModificationException if an entry is found to be\n * removed during iteration\n * @since 1.8\n */\ndefault void forEach(BiConsumer<? super K, ? super V> action) {\n    Objects.requireNonNull(action);\n    for (Map.Entry<K, V> entry : entrySet()) {\n        K k;\n        V v;\n        try {\n            k = entry.getKey();\n            v = entry.getValue();\n        } catch(IllegalStateException ise) {\n            // this usually means the entry is no longer in the map.\n            throw new ConcurrentModificationException(ise);\n        }\n        action.accept(k, v);\n    }\n}
\n

功能:对Map中的每个键值对执行指定的操作。

\n
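下面是一个简单的使用片段,效果等价于用 for 循环遍历 entrySet(示例数据为虚构,假设已 import java.util.*):

Map<String, Integer> map = new HashMap<>();\nmap.put("a", 1);\nmap.put("b", 2);\nmap.forEach((k, v) -> System.out.println(k + " -> " + v));   // 对每个键值对执行给定的 BiConsumer

\n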

get(Object)

源码:

\n
/**\n * Returns the value to which the specified key is mapped,\n * or {@code null} if this map contains no mapping for the key.\n *\n * <p>More formally, if this map contains a mapping from a key\n * {@code k} to a value {@code v} such that {@code (key==null ? k==null :\n * key.equals(k))}, then this method returns {@code v}; otherwise\n * it returns {@code null}.  (There can be at most one such mapping.)\n *\n * <p>If this map permits null values, then a return value of\n * {@code null} does not <i>necessarily</i> indicate that the map\n * contains no mapping for the key; it's also possible that the map\n * explicitly maps the key to {@code null}.  The {@link #containsKey\n * containsKey} operation may be used to distinguish these two cases.\n *\n * @param key the key whose associated value is to be returned\n * @return the value to which the specified key is mapped, or\n *         {@code null} if this map contains no mapping for the key\n * @throws ClassCastException if the key is of an inappropriate type for\n *         this map\n * (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws NullPointerException if the specified key is null and this map\n *         does not permit null keys\n * (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n */\nV get(Object key);
\n

功能:获取指定键对应的值,如果键不存在,则返回null。

\n

getOrDefault(Object, V)

源码:

\n
/**\n * Returns the value to which the specified key is mapped, or\n * {@code defaultValue} if this map contains no mapping for the key.\n *\n * @implSpec\n * The default implementation makes no guarantees about synchronization\n * or atomicity properties of this method. Any implementation providing\n * atomicity guarantees must override this method and document its\n * concurrency properties.\n *\n * @param key the key whose associated value is to be returned\n * @param defaultValue the default mapping of the key\n * @return the value to which the specified key is mapped, or\n * {@code defaultValue} if this map contains no mapping for the key\n * @throws ClassCastException if the key is of an inappropriate type for\n * this map\n * (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws NullPointerException if the specified key is null and this map\n * does not permit null keys\n * (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @since 1.8\n */\ndefault V getOrDefault(Object key, V defaultValue) {\n    V v;\n    return (((v = get(key)) != null) || containsKey(key))\n        ? v\n        : defaultValue;\n}
\n

功能:获取指定键对应的值,如果键不存在,则返回指定的默认值。

\n
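下面是一个简单的使用片段,注意 getOrDefault 只是返回默认值,并不会把默认值写进 map(示例数据为虚构,假设已 import java.util.*):

Map<String, Integer> cnt = new HashMap<>();\ncnt.put("a", 2);\nSystem.out.println(cnt.getOrDefault("a", 0));   // 2\nSystem.out.println(cnt.getOrDefault("b", 0));   // 0:键不存在,返回默认值\nSystem.out.println(cnt.containsKey("b"));       // false:默认值没有写入 map

\n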

hashCode()

源码:

\n
/**\n * Returns the hash code value for this map.  The hash code of a map is\n * defined to be the sum of the hash codes of each entry in the map's\n * <tt>entrySet()</tt> view.  This ensures that <tt>m1.equals(m2)</tt>\n * implies that <tt>m1.hashCode()==m2.hashCode()</tt> for any two maps\n * <tt>m1</tt> and <tt>m2</tt>, as required by the general contract of\n * {@link Object#hashCode}.\n *\n * @return the hash code value for this map\n * @see Map.Entry#hashCode()\n * @see Object#equals(Object)\n * @see #equals(Object)\n */\nint hashCode();
\n

功能:返回 Map 对象的哈希码。

\n

isEmpty()

源码:

\n
/**\n * Returns <tt>true</tt> if this map contains no key-value mappings.\n *\n * @return <tt>true</tt> if this map contains no key-value mappings\n */\nboolean isEmpty();
\n

功能:判断Map是否为空

\n

keySet()

源码:

\n
/**\n * Returns a {@link Set} view of the keys contained in this map.\n * The set is backed by the map, so changes to the map are\n * reflected in the set, and vice-versa.  If the map is modified\n * while an iteration over the set is in progress (except through\n * the iterator's own <tt>remove</tt> operation), the results of\n * the iteration are undefined.  The set supports element removal,\n * which removes the corresponding mapping from the map, via the\n * <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,\n * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>\n * operations.  It does not support the <tt>add</tt> or <tt>addAll</tt>\n * operations.\n *\n * @return a set view of the keys contained in this map\n */\nSet<K> keySet();
\n

功能:返回 Map 中所有键的 Set 集合

\n

merge(K,V,BiFunction)

源码:

\n
/**\n * If the specified key is not already associated with a value or is\n * associated with null, associates it with the given non-null value.\n * Otherwise, replaces the associated value with the results of the given\n * remapping function, or removes if the result is {@code null}. This\n * method may be of use when combining multiple mapped values for a key.\n * For example, to either create or append a {@code String msg} to a\n * value mapping:\n *\n * <pre> {@code\n * map.merge(key, msg, String::concat)\n * }</pre>\n *\n * <p>If the function returns {@code null} the mapping is removed.  If the\n * function itself throws an (unchecked) exception, the exception is\n * rethrown, and the current mapping is left unchanged.\n *\n * @implSpec\n * The default implementation is equivalent to performing the following\n * steps for this {@code map}, then returning the current value or\n * {@code null} if absent:\n *\n * <pre> {@code\n * V oldValue = map.get(key);\n * V newValue = (oldValue == null) ? value :\n *              remappingFunction.apply(oldValue, value);\n * if (newValue == null)\n *     map.remove(key);\n * else\n *     map.put(key, newValue);\n * }</pre>\n *\n * <p>The default implementation makes no guarantees about synchronization\n * or atomicity properties of this method. Any implementation providing\n * atomicity guarantees must override this method and document its\n * concurrency properties. In particular, all implementations of\n * subinterface {@link java.util.concurrent.ConcurrentMap} must document\n * whether the function is applied once atomically only if the value is not\n * present.\n *\n * @param key key with which the resulting value is to be associated\n * @param value the non-null value to be merged with the existing value\n *        associated with the key or, if no existing value or a null value\n *        is associated with the key, to be associated with the key\n * @param remappingFunction the function to recompute a value if present\n * @return the new value associated with the specified key, or null if no\n *         value is associated with the key\n * @throws UnsupportedOperationException if the {@code put} operation\n *         is not supported by this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws ClassCastException if the class of the specified key or value\n *         prevents it from being stored in this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws NullPointerException if the specified key is null and this map\n *         does not support null keys or the value or remappingFunction is\n *         null\n * @since 1.8\n */\ndefault V merge(K key, V value,\n        BiFunction<? super V, ? super V, ? extends V> remappingFunction) {\n    Objects.requireNonNull(remappingFunction);\n    Objects.requireNonNull(value);\n    V oldValue = get(key);\n    V newValue = (oldValue == null) ? value :\n               remappingFunction.apply(oldValue, value);\n    if(newValue == null) {\n        remove(key);\n    } else {\n        put(key, newValue);\n    }\n    return newValue;\n}
\n

功能:如果指定的键不存在映射(或映射为 null),就直接把给定的值放进去;否则用提供的函数把旧值和给定值合并成新值。新值为 null 时会移除该映射。

\n
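merge 很适合做计数、求和这类"合并"操作,下面是一个统计出现次数的简单片段(示例数据为虚构,假设已 import java.util.*):

Map<String, Integer> cnt = new HashMap<>();\nfor (String w : new String[]{"a", "b", "a"}) {\n    cnt.merge(w, 1, Integer::sum);   // 键不存在时放入 1,存在时旧值 + 1\n}\nSystem.out.println(cnt);             // 类似 {a=2, b=1}

\n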

put(K,V)

源码:

\n
/**\n * Associates the specified value with the specified key in this map\n * (optional operation).  If the map previously contained a mapping for\n * the key, the old value is replaced by the specified value.  (A map\n * <tt>m</tt> is said to contain a mapping for a key <tt>k</tt> if and only\n * if {@link #containsKey(Object) m.containsKey(k)} would return\n * <tt>true</tt>.)\n *\n * @param key key with which the specified value is to be associated\n * @param value value to be associated with the specified key\n * @return the previous value associated with <tt>key</tt>, or\n *         <tt>null</tt> if there was no mapping for <tt>key</tt>.\n *         (A <tt>null</tt> return can also indicate that the map\n *         previously associated <tt>null</tt> with <tt>key</tt>,\n *         if the implementation supports <tt>null</tt> values.)\n * @throws UnsupportedOperationException if the <tt>put</tt> operation\n *         is not supported by this map\n * @throws ClassCastException if the class of the specified key or value\n *         prevents it from being stored in this map\n * @throws NullPointerException if the specified key or value is null\n *         and this map does not permit null keys or values\n * @throws IllegalArgumentException if some property of the specified key\n *         or value prevents it from being stored in this map\n */\nV put(K key, V value);
\n

功能:将指定的键值对添加到Map中。

\n

putAll(Map)

源码:

\n
/**\n * Copies all of the mappings from the specified map to this map\n * (optional operation).  The effect of this call is equivalent to that\n * of calling {@link #put(Object,Object) put(k, v)} on this map once\n * for each mapping from key <tt>k</tt> to value <tt>v</tt> in the\n * specified map.  The behavior of this operation is undefined if the\n * specified map is modified while the operation is in progress.\n *\n * @param m mappings to be stored in this map\n * @throws UnsupportedOperationException if the <tt>putAll</tt> operation\n *         is not supported by this map\n * @throws ClassCastException if the class of a key or value in the\n *         specified map prevents it from being stored in this map\n * @throws NullPointerException if the specified map is null, or if\n *         this map does not permit null keys or values, and the\n *         specified map contains null keys or values\n * @throws IllegalArgumentException if some property of a key or value in\n *         the specified map prevents it from being stored in this map\n */\nvoid putAll(Map<? extends K, ? extends V> m);
\n

功能:将指定Map中的所有键值对添加到当前Map中。

\n

putIfAbsent(K,V)

源码:

\n
/**\n * If the specified key is not already associated with a value (or is mapped\n * to {@code null}) associates it with the given value and returns\n * {@code null}, else returns the current value.\n *\n * @implSpec\n * The default implementation is equivalent to, for this {@code\n * map}:\n *\n * <pre> {@code\n * V v = map.get(key);\n * if (v == null)\n *     v = map.put(key, value);\n *\n * return v;\n * }</pre>\n *\n * <p>The default implementation makes no guarantees about synchronization\n * or atomicity properties of this method. Any implementation providing\n * atomicity guarantees must override this method and document its\n * concurrency properties.\n *\n * @param key key with which the specified value is to be associated\n * @param value value to be associated with the specified key\n * @return the previous value associated with the specified key, or\n *         {@code null} if there was no mapping for the key.\n *         (A {@code null} return can also indicate that the map\n *         previously associated {@code null} with the key,\n *         if the implementation supports null values.)\n * @throws UnsupportedOperationException if the {@code put} operation\n *         is not supported by this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws ClassCastException if the key or value is of an inappropriate\n *         type for this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws NullPointerException if the specified key or value is null,\n *         and this map does not permit null keys or values\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws IllegalArgumentException if some property of the specified key\n *         or value prevents it from being stored in this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @since 1.8\n */\ndefault V putIfAbsent(K key, V value) {\n    V v = get(key);\n    if (v == null) {\n        v = put(key, value);\n    }\n    return v;\n}
\n

功能:仅当指定的键在 Map 中不存在映射(或映射为 null)时,才放入给定的值并返回 null;否则保持原值不变,并返回当前值。

\n
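下面是一个简单的使用片段(示例数据为虚构,假设已 import java.util.*):

Map<String, String> conf = new HashMap<>();\nconf.putIfAbsent("mode", "standalone");               // 键不存在:放入并返回 null\nString old = conf.putIfAbsent("mode", "cluster");     // 键已存在:不覆盖,返回当前值\nSystem.out.println(old + " / " + conf.get("mode"));   // standalone / standalone

\n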

remove(Object)

源码:

\n
/**\n * Removes the mapping for a key from this map if it is present\n * (optional operation).   More formally, if this map contains a mapping\n * from key <tt>k</tt> to value <tt>v</tt> such that\n * <code>(key==null ?  k==null : key.equals(k))</code>, that mapping\n * is removed.  (The map can contain at most one such mapping.)\n *\n * <p>Returns the value to which this map previously associated the key,\n * or <tt>null</tt> if the map contained no mapping for the key.\n *\n * <p>If this map permits null values, then a return value of\n * <tt>null</tt> does not <i>necessarily</i> indicate that the map\n * contained no mapping for the key; it's also possible that the map\n * explicitly mapped the key to <tt>null</tt>.\n *\n * <p>The map will not contain a mapping for the specified key once the\n * call returns.\n *\n * @param key key whose mapping is to be removed from the map\n * @return the previous value associated with <tt>key</tt>, or\n *         <tt>null</tt> if there was no mapping for <tt>key</tt>.\n * @throws UnsupportedOperationException if the <tt>remove</tt> operation\n *         is not supported by this map\n * @throws ClassCastException if the key is of an inappropriate type for\n *         this map\n * (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws NullPointerException if the specified key is null and this\n *         map does not permit null keys\n * (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n */\nV remove(Object key);
\n

功能:移除Map中指定键对应的值。

\n

remove(Object,Object)

源码:

\n
/**\n * Removes the entry for the specified key only if it is currently\n * mapped to the specified value.\n *\n * @implSpec\n * The default implementation is equivalent to, for this {@code map}:\n *\n * <pre> {@code\n * if (map.containsKey(key) && Objects.equals(map.get(key), value)) {\n *     map.remove(key);\n *     return true;\n * } else\n *     return false;\n * }</pre>\n *\n * <p>The default implementation makes no guarantees about synchronization\n * or atomicity properties of this method. Any implementation providing\n * atomicity guarantees must override this method and document its\n * concurrency properties.\n *\n * @param key key with which the specified value is associated\n * @param value value expected to be associated with the specified key\n * @return {@code true} if the value was removed\n * @throws UnsupportedOperationException if the {@code remove} operation\n *         is not supported by this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws ClassCastException if the key or value is of an inappropriate\n *         type for this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws NullPointerException if the specified key or value is null,\n *         and this map does not permit null keys or values\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @since 1.8\n */\ndefault boolean remove(Object key, Object value) {\n    Object curValue = get(key);\n    if (!Objects.equals(curValue, value) ||\n        (curValue == null && !containsKey(key))) {\n        return false;\n    }\n    remove(key);\n    return true;\n}
\n

功能:移除Map中指定键对应的值,仅当该键关联的值与指定值相等时才移除。

\n
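下面是一个简单的使用片段(示例数据为虚构,假设已 import java.util.*):

Map<String, Integer> map = new HashMap<>();\nmap.put("a", 1);\nSystem.out.println(map.remove("a", 2));   // false:值不匹配,不移除\nSystem.out.println(map.remove("a", 1));   // true:键和值都匹配,移除成功\nSystem.out.println(map);                  // {}

\n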

replace(K,V,V)

源码:

\n
/**\n * Replaces the entry for the specified key only if currently\n * mapped to the specified value.\n *\n * @implSpec\n * The default implementation is equivalent to, for this {@code map}:\n *\n * <pre> {@code\n * if (map.containsKey(key) && Objects.equals(map.get(key), value)) {\n *     map.put(key, newValue);\n *     return true;\n * } else\n *     return false;\n * }</pre>\n *\n * The default implementation does not throw NullPointerException\n * for maps that do not support null values if oldValue is null unless\n * newValue is also null.\n *\n * <p>The default implementation makes no guarantees about synchronization\n * or atomicity properties of this method. Any implementation providing\n * atomicity guarantees must override this method and document its\n * concurrency properties.\n *\n * @param key key with which the specified value is associated\n * @param oldValue value expected to be associated with the specified key\n * @param newValue value to be associated with the specified key\n * @return {@code true} if the value was replaced\n * @throws UnsupportedOperationException if the {@code put} operation\n *         is not supported by this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws ClassCastException if the class of a specified key or value\n *         prevents it from being stored in this map\n * @throws NullPointerException if a specified key or newValue is null,\n *         and this map does not permit null keys or values\n * @throws NullPointerException if oldValue is null and this map does not\n *         permit null values\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws IllegalArgumentException if some property of a specified key\n *         or value prevents it from being stored in this map\n * @since 1.8\n */\ndefault boolean replace(K key, V oldValue, V newValue) {\n    Object curValue = get(key);\n    if (!Objects.equals(curValue, oldValue) ||\n        (curValue == null && !containsKey(key))) {\n        return false;\n    }\n    put(key, newValue);\n    return true;\n}
\n

功能:将Map中指定键对应的旧值替换为新值,仅当键对应的值与旧值相等时才替换。

\n
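下面是一个简单的使用片段(示例数据为虚构,假设已 import java.util.*):

Map<String, Integer> map = new HashMap<>();\nmap.put("a", 1);\nSystem.out.println(map.replace("a", 2, 100));   // false:当前值不是 2,不替换\nSystem.out.println(map.replace("a", 1, 100));   // true:替换成功\nSystem.out.println(map.replace("b", 0, 100));   // false:键不存在\nSystem.out.println(map);                        // {a=100}

\n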

replace(K,V)

源码:

\n
/**\n * Replaces the entry for the specified key only if it is\n * currently mapped to some value.\n *\n * @implSpec\n * The default implementation is equivalent to, for this {@code map}:\n *\n * <pre> {@code\n * if (map.containsKey(key)) {\n *     return map.put(key, value);\n * } else\n *     return null;\n * }</pre>\n *\n * <p>The default implementation makes no guarantees about synchronization\n * or atomicity properties of this method. Any implementation providing\n * atomicity guarantees must override this method and document its\n * concurrency properties.\n  *\n * @param key key with which the specified value is associated\n * @param value value to be associated with the specified key\n * @return the previous value associated with the specified key, or\n *         {@code null} if there was no mapping for the key.\n *         (A {@code null} return can also indicate that the map\n *         previously associated {@code null} with the key,\n *         if the implementation supports null values.)\n * @throws UnsupportedOperationException if the {@code put} operation\n *         is not supported by this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws ClassCastException if the class of the specified key or value\n *         prevents it from being stored in this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws NullPointerException if the specified key or value is null,\n *         and this map does not permit null keys or values\n * @throws IllegalArgumentException if some property of the specified key\n *         or value prevents it from being stored in this map\n * @since 1.8\n */\ndefault V replace(K key, V value) {\n    V curValue;\n    if (((curValue = get(key)) != null) || containsKey(key)) {\n        curValue = put(key, value);\n    }\n    return curValue;\n}
\n

功能:仅当指定的键当前已存在映射时,才把对应的值替换为新值并返回旧值;键不存在时不做任何修改,返回 null。

\n

replaceAll(BiFunction)

源码:

\n
/**\n * Replaces each entry's value with the result of invoking the given\n * function on that entry until all entries have been processed or the\n * function throws an exception.  Exceptions thrown by the function are\n * relayed to the caller.\n *\n * @implSpec\n * <p>The default implementation is equivalent to, for this {@code map}:\n * <pre> {@code\n * for (Map.Entry<K, V> entry : map.entrySet())\n *     entry.setValue(function.apply(entry.getKey(), entry.getValue()));\n * }</pre>\n *\n * <p>The default implementation makes no guarantees about synchronization\n * or atomicity properties of this method. Any implementation providing\n * atomicity guarantees must override this method and document its\n * concurrency properties.\n *\n * @param function the function to apply to each entry\n * @throws UnsupportedOperationException if the {@code set} operation\n * is not supported by this map's entry set iterator.\n * @throws ClassCastException if the class of a replacement value\n * prevents it from being stored in this map\n * @throws NullPointerException if the specified function is null, or the\n * specified replacement value is null, and this map does not permit null\n * values\n * @throws ClassCastException if a replacement value is of an inappropriate\n *         type for this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws NullPointerException if function or a replacement value is null,\n *         and this map does not permit null keys or values\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws IllegalArgumentException if some property of a replacement value\n *         prevents it from being stored in this map\n *         (<a href="{@docRoot}/java/util/Collection.html#optional-restrictions">optional</a>)\n * @throws ConcurrentModificationException if an entry is found to be\n * removed during iteration\n * @since 1.8\n */\ndefault void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {\n    Objects.requireNonNull(function);\n    for (Map.Entry<K, V> entry : entrySet()) {\n        K k;\n        V v;\n        try {\n            k = entry.getKey();\n            v = entry.getValue();\n        } catch(IllegalStateException ise) {\n            // this usually means the entry is no longer in the map.\n            throw new ConcurrentModificationException(ise);\n        }\n        // ise thrown from function is not a cme.\n        v = function.apply(k, v);\n        try {\n            entry.setValue(v);\n        } catch(IllegalStateException ise) {\n            // this usually means the entry is no longer in the map.\n            throw new ConcurrentModificationException(ise);\n        }\n    }\n}
\n

功能:对Map中的每个键值对执行指定的替换操作。

\n
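下面是一个简单的使用片段(示例数据为虚构,假设已 import java.util.*):

Map<String, Integer> price = new HashMap<>();\nprice.put("a", 10);\nprice.put("b", 20);\nprice.replaceAll((k, v) -> v * 2);   // 对每个键值对应用函数,用返回值替换原值\nSystem.out.println(price);           // 类似 {a=20, b=40}

\n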

size()

源码:

\n
/**\n * Returns the number of key-value mappings in this map.  If the\n * map contains more than <tt>Integer.MAX_VALUE</tt> elements, returns\n * <tt>Integer.MAX_VALUE</tt>.\n *\n * @return the number of key-value mappings in this map\n */\nint size();
\n

功能:返回Map中键值对的数量。

\n

values()

源码:

\n
/**\n * Returns a {@link Collection} view of the values contained in this map.\n * The collection is backed by the map, so changes to the map are\n * reflected in the collection, and vice-versa.  If the map is\n * modified while an iteration over the collection is in progress\n * (except through the iterator's own <tt>remove</tt> operation),\n * the results of the iteration are undefined.  The collection\n * supports element removal, which removes the corresponding\n * mapping from the map, via the <tt>Iterator.remove</tt>,\n * <tt>Collection.remove</tt>, <tt>removeAll</tt>,\n * <tt>retainAll</tt> and <tt>clear</tt> operations.  It does not\n * support the <tt>add</tt> or <tt>addAll</tt> operations.\n *\n * @return a collection view of the values contained in this map\n */\nCollection<V> values();
\n

功能:返回包含Map中所有值的Collection集合。

\n","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"}],"tags":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/tags/java/"}]},{"title":"win11新电脑环境安装","slug":"win11xin-dian-nao-huan-jing-an-zhuang","date":"2024-02-04T01:19:58.000Z","updated":"2024-02-04T06:34:55.541Z","comments":true,"path":"/post/win11xin-dian-nao-huan-jing-an-zhuang/","link":"","excerpt":"","content":"

新的 mini 主机到了。为了之后开发方便,需要先安装各种软件,这里记录一下需要安装的软件,我这边以 Java 开发为主。

\n

Java

我这边 Java 下载安装的是 17 版本,下载地址:Java Downloads | Oracle。下面是下载页面,根据自己电脑的情况选择对应的版本安装即可。

\n

\"\"

\n

maven

我使用的是3.6.3,下载地址:maven

\n

\"\"

\n

nvm

安装包在GitHub中下载的,安装说明也挺详细,地址:nvm-windows

\n

git

官网下载地址:git

\n

\"\"

\n

nodejs

通过nvm安装LTS版本

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"1686. 石子游戏 VI(2024-02-02)","slug":"1686-shi-zi-you-xi-vi-2024-02-02","date":"2024-02-02T07:25:51.000Z","updated":"2024-02-02T07:32:45.137Z","comments":true,"path":"/post/1686-shi-zi-you-xi-vi-2024-02-02/","link":"","excerpt":"","content":"

力扣每日一题
题目:1686. 石子游戏 VI

\n

\"2024-02-02.png\"

\n

日期:2024-02-02
用时:15 m 0 s
时间:103ms
内存:57.95MB
代码:

class Solution {\n    public int stoneGameVI(int[] aliceValues, int[] bobValues) {\n        int cnt = aliceValues.length;\n        int[][] arrs = new int[cnt][2];\n        for (int i = 0; i < cnt; i++) {\n            arrs[i] = new int[]{aliceValues[i],bobValues[i]};\n        }\n        Arrays.sort(arrs,(a,b)->(b[0]+ b[1])-(a[0]+ a[1]));\n        int sub = 0;\n        for (int i = 0; i < cnt; i++) {\n            sub+=i%2==0?arrs[i][0]:-arrs[i][1];\n        }\n        return sub == 0? 0 :sub / Math.abs(sub);\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"2024-01-27-跑章整理","slug":"2024-01-27-pao-zhang-zheng-li","date":"2024-01-29T01:31:39.000Z","updated":"2024-02-05T06:36:39.349Z","comments":true,"path":"/post/2024-01-27-pao-zhang-zheng-li/","link":"","excerpt":"","content":"

总览

通过几天晚上的整理,规划出了今天的路线,大致要去下面这些地方:辽宁美术馆、城市规划馆、万豪酒店、k11、广电博物馆、文化路万达、盛京龙城、盛京大家庭、大悦城乐高、全运路万达;在跑章的过程中,又临时加入了大悦城的霸王别姬。

\n

计划

通过整理这些地方的地点和营业时间,初步按照下面的顺序依次跑章

\n

辽宁美术馆

\n

城市规划馆

\n

沈阳皇朝万豪酒店

\n

k11

\n

广电博物馆

\n

文化路万达

\n

盛京龙城

\n

盛京大家庭

\n

大悦城乐高

\n","categories":[{"name":"盖章","slug":"盖章","permalink":"https://hexo.huangge1199.cn/categories/%E7%9B%96%E7%AB%A0/"}],"tags":[{"name":"盖章","slug":"盖章","permalink":"https://hexo.huangge1199.cn/tags/%E7%9B%96%E7%AB%A0/"}]},{"title":"162. 寻找峰值(2023-12-18)","slug":"162-xun-zhao-feng-zhi-2023-12-18","date":"2023-12-18T03:10:01.000Z","updated":"2023-12-18T03:10:51.151Z","comments":true,"path":"/post/162-xun-zhao-feng-zhi-2023-12-18/","link":"","excerpt":"","content":"

力扣每日一题
题目:162. 寻找峰值

\n

\"2023-12-18.png\"
日期:2023-12-18
用时:10 m 9 s
时间:0 ms
内存:40.54 MB
代码:

class Solution {\n    public int findPeakElement(int[] nums) {\n        if(nums.length==1){\n            return 0;\n        }\n        if(nums.length==2){\n            return nums[0]>nums[1]?0:1;\n        }\n        if(nums[0]>nums[1]){\n            return 0;\n        }\n        if(nums[nums.length-1]>nums[nums.length-2]){\n            return nums.length-1;\n        }\n        for(int i=1;i<nums.length-1;i++){\n            if(nums[i]>nums[i-1]&&nums[i]>nums[i+1]){\n                return i;\n            }\n        }\n        return 0;\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"2415. 反转二叉树的奇数层(2023-12-15)","slug":"2415-fan-zhuan-er-cha-shu-de-qi-shu-ceng-2023-12-15","date":"2023-12-18T03:04:16.000Z","updated":"2023-12-18T03:05:01.251Z","comments":true,"path":"/post/2415-fan-zhuan-er-cha-shu-de-qi-shu-ceng-2023-12-15/","link":"","excerpt":"","content":"

力扣每日一题
题目:2415. 反转二叉树的奇数层
\"2023-12-15.png\"
日期:2023-12-15
用时:6 m 51 s
时间:0 ms
内存:46.97 MB
代码:

/**\n * Definition for a binary tree node.\n * public class TreeNode {\n *     int val;\n *     TreeNode left;\n *     TreeNode right;\n *     TreeNode() {}\n *     TreeNode(int val) { this.val = val; }\n *     TreeNode(int val, TreeNode left, TreeNode right) {\n *         this.val = val;\n *         this.left = left;\n *         this.right = right;\n *     }\n * }\n */\nclass Solution {\n    public TreeNode reverseOddLevels(TreeNode root) {\n        dfs(root.left, root.right, 1);\n        return root;\n    }\n\n    void dfs(TreeNode left,TreeNode right,int odd){\n        if(left==null){\n            return;\n        }\n        if(odd==1){\n            int temp = left.val;\n            left.val = right.val;\n            right.val = temp;\n        }\n        dfs(left.left, right.right, 1-odd);\n        dfs(left.right, right.left, 1-odd);\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"2132. 用邮票贴满网格图(2023-12-14)","slug":"2132-yong-you-piao-tie-man-wang-ge-tu-2023-12-14","date":"2023-12-14T01:39:13.000Z","updated":"2023-12-14T01:39:59.531Z","comments":true,"path":"/post/2132-yong-you-piao-tie-man-wang-ge-tu-2023-12-14/","link":"","excerpt":"","content":"

力扣每日一题
题目:2132. 用邮票贴满网格图
\"2023-12-14.png\"
日期:2023-12-14
用时:38 m 32 s
思路:使用前缀和 + 差分,只是往常做的是一维,这次变成了二维,原理差不多:先用二维前缀和快速判断每个 stampHeight × stampWidth 的子矩阵里是否全是空格(全 0),能贴邮票就在二维差分数组上给整个子矩阵加一,最后把差分还原,检查是否还有没被覆盖的空格。
时间:22ms
内存:98.24MB
代码:

class Solution {\n    public boolean possibleToStamp(int[][] grid, int stampHeight, int stampWidth) {\n        int xl = grid.length;\n        int yl = grid[0].length;\n\n        // 前缀和\n        int[][] sum = new int[xl+1][yl+1];\n        for(int i=1;i<=xl;i++){\n            for(int j=1;j<=yl;j++){\n                sum[i][j] = sum[i-1][j]+sum[i][j-1]-sum[i-1][j-1]+grid[i-1][j-1];\n            }\n        }\n\n        // 差分\n        int[][] cnt = new int[xl+2][yl+2];\n        for(int xStart=stampHeight;xStart<=xl;xStart++){\n            for(int yStart=stampWidth;yStart<=yl;yStart++){\n                int xEnd = xStart-stampHeight+1;\n                int yEnd = yStart-stampWidth+1;\n                if(sum[xStart][yStart]+sum[xEnd-1][yEnd-1]-sum[xStart][yEnd-1]-sum[xEnd-1][yStart]==0){\n                    cnt[xEnd][yEnd]++;\n                    cnt[xStart+1][yStart+1]++;\n                    cnt[xEnd][yStart+1]--;\n                    cnt[xStart+1][yEnd]--;\n                }\n            }\n        }\n\n        // 判断单元格是否能放邮戳\n        for(int i=1;i<=xl;i++){\n            for(int j=1;j<=yl;j++){\n                cnt[i][j] += cnt[i][j-1]+cnt[i-1][j]-cnt[i-1][j-1];\n                if(grid[i-1][j-1]==0&&cnt[i][j]==0){\n                    return false;\n                }\n            }\n        }\n\n        return true;\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"2697. 字典序最小回文串(2023-12-13)","slug":"2697-zi-dian-xu-zui-xiao-hui-wen-chuan-2023-12-13","date":"2023-12-13T01:01:34.000Z","updated":"2023-12-13T01:03:09.576Z","comments":true,"path":"/post/2697-zi-dian-xu-zui-xiao-hui-wen-chuan-2023-12-13/","link":"","excerpt":"","content":"

力扣每日一题
题目:2697. 字典序最小回文串
\"2023-12-13.png\"
日期:2023-12-13
用时:4 m 53 s
时间:7ms
内存:43.61MB
代码:

class Solution {\n    public String makeSmallestPalindrome(String s) {\n        char[] chs = s.toCharArray();\n        int size = s.length();\n        for(int i=0;i<size/2;i++){\n            if(chs[i]>chs[size-1-i]){\n                chs[i] = chs[size-1-i];\n            }else{\n                chs[size-1-i] = chs[i];\n            }\n        }\n        return new String(chs);\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"2454. 下一个更大元素 IV(2023-12-12)","slug":"2454-xia-yi-ge-geng-da-yuan-su-iv","date":"2023-12-13T00:55:55.000Z","updated":"2023-12-13T00:57:15.476Z","comments":true,"path":"/post/2454-xia-yi-ge-geng-da-yuan-su-iv/","link":"","excerpt":"","content":"

力扣每日一题
题目:2454. 下一个更大元素 IV
\"2023-12-12.png\"
日期:2023-12-12
用时:35 m 09 s
时间:614ms
内存:57.18MB
代码:

class Solution {\n    public int[] secondGreaterElement(int[] nums) {\n        int[] res = new int[nums.length];\n        Arrays.fill(res, -1);\n        List<Integer> list1 = new ArrayList<>();\n        List<Integer> list2 = new ArrayList<>();\n        for (int i = 0; i < nums.length; i++) {\n            while (!list2.isEmpty() && nums[list2.get(list2.size() - 1)] < nums[i]) {\n                res[list2.get(list2.size() - 1)] = nums[i];\n                list2.remove(list2.size() - 1);\n            }\n            int j = list1.size();\n            for(;j>0;j--){\n                if(nums[list1.get(j - 1)] >= nums[i]){\n                    break;\n                }\n            }\n            while (j<list1.size()) {\n                list2.add(list1.get(j));\n                list1.remove(j);\n            }\n            list1.add(i);\n        }\n        return res;\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"沈阳四家万达(2023-12-09、2023-12-10)","slug":"shen-yang-si-jia-wan-da-2023-12-09-2023-12-10","date":"2023-12-13T00:54:12.000Z","updated":"2023-12-13T00:55:20.471Z","comments":true,"path":"/post/shen-yang-si-jia-wan-da-2023-12-09-2023-12-10/","link":"","excerpt":"","content":"

全运路万达8枚
铁西万达4枚
北一路万达16枚
太原街万达8枚

\n

\"沈阳四家万达盖章打卡_17_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_18_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_14_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_13_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_16_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_15_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_11_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_12_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_9_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_10_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_3_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_2_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_4_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_5_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_1_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_7_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_8_夏夜晚风_来自小红书网页版.jpg\"

\n

\"沈阳四家万达盖章打卡_6_夏夜晚风_来自小红书网页版.jpg\"

\n

(摘自小红书https://www.xiaohongshu.com/explore/65758b310000000006020803?m_source=mengfanwetab)

\n","categories":[{"name":"盖章","slug":"盖章","permalink":"https://hexo.huangge1199.cn/categories/%E7%9B%96%E7%AB%A0/"}],"tags":[{"name":"盖章","slug":"盖章","permalink":"https://hexo.huangge1199.cn/tags/%E7%9B%96%E7%AB%A0/"}]},{"title":"2008. 出租车的最大盈利(2023-12-08)","slug":"2008-chu-zu-che-de-zui-da-ying-li-2023-12-08","date":"2023-12-13T00:52:23.000Z","updated":"2023-12-13T00:53:09.585Z","comments":true,"path":"/post/2008-chu-zu-che-de-zui-da-ying-li-2023-12-08/","link":"","excerpt":"","content":"

力扣每日一题

\n

题目:2008. 出租车的最大盈利

\"2023-12-08.png\"

\n

简短说明

今天的解题过程有点曲折,完全是一步一步优化出来的。看上面的截图:最开始的版本超时;加了记忆化搜索之后虽然通过了,但是执行时间不太理想;接下来我稍微优化了一下,执行时间基本没什么变化;最后尝试去掉递归、改成自底向上的递推,这次效果很显著,执行时间直接从 2000 多毫秒降到了 18 毫秒。

\n

过程

下面我分别把这四次的代码都展示出来,记录下每次的优化,代码展示顺序是按照上面的截图从下往上的

\n

超出时间限制

class Solution {\n    public long maxTaxiEarnings(int n, int[][] rides) {\n      List<int[]>[] prices = new ArrayList[n+1];\n      for(int[] ride:rides){\n        if(prices[ride[1]]==null){\n          prices[ride[1]] = new ArrayList<>();\n        }\n        prices[ride[1]].add(new int[]{ride[0],ride[1]-ride[0]+ride[2]});\n      }\n      return dfs(n,prices);\n    }\n    long dfs(int index,List<int[]>[] prices){\n      if(index==1){\n        return 0;\n      }\n      long res = dfs(index-1,prices);\n      if(prices[index]!=null){\n        for(int[] price:prices[index]){\n          res = Math.max(res,dfs(price[0],prices)+price[1]);\n        }\n      }\n      return res;\n    }\n}
\n

2219ms + 84.8MB

class Solution {\n    public long maxTaxiEarnings(int n, int[][] rides) {\n      List<int[]>[] prices = new ArrayList[n+1];\n      for(int[] ride:rides){\n        if(prices[ride[1]]==null){\n          prices[ride[1]] = new ArrayList<>();\n        }\n        prices[ride[1]].add(new int[]{ride[0],ride[1]-ride[0]+ride[2]});\n      }\n      max = new long[n+1];\n      return dfs(n,prices);\n    } \n    long[] max;\n    long dfs(int index,List<int[]>[] prices){\n      if(index==1){\n        return 0;\n      }\n      long res = dfs(index-1,prices);\n      if(prices[index]!=null){\n        for(int[] price:prices[index]){\n          if(max[price[0]]>0){\n            res = Math.max(res,max[price[0]]+price[1]);\n          }else{\n            res = Math.max(res,dfs(price[0],prices)+price[1]);\n          }\n        }\n      }\n      max[index] = res;\n      return res;\n    }\n}
\n

2306ms + 84.2MB

class Solution {\n    public long maxTaxiEarnings(int n, int[][] rides) {\n      List<int[]>[] prices = new ArrayList[n+1];\n      for(int[] ride:rides){\n        if(prices[ride[1]]==null){\n          prices[ride[1]] = new ArrayList<>();\n        }\n        prices[ride[1]].add(new int[]{ride[0],ride[1]-ride[0]+ride[2]});\n      }\n      max = new long[n+1];\n      return dfs(n,prices);\n    } \n    long[] max;\n    long dfs(int index,List<int[]>[] prices){\n      if(index==1){\n        return 0;\n      }\n      if(max[index]>0){\n        return max[index];\n      }\n      long res = dfs(index-1,prices);\n      if(prices[index]!=null){\n        for(int[] price:prices[index]){\n          res = Math.max(res,dfs(price[0],prices)+price[1]);\n        }\n      }\n      max[index] = res;\n      return res;\n    }\n}
\n

18ms + 67.3MB

class Solution {\n    public long maxTaxiEarnings(int n, int[][] rides) {\n      List<int[]>[] prices = new ArrayList[n+1];\n      for(int[] ride:rides){\n        if(prices[ride[1]]==null){\n          prices[ride[1]] = new ArrayList<>();\n        }\n        prices[ride[1]].add(new int[]{ride[0],ride[1]-ride[0]+ride[2]});\n      }\n      long[] max = new long[n+1];\n      for(int i=2;i<=n;i++){\n        max[i] = max[i-1];\n        if(prices[i]!=null){\n          for(int[] price:prices[i]){\n            max[i] = Math.max(max[i],max[price[0]]+price[1]);\n          }\n        }\n      }\n      return max[n];\n    }\n}
","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"1466. 重新规划路线(2023-12-07)","slug":"1466-chong-xin-gui-hua-lu-xian-2023-12-07","date":"2023-12-13T00:48:55.000Z","updated":"2023-12-13T00:50:31.436Z","comments":true,"path":"/post/1466-chong-xin-gui-hua-lu-xian-2023-12-07/","link":"","excerpt":"","content":"

力扣每日一题
题目:1466. 重新规划路线
\"2023-12-07.png\"
日期:2023-12-07
用时:45 m 36 s
时间:37ms
内存:69.64MB
代码:

class Solution {\n    public int minReorder(int n, int[][] connections) {\n        list = new List[n];\n        Arrays.setAll(list, k -> new ArrayList<>());\n        for (int[] connection : connections) {\n            int start = connection[0];\n            int end = connection[1];\n            list[start].add(new int[] {end, 1});\n            list[end].add(new int[] {start, 0});\n        }\n        return dfs(0, -1);\n    }\n    \n    List<int[]>[] list;\n\n    private int dfs(int index, int target) {\n        int ans = 0;\n        for (int[] num : list[index]) {\n            if (num[0] != target) {\n                ans += num[1] + dfs(num[0], index);\n            }\n        }\n        return ans;\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"2646. 最小化旅行的价格总和(2023-12-06)","slug":"2646-zui-xiao-hua-lu-xing-de-jie-ge-zong-he-2023-12-06","date":"2023-12-06T14:03:31.000Z","updated":"2023-12-07T01:07:09.586Z","comments":true,"path":"/post/2646-zui-xiao-hua-lu-xing-de-jie-ge-zong-he-2023-12-06/","link":"","excerpt":"","content":"

力扣每日一题
题目:2646. 最小化旅行的价格总和
\"2023-12-06.png\"
日期:2023-12-06
用时:30 m 14 s
时间:8ms
内存:42.98MB
思路:先统计旅行中每个节点路过的次数(dfs方法),再计算减半后的价格之和的最小值(dp方法),最后比较下减半和未减半的价格。dp方法中,对于相邻的父子节点有两种情况:

1. 当前节点价格减半时,相邻的子节点不能再减半;
2. 当前节点价格不减半时,相邻的子节点减半或不减半均可,取两者中较小的价格。

代码:

class Solution {\n    public int minimumTotalPrice(int n, int[][] edges, int[] price, int[][] trips) {\n      list = new ArrayList[n];\n      for(int i=0;i<n;i++){\n        list[i] = new ArrayList<>();\n      }\n      for(int[] edge:edges){\n        list[edge[0]].add(edge[1]);\n        list[edge[1]].add(edge[0]);\n      }\n      cnt = new int[n];\n      for(int[] trip:trips){\n        end = trip[1];\n        dfs(trip[0],-1);\n      }\n      int[] res = dp(0,-1,price);\n      return Math.min(res[0],res[1]);\n    }\n    List<Integer>[] list;\n    int end;\n    int[] cnt;\n    boolean dfs(int x, int fa) {\n        if (x == end) {\n            cnt[x]++;\n            return true;\n        }\n        for (int y : list[x]) {\n            if (y != fa && dfs(y, x)) {\n                cnt[x]++;\n                return true;\n            }\n        }\n        return false;\n    }\n    int[] dp(int index,int target,int[] price){\n      int prices = price[index]*cnt[index];\n      int halfPrices = prices/2;\n      for(int num:list[index]){\n        if(num!=target){\n          int[] res = dp(num,index,price);\n          prices += Math.min(res[0],res[1]);\n          halfPrices += res[0];\n        }\n      }\n      return new int[]{prices,halfPrices};\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"2477. 到达首都的最少油耗(2023-12-05)","slug":"2477-dao-da-shou-du-de-zui-shao-you-hao-2023-12-05","date":"2023-12-05T02:03:09.000Z","updated":"2023-12-07T01:05:51.468Z","comments":true,"path":"/post/2477-dao-da-shou-du-de-zui-shao-you-hao-2023-12-05/","link":"","excerpt":"","content":"

力扣每日一题
题目:2477. 到达首都的最少油耗
\"2023-12-05.png\"
日期:2023-12-05
用时:34 m 15 s
时间:37ms
内存:84.8MB
思路:分别计算每条路上通过的城市数量(数量/座位数,向上取整),然后求和,这里每条路上通过的城市数量实际就是图中每个节点的子节点数量。
代码:

class Solution {\n    public long minimumFuelCost(int[][] roads, int seats) {\n        int size = roads.length+1;\n        List<Integer>[] list = new ArrayList[size];\n        for(int i=0;i<size;i++){\n            list[i] = new ArrayList<>();\n        }\n        for(int[] road:roads){\n            int num1 = road[0];\n            int num2 = road[1];\n            list[num1].add(num2);\n            list[num2].add(num1);\n        };\n        dfs(0,-1,list,seats);\n        return sum;\n    }\n    long sum = 0;\n    private int dfs(int start,int end,List<Integer>[] list,int seats){\n        int cnt =1;\n        for(int num: list[start]){\n            if(num!=end){\n                cnt+=dfs(num,start,list,seats);\n            }\n        }\n        if(start>0){\n            sum+=(cnt-1)/seats+1;\n        }\n        return cnt;\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"1038. 从二叉搜索树到更大和树(2023-12-04)","slug":"1038-cong-er-cha-sou-suo-shu-dao-geng-da-he-shu-2023-12-04","date":"2023-12-04T02:27:05.000Z","updated":"2023-12-07T01:02:59.188Z","comments":true,"path":"/post/1038-cong-er-cha-sou-suo-shu-dao-geng-da-he-shu-2023-12-04/","link":"","excerpt":"","content":"

力扣每日一题
题目:1038. 从二叉搜索树到更大和树

\n

\"2023-12-04.png\"
日期:2023-12-04
用时:12 m 23 s
时间:0ms
内存:39.39MB
代码:

/**\n * Definition for a binary tree node.\n * public class TreeNode {\n *     int val;\n *     TreeNode left;\n *     TreeNode right;\n *     TreeNode() {}\n *     TreeNode(int val) { this.val = val; }\n *     TreeNode(int val, TreeNode left, TreeNode right) {\n *         this.val = val;\n *         this.left = left;\n *         this.right = right;\n *     }\n * }\n */\nclass Solution {\n    public TreeNode bstToGst(TreeNode root) {\n        dfs(root);\n        return root;\n    }\n    int sum = 0;\n    private void dfs(TreeNode node) {\n        if (node == null) {\n            return;\n        }\n        dfs(node.right);\n        sum += node.val;\n        node.val = sum;\n        dfs(node.left);\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"2661. 找出叠涂元素(2023-12-01)","slug":"2661-zhao-chu-die-tu-yuan-su-2023-12-01","date":"2023-12-01T01:44:29.000Z","updated":"2023-12-07T01:01:34.575Z","comments":true,"path":"/post/2661-zhao-chu-die-tu-yuan-su-2023-12-01/","link":"","excerpt":"","content":"

力扣每日一题
题目:2661. 找出叠涂元素

\n

\"2023-12-01.png\"
日期:2023-12-01
用时:7 m 4 s
时间:26ms
内存:67.45MB
代码:

class Solution {\n    public int firstCompleteIndex(int[] arr, int[][] mat) {\n        Map<Integer,int[]> map = new HashMap<>();\n        for(int i=0;i<mat.length;i++){\n            for(int j=0;j<mat[0].length;j++){\n                map.put(mat[i][j],new int[]{i,j});\n            }\n        }\n        int[] xc = new int[mat.length];\n        int[] yc = new int[mat[0].length];\n        for(int i=0;i<arr.length;i++){\n            int[] tmp = map.get(arr[i]);\n            xc[tmp[0]]++;\n            yc[tmp[1]]++;\n            if(xc[tmp[0]]==mat[0].length||yc[tmp[1]]==mat.length){\n                return i;\n            }\n        }\n        return 0;\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"1657. 确定两个字符串是否接近(2023-11-30)","slug":"1657-que-ding-liang-ge-zi-fu-chuan-shi-fou-jie-jin-2023-11-30","date":"2023-11-30T02:33:56.000Z","updated":"2023-12-07T00:59:51.192Z","comments":true,"path":"/post/1657-que-ding-liang-ge-zi-fu-chuan-shi-fou-jie-jin-2023-11-30/","link":"","excerpt":"","content":"

力扣每日一题
题目:1657. 确定两个字符串是否接近

\n

\"2023-11-30.png\"
日期:2023-11-30
用时:21 m 07 s
时间:11ms
内存:43.70MB
代码:

class Solution {\n    public boolean closeStrings(String word1, String word2) {\n        if(word1.length()!=word2.length()){\n            return false;\n        }\n        int[] arr1 = new int[26];\n        int[] arr2 = new int[26];\n        int mask1=0;\n        int mask2=0;\n        for(int i=0;i<word1.length();i++){\n            arr1[word1.charAt(i)-'a']++;\n            arr2[word2.charAt(i)-'a']++;\n            mask1 |= 1<<(word1.charAt(i)-'a');\n            mask2 |= 1<<(word2.charAt(i)-'a');\n        }\n        Arrays.sort(arr1);\n        Arrays.sort(arr2);\n        return Arrays.equals(arr1,arr2)&&mask1==mask2;\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"2336. 无限集中的最小数字(2023.11.29)","slug":"2336-wu-xian-ji-zhong-de-zui-xiao-shu-zi-2023-11-29","date":"2023-11-29T01:57:56.000Z","updated":"2023-12-07T00:58:03.312Z","comments":true,"path":"/post/2336-wu-xian-ji-zhong-de-zui-xiao-shu-zi-2023-11-29/","link":"","excerpt":"","content":"

力扣每日一题
题目:2336. 无限集中的最小数字
\"2023-11-29.png\"
日期:2023-11-29
用时:3 m 50 s
时间:71ms
内存:43.68MB
代码:

class SmallestInfiniteSet {\n\n    List<Integer> list;\n\n    public SmallestInfiniteSet() {\n        list = new ArrayList<>();\n        for(int i=1;i<1001;i++){\n            list.add(i);\n        }\n        Collections.sort(list);\n    }\n    \n    public int popSmallest() {\n        int num = list.get(0);\n        list.remove(0);\n        return num;\n    }\n    \n    public void addBack(int num) {\n        if(!list.contains(num)){\n            list.add(num);\n            Collections.sort(list);\n        }\n    }\n}\n\n/**\n * Your SmallestInfiniteSet object will be instantiated and called as such:\n * SmallestInfiniteSet obj = new SmallestInfiniteSet();\n * int param_1 = obj.popSmallest();\n * obj.addBack(num);\n */

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"1670. 设计前中后队列(2023.11.28)","slug":"1670-she-ji-qian-zhong-hou-dui-lie-2023-11-28","date":"2023-11-28T07:56:04.000Z","updated":"2023-12-07T00:55:56.841Z","comments":true,"path":"/post/1670-she-ji-qian-zhong-hou-dui-lie-2023-11-28/","link":"","excerpt":"","content":"

力扣每日一题
题目:1670. 设计前中后队列
日期:2023-11-28
用时:8 m 23 s
时间:6ms
内存:43.55MB
代码:

class FrontMiddleBackQueue {\n\n    List<Integer> list;\n\n    public FrontMiddleBackQueue() {\n        list = new ArrayList<>();\n    }\n    \n    public void pushFront(int val) {\n        list.add(0,val);\n    }\n    \n    public void pushMiddle(int val) {\n        list.add(list.size()/2,val);\n    }\n    \n    public void pushBack(int val) {\n        list.add(val);\n    }\n    \n    public int popFront() {\n        if(list.size()==0){\n            return -1;\n        }\n        int res = list.get(0);\n        list.remove(0);\n        return res;\n    }\n    \n    public int popMiddle() {\n        if(list.size()==0){\n            return -1;\n        }\n        int res = list.get((list.size()-1)/2);\n        list.remove((list.size()-1)/2);\n        return res;\n    }\n    \n    public int popBack() {\n        if(list.size()==0){\n            return -1;\n        }\n        int res = list.get(list.size()-1);\n        list.remove(list.size()-1);\n        return res;\n    }\n}\n\n/**\n * Your FrontMiddleBackQueue object will be instantiated and called as such:\n * FrontMiddleBackQueue obj = new FrontMiddleBackQueue();\n * obj.pushFront(val);\n * obj.pushMiddle(val);\n * obj.pushBack(val);\n * int param_4 = obj.popFront();\n * int param_5 = obj.popMiddle();\n * int param_6 = obj.popBack();\n */

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"907. 子数组的最小值之和(2023.11.27)","slug":"907-zi-shu-zu-de-zui-xiao-zhi-zhi-he-2023-11-27","date":"2023-11-27T01:41:39.000Z","updated":"2023-12-07T00:46:34.473Z","comments":true,"path":"/post/907-zi-shu-zu-de-zui-xiao-zhi-zhi-he-2023-11-27/","link":"","excerpt":"","content":"

力扣每日一题
题目:907. 子数组的最小值之和
日期:2023-11-27
用时:14 m 14 s
时间:19ms
内存:47.42MB
代码:

class Solution {\n    public int sumSubarrayMins(int[] arr) {\n        int n=arr.length;\n        int res = 0;\n        int mod=1000000007;\n        Deque<Integer> deque=new ArrayDeque<>();\n        for (int i=0; i <= n; i++) {\n            int cur = i<n?arr[i] : 0;\n            while (!deque.isEmpty() && arr[deque.peekLast()] >= cur) {\n                int index = deque.pollLast();\n                int l=deque.isEmpty()?-1:deque.peekLast();\n                res += 1L*(index-l)*(i-index)%mod*arr[index]%mod;\n                res %= mod;\n            }\n            deque.addLast(i);\n        }\n        return res;\n    }\n}

\n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"Python静态爬虫","slug":"pyStaticSpiders","date":"2023-11-01T02:19:20.000Z","updated":"2023-11-01T03:33:01.838Z","comments":true,"path":"/post/pyStaticSpiders/","link":"","excerpt":"","content":"

什么是Python静态爬虫

Python静态爬虫是一种使用Python编写的网络爬虫程序,用于从互联网上抓取网页内容。与动态爬虫不同,静态爬虫只获取网页的HTML源代码,不执行JavaScript代码。因此,静态爬虫适用于那些主要通过HTML展示信息的网站。

\n

什么是爬虫

网络爬虫,又被称为网页蜘蛛、网络机器人等,是一种按照一定的规则,自动地抓取万维网信息的程序或者脚本。通俗的讲,就是通过程序去获取web页面上自己想要的数据,也就是自动抓取数据。
你可以将每个爬虫视作你的“分身”,它的基本操作就像模拟人的行为去各个网站溜达,点点按钮,查查数据,或者把看到的信息背回来。比如搜索引擎离不开爬虫,比如百度搜索引擎的爬虫叫作百度蜘蛛(Baiduspider)。百度蜘蛛每天会在海量的互联网信息中进行爬取,爬取优质信息并收录,当用户在百度搜索引擎上检索对应关键词时,百度将对关键词进行分析处理,从收录的网页中找出相关网页,按照一定的排名规则进行排序并将结果展现给用户。

\n

爬虫可以做什么

爬虫可以用于爬取图片、视频或其他任何可以通过浏览器访问的资源。通过编写爬虫程序,可以模拟浏览器向服务器发送请求,获取所需的资源,并将其保存到本地或进行进一步处理和分析。

\n

对于图片,爬虫可以爬取网页上的图片链接,然后将图片下载到本地。这可以用于批量下载图片,或者从多个网站上收集特定主题的图片。

\n

对于视频,爬虫可以爬取视频的URL或嵌入代码,然后使用相应的工具将视频下载到本地。这可以用于下载在线视频、音乐视频或其他多媒体内容。

\n

需要注意的是,在爬取资源时需要遵守网站的使用条款和服务协议,并尊重知识产权和版权法律。此外,为了避免给目标网站造成过大的负担,建议合理设置爬取频率和并发请求数。

\n

爬虫的本质是什么

爬虫可以用于以下方面:

\n
    \n
1. 数据采集:爬虫可以模拟浏览器向服务器发送请求,获取网页中的数据。通过编写爬虫程序,可以自动化地从网站上抓取所需的数据,如商品信息、新闻内容、评论等。
2. 搜索引擎:爬虫是搜索引擎的重要组成部分。搜索引擎通过爬取互联网上的网页,建立索引库,并根据用户的搜索请求返回相关的搜索结果。
3. 数据分析:爬虫可以从各种网站上抓取大量的数据,然后对这些数据进行分析和处理。通过对数据的挖掘和分析,可以发现有价值的信息和趋势,为决策提供支持。
4. 价格比较:爬虫可以定期爬取不同电商平台上的商品信息,包括价格、评论等。通过对这些数据的分析,可以帮助用户找到最优惠的购物选择。
5. 舆情监测:爬虫可以定期爬取社交媒体、新闻网站等平台上的评论和帖子,对其中的内容进行情感分析和主题分类。这可以帮助企业了解公众对其产品或品牌的看法,及时调整营销策略。
\n

总之,爬虫的本质是通过模拟浏览器自动向服务器发送请求,获取、处理并解析结果的自动化程序。它可以用于数据采集、搜索引擎、数据分析、价格比较和舆情监测等多个领域。

\n

Python静态爬虫的实现方法

    \n
1. 发送HTTP请求:静态爬虫首先向目标网站发送一个HTTP请求,以获取网页的HTML源代码。
2. 解析HTML:静态爬虫使用HTML解析器(如BeautifulSoup、lxml等)对获取到的HTML源代码进行解析,提取出所需的信息。
3. 存储数据:静态爬虫将提取到的数据存储在本地文件或数据库中,以便后续处理和分析。
4. 重复执行:静态爬虫可以设置定时任务,定期执行上述操作,以持续抓取网页内容(完整流程可参考下面的示例)。
\n
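下面是一个把上述四步串起来的最小示例(仅作示意:示例中的目标地址 https://example.com、提取的标签以及保存的文件名都是假设的,实际使用时请替换成自己的目标站点,并遵守对方网站的使用条款和爬取规则):

import requests
from bs4 import BeautifulSoup

# 1. 发送HTTP请求,获取网页的HTML源代码
url = "https://example.com"  # 示例地址,请替换为实际目标
response = requests.get(url, timeout=10)
response.raise_for_status()

# 2. 解析HTML,提取所需信息(这里以页面标题和所有链接为例)
soup = BeautifulSoup(response.text, "html.parser")
title = soup.title.string if soup.title else ""
links = [a.get("href") for a in soup.find_all("a") if a.get("href")]

# 3. 存储数据:保存到本地文件,便于后续处理和分析
with open("result.txt", "w", encoding="utf-8") as f:
    f.write(title + "\n")
    f.write("\n".join(links))

print(f"抓取完成:标题={title},共{len(links)}个链接")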

Python静态爬虫常用库

requests

介绍

Requests 是一个 Python 第三方库,用于发送 HTTP/1.1 请求。它继承了 urllib2 的所有特性,并提供了更加简洁、友好的 API。以下是 Requests 的一些主要特性:

\n
    \n
1. 支持 HTTP 连接保持和连接池。
2. 支持使用 cookie 保持会话。
3. 支持文件上传。
4. 自动确定响应内容的编码。
5. 支持国际化的 URL 和 POST 数据自动编码。
\n

安装

要使用 Requests 库,首先需要安装。可以通过以下命令安装:

pip install requests\n# 或者\npip3 install requests

\n
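安装完成后,可以用下面这个简短的例子验证 requests 的基本用法(其中的 httpbin.org 只是一个常用的公开测试地址,仅作演示):

import requests

# 发送 GET 请求(也支持 post、put、delete 等方法)
resp = requests.get("https://httpbin.org/get", params={"q": "python"}, timeout=10)

print(resp.status_code)   # HTTP 状态码,例如 200
print(resp.encoding)      # 自动确定的响应编码
print(resp.json())        # 将 JSON 响应解析为 Python 字典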

BeautifulSoup

一个用于解析HTML和XML文档的库,可以方便地提取所需信息。

\n

lxml

一个高性能的Python库,用于处理XML和HTML文档。

\n

re

Python内置的正则表达式库,用于匹配和提取文本中的特定模式。

\n

pymongo

pymongo是Python中用来操作MongoDB的一个库。

\n

mongoengine

MongoEngine是一个专为Python设计的库,用于操作MongoDB数据库。

\n

redis

redis 是 Python 中用来操作 Redis 数据库的客户端库。

pymysql

PyMySQL 是一个纯 Python 实现的 MySQL 客户端库,用于连接和操作 MySQL 数据库。

总结

Python静态爬虫是一种简单易用的网络爬虫技术,可以帮助我们快速地从互联网上抓取网页内容。通过学习Python静态爬虫的基本概念、实现方法和常用库,初学者可以轻松入门Python静态爬虫,为进一步深入学习网络爬虫打下坚实的基础。

\n","categories":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/categories/python/"}],"tags":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/tags/python/"}]},{"title":"Windows系统下设置程序开机自启(WinSW)","slug":"windowsxi-tong-xia-she-zhi-cheng-xu-kai-ji-zi-qi-winsw","date":"2023-10-17T09:54:01.000Z","updated":"2023-12-07T00:41:28.139Z","comments":true,"path":"/post/windowsxi-tong-xia-she-zhi-cheng-xu-kai-ji-zi-qi-winsw/","link":"","excerpt":"","content":"

介绍

WinSW可以将Windows上的任何程序作为系统服务进行管理,以达到开机自启的效果。

\n

支持的平台

WinSW需要运行在拥有.NET Framework 4.6.1或者更新版本的Windows平台下

\n

下载

\n

使用说明

全局应用

    \n
1. 获取WinSW.exe文件
2. 编写myapp.xml文件(详细内容看XML配置文件)
3. 运行winsw install myapp.xml [options]安装服务,使其写入系统服务中
4. 运行winsw start myapp.xml 开启服务
5. 运行winsw status myapp.xml 查看服务的运行状态
\n

单一应用

    \n
1. 获取WinSW.exe文件并将其更名为你的服务名(例如myapp.exe)
2. 编写myapp.xml文件
3. 请确保前面两个文件在同一目录
4. 运行myapp.exe install [options]安装服务,使其写入系统服务中
5. 运行myapp.exe start开启服务
6. 运行myapp.exe status查看服务的运行状态
\n

命令

除了使用说明中的install、start、status三个命令外,WinSW还提供了其他的命令,具体命令及说明如下:

\n\n

扩展命令:

\n\n

XML配置文件

文件结构

xml文件的根元素必须是 <service>, 并支持以下的子元素

\n

例子:

\n
<service>\n  <id>jenkins</id>\n  <name>Jenkins</name>\n  <description>This service runs Jenkins continuous integration system.</description>\n  <env name="JENKINS_HOME" value="%BASE%"/>\n  <executable>java</executable>\n  <arguments>-Xrs -Xmx256m -jar "%BASE%\\jenkins.war" --httpPort=8080</arguments>\n  <log mode="roll"></log>\n</service>
\n

环境变量扩展

配置 XML 文件可以包含 %Name% 形式的环境变量扩展。如果发现这种情况,将自动用变量的实际值替换。如果引用了未定义的环境变量,则不会进行替换。

\n

此外,服务包装器还会自行设置环境变量 BASE,该变量指向包含重命名后的 WinSW.exe 的目录。这对引用同一目录中的其他文件非常有用。由于这本身就是一个环境变量,因此也可以从服务包装器启动的子进程中访问该值。

\n

配置条目

id

必填 指定 Windows 内部用于标识服务的 ID。在系统中安装的所有服务中,该 ID 必须是唯一的,且应完全由字母数字字符组成。

\n
<id>jenkins</id>
\n

executable

必填 该元素指定要启动的可执行文件。它可以是绝对路径,也可以直接指定可执行文件的名称,然后从 PATH 中搜索(但要注意的是,服务通常以不同的用户账户运行,因此它的 PATH 可能与 shell 不同)。

\n
<executable>java</executable>
\n

name

可选项 服务的简短显示名称,可以包含空格和其他字符。该名称不能太长,而且在给定系统的所有服务中也必须是唯一的。

\n
<name>Jenkins</name>
\n

description

可选 对服务的长篇可读描述。当服务被选中时,它会显示在 Windows 服务管理器中。

\n
<description>This service runs Jenkins continuous integration system.</description>
\n

startmode

可选 此元素指定 Windows 服务的启动模式,可以是 Automatic(自动)或 Manual(手动)。有关详细信息,请参阅 ChangeStartMode 方法(https://learn.microsoft.com/zh-cn/windows/win32/cimwin32prov/changestartmode-method-in-class-win32-service)。默认值为 Automatic(自动)。

\n
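按照上面的说明,一个简单的配置示例如下(值仅为示意):
<startmode>Automatic</startmode>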

delayedAutoStart

可选 如果定义了Automatic模式,此布尔选项将启用延迟启动模式。更多信息,请参阅Startup Processes and Delayed Automatic Start

\n

请注意,该启动模式不适用于早于 Windows 7 和 Windows Server 2008 的旧版本系统。在这种情况下,Windows 服务安装可能会失败。

\n
<delayedAutoStart>true</delayedAutoStart>
\n

depend

可选 指定此服务依赖的其他服务的 ID。当服务 X 依赖于服务 Y 时,X 只能在 Y 运行时运行。

\n

可使用多个元素指定多个依赖关系。

\n
<depend>Eventlog</depend>\n<depend>W32Time</depend>
\n

log

可选 用 <logpath> 设置不同的日志目录,并通过 mode 属性设置日志模式:append(默认)、reset(清除日志)、ignore(忽略)、roll(移动到 *.old)。

\n

更多信息,请参阅Logging and error reporting

\n
<log mode="roll"></log>
\n

arguments

可选 元素指定要传递给可执行文件的参数。

\n
<arguments>arg1 arg2 arg3</arguments>
\n

或者

\n
<arguments>\n  arg1\n  arg2\n  arg3\n</arguments>
\n

stopargument/stopexecutable

可选 当服务被请求停止时,winsw 默认会简单地调用 TerminateProcess function立即杀死服务。但是,如果存在 <stoparguments> 元素,winsw 将使用指定的参数启动另一个 <executable> 进程(或 <stopexecutable>,如果已指定),并期望该进程执行服务进程的优雅关闭。

\n

然后,Winsw 将等待这两个进程自行退出,然后向 Windows 报告服务已终止。

\n

使用 <stoparguments> 时,必须使用 <startarguments> 而不是 <arguments>,如下例所示:

\n
<executable>catalina.sh</executable>\n<startarguments>jpda run</startarguments>\n\n<stopexecutable>catalina.sh</stopexecutable>\n<stoparguments>stop</stoparguments>
\n

Additional commands

扩展命令包括prestart,poststart,prestop,poststop四个,以prestart为例写法如下:

\n
<prestart>\n  <executable></executable>\n  <arguments></arguments>\n  <stdoutPath></stdoutPath>\n  <stderrPath></stderrPath>\n</prestart>
\n\n

共用的命令如下:

\n\n

在 stdoutPath 或 stderrPath 中指定 NUL 可丢弃相应的数据流

\n

preshutdown

当系统关闭时,让服务有更多时间停止。

\n

系统默认的预关机超时时间为三分钟。

\n
<preshutdown>false</preshutdown>\n<preshutdownTimeout>3 min</preshutdownTimeout>
\n

stoptimeout

当服务被请求停止时,winsw 会首先尝试向控制台应用程序发送 Ctrl+C 信号,或向 Windows 应用程序发布关闭消息,然后等待长达 15 秒的时间,让进程自己优雅地退出。如果超时或无法发送信号或消息,winsw 就会立即终止服务。

\n

通过这个可选元素,您可以更改 “15 秒 “的值,这样就可以控制 winsw 让服务自行关闭的时间。

\n
<stoptimeout>10sec</stoptimeout>
\n

Environment

如有必要,可多次指定该可选元素,以指定要为子进程设置的环境变量。

\n
<env name="HOME" value="c:\\abc" />
\n

interactive

如果指定了此可选元素,则允许服务与桌面交互,如显示新窗口和对话框。

\n
<interactive>true</interactive>
\n

请注意,自引入 UAC(Windows Vista 及以后版本)以来,服务已不再真正允许与桌面交互。在这些操作系统中,这样做的目的只是让用户切换到一个单独的窗口站来与服务交互。

\n

beeponshutdown

该可选元素用于在服务关闭时发出simple tones。此功能只能用于调试,因为某些操作系统和硬件不支持此功能。

\n
<beeponshutdown>true</beeponshutdown>
\n

download

可以多次指定这个可选元素,以便让服务包装器从 URL 获取资源并将其作为文件放到本地。此操作在服务启动时,即 <executable> 指定的应用程序启动前运行。

\n

对于需要身份验证的服务器,必须根据身份验证类型指定一些参数。只有基本身份验证需要额外的子参数。支持的身份验证类型有

\n\n

参数 unsecureAuth 仅在传输协议为 HTTP(未加密数据传输)时有效。这是一个安全漏洞,因为凭据是以明文发送的!对于 SSPI 身份验证来说,这并不重要,因为身份验证令牌是加密的。

\n

对于使用 HTTPS 传输协议的目标服务器来说,颁发服务器证书的 CA 必须得到客户端的信任。当服务器位于互联网上时,通常会出现这种情况。当一个组织在内部网中使用自行签发的 CA 时,情况可能并非如此。在这种情况下,有必要将 CA 导入 Windows 客户端的证书 MMC。请参阅 “Manage Trusted Root Certificates” 中的说明。必须将自行签发的 CA 导入计算机的可信根证书颁发机构。

\n

默认情况下,如果下载操作失败(例如 from 指定的地址不可用),下载命令不会导致服务启动失败。为了在这种情况下让下载失败导致服务启动失败,可以指定 failOnError 布尔属性。

\n

要指定自定义代理,请使用参数 proxy,格式如下:

\n\n
<download from="http://example.com/some.dat" to="%BASE%\\some.dat" />\n\n<download from="http://example.com/some.dat" to="%BASE%\\some.dat" failOnError="true"/>\n\n<download from="http://example.com/some.dat" to="%BASE%\\some.dat" proxy="http://192.168.1.5:80/"/>\n\n<download from="https://example.com/some.dat" to="%BASE%\\some.dat" auth="sspi" />\n\n<download from="https://example.com/some.dat" to="%BASE%\\some.dat" failOnError="true"\n          auth="basic" user="aUser" password="aPassw0rd" />\n\n<download from="http://example.com/some.dat" to="%BASE%\\some.dat"\n          proxy="http://aUser:aPassw0rd@192.168.1.5:80/"\n          auth="basic" unsecureAuth="true"\n          user="aUser" password="aPassw0rd" />
\n

这是开发自我更新服务的另一个有用的组成部分。

\n

自 2.7 版起,如果目标文件存在,WinSW 将在 If-Modified-Since 标头中发送其最后写入时间,如果收到 304 Not Modified,则跳过下载。

\n

onfailure

当 winsw 启动的进程失败(即以非零退出代码退出)时,这个可选的可重复元素将控制其行为。

\n
<onfailure action="restart" delay="10 sec"/>\n<onfailure action="restart" delay="20 sec"/>\n<onfailure action="reboot" />
\n

例如,上述配置会导致服务在第一次故障后 10 秒内重新启动,在第二次故障后 20 秒内重新启动,然后如果服务再次发生故障,Windows 将重新启动。

\n

每个元素都包含一个强制的 action 属性和可选的 delay 属性,前者用于控制 Windows SCM 将采取的行动,后者用于控制采取该行动前的延迟时间。action 的合法值为

\n\n

delay 属性的后缀可以是 sec/secs/min/mins/hour/hours/day/days。如果不指定 delay 属性,则默认为 0。

\n

如果服务不断发生故障,并且超过了配置的 <onfailure>次数,则会重复上次的操作。因此,如果只想始终自动重启服务,只需像这样指定一个 <onfailure> 元素即可:

\n
<onfailure action="restart" />
\n

resetfailure

此可选元素控制 Windows SCM 重置故障计数的时间。例如,如果您指定 1 小时,而服务持续运行的时间超过一小时,那么故障计数将重置为零。

\n

换句话说,这是您认为服务成功运行的持续时间。默认为 1 天。

\n
<resetfailure>1 hour</resetfailure>
\n

Security descriptor

SDDL 格式的服务安全描述符字符串。

\n

有关详细信息,请参阅安全描述符定义语言

\n
<securityDescriptor></securityDescriptor>
\n

Service account

服务默认安装为 LocalSystem 账户。如果您的服务不需要很高的权限级别,可以考虑使用 LocalService 账户NetworkService 帐户或用户账户。

\n

要使用用户账户,请像这样指定 <serviceaccount> 元素:

\n
<serviceaccount>\n  <username>DomainName\\UserName</username>\n  <password>Pa55w0rd</password>\n  <allowservicelogon>true</allowservicelogon>\n</serviceaccount>
\n

<username>的格式为 DomainName\\UserName 或 UserName@DomainName。如果账户属于内置域,则可以指定 .\\UserName。

\n

<allowservicelogon> 是可选项。如果设置为 true,将自动为列出的账户设置 “允许以服务身份登录 “的权限。

\n

要使用Group Managed Service Accounts Overview,请在账户名后追加 $ 并删除 <password> 元素:

\n
<serviceaccount>\n  <username>DomainName\\GmsaUserName$</username>\n  <allowservicelogon>true</allowservicelogon>\n</serviceaccount>
\n

LocalSystem account

要明确使用LocalSystem 帐户 ,请指定以下内容:

\n
<serviceaccount>\n  <username>LocalSystem</username>\n</serviceaccount>
\n

请注意,该账户没有密码,因此提供的任何密码都将被忽略。

\n

LocalService account

要使用 LocalService 帐户,请指定以下内容:

\n
<serviceaccount>\n  <username>NT AUTHORITY\\LocalService</username>\n</serviceaccount>
\n

请注意,该账户没有密码,因此提供的任何密码都将被忽略。

\n

NetworkService account

要使用 NetworkService 帐户,请指定以下内容:

\n
<serviceaccount>\n  <username>NT AUTHORITY\\NetworkService</username>\n</serviceaccount>
\n

请注意,该账户没有密码,因此提供的任何密码都将被忽略。

\n

prompt

可选。提示输入用户名和密码。

\n
<serviceaccount>\n  <prompt>dialog|console</prompt>\n</serviceaccount>
\n\n

Working directory

某些服务在运行时需要指定工作目录。为此,请像这样指定<workingdirectory>元素:

\n
<workingdirectory>C:\\application</workingdirectory>
\n

Priority

可选择指定服务进程的调度优先级(相当于 Unix nice),可选值包括idle, belownormal, normal, abovenormal, high, realtime(不区分大小写)。

\n
<priority>idle</priority>
\n

指定高于正常值的优先级会产生意想不到的后果。有关详细信息,请参阅 .NET 文档中的 ProcessPriorityClass 枚举。此功能的主要目的是以较低的优先级启动进程,以免干扰计算机的交互式使用。

\n

Auto refresh

<autoRefresh>true</autoRefresh>
\n

当服务启动或执行以下命令时,自动刷新服务属性:

\n\n

默认值为 true。

\n

sharedDirectoryMapping

默认情况下,即使在 Windows 服务配置文件中进行了配置,Windows 也不会为服务建立共享驱动器映射。由于域策略的原因,有时无法解决这个问题。

\n

这样就可以在启动可执行文件之前映射外部共享目录。

\n
<sharedDirectoryMapping>\n  <map label="N:" uncpath="\\\\UNC" />\n  <map label="M:" uncpath="\\\\UNC2" />\n</sharedDirectoryMapping>
","categories":[{"name":"工具","slug":"工具","permalink":"https://hexo.huangge1199.cn/categories/%E5%B7%A5%E5%85%B7/"}],"tags":[{"name":"工具","slug":"工具","permalink":"https://hexo.huangge1199.cn/tags/%E5%B7%A5%E5%85%B7/"}]},{"title":"Windows系统下设置程序开机自启(WinSW)","slug":"winsw","date":"2023-10-13T00:45:13.000Z","updated":"2023-10-17T08:51:33.918Z","comments":true,"path":"/post/winsw/","link":"","excerpt":"","content":"

介绍

WinSW可以将Windows上的任何程序作为系统服务进行管理,已达到开机自启的效果。

\n

支持的平台

WinSW需要运行在拥有.NET Framework 4.6.1或者更新版本的Windows平台下

\n

下载

\n

使用说明

全局应用

    \n
  1. 获取WinSW.exe文件
  2. \n
  3. 编写myapp.xml文件(详细内容看XML配置文件
  4. \n
  5. 运行winsw install myapp.xml [options]安装服务,使其写入系统服务中
  6. \n
  7. 运行winsw start myapp.xml 开启服务
  8. \n
  9. 运行winsw status myapp.xml 查看服务的运行状态
  10. \n
\n

单一应用

    \n
  1. 获取WinSW.exe文件并将其更名为你的服务名(例如myapp.exe).
  2. \n
  3. 编写myapp.xml文件
  4. \n
  5. 请确保前面两个文件在同一目录
  6. \n
  7. 运行myapp.exe install [options]安装服务,使其写入系统服务中
  8. \n
  9. 运行myapp.exe start开启服务
  10. \n
  11. 运行myapp status myapp.xml 查看服务的运行状态
  12. \n
\n

命令

除了使用说明中的installstartstatus三个命令外,WinSW还提供了其他的命令,具体命令及说明如下:

\n\n

扩展命令:

\n\n

XML配置文件

文件结构

xml文件的根元素必须是 <service>, 并支持以下的子元素

\n

例子:

\n
<service>\n  <id>jenkins</id>\n  <name>Jenkins</name>\n  <description>This service runs Jenkins continuous integration system.</description>\n  <env name="JENKINS_HOME" value="%BASE%"/>\n  <executable>java</executable>\n  <arguments>-Xrs -Xmx256m -jar "%BASE%\\jenkins.war" --httpPort=8080</arguments>\n  <log mode="roll"></log>\n</service>
\n

环境变量扩展

配置 XML 文件可以包含 %Name% 形式的环境变量扩展。如果发现这种情况,将自动用变量的实际值替换。如果引用了未定义的环境变量,则不会进行替换。

\n

此外,服务包装器还会自行设置环境变量 BASE,该变量指向包含重命名后的 WinSW.exe 的目录。这对引用同一目录中的其他文件非常有用。由于这本身就是一个环境变量,因此也可以从服务包装器启动的子进程中访问该值。

\n

配置条目

id

必填 指定 Windows 内部用于标识服务的 ID。在系统中安装的所有服务中,该 ID 必须是唯一的,且应完全由字母数字字符组成。

\n
<id>jenkins</id>
\n

executable

必填 该元素指定要启动的可执行文件。它可以是绝对路径,也可以直接指定可执行文件的名称,然后从 PATH 中搜索(但要注意的是,服务通常以不同的用户账户运行,因此它的 PATH 可能与 shell 不同)。

\n
<executable>java</executable>
\n

name

可选项 服务的简短显示名称,可以包含空格和其他字符。该名称不能太长,如 ,而且在给定系统的所有服务中也必须是唯一的。

\n
<name>Jenkins</name>
\n

description

可选 对服务的长篇可读描述。当服务被选中时,它会显示在 Windows 服务管理器中。

\n
<description>This service runs Jenkins continuous integration system.</description>
\n

startmode

可选 此元素指定 Windows 服务的启动模式。可以是以下值之一:自动或手动。有关详细信息,请参阅 ChangeStartMode - Win32 apps | Microsoft Learn](https://learn.microsoft.com/zh-cn/windows/win32/cimwin32prov/changestartmode-method-in-class-win32-service)) 方法。默认值为自动Automatic

\n

delayedAutoStart

可选 如果定义了Automatic模式,此布尔选项将启用延迟启动模式。更多信息,请参阅Startup Processes and Delayed Automatic Start

\n

请注意,该启动模式不适用于 Windows 7 和 Windows Server 2008 以上的旧版本。在这种情况下,Windows 服务安装可能会失败。

\n
<delayedAutoStart>true</delayedAutoStart>
\n

depend

可选 指定此服务依赖的其他服务的 ID。当服务 X 依赖于服务 Y 时,X 只能在 Y 运行时运行。

\n

可使用多个元素指定多个依赖关系。

\n
<depend>Eventlog</depend>\n<depend>W32Time</depend>
\n

log

可选 用 和启动模式设置不同的日志目录:append(默认)、reset(清除日志)、ignore(忽略)、roll(移动到 *.old)。

\n

更多信息,请参阅Logging and error reporting

\n
<log mode="roll"></log>
\n

arguments

可选 元素指定要传递给可执行文件的参数。

\n
<arguments>arg1 arg2 arg3</arguments>
\n

或者

\n
<arguments>\n  arg1\n  arg2\n  arg3\n</arguments>
\n

stopargument/stopexecutable

可选 当服务被请求停止时,winsw 会简单地调用 TerminateProcess function立即杀死服务。但是,如果存在 元素,winsw 将使用指定的参数启动另一个 进程(或 ,如果已指定),并期望该进程启动服务进程的优雅关闭。

\n

然后,Winsw 将等待这两个进程自行退出,然后向 Windows 报告服务已终止。

\n

使用 时,必须使用 而不是

\n
<executable>catalina.sh</executable>\n<startarguments>jpda run</startarguments>\n\n<stopexecutable>catalina.sh</stopexecutable>\n<stoparguments>stop</stoparguments>
\n

Additional commands

扩展命令包括prestart,poststart,prestop,poststop四个,以prestart为例写法如下:

\n
<prestart>\n  <executable></executable>\n  <arguments></arguments>\n  <stdoutPath></stdoutPath>\n  <stderrPath></stderrPath>\n</prestart>
\n\n

共用的命令如下:

\n\n

在 stdoutPath 或 stderrPath 中指定 NUL 可处理相应的数据流

\n

preshutdown

当系统关闭时,让服务有更多时间停止。

\n

系统默认的预关机超时时间为三分钟。

\n
<preshutdown>false</preshutdown>\n<preshutdownTimeout>3 min</preshutdown>
\n

stoptimeout

当服务被请求停止时,winsw 会首先尝试向控制台应用程序发送 Ctrl+C 信号,或向 Windows 应用程序发布关闭消息,然后等待长达 15 秒的时间,让进程自己优雅地退出。如果超时或无法发送信号或消息,winsw 就会立即终止服务。

\n

通过这个可选元素,您可以更改 “15 秒 “的值,这样就可以控制 winsw 让服务自行关闭的时间。

\n
<stoptimeout>10sec</stoptimeout>
\n

Environment

如有必要,可多次指定该可选元素,以指定要为子进程设置的环境变量。

\n
<env name="HOME" value="c:\\abc" />
\n

interactive

如果指定了此可选元素,则允许服务与桌面交互,如显示新窗口和对话框。

\n
<interactive>true</interactive>
\n

请注意,自引入 UAC(Windows Vista 及以后版本)以来,服务已不再真正允许与桌面交互。在这些操作系统中,这样做的目的只是让用户切换到一个单独的窗口站来与服务交互。

\n

beeponshutdown

该可选元素用于在服务关闭时发出simple tones。此功能只能用于调试,因为某些操作系统和硬件不支持此功能。

\n
<beeponshutdown>true</beeponshutdown>
\n

download

可以多次指定这个可选元素,以便让服务包装器从 URL 获取资源并将其作为文件放到本地。此操作在服务启动时,即 <executable> 指定的应用程序启动前运行。

\n

对于需要身份验证的服务器,必须根据身份验证类型指定一些参数。只有基本身份验证需要额外的子参数。支持的身份验证类型有

\n\n

参数 unsecureAuth 仅在传输协议为 HTTP(未加密数据传输)时有效。这是一个安全漏洞,因为凭据是以明文发送的!对于 SSPI 身份验证来说,这并不重要,因为身份验证令牌是加密的。

\n

对于使用 HTTPS 传输协议的目标服务器来说,颁发服务器证书的 CA 必须得到客户端的信任。当服务器位于互联网上时,通常会出现这种情况。当一个组织在内部网中使用自行签发的 CA 时,情况可能并非如此。在这种情况下,有必要将 CA 导入 Windows 客户端的证书 MMC。请参阅 “Manage Trusted Root Certificates)”中的说明。必须将自行签发的 CA 导入计算机的可信根证书颁发机构。

\n

默认情况下,如果操作失败(如从不可用),下载命令不会导致服务启动失败。为了在这种情况下强制下载失败,可以指定 failOnError 布尔属性。

\n

要指定自定义代理,请使用参数 proxy,格式如下:

\n\n
<download from="http://example.com/some.dat" to="%BASE%\\some.dat" />\n\n<download from="http://example.com/some.dat" to="%BASE%\\some.dat" failOnError="true"/>\n\n<download from="http://example.com/some.dat" to="%BASE%\\some.dat" proxy="http://192.168.1.5:80/"/>\n\n<download from="https://example.com/some.dat" to="%BASE%\\some.dat" auth="sspi" />\n\n<download from="https://example.com/some.dat" to="%BASE%\\some.dat" failOnError="true"\n          auth="basic" user="aUser" password="aPassw0rd" />\n\n<download from="http://example.com/some.dat" to="%BASE%\\some.dat"\n          proxy="http://aUser:aPassw0rd@192.168.1.5:80/"\n          auth="basic" unsecureAuth="true"\n          user="aUser" password="aPassw0rd" />
\n

这是开发自我更新服务的另一个有用的组成部分。

\n

自 2.7 版起,如果目标文件存在,WinSW 将在 If-Modified-Since 标头中发送其最后写入时间,如果收到 304 Not Modified,则跳过下载。

\n

onfailure

当 winsw 启动的进程失败(即以非零退出代码退出)时,这个可选的可重复元素将控制其行为。

\n
<onfailure action="restart" delay="10 sec"/>\n<onfailure action="restart" delay="20 sec"/>\n<onfailure action="reboot" />
\n

例如,上述配置会导致服务在第一次故障后 10 秒内重新启动,在第二次故障后 20 秒内重新启动,然后如果服务再次发生故障,Windows 将重新启动。

\n

每个元素都包含一个强制的 action 属性和可选的 delay 属性,前者用于控制 Windows SCM 将采取的行动,后者用于控制采取该行动前的延迟时间。action 的合法值为

\n\n

延迟属性的后缀可能是秒/秒/分/分/小时/小时/天/天。如果缺少,延迟属性默认为 0。

\n

如果服务不断发生故障,并且超过了配置的 <onfailure>次数,则会重复上次的操作。因此,如果只想始终自动重启服务,只需像这样指定一个 <onfailure> 元素即可:

\n
<onfailure action="restart" />
\n

resetfailure

此可选元素控制 Windows SCM 重置故障计数的时间。例如,如果您指定 1 小时,而服务持续运行的时间超过一小时,那么故障计数将重置为零。

\n

换句话说,这是您认为服务成功运行的持续时间。默认为 1 天。

\n
<resetfailure>1 hour</resetfailure>
\n

Security descriptor

SDDL 格式的服务安全描述符字符串。

\n

有关详细信息,请参阅安全描述符定义语言

\n
<securityDescriptor></securityDescriptor>
\n

Service account

服务默认安装为 LocalSystem 账户。如果您的服务不需要很高的权限级别,可以考虑使用 LocalService 账户NetworkService 帐户或用户账户。

\n

要使用用户账户,请像这样指定 <serviceaccount> 元素:

\n
<serviceaccount>\n  <username>DomainName\\UserName</username>\n  <password>Pa55w0rd</password>\n  <allowservicelogon>true</allowservicelogon>\n</serviceaccount>
\n

<username>的格式为 DomainName\\UserName 或 UserName@DomainName。如果账户属于内置域,则可以指定 .\\UserName。

\n

<allowservicelogon> 是可选项。如果设置为 true,将自动为列出的账户设置 “允许以服务身份登录 “的权限。

\n

要使用Group Managed Service Accounts Overview,请在账户名后追加 $ 并删除 <password> 元素:

\n
<serviceaccount>\n  <username>DomainName\\GmsaUserName$</username>\n  <allowservicelogon>true</allowservicelogon>\n</serviceaccount>
\n

LocalSystem account

要明确使用LocalSystem 帐户 ,请指定以下内容:

\n
<serviceaccount>\n  <username>LocalSystem</username>\n</serviceaccount>
\n

请注意,该账户没有密码,因此提供的任何密码都将被忽略。

\n

LocalService account

要使用 LocalService 帐户,请指定以下内容:

\n
<serviceaccount>\n  <username>NT AUTHORITY\\LocalService</username>\n</serviceaccount>
\n

请注意,该账户没有密码,因此提供的任何密码都将被忽略。

\n

NetworkService account

要使用 NetworkService 帐户,请指定以下内容:

\n
<serviceaccount>\n  <username>NT AUTHORITY\\NetworkService</username>\n</serviceaccount>
\n

请注意,该账户没有密码,因此提供的任何密码都将被忽略。

\n

prompt

可选。提示输入用户名和密码。

\n
<serviceaccount>\n  <prompt>dialog|console</prompt>\n</serviceaccount>
\n\n

Working directory

某些服务在运行时需要指定工作目录。为此,请像这样指定<workingdirectory>元素:

\n
<workingdirectory>C:\\application</workingdirectory>
\n

Priority

可选择指定服务进程的调度优先级(相当于 Unix nice),可选值包括idle, belownormal, normal, abovenormal, high, realtime(不区分大小写)。

\n
<priority>idle</priority>
\n

指定高于正常值的优先级会产生意想不到的后果。有关详细信息,请参阅 .NET 文档中的 ProcessPriorityClass 枚举。此功能的主要目的是以较低的优先级启动进程,以免干扰计算机的交互式使用。

\n

Auto refresh

<autoRefresh>true</autoRefresh>
\n

当服务启动或执行以下命令时,自动刷新服务属性:

\n\n

默认值为 true。

\n

sharedDirectoryMapping

默认情况下,即使在 Windows 服务配置文件中进行了配置,Windows 也不会为服务建立共享驱动器映射。由于域策略的原因,有时无法解决这个问题。

\n

这样就可以在启动可执行文件之前映射外部共享目录。

\n
<sharedDirectoryMapping>\n  <map label="N:" uncpath="\\\\UNC" />\n  <map label="M:" uncpath="\\\\UNC2" />\n</sharedDirectoryMapping>
","categories":[{"name":"工具","slug":"工具","permalink":"https://hexo.huangge1199.cn/categories/%E5%B7%A5%E5%85%B7/"}],"tags":[{"name":"工具","slug":"工具","permalink":"https://hexo.huangge1199.cn/tags/%E5%B7%A5%E5%85%B7/"}]},{"title":"解决Java应用中的字符编码问题:深入理解JVM编码格式","slug":"jvm-encoding","date":"2023-10-10T07:30:36.000Z","updated":"2023-10-10T08:19:09.329Z","comments":true,"path":"/post/jvm-encoding/","link":"","excerpt":"","content":"

导言

在Java应用程序开发中,字符编码问题是一个常见的挑战。正确处理字符编码对于数据的完整性至关重要。本文将深入探讨JVM(Java虚拟机)编码格式的相关内容,包括如何查询、设置和修改,以及如何应对字符编码问题。

\n

1、JVM编码格式简介:

JVM(Java虚拟机)是运行Java程序的核心组件,它负责将Java字节码转换为机器指令。在Java应用程序中,正确的编码设置非常重要,因为它直接影响到字符串的处理和输出。了解JVM的编码格式以及如何设置和管理它们对于开发可靠和可移植的Java应用程序至关重要。

\n

2、查询JVM的编码格式:

有多种方法可以查询JVM的编码格式。其中一种方法是使用Java代码来查询。通过调用System.getProperty("file.encoding")方法,可以获取JVM当前的默认编码格式。另一种方法是使用命令行工具查看JVM的编码设置。可以使用以下命令来查看JVM参数:

\n
java -XX:+PrintFlagsFinal -version | grep -iE 'Default Charset'
\n
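与命令行方式对应的 Java 查询代码大致如下(一个简单示例):

import java.nio.charset.Charset;

public class EncodingCheck {
    public static void main(String[] args) {
        // 通过系统属性查询 JVM 当前的默认编码
        System.out.println("file.encoding = " + System.getProperty("file.encoding"));
        // 也可以直接查看默认字符集对象
        System.out.println("defaultCharset = " + Charset.defaultCharset());
    }
}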

3、设置JVM的编码格式:

有两种主要方法可以配置JVM的编码格式。第一种是通过启动参数配置。在启动Java应用程序时,可以在命令行或脚本中添加特定的启动参数来设置JVM的编码格式。例如,使用以下命令来设置UTF-8编码:

\n
java -Dfile.encoding=UTF-8 YourApplication
\n

第二种方法是使用Java代码修改系统属性以设置JVM编码。可以通过调用System.setProperty("file.encoding", "UTF-8")方法来实现。确保在应用程序的适当位置执行此操作以确保编码在整个生命周期中保持一致。

\n

4、修改JVM的默认编码格式:

\n
export JAVA_OPTS="-Dfile.encoding=UTF-8"
\n

然后重新启动应用程序,新的编码设置将生效。

\n\n

5、注意事项和最佳实践:

在选择和设置JVM的编码格式时,需要注意以下几点:

\n\n

结论

正确设置JVM编码格式对于Java应用程序至关重要,因为它直接影响字符数据的传输和处理。通过本文提供的详细指南,您可以解决字符编码问题,确保应用程序在各种情况下都能正确运行。不要低估字符编码的重要性,因为它可能对您的应用程序的可靠性产生深远影响。

\n","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"}],"tags":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/tags/java/"}]},{"title":"深入了解 Cron 时间字段:定时任务的精确控制","slug":"linuxCron","date":"2023-09-14T07:24:10.000Z","updated":"2023-09-15T02:48:59.913Z","comments":true,"path":"/post/linuxCron/","link":"","excerpt":"","content":"

在 Linux 和 Unix 系统中,cron 是一个强大的工具,用于执行预定时间的任务。Cron 允许用户自动化各种重复性任务,如备份、系统监控、日志清理等。在 cron 中,时间的设定是至关重要的,它使用一些特殊的时间字段来确定任务的执行时机。本文将深入探讨常见的 cron 时间字段及其用途。

\n

1、常规 Cron 时间字段

常规 Cron 时间字段:精确控制任务执行时间

\n

在常规 cron 时间字段中,您可以通过分钟、小时、日期等来精确控制任务的执行时间。以下是一些示例:

\n

1.1、每天凌晨执行备份任务

0 0 * * * /usr/local/bin/backup.sh
\n

1.2、每小时执行系统监控任务

0 * * * * /usr/local/bin/system_monitor.sh
\n

1.3、每周执行日志清理任务:

0 2 * * 6 /usr/local/bin/clean_logs.sh
\n

1.4、每月执行系统更新任务:

0 3 1 * * /usr/bin/apt-get update && /usr/bin/apt-get upgrade -y
\n

1.5、每隔 15 分钟执行检查网站可用性任务:

*/15 * * * * /usr/local/bin/check_website.sh
\n

这些常规的 cron 时间字段允许您按照特定的时间表来安排任务的执行,非常适用于各种自动化需求。

\n

2、特殊 Cron 时间字段:简化时间设定

除了常规的时间字段外,还有一些特殊的时间字段,如 @reboot、@yearly、@monthly 等,它们可以更方便地设置任务的执行时间,通常用于特殊场景。示例:

\n

2.1、@reboot:系统启动时执行任务

@reboot /usr/local/bin/startup_script.sh
\n

2.2、@yearly 或 @annually:每年执行一次

@yearly /usr/local/bin/yearly_task.sh
\n

2.3、@monthly:每月执行一次

@monthly /usr/local/bin/monthly_task.sh
\n

2.4、@weekly:每周执行一次

@weekly /usr/local/bin/weekly_task.sh
\n

2.5、@daily 或 @midnight:每天执行一次

@daily /usr/local/bin/daily_task.sh
\n

2.6、@hourly:每小时执行一次

@hourly /usr/local/bin/hourly_task.sh
\n

这些特殊的时间字段使得在 crontab 中定义定时任务更加方便,您可以根据任务的周期性要求选择适当的时间字段。它们使时间设定更加直观和易读,而不需要编写复杂的时间表。通过合理利用cron 时间字段,您可以轻松自动化各种系统维护和管理任务,提高系统的效率和可靠性。

\n","categories":[{"name":"Linux","slug":"Linux","permalink":"https://hexo.huangge1199.cn/categories/Linux/"}],"tags":[{"name":"Linux","slug":"Linux","permalink":"https://hexo.huangge1199.cn/tags/Linux/"}]},{"title":"解决图片不刷新问题:浏览器缓存与缓存控制头的终极对决","slug":"vueImages","date":"2023-09-12T02:07:57.000Z","updated":"2023-09-12T05:37:33.993Z","comments":true,"path":"/post/vueImages/","link":"","excerpt":"","content":"

在现代Web开发中,许多开发者都曾经遇到过一个令人困扰的问题:当图片URL没有变化但图片内容却发生了变化时,浏览器似乎不会主动刷新图片,从而导致显示旧的内容。这个问题在网站和应用中的图片更新时尤为突出,可能会影响用户体验和页面正确性。

\n

在这篇博客文章中,我们将探讨这个问题,并提供多种解决方案,其中包括添加时间戳或随机参数以绕过浏览器缓存以及配置缓存控制头来告诉浏览器如何处理这些图片。我们将深入了解这些解决方案的实现方式以及它们在不同服务器和框架中的应用。

\n

问题的根源

问题的根本在于浏览器的缓存机制。浏览器会根据图片的URL来决定是否重新请求图片或者使用缓存中的版本。当图片的URL保持不变时,浏览器会倾向于使用已经缓存的旧版本,而不会去服务器重新获取新的图片内容。

\n

解决方案一:添加时间戳或随机参数

为了绕过浏览器的缓存机制,最简单的方法之一是在图片的URL上添加一个时间戳或随机参数。这将使每次请求都看起来像一个不同的URL,从而迫使浏览器重新加载图片。

\n
<img :src="'your-image-url.jpg?' + Date.now()">
\n

或者使用JavaScript生成随机参数:

\n
<img :src="'your-image-url.jpg?' + Math.random()">
\n

这种方法适用于各种Web开发环境,并且非常容易实现。

\n

解决方案二:配置缓存控制头

另一种更强大的方法是在服务器端配置缓存控制头。不同的服务器和框架有不同的配置方式,以下是一些示例:

\n

Apache

在Apache服务器上,您可以通过.htaccess文件来配置缓存控制头,告诉浏览器不要缓存特定类型的图片。

\n
<IfModule mod_headers.c>\n    # 禁止缓存指定文件类型的图片,例如 .jpg 和 .png\n    <FilesMatch "\\.(jpg|png)$">\n        Header set Cache-Control "no-cache, no-store, must-revalidate"\n        Header set Pragma "no-cache"\n        Header set Expires 0\n    </FilesMatch>\n</IfModule>
\n

Nginx

如果使用Nginx作为服务器,可以在Nginx配置文件中添加以下配置来实现缓存控制:

\n
location ~* \\.(jpg|png)$ {\n    expires -1;\n    add_header Cache-Control "no-store, no-cache, must-revalidate, max-age=0";\n    add_header Pragma "no-cache";\n}
\n

Node.js(Express框架)

在Node.js中使用Express框架,您可以创建一个中间件来设置缓存控制头,以确保浏览器不会缓存特定类型的图片。

\n
const express = require('express');\nconst app = express();\n\n// 禁止缓存指定文件类型的图片,例如 .jpg 和 .png\napp.use((req, res, next) => {\n    if (req.url.endsWith('.jpg') || req.url.endsWith('.png')) {\n        res.setHeader('Cache-Control', 'no-store, no-cache, must-revalidate, max-age=0');\n        res.setHeader('Pragma', 'no-cache');\n    }\n    next();\n});\n\n// 其他路由和中间件设置\n\napp.listen(3000, () => {\n    console.log('Server is running on port 3000');\n});
\n

结论

无论您选择哪种方法,解决图片不刷新的问题都是可能的。添加时间戳或随机参数是最简单的方法之一,但它可能需要在多个地方修改代码。配置缓存控制头则可以更全面地控制缓存行为,但需要在服务器端进行配置。

\n

根据您的项目需求和服务器环境,选择适合您的方法,并确保您的用户可以始终看到最新的图片内容,以提供更好的用户体验。希望本文对您有所帮助,解决了这个常见的开发问题。

\n","categories":[{"name":"vue","slug":"vue","permalink":"https://hexo.huangge1199.cn/categories/vue/"}],"tags":[{"name":"vue","slug":"vue","permalink":"https://hexo.huangge1199.cn/tags/vue/"}]},{"title":"选择合适的帧率和分辨率:优化RTSP流视频抓取","slug":"rtsp","date":"2023-09-06T02:24:03.000Z","updated":"2023-09-12T02:39:13.883Z","comments":true,"path":"/post/rtsp/","link":"","excerpt":"","content":"

引言

在实时视频流应用中,选择适当的帧率和分辨率对于确保视频流的顺畅播放和图像质量至关重要。本文将向您介绍如何使用Java和JavaCV库中的FFmpegFrameGrabber来从RTSP流中抓取图像,并在抓取时设置帧率和分辨率。

\n

一、配置开发环境

首先,确保您的Java项目中包含JavaCV库的依赖。您可以在Maven项目中添加以下依赖:

\n
<dependency>\n    <groupId>org.bytedeco</groupId>\n    <artifactId>javacv-platform</artifactId>\n    <version>1.5.1</version> <!-- 请检查最新版本 -->\n</dependency>
\n

二、使用Java代码抓取RTSP流图像

下面是一个示例Java代码,演示了如何使用FFmpegFrameGrabber从RTSP流中抓取图像并将其保存为JPEG格式的图像文件。

\n
import org.bytedeco.javacv.FFmpegFrameGrabber;\nimport org.bytedeco.javacv.Frame;\nimport org.bytedeco.javacv.Java2DFrameConverter;\n\nimport javax.imageio.ImageIO;\nimport java.awt.image.BufferedImage;\nimport java.io.File;\n\npublic class RTSPImageCapture {\n    public static void main(String[] args) {\n        String rtsp = "YOUR_RTSP_URL_HERE"; // 替换为实际的RTSP URL\n        String imgSrc = ""; // 图像保存路径\n        String linuxImg = "/path/to/linux/img/"; // Linux系统下的保存路径\n        String winImg = "C:\\\\path\\\\to\\\\windows\\\\img\\\\"; // Windows系统下的保存路径\n\n        try {\n            FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(rtsp);\n            // 使用tcp的方式,不然会丢包很严重\n            grabber.setOption("rtsp_transport", "tcp");\n            grabber.start();\n            Frame frame = grabber.grabImage();\n            if (frame != null) {\n                if (imgSrc == null || imgSrc.isEmpty()) {\n                    String path = "";\n                    if (SystemUtils.isLinux()) {\n                        path = linuxImg;\n                    } else if (SystemUtils.isWindows()) {\n                        path = winImg;\n                    }\n                    imgSrc = path + "video.jpg";\n                }\n                File file = new File(imgSrc);\n                file.createNewFile();\n                Java2DFrameConverter converter = new Java2DFrameConverter();\n                BufferedImage bufferedImage = converter.getBufferedImage(frame);\n                ImageIO.write(bufferedImage, "jpg", file);\n            }\n            grabber.stop();\n        } catch (Exception e) {\n            e.printStackTrace();\n        }\n    }\n}
\n

确保将上述代码中的YOUR_RTSP_URL_HERE替换为实际的RTSP流URL,并设置正确的图像保存路径。

\n

三、帧率的选择

1、实时性要求

\n

2、考虑资源限制

\n

3、应用场景

\n

4、存储需求

\n

四、分辨率的选择

1、显示设备和屏幕大小

\n

2、带宽和性能

\n

3、应用场景

\n

4、存储需求

\n

五、设置帧率和分辨率的实际操作

要设置帧率和分辨率,您可以使用相应的方法来配置FFmpegFrameGrabber

\n
// 设置所需的帧率\ngrabber.setFrameRate(desiredFrameRate);\n\n// 设置所需的分辨率\ngrabber.setImageWidth(desiredWidth);\ngrabber.setImageHeight(desiredHeight);
\n

确保在调用grabber.start();之前进行这些设置,以确保配置在抓取开始之前生效。

\n

选择合适的帧率和分辨率是优化RTSP流视频抓取的关键步骤,可以提供良好的图像质量和实时性,同时考虑资源限制和存储需求。根据您的应用需求,选择最佳的参数设置,以获得最佳的用户体验。

\n

六、实时性和流畅性的权衡

在选择帧率和分辨率时,需要平衡实时性和流畅性。以下是一些有关权衡的考虑:

\n\n

七、动态调整

有些应用可能需要根据情况动态调整帧率和分辨率。例如,当网络带宽下降时,可以降低帧率和分辨率以适应当前条件,从而保持视频的流畅性。

\n
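一个可行的做法是在检测到带宽变化后,先停止抓取器、调低参数再重新启动。下面是一个示意(类名 GrabberAdjuster、方法名 downgrade 以及其中的数值均为假设,仅供参考):

import org.bytedeco.javacv.FFmpegFrameGrabber;

public class GrabberAdjuster {
    // 检测到带宽下降后,以较低的帧率和分辨率重启抓取(数值仅为示例)
    public static void downgrade(FFmpegFrameGrabber grabber) throws Exception {
        grabber.stop();              // 先停止当前抓取
        grabber.setFrameRate(10);    // 降低帧率
        grabber.setImageWidth(640);  // 降低分辨率
        grabber.setImageHeight(360);
        grabber.start();             // 以新参数重新开始抓取
    }
}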

结论

选择合适的帧率和分辨率是优化RTSP流视频抓取的关键决策。根据应用的实时性要求、资源限制、显示设备、存储需求和网络条件,您可以调整这些参数以获得最佳的用户体验。实时性和流畅性之间的权衡是一个关键考虑因素,可以根据需要进行调整,以适应不同的应用场景。

\n

最终,了解您的应用需求并进行测试是选择合适的帧率和分辨率的关键。通过仔细权衡这些因素,您可以确保您的RTSP流视频抓取应用提供了所需的性能和图像质量。

\n","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"}],"tags":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/tags/java/"}]},{"title":"沈阳盖章计划","slug":"sealSY","date":"2023-08-31T11:10:28.000Z","updated":"2023-09-01T01:29:48.302Z","comments":true,"path":"/post/sealSY/","link":"","excerpt":"","content":"

沈阳盖章计划

和平区

\n

铁西区

\n

沈河区

\n

浑南区

\n

皇姑区

\n

大东区

\n

沈北区

\n","categories":[{"name":"盖章","slug":"盖章","permalink":"https://hexo.huangge1199.cn/categories/%E7%9B%96%E7%AB%A0/"}],"tags":[{"name":"盖章","slug":"盖章","permalink":"https://hexo.huangge1199.cn/tags/%E7%9B%96%E7%AB%A0/"}]},{"title":"vue实现打印功能","slug":"vuePrint","date":"2023-08-17T01:49:58.000Z","updated":"2023-08-17T02:06:00.985Z","comments":true,"path":"/post/vuePrint/","link":"","excerpt":"","content":"

在Vue应用中调用打印机功能,可以使用JavaScriptwindow.print()方法。这个方法会打开打印对话框,然后让我们选择打印设置并打印文档,但是尼这种方法依赖于浏览器的打印功能。

\n

以下是一个简单的示例,演示如何在Vue组件中调用打印功能:

\n
    \n
1. 在Vue组件中,将需要打印的内容放入一个具有唯一ID的元素中。例如,你可以使用<div id="printable-content"></div>来包裹打印内容。
\n
<template>\n  <div>\n    <button @click="print">打印</button>\n    <div id="printable-content">\n      <!-- 待打印的内容 -->\n    </div>\n  </div>\n</template>
\n
    \n
2. 在Vue组件的methods中定义print方法,该方法将获取打印内容并调用window.print()方法打开打印对话框。
\n
<script>\nexport default {\n  methods: {\n    print() {\n      // 获取待打印的内容\n      let printableContent = document.getElementById('printable-content').innerHTML;\n      \n      // 创建一个新的窗口并加载打印内容\n      let printWindow = window.open('', '_blank');\n      printWindow.document.write('<html><head><title>打印内容</title></head><body>' + printableContent + '</body></html>');\n      \n      // 执行打印操作\n      printWindow.document.close();\n      printWindow.print();\n    }\n  }\n}\n</script>
\n
    \n
3. 当点击“打印”按钮时,print方法会被调用,从而打开打印对话框。用户可以在对话框中选择打印设置并打印文档。
\n

最后,再次强调,这种方法依赖于浏览器的打印功能,因此它可能无法在所有打印机上正常工作。

\n","categories":[{"name":"前端","slug":"前端","permalink":"https://hexo.huangge1199.cn/categories/%E5%89%8D%E7%AB%AF/"},{"name":"vue","slug":"前端/vue","permalink":"https://hexo.huangge1199.cn/categories/%E5%89%8D%E7%AB%AF/vue/"}],"tags":[{"name":"前端","slug":"前端","permalink":"https://hexo.huangge1199.cn/tags/%E5%89%8D%E7%AB%AF/"},{"name":"vue","slug":"vue","permalink":"https://hexo.huangge1199.cn/tags/vue/"}]},{"title":"Java代码中对文件的操作","slug":"javaFile","date":"2023-08-15T05:56:51.000Z","updated":"2023-08-16T06:37:39.464Z","comments":true,"path":"/post/javaFile/","link":"","excerpt":"","content":"

引言

这几天的项目涉及到了文件的操作,我这边做一下整理

\n

环境说明

jdk版本:1.8.0_311

\n

对文件的操作

1、保存文件

/**\n * 保存文件\n *\n * @param file 文件\n * @param path 文件保存目录\n * @param name 保存后的文件名字\n */\npublic void saveFile(MultipartFile file, String path, String name) throws Exception {\n    if (file == null) {\n        throw new Exception("请上传有效文件!");\n    }\n    // 若目录不存在则创建目录\n    File folder = new File(path);\n    if (!folder.exists()) {\n        folder.mkdirs();\n    }\n\n    // 生成文件,folder为文件目录,newName为文件名\n    file.transferTo(new File(folder, name));\n}
\n

2、删除文件

/**\n * 删除指定目录下的指定文件\n *\n * @param path 文件路径(路径结尾不带“/”)\n * @param name 文件名称\n */\npublic void delFile(String path, String name) {\n    File file = new File(path + "/" + name);\n    file.delete();\n}
\n

3、删除指定的空目

/**\n * 删除指定的空目录,如果往上2层的目录也是空的,则一起删除\n *\n * @param path 目录路径(路径结尾不带“/”)\n */\npublic void delBlankDir(String path) {\n    for (int i = 0; i < 3; i++) {\n        File dirFile = new File(path);\n        if (dirFile.length() == 0) {\n            dirFile.delete();\n            path = path.substring(0, path.lastIndexOf("/"));\n        } else {\n            break;\n        }\n    }\n}
\n

4、验证文件是否是MP3格式

/**\n * 验证是否是MP3格式的文件\n *\n * @param multipartFile 验证的文件\n * @return true:是MP3、false:不是MP3\n */\npublic boolean isMP3File(MultipartFile multipartFile) {\n    try {\n        byte[] headerBytes = new byte[4];\n        multipartFile.getInputStream().read(headerBytes);\n        if (headerBytes[0] == (byte) 0x49 && headerBytes[1] == (byte) 0x44 &&\n                headerBytes[2] == (byte) 0x33) {\n            return true;\n        }\n    } catch (IOException e) {\n        e.printStackTrace();\n        return false;\n    }\n    return false;\n}
\n

5、音频格式转换

/**\n * 音频文件格式转换\n *\n * @param fpath  需要转换的音频文件路径\n * @param target 转换后的音频文件路径\n */\npublic void transferAudioPcm(String fpath, String target) {\n    List<String> commend = new ArrayList<>();\n    String path = "";\n    if (SystemUtils.isLinux()) {\n        path = "修改成Ffmpeg文件的路径";\n    } else if (SystemUtils.isWindows()) {\n        path = "修改成Ffmpeg文件的路径";\n    }\n    commend.add(path);\n    commend.add("-y");\n    commend.add("-i");\n    commend.add(fpath);\n    commend.add("-f");\n    commend.add("s16le");\n    commend.add("-ar");\n    commend.add("4000");\n    commend.add("-ac");\n    commend.add("-1");\n    commend.add(target);\n    try {\n        ProcessBuilder builder = new ProcessBuilder();\n        builder.command(commend);\n        Process p = builder.start();\n        p.waitFor();\n        p.destroy();\n    } catch (Exception e) {\n        e.printStackTrace();\n    }\n}
\n

6、改变linux系统下的文件权限

/**\n * 改变linux系统下的文件权限\n *\n * @param mod  修改后的权限\n * @param path 文件路径\n */\npublic void changePermission(String mod, String path) throws Exception {\n    // ProcessBuilder processBuilder = new ProcessBuilder("chmod", "775", "/data/a.txt");\n    ProcessBuilder processBuilder = new ProcessBuilder("chmod", mod, path);\n    Process process = processBuilder.start();\n    int exitCode = process.waitFor();\n    if (exitCode == 0) {\n        System.out.println("File permission changed successfully!");\n    } else {\n        System.out.println("Failed to change file permission.");\n    }\n}
\n

7、查询服务器磁盘空间

/**\n * 查询服务器磁盘空间\n *\n * @return map\n */\npublic Map<String, String> getDiskInfo() {\n    // 总空间\n    long totalSpace = 0;\n    // 已用空间\n    long usableSpace = 0;\n    // 可用空间\n    long unallocatedSpace = 0;\n    for (FileStore fileStore : FileSystems.getDefault().getFileStores()) {\n        try {\n            totalSpace += fileStore.getTotalSpace();\n            usableSpace += fileStore.getUsableSpace();\n            unallocatedSpace += fileStore.getUnallocatedSpace();\n        } catch (IOException e) {\n            throw new RuntimeException(e);\n        }\n    }\n    DecimalFormat decimalFormat = new DecimalFormat("#.00");\n    Map<String, String> map = new HashMap<>(3);\n    map.put("totalSpace", decimalFormat.format(totalSpace / (1024.0 * 1024 * 1024)));\n    map.put("usableSpace", decimalFormat.format(usableSpace / (1024.0 * 1024 * 1024)));\n    map.put("unallocatedSpace", decimalFormat.format(unallocatedSpace / (1024.0 * 1024 * 1024)));\n    return map;\n}
\n","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"}],"tags":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/tags/java/"}]},{"title":"canvas","slug":"canvas","date":"2023-08-10T00:58:09.000Z","updated":"2023-08-10T01:56:36.673Z","comments":true,"path":"/post/canvas/","link":"","excerpt":"","content":"

引言

近期,工作中有一个功能,需要在页面上展示在图片上面绘制区域的功能,在网上找了找,发现了这个canvas

\n
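下面是一个在图片上绘制矩形区域的最小示意(假设页面中已有一个 id 为 myCanvas 的 canvas 元素,图片地址也是假设的):

const canvas = document.getElementById('myCanvas');
const ctx = canvas.getContext('2d');

const img = new Image();
img.src = 'your-image-url.jpg'; // 示例图片地址
img.onload = () => {
  // 先把图片画到画布上
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
  // 再在图片上绘制一个区域(红色矩形框)
  ctx.strokeStyle = 'red';
  ctx.lineWidth = 2;
  ctx.strokeRect(50, 40, 120, 80);
};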

另外,通过资料的查询,发现这个canvas可以替代flash,常见的flash应用场景可以用canvas配合audio

\n

canvas特点

\n

canvas能做什么

\n","categories":[{"name":"前端","slug":"前端","permalink":"https://hexo.huangge1199.cn/categories/%E5%89%8D%E7%AB%AF/"}],"tags":[{"name":"前端","slug":"前端","permalink":"https://hexo.huangge1199.cn/tags/%E5%89%8D%E7%AB%AF/"}]},{"title":"群晖nas为PHP配置Redis扩展","slug":"nasPhpRedis","date":"2023-07-27T02:12:03.000Z","updated":"2023-07-27T02:52:26.368Z","comments":true,"path":"/post/nasPhpRedis/","link":"","excerpt":"","content":"

首先,介绍下我的环境

\n\n

接下来,进入正题

\n

首先要使用ssh进入到群晖,账户要切换到root用户

\n

接下来,看下目前PHP7.4有哪些扩展,根据你安装位置的硬盘不同,volume1可能有所区别,命令如下:

\n
ll /volume1/@appstore/PHP7.4/usr/local/lib/php74/modules
\n

\"\"

\n

从上图中,我发现套件版的PHP7.4默认已经有了Redis扩展,接下来,再看看配置文件中是否配置了Redis,当然我这边是没有配置

\n

打开配置文件php-fpm.ini,我这边喜欢用vi命令,当然也可以使用vim,具体用哪一个看你系统支持以及个人喜好了,下面的volume1一样,有区别的自行修改

\n
vi /volume1/@appstore/PHP7.4/misc/php-fpm.ini
\n

将下面的代码放到配置文件php-fpm.ini末尾,然后保存退出

\n
[Redis]\nextension_dir = "/volume1/@appstore/PHP7.4/usr/local/lib/php74/modules/"\nextension = redis.so
\n

\"\"

\n

扩展有了,配置文件也加上了,最后就是重启PHP7.4了,命令如下:

\n
synopkg restart PHP7.4
\n

\"\"

\n

看到重启成功了,至此,完成收工了
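
\n

另外,如果想再确认一下扩展确实加载成功,也可以在命令行里看看已加载的模块(php 可执行文件的名字以你实际环境为准,我这里假设是 php74):

\n
php74 -m | grep -i redis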

\n","categories":[{"name":"nas","slug":"nas","permalink":"https://hexo.huangge1199.cn/categories/nas/"}],"tags":[{"name":"nas","slug":"nas","permalink":"https://hexo.huangge1199.cn/tags/nas/"}]},{"title":"p5-01-set","slug":"p5-01-collection","date":"2023-07-25T06:22:23.000Z","updated":"2023-08-10T00:57:12.547Z","comments":true,"path":"/post/p5-01-collection/","link":"","excerpt":"","content":"","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"},{"name":"P5笔记","slug":"java/P5笔记","permalink":"https://hexo.huangge1199.cn/categories/java/P5%E7%AC%94%E8%AE%B0/"}],"tags":[{"name":"P5笔记","slug":"P5笔记","permalink":"https://hexo.huangge1199.cn/tags/P5%E7%AC%94%E8%AE%B0/"}]},{"title":"P5学习笔记01-Java核心-数据结构","slug":"p5-01-structure","date":"2023-07-25T05:33:52.000Z","updated":"2023-07-25T06:02:00.695Z","comments":true,"path":"/post/p5-01-structure/","link":"","excerpt":"","content":"

常用的数据结构:

\n\n

数组

特点:

\n\n

链表

特点:

\n\n

包括:

\n\n

二叉树

特点:

\n\n

不平衡二叉树:

\n\n

红黑树

是一个自平衡的二叉查找树,树上的每个节点都遵循下面的规则:

\n\n","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"},{"name":"P5笔记","slug":"java/P5笔记","permalink":"https://hexo.huangge1199.cn/categories/java/P5%E7%AC%94%E8%AE%B0/"}],"tags":[{"name":"P5笔记","slug":"P5笔记","permalink":"https://hexo.huangge1199.cn/tags/P5%E7%AC%94%E8%AE%B0/"}]},{"title":"deepin中steam的配置","slug":"steamByDeepin","date":"2023-06-24T05:34:05.000Z","updated":"2023-07-25T05:29:46.270Z","comments":true,"path":"/post/steamByDeepin/","link":"","excerpt":"","content":"

deepin中steam的配置

本人的deepin系统已经更新到20.9,文章仅供参考,可能会与你的情况有所出入

\n

下载安装

在deepin的应用商店中下载安装steam

\n

\"image\"

\n

中文设置

我这边安装完默认是英文界面,可以依次点击左上角的steam—>settings

\n

\"image\"

\n

在弹出的页面中点击左侧Interface,右侧按照红框内容选择简体中文

\n

\"image\"

\n

然后在点击重启按钮即可

\n

\"image\"

\n

重启后界面已经是中文的啦

\n

\"image\"

\n

兼容大多数的游戏

依次点击左上角的steam—>设置(英文版setting)—>弹出页面的兼容性,将红框的内容都设置完再一起重启,Proton那个下拉框选择最新的就好,我现在最新的是8.0-2,你的不一定相同。

\n

\"image\"

\n

重启后,你就发现之前不能玩的大多数游戏都可以玩了

\n","categories":[{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/categories/deepin/"},{"name":"游戏","slug":"deepin/游戏","permalink":"https://hexo.huangge1199.cn/categories/deepin/%E6%B8%B8%E6%88%8F/"}],"tags":[{"name":"游戏","slug":"游戏","permalink":"https://hexo.huangge1199.cn/tags/%E6%B8%B8%E6%88%8F/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"}]},{"title":"浏览器不走代理Proxifier问题解决","slug":"proxifier-proxy-fix","date":"2023-06-20T05:47:50.000Z","updated":"2023-06-20T07:50:18.052Z","comments":true,"path":"/post/proxifier-proxy-fix/","link":"","excerpt":"","content":"

环境需要用到代理软件Proxifier,但配置好之后,chrome浏览器访问对应走代理的应用却不行了,之前还明明可以,今天突然就不行了。于是我去排查了原因,本篇文章就是这次排查的记录,最后问题已经解决。每个人的情况不同,我的办法不一定适用于你,因此本篇文章仅供参考。

\n

确认Proxifier设置

    \n
  1. 打开Proxifier,选择菜单栏的“Profile” - “Proxification Rules”。

    \n
  2. 在“Proxification Rules”中,确认走代理的应用程序包含浏览器。如果不包含,可单击右键选择“Edit Selected Rule”,在“Edit Rule”中,设置“Any”为“Applications”后点击OK。

    \n

    确认系统代理设置

    首先,说明一下,本人是Win11的系统,可能会与你的有出入,下面是详细步骤:

    \n
  1. 点击电脑的开始菜单,打开设置。

    \n
  2. 点击左侧的“网络和Internet”,再点击右侧的代理。

    \n
\n

\"\"

\n
    \n
  1. 确认页面中红框的内容全是关闭状态,如果不是,改为关闭状态
  2. \n
\n

\"\"

\n","categories":[{"name":"问题记录","slug":"问题记录","permalink":"https://hexo.huangge1199.cn/categories/%E9%97%AE%E9%A2%98%E8%AE%B0%E5%BD%95/"},{"name":"网站建设","slug":"问题记录/网站建设","permalink":"https://hexo.huangge1199.cn/categories/%E9%97%AE%E9%A2%98%E8%AE%B0%E5%BD%95/%E7%BD%91%E7%AB%99%E5%BB%BA%E8%AE%BE/"}],"tags":[{"name":"网站建设","slug":"网站建设","permalink":"https://hexo.huangge1199.cn/tags/%E7%BD%91%E7%AB%99%E5%BB%BA%E8%AE%BE/"}]},{"title":"使用xlsxwriter和openplxl库操作Excel文件","slug":"pyHighExcel","date":"2023-05-31T03:13:46.000Z","updated":"2023-05-31T06:08:44.414Z","comments":true,"path":"/post/pyHighExcel/","link":"","excerpt":"","content":"

Excel文件是一种广泛使用的电子表格格式,用于存储和处理各种数据。在Python中,有多个库可以用于处理Excel文件,其中包括xlsxwriter和openpyxl两个库。本文将介绍这两个库的使用方法以及如何使用它们来操作Excel文件。

\n

1、xlsxwriter生成Excel文件

xlsxwriter是一个用于生成Excel文件的Python库,主要用于生成 .xlsx 格式的文件,并且支持自定义样式、格式以及图表。下面将以一个简单的示例,来逐步介绍如何使用xlsxwriter库创建一个Excel文件并写入数据

\n

1.1、导入库并创建Excel文件

Excel文件名为:xlsxwriter插入数据和折线图.xlsx

\n
import xlsxwriter, random\n\nwb = xlsxwriter.Workbook('xlsxwriter插入数据和折线图.xlsx')
\n

1.2、创建一个sheet页

sheet标签页名字为:sheet1

\n
worksheet1 = wb.add_worksheet('sheet1')
\n

1.3、按行写入数据

这里以写入Excel的头部数据为例:

\n
headings = ['日期','数据1','数据2']\nworksheet1.write_row('A1',headings)
\n

1.4、按列写入数据

有了头部数据后,该写入下面的实际数据了

\n
# 创造数据\ndata = [\n    ['2019-1','2019-2','2019-3','2019-4','2019-5','2019-6','2019-7','2019-8','2019-9','2019-10','2019-11','2019-12',],\n    [random.randint(1,100) for i in range(12)],\n    [random.randint(1,100) for i in range(12)],\n] \n# 按列写入数据\nworksheet1.write_column('A2',data[0])\nworksheet1.write_column('B2',data[1])\nworksheet1.write_column('C2',data[2])
\n

1.5、新建图表对象

折线图表的定义:

\n
chart_col = wb.add_chart({'type':'line'})
\n

1.6、图表数据配置

这里的数据有两条,一个是数据1,一个是数据2,所以图表添加数据的代码如下:

\n
chart_col.add_series(\n    {\n        'name':'=sheet1!$B$1',\n        'categories':'=sheet1!$A$2:$A$7',\n        'values':   '=sheet1!$B$2:$B$7',\n        'line': {'color': 'blue'},\n    }\n)\nchart_col.add_series(\n    {\n        'name':'=sheet1!$C$1',\n        'categories':'=sheet1!$A$2:$A$7',\n        'values':   '=sheet1!$C$2:$C$7',\n        'line': {'color': 'green'},\n    }\n)\n
\n

有两条数据,所以添加了两次。

\n

数据有四项,数据名、具体值对应的横坐标categories、具体值对应的纵坐标values、折线颜色,其中取值方式,直接是使用sheet的坐标形式,例如name是B1和B2,categories都是A2-A7,值分别是B2-B7和C2-C7。

\n

1.7、完成图表

数据添加之后,在设置下坐标的相关信息,就是标题、x轴、y轴的名字,以及图表位置和大小,代码如下:

\n
chart_col.set_title({'name':'虚假数据折线图'})\nchart_col.set_x_axis({'name':"横坐标"})\nchart_col.set_y_axis({'name':'纵坐标'})\n\nworksheet1.insert_chart('D2',chart_col,{'x_offset':25,'y_offset':10})\n\nwb.close()
\n

图表的位置和大小,是根据左上角的起始表格和x和y的偏移计算的。

\n

代码中是以D2作为左上角起始,然后x和y分别偏移25和10个单位,得到了图表的最终位置。最后关闭wb。

\n

\"\"

\n

2、openpyxl追加Excel数据

openpyxl是一个用于读取和修改现有Excel文件的Python库,支持多种格式的Excel文件(如.xlsx、.xlsm、.xltx、.xltm等),并且支持读取单元格的数据。

\n

2.1、打开文件

import openpyxl\nfilename = 'xlsxwriter插入数据和折线图.xlsx'\nwb = openpyxl.load_workbook(filename)
\n

2.2、拷贝sheet

sheet1 = wb['sheet1']\n# 拷贝sheet1\nsheet2 = wb.copy_worksheet(sheet1)\n# 设置拷贝后的名称为sheet2\nsheet2.title = "sheet2"
\n

2.3、追加数据内容

在sheet2中,数据1和数据2追加一年的数据,代码如下:

\n
import datetime\nimport random\nfrom dateutil.relativedelta import relativedelta\n\n# 读取最后一行\nrows = sheet2.max_row\n# 取出时间的字符串\nprev_date_str = sheet2.cell(row=rows,column=1).value\n# 时间字符串转时间对象\nprev_date = datetime.datetime.strptime(prev_date_str, "%Y-%m")\nfor i in range(1,13):\n    # 月份的计算,每次增加一个月,就得到了第二年的12个月\n    tmp_date = prev_date + relativedelta(months=i)\n    tmp_num1 = random.randint(1,100)\n    tmp_num2 = random.randint(1,100)\n    sheet2.append([tmp_date.strftime("%Y-%m"), tmp_num1, tmp_num2])
\n

实现思路:

\n\n

2.4、使用openpyxl画图表

在sheet2中对全部数据画折线图

\n
from openpyxl.chart import Series,LineChart, Reference\n# 图表对象\nchart = LineChart()\nrows = sheet2.max_row\n\n# 创建series对象\ndata1 = Reference(sheet2, min_col=2, min_row=1, max_col=2, max_row=rows) #涉及数据\ntitle1 = sheet2.cell(row=1,column=2).value\nseriesObj1 = Series(data1, title=title1)\n\n# 创建series对象\ndata2 = Reference(sheet2, min_col=3, min_row=1, max_col=3, max_row=rows) #涉及数据\ntitle2 = sheet2.cell(row=1,column=3).value\nseriesObj2 = Series(data2, title=title2)\n\n# 添加到chart中\nchart.append(seriesObj1)\nchart.append(seriesObj2)\n\n# 将图表添加到 sheet中\nsheet2.add_chart(chart, "E3")\n\n# 保存Excel\nwb.save('poenpyxl插入数据和折线图[copy xlsxwriter].xlsx')
\n

导入所需的画图工具,图表初始化,然后生成数据对象:

\n\n

最后文件保存,大功告成。

\n

\"\"

\n

\"\"

\n

3、总结

本文介绍了如何使用xlsxwriter和openpyxl两个库来操作Excel文件。xlsxwriter库可以用于创建新的Excel文件并写入数据,而openpyxl库则可以用于读取现有的Excel文件并读取单元格的数据。这些库都是Python中处理Excel文件的好工具,可以帮助我们更加高效地处理各种数据。

\n","categories":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/categories/python/"}],"tags":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/tags/python/"}]},{"title":"如何使用Python操作Excel文件?看这篇博客就够了!","slug":"pyExcel","date":"2023-05-30T06:09:15.000Z","updated":"2023-05-30T06:23:25.121Z","comments":true,"path":"/post/pyExcel/","link":"","excerpt":"","content":"

前言

如何使用Python操作Excel文件?看这篇博客就够了!

\n

在工作中,我们经常需要处理和分析数据。而Excel作为一种广泛使用的数据分析工具,被很多人所熟知。但是,对于一些非技术背景的用户来说,如何操作Excel却可能有些困难。这时候,Python就成为了一种非常有用的工具。

\n

本文将介绍如何使用Python对Excel文件进行读写操作。首先,我们将介绍Python中可以使用的第三方库xlrdxlwtxlutils,并通过示例来展示“如何使用xlwt库来将数据写入到Excel文件中”、“如何使用xlrd库来读取一个Excel文件的数据”和“如何使用三个库的配合来进行一边读取一边写入的操作”。

\n

通过本文的介绍,你将会了解到:

\n\n

如果你想学习如何使用Python操作Excel文件,那么这篇文章就是为你准备的。希望它能帮助你更好地理解和应用这个工具。

\n

1、写入Excel文件

首先来学习下,随机生成数据,写入一个Excel文件并保存,所使用到的库,是xlwt,安装命令pip install xlwt ,安装简单方便,无依赖,很快。

\n

1.1、新建一个WorkBook对象

import xlwt\nwb = xlwt.Workbook()
\n

1.2、新建sheet

sheet = wb.add_sheet('第一个sheet')
\n

1.3、写数据

head_data = ['姓名','地址','手机号','城市']\nfor head in head_data:\n    sheet.write(0,head_data.index(head),head)
\n

write函数写入,分别是x行 x列 数据,头部数据永远是第一行,所以第0行。数据的列,则是当前数据所在列表的索引,直接使用index函数即可。

\n

1.4、创建虚假数据

有了头部数据,现在就开始写入内容了,分别是随机姓名 随机地址 随机号码 随机城市,数据的来源都是faker库,一个专门创建虚假数据用来测试的库,安装命令:pip install faker

\n

因为头部信息已经写好,所以接下来是从第1行开始写数据,每行四个数据,准备写99个用户数据,所以用循环,循环99次,代码如下:

\n
import faker\nfake = faker.Faker()\nfor i in range(1,100):\n    sheet.write(i,0,fake.first_name() + ' ' + fake.last_name())\n    sheet.write(i,1,fake.address())\n    sheet.write(i,2,fake.phone_number())\n    sheet.write(i,3,fake.city())
\n

1.5、保存成xls文件

wb.save('虚假用户数据.xls')
\n

然后找到文件,使用office或者wps打开这个xls文件:

\n

\"\"

\n

2、读取Excel文件

写文件已经搞定,接下来就要学习下Excel的读操作,读取Excel的库是xlrd,对应read;xlrd的安装命令:pip install xlrd

\n

2.1、打开Excel文件

import xlrd\nwb = xlrd.open_workbook('虚假用户数据.xls')
\n

2.2、读取Excel数据

# 获取文件中全部的sheet,返回结构是list。\nsheets = wb.sheets()\n# 通过索引顺序获取。\nsheet = sheets[0]\n# 直接通过索引顺序获取。\nsheet = wb.sheet_by_index(0)\n# 通过名称获取。\nsheet = wb.sheet_by_name('第一个sheet')
\n

2.3、打印数据

# 获取行数\nrows = sheet.nrows\n# 获取列数\ncols = sheet.ncols\nfor row in range(rows):\n    for col in range(cols):\n        print(sheet.cell(row,col).value,end=' , ')\n    print('\\n')
\n

打印效果(只截取部分):

\n

\"\"

\n

3、在现有的Excel文件中追加内容

需求:往“虚假用户数据.xls”里面,追加额外的50条用户数据,就是标题+数据,达到150条。

\n

3.1、导入库

import xlrd\nfrom xlutils.copy import copy
\n

3.2、使用xlrd打开文件,然后用xlutils复制打开后的workbook

wb = xlrd.open_workbook('虚假用户数据.xls',formatting_info=True)\nxwb = copy(wb)
\n

3.3、有了workbook之后,就开始指定sheet,并获取这个sheet的总行数

sheet = xwb.get_sheet('第一个sheet')\nrows = sheet.get_rows()
\n

指定名称为“第一个sheet”的sheet,然后获取全部的行

\n

3.4、有了具体的行数,然后保证原有数据不变动的情况下,从第101行写数据

import faker\nfake = faker.Faker()\nfor i in range(len(rows),150):\n    sheet.write(i,0,fake.first_name() + ' ' + fake.last_name())\n    sheet.write(i,1,fake.address())\n    sheet.write(i,2,fake.phone_number())\n    sheet.write(i,3,fake.city())
\n

range函数,从len(rows)开始,到150-1结束,共50条。 faker库是制造虚假数据的,这个在前面写数据有用过,循环写入了50条。

\n

3.5、最后保存就可以了

xwb.save('虚假用户数据.xls')
\n

使用xwb,也就是操作之后的workbook对象,直接保存原来的文件名就可以了。

\n

4、总结

本文介绍了Python中常用的三个库:xlrd、xlwt和xlutils,分别用于读取Excel文件、写入Excel文件和处理Excel文件。这些库都有各自的优点和缺点,在实际使用时需要根据具体需求进行选择。

\n

同时,本文还提供了一些示例代码来演示如何使用这些库。通过这些示例代码,读者可以更好地了解这些库的使用方法和操作步骤。

\n

最后,提醒读者注意在使用这些库时要仔细阅读其文档和API,以避免出现不必要的错误。

\n

5、参考资料

\n","categories":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/categories/python/"}],"tags":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/tags/python/"}]},{"title":"从前端到后端:如何在 URL 参数中传递 JSON 数据","slug":"json-url-encoding","date":"2023-04-26T04:53:51.000Z","updated":"2023-04-26T04:59:20.803Z","comments":true,"path":"/post/json-url-encoding/","link":"","excerpt":"","content":"

引言

在 Web 开发中,我们经常需要将数据作为 URL 参数进行传递。当我们需要传递复杂的数据结构时,如何在前端将其转换为字符串,并在后端正确地解析它呢?本文将介绍如何在前端将 JSON 数据进行 URL 编码,并在后端将其解析为相应的数据类型,同时提供 Java 语言的示例代码。

\n

在前端使用 URL 参数传递 JSON 数据

有时候我们需要在前端将 JSON 数据传递给后端,例如通过 AJAX 请求或者页面跳转。URL 参数是一种常见的传递数据的方式,但是由于 URL 参数只支持字符串类型的数据,而 JSON 数据是一种复杂的数据类型,因此需要进行编码和解码操作。

\n

在 JavaScript 中,我们可以使用 JSON.stringify() 方法将 JSON 对象转换为字符串,然后使用 encodeURIComponent() 方法对字符串进行 URL 编码。以下是一个将 JSON 数据作为 URL 参数发送 AJAX 请求的示例:

\n
const data = { name: 'John', age: 30 };\nconst encodedData = encodeURIComponent(JSON.stringify(data));\n\nfetch(`/api/user?data=${encodedData}`)\n  .then(response => response.json())\n  .then(data => console.log(data))\n  .catch(error => console.error(error));
\n

在上面的示例中,我们首先创建了一个包含两个属性的 JSON 对象 data,然后将其转换为字符串并进行 URL 编码。然后我们使用 fetch() 方法发送一个带有 data 参数的 GET 请求,并在响应中使用 json() 方法将响应体解析为 JSON 对象。

\n

在后端解析 URL 参数

在后端中,我们需要解析从前端发送的包含 JSON 数据的 URL 参数。不同的后端语言和框架可能有不同的解析方式,这里以 Node.js 和 Java 为例,介绍如何解析 URL 参数。

\n

在 Node.js 中解析 URL 参数

在 Node.js 中,我们可以使用内置的 url 模块来解析 URL 参数,使用 querystring 模块来解析查询字符串参数。以下是一个使用 Node.js 解析 URL 参数的示例:

\n
const http = require('http');\nconst url = require('url');\nconst querystring = require('querystring');\n\nconst server = http.createServer((req, res) => {\n  const parsedUrl = url.parse(req.url);\n  const parsedQuery = querystring.parse(parsedUrl.query);\n\n  // 解析包含在 'data' 参数中的 JSON 字符串\n  const rawData = parsedQuery.data;\n  const myObject = JSON.parse(decodeURIComponent(rawData));\n\n  // 执行其他操作...\n\n  res.writeHead(200, { 'Content-Type': 'text/plain' });\n  res.end('Hello World!');\n});\n\nserver.listen(3000, () => {\n  console.log('Server running on port 3000');\n});
\n

在上面的示例中,我们首先使用 url.parse() 方法将请求 URL 解析为 URL 对象,然后使用 querystring.parse() 方法将查询字符串参数解析为对象。然后,我们从 data 参数中获取包含 JSON 字符串的原始数据,使用 decodeURIComponent() 解码该字符串,并使用 JSON.parse() 将其解析为 JavaScript 对象。

\n

在 Java 中解析 URL 参数

在 Java 中,我们可以使用 java.net.URLDecoder 类和 java.util.Map 接口来解析 URL 参数。以下是一个使用 Java 解析URL 参数的示例:

\n
import java.io.UnsupportedEncodingException;\nimport java.net.URLDecoder;\nimport java.util.HashMap;\nimport java.util.Map;\n// 这里以 org.json 库的 JSONObject 为例,需要自行引入对应依赖\nimport org.json.JSONObject;\n\npublic class Main {\n  public static void main(String[] args) throws UnsupportedEncodingException {\n    String urlString = "http://localhost:3000/?data=%7B%22name%22%3A%22John%22%2C%22age%22%3A30%7D";\n    String[] urlParts = urlString.split("\\\\?");\n    String query = urlParts.length > 1 ? urlParts[1] : "";\n    Map<String, String> queryMap = new HashMap<>();\n    for (String param : query.split("&")) {\n      String[] pair = param.split("=");\n      String key = URLDecoder.decode(pair[0], "UTF-8");\n      String value = URLDecoder.decode(pair[1], "UTF-8");\n      queryMap.put(key, value);\n    }\n\n    // 解析包含在 'data' 参数中的 JSON 字符串\n    String rawData = queryMap.get("data");\n    String json = URLDecoder.decode(rawData, "UTF-8");\n    JSONObject myObject = new JSONObject(json);\n\n    // 执行其他操作...\n  }\n}
\n

在上面的示例中,我们首先将请求 URL 分为基础部分和查询字符串部分,然后将查询字符串参数解析为一个键值对的 Map 对象。然后,我们从 data 参数中获取包含 URL 编码的 JSON 字符串的原始数据,使用 URLDecoder.decode() 解码该字符串,并使用 JSONObject 类将其解析为 Java 对象。

\n

总结

在前端使用 URL 参数传递 JSON 数据时,需要先将 JSON 数据转换为字符串并进行 URL 编码。在后端中解析 URL 参数时,需要先将 URL 编码的字符串解码为原始数据,并将其解析为相应的数据类型。不同的后端语言和框架可能有不同的解析方式,但是基本的原理都是相同的。

\n","categories":[{"name":"web开发","slug":"web开发","permalink":"https://hexo.huangge1199.cn/categories/web%E5%BC%80%E5%8F%91/"}],"tags":[{"name":"web开发","slug":"web开发","permalink":"https://hexo.huangge1199.cn/tags/web%E5%BC%80%E5%8F%91/"}]},{"title":"选择哪种Web服务器?WebLogic vs Undertow vs Tomcat vs Nginx对比分析!","slug":"web-server-analysis","date":"2023-04-07T02:41:37.000Z","updated":"2023-04-07T05:30:26.520Z","comments":true,"path":"/post/web-server-analysis/","link":"","excerpt":"","content":"

前言

WebLogic、Undertow、Tomcat和Nginx是常用的Web服务器和应用程序服务器。它们具有不同的功能、应用场景、优缺点等方面的特点,本文将对它们进行详细的比较。

\n

功能比较

WebLogic是一个完整的JavaEE应用程序服务器,它具有强大的功能和灵活的配置。WebLogic支持分布式应用程序部署、负载均衡、高可用性、安全性等特性,适用于大型企业级Java应用程序。

\n

Undertow是一个轻量级的Web服务器和应用程序服务器,它具有高性能和可扩展性的特点。Undertow支持HTTP、HTTPS、AJAX、WebSockets等协议,适用于构建高性能、低延迟的Web应用程序。

\n

Tomcat是一个轻量级的Web服务器和应用程序服务器,它具有简单易用的特点。Tomcat支持Servlet、JSP等Java Web开发技术,适用于中小型Web应用程序。

\n

Nginx是一个高性能的Web服务器和反向代理服务器,它具有高并发能力、低延迟和高可靠性的特点。Nginx支持负载均衡、反向代理、HTTP缓存等特性,适用于构建高性能、高并发、低延迟的Web应用程序。

\n

应用场景比较

WebLogic适用于大型企业级Java应用程序,例如电子商务、金融服务、电信等行业的应用程序。WebLogic具有出色的可扩展性、高可靠性和安全性,适用于对性能、可靠性和安全性有严格要求的应用程序。

\n

Undertow适用于构建高性能、低延迟的Web应用程序,例如在线游戏、金融交易等需要快速响应的应用程序。Undertow具有轻量级、高性能和可扩展性的特点,适用于对性能有严格要求的应用程序。

\n

Tomcat适用于中小型Web应用程序,例如博客、社交网络、企业内部应用程序等。Tomcat具有轻量级、易于使用和配置的特点,适用于对性能要求不是特别高的应用程序。

\n

Nginx适用于构建高性能、高并发、低延迟的Web应用程序,例如电子商务、社交网络等需要支持大量并发用户访问的应用程序。Nginx具有高性能、高可靠性和可扩展性的特点,适用于对性能和可靠性有严格要求的应用程序。

\n

优缺点比较

WebLogic的优点是具有出色的可扩展性、高可靠性和安全性。它支持JavaEE规范,可以满足大型企业级应用程序的需求。缺点是相对于其他服务器而言比较复杂,需要一定的学习成本和配置成本,同时也需要更多的硬件资源支持。

\n

Undertow的优点是轻量级、高性能和可扩展性。它支持多种协议,适用于构建高性能、低延迟的Web应用程序。缺点是不支持JavaEE规范,无法满足大型企业级应用程序的需求,同时也缺乏成熟的生态系统和工具支持。

\n

Tomcat的优点是轻量级、易于使用和配置。它支持Servlet、JSP等Java Web开发技术,适用于中小型Web应用程序。缺点是相对于其他服务器而言功能较为简单,不能满足大型企业级应用程序的需求。

\n

Nginx的优点是高性能、高可靠性和可扩展性。它支持负载均衡、反向代理、HTTP缓存等特性,适用于构建高性能、高并发、低延迟的Web应用程序。缺点是不支持JavaEE规范,不能直接运行Java应用程序,需要结合其他服务器使用。

\n

支持的平台

\n

支持的编程语言

\n

管理和监控

\n

性能

\n

总结

WebLogic、Undertow、Tomcat和Nginx都是常用的Web服务器和应用程序服务器。它们具有不同的功能、应用场景、优缺点等方面的特点,选择合适的服务器需要根据具体的需求来决定。

\n

如果需要构建大型企业级Java应用程序,可以选择WebLogic;如果需要构建高性能、低延迟的Web应用程序,可以选择Undertow;如果需要构建中小型Web应用程序,可以选择Tomcat;如果需要构建高性能、高并发、低延迟的Web应用程序,可以选择Nginx。

\n

总之,选择合适的服务器可以提高应用程序的性能、可靠性和安全性,为用户提供更好的体验。

\n","categories":[{"name":"服务器","slug":"服务器","permalink":"https://hexo.huangge1199.cn/categories/%E6%9C%8D%E5%8A%A1%E5%99%A8/"}],"tags":[{"name":"服务器","slug":"服务器","permalink":"https://hexo.huangge1199.cn/tags/%E6%9C%8D%E5%8A%A1%E5%99%A8/"}]},{"title":"Firewall vs iptables:什么是最好的Linux防火墙工具?","slug":"fireWallTool","date":"2023-03-27T14:40:30.000Z","updated":"2023-03-28T07:34:58.477Z","comments":true,"path":"/post/fireWallTool/","link":"","excerpt":"","content":"

前言

作为一名Linux管理员,保护服务器免受网络攻击是最重要的任务之一。Linux操作系统提供了许多防火墙工具,其中最常用的是iptables和Firewall。本文将比较Firewall和iptables之间的不同之处,并探讨哪个防火墙工具更适合您的需求。

\n

Firewall和iptables是什么?

iptables是一个Linux防火墙工具,它通过对网络数据包进行过滤和修改来控制网络访问。Firewall是新一代的Linux动态防火墙,它基于D-Bus消息系统,采用了Zone和Service的概念来管理网络访问。

\n

iptables使用命令

\n
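
这里补几条平时用得比较多的 iptables 命令作为示例(链名、端口等都只是举例):

\n
# 查看当前所有规则\niptables -L -n --line-numbers\n# 允许 80 端口的 TCP 入站\niptables -A INPUT -p tcp --dport 80 -j ACCEPT\n# 按编号删除 INPUT 链上的某条规则\niptables -D INPUT 3
\n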

Firewall使用命令

\n
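
对应的,firewalld 这边常用的 firewall-cmd 命令大致如下(区域、端口同样只是举例):

\n
# 查看当前区域的全部配置\nfirewall-cmd --list-all\n# 临时放行 8080 端口\nfirewall-cmd --zone=public --add-port=8080/tcp\n# 永久放行并重新加载\nfirewall-cmd --zone=public --add-port=8080/tcp --permanent\nfirewall-cmd --reload
\n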

比较iptables和Firewall

语法和规则

iptables使用基于命令行的语法,可以直接使用iptables命令来添加、修改和删除防火墙规则。而Firewall使用XML或JSON格式的配置文件,可以使用firewall-cmd命令行工具或图形界面进行管理。

\n

实现方式

iptables是传统的Linux防火墙工具,而Firewall是新一代的动态防火墙。Firewall允许在运行时添加和删除规则,而不需要重启防火墙服务。

\n

管理和配置

iptables可以通过编辑配置文件来管理和配置规则,也可以通过调用命令行工具来实现。而Firewall是一种动态防火墙,它允许在运行时添加和删除规则,而不需要重启防火墙服务。

\n

性能和效率

iptables具有更高的性能和效率,可以处理更高的网络流量和更复杂的防火墙规则。而Firewall虽然具有动态性和易用性等优点,但其性能和效率不如iptables。

\n

iptables和Firewall的生效规则

对于iptables,当您添加或删除规则时,这些规则会立即生效,并在运行iptables -L命令时显示出来。但是,这些规则不会在系统重启后自动生效。为了保证规则在系统重启后仍然生效,您需要将这些规则保存到文件中,并确保在启动时加载该文件。您可以使用以下命令将当前iptables规则保存到文件中:

\n
iptables-save > /etc/sysconfig/iptables
\n

在系统重启后,可以使用以下命令加载保存的规则:

\n
iptables-restore < /etc/sysconfig/iptables
\n

对于Firewall,添加或删除规则时,这些规则不会立即生效,您需要运行以下命令使其生效:

\n
firewall-cmd --reload
\n

此命令会重新加载Firewall的规则,并应用任何更改。但是,请注意,如果您没有使用—permanent选项将规则永久保存,则在系统重启后,这些规则将被清除。为了确保规则在系统重启后仍然生效,您需要使用以下命令将规则永久保存:

\n
firewall-cmd --zone=public --add-port=8080/tcp --permanent
\n

此命令将添加一个允许端口8080的永久规则。在系统重启后,此规则将自动加载。

\n

综上所述,无论您使用iptables或Firewall哪一个修改防火墙规则时,请注意保存规则,并确保它们在系统重启后仍然生效。

\n

哪个防火墙工具更适合您的需求?

如果您需要处理高流量和复杂规则的环境,则使用iptables是一个很好的选择。iptables具有更高的性能和效率,并且可以处理更复杂的防火墙规则。

\n

如果您需要简单、易用和动态防火墙,则Firewall是一个很好的选择。Firewall具有动态性和易用性等优点,并允许在运行时添加和删除规则,而不需要重启防火墙服务。

\n

结论

防火墙是保护服务器安全的关键。iptables和Firewall是Linux操作系统上最常用的防火墙工具,它们之间有许多不同之处。选择哪种防火墙工具取决于您的具体需求和偏好。无论您选择使用哪种工具,都需要确保您的服务器受到良好的保护,以免受到网络攻击。

\n","categories":[{"name":"Linux","slug":"Linux","permalink":"https://hexo.huangge1199.cn/categories/Linux/"}],"tags":[{"name":"Linux","slug":"Linux","permalink":"https://hexo.huangge1199.cn/tags/Linux/"}]},{"title":"Nacos:1.0 vs. 2.0,你需要选择哪个版本来管理你的微服务?","slug":"nacosVerson","date":"2023-03-17T03:07:25.000Z","updated":"2023-03-17T03:12:13.485Z","comments":true,"path":"/post/nacosVerson/","link":"","excerpt":"","content":"

引言

Nacos是一个开源的分布式配置中心和服务发现平台,它可以帮助开发者轻松管理微服务架构中的配置和服务注册。在Nacos的不断发展中,1.0版本和2.0版本都是非常重要的版本,本篇博客将对这两个版本进行介绍和比较。

\n

一、Nacos 1.0版本

Nacos 1.0版本于2019年3月发布,它是Nacos的第一个正式版本,也是经过多次测试和优化后的稳定版本。相较于之前的beta版本,Nacos 1.0版本有了很大的改进和优化,主要包括以下几个方面:

\n

1. 功能完善

Nacos 1.0版本在功能上相对完善,包括了配置中心、服务注册与发现、命名空间、健康检查等核心功能。此外,Nacos 1.0版本还增加了可插拔的扩展能力,可以方便地扩展各种插件,例如自定义的服务发现协议。

\n

2. 性能提升

Nacos 1.0版本在性能上也有很大的提升,通过优化网络通信协议和数据存储方式,大大提高了系统的并发处理能力和吞吐量,可以满足更高的性能需求。

\n

3. 稳定性改进

Nacos 1.0版本在稳定性方面也进行了不少改进,通过增加监控和自动修复机制,可以更快地检测和修复系统故障,从而提高了系统的稳定性和可靠性。

\n

二、Nacos 2.0版本

Nacos 2.0版本于2020年9月发布,相对于1.0版本,它的改进和优化更加突出,主要体现在以下几个方面:

\n

1. 分布式一致性

Nacos 2.0版本引入了Raft算法,实现了分布式一致性,从而保证了集群环境下数据的强一致性和高可用性。

\n

2. 更多的功能支持

Nacos 2.0版本增加了更多的功能支持,例如DNS解析、动态配置刷新、访问控制等,为用户提供了更加全面的服务治理和配置管理能力。

\n

3. 更高的性能和扩展性

Nacos 2.0版本在性能和扩展性方面也有很大的提升,采用异步I/O、内存池等技术,大大提高了系统的处理能力和吞吐量。此外,Nacos 2.0版本还提供了更加灵活的插件机制,方便用户进行个性化定制和扩展。

\n

总结

Nacos 1.0版本和2.0版本都是非常重要的版本,它们分别在不同的方面进行了优化和改进,为用户提供了更加全面和稳定的服务治理和配置管理能力。如果您是初次接触Nacos,建议选择最新版本2.0,以便获得更好的性能和更多的功能支持。但如果您的应用已经在Nacos 1.0版本上运行良好,也可以继续使用该版本,因为它已经经过多次测试和优化,具有很高的稳定性和可靠性。不管选择哪个版本,都需要根据实际业务场景和需求来进行选择和配置,以获得最佳的服务治理和配置管理效果。

\n","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"},{"name":"nacos","slug":"java/nacos","permalink":"https://hexo.huangge1199.cn/categories/java/nacos/"}],"tags":[{"name":"nacos","slug":"nacos","permalink":"https://hexo.huangge1199.cn/tags/nacos/"}]},{"title":"当数据遇上响应式编程:Java应用中如何使用R2DBC访问关系型数据库?","slug":"r2dbc","date":"2023-03-16T05:53:53.000Z","updated":"2023-03-16T06:23:52.212Z","comments":true,"path":"/post/r2dbc/","link":"","excerpt":"","content":"

在当今的大数据时代,关系型数据库仍然是最常用的数据存储方式之一。Java是一种广泛使用的编程语言,也是访问关系型数据库的主要语言之一。在Java应用程序中,通常使用JDBC(Java Database Connectivity)API来访问数据库。但是,JDBC使用的同步/阻塞模型在处理高并发和大数据量的情况下可能会成为瓶颈,因此R2DBC(Reactive Relational Database Connectivity)在此时显得更加合适。

\n

R2DBC是Java应用程序访问关系型数据库的一种新方式,它采用了响应式编程的思想,提供了异步、非阻塞的API,能够提高Java应用程序在高并发场景下的性能和可伸缩性。

\n

在本文中,我们将介绍R2DBC的基本概念和原理,并提供一些使用R2DBC的示例。

\n

R2DBC的基本概念和原理

R2DBC(Reactive Relational Database Connectivity)是一种基于异步、响应式编程模型的标准化关系型数据库连接API。R2DBC允许您使用响应式编程模型访问关系型数据库,这种编程模型通常用于处理大量并发请求、高吞吐量和低延迟场景。

\n

R2DBC的主要设计目标是提供一种简单的异步、响应式编程模型,以及一种统一的方式来连接不同类型的关系型数据库。与传统的JDBC API不同,R2DBC使用反应流作为响应式编程模型的基础,提供一组异步操作符,以便您可以使用流式编程模型来执行数据库操作。

\n

目前,R2DBC支持多种关系型数据库,包括MySQL、PostgreSQL、Microsoft SQL Server和H2数据库。在使用R2DBC时,您需要为您的数据库选择适当的R2DBC驱动程序,并按照驱动程序的要求进行配置。

\n

R2DBC提供了以下主要特性

\n

使用R2DBC的示例

使用R2DBC来连接MySQL数据库,您需要执行以下步骤:

\n

步骤1:添加依赖项

要在Java应用程序中使用R2DBC来访问MySQL数据库,首先需要将R2DBC MySQL依赖项添加到项目中。我们可以通过以下Maven依赖项将R2DBC MySQL引入我们的项目中:

\n
<dependency>\n    <groupId>dev.miku</groupId>\n    <artifactId>r2dbc-mysql</artifactId>\n    <version>0.8.8.RELEASE</version>\n</dependency>
\n

步骤2:配置数据库连接

在使用R2DBC访问MySQL数据库之前,我们需要先配置数据库连接。下面是一个示例配置:

\n
@Configuration\npublic class R2dbcConfiguration {\n\n    @Bean\n    public ConnectionFactory connectionFactory() {\n        return new MysqlConnectionFactory(\n            ConnectionFactoryOptions.builder()\n                .option(DRIVER, "mysql")\n                .option(HOST, "localhost")\n                .option(USER, "username")\n                .option(PASSWORD, "password")\n                .option(DATABASE, "database")\n                .build()\n        );\n    }\n}
\n

在上面的示例中,我们使用MysqlConnectionFactory类创建MySQL连接工厂。同时,我们使用ConnectionFactoryOptions类配置了连接选项,包括数据库驱动程序、主机、用户名、密码和数据库名称等。

\n

步骤3:使用连接工厂创建连接

一旦我们已经配置好了数据库连接,我们可以使用连接工厂创建一个新的数据库连接。以下是一个示例:

\n
public class UserRepository {\n\n    private final ConnectionFactory connectionFactory;\n\n    public UserRepository(ConnectionFactory connectionFactory) {\n        this.connectionFactory = connectionFactory;\n    }\n\n    public Flux<User> findAll() {\n        return Mono.from(connectionFactory.create())\n            .flatMapMany(connection ->\n                Flux.from(connection.createStatement("SELECT * FROM users").execute())\n                    .flatMap(result -> result.map((row, rowMetadata) ->\n                        new User(row.get("id", Long.class), row.get("name", String.class))\n                    ))\n                    .doFinally((signalType) -> Mono.from(connection.close()).subscribe())\n            );\n    }\n}
\n

在上面的示例中,我们创建了一个UserRepository类,并使用MysqlConnectionFactory类创建MySQL连接工厂。我们使用Mono.from(connectionFactory.create())方法创建一个新的数据库连接。接下来,我们使用Flux.from(connection.createStatement("SELECT * FROM users").execute())方法创建一个Flux,该Flux将使用SQL查询语句从数据库中检索所有用户记录。我们使用flatMap()方法将结果转换为我们的User对象,并将其作为Flux对象返回。最后,我们使用doFinally()方法关闭数据库连接。

\n

步骤4:使用R2DBC在Java应用程序中访问MySQL数据库

我们现在已经配置了数据库连接,并创建了一个用于访问数据库的UserRepository类。我们可以在Java应用程序中使用此类来访问MySQL数据库。以下是一个示例:

\n
public class Application {\n\n    public static void main(String[] args) {\n        ApplicationContext context = new AnnotationConfigApplicationContext(R2dbcConfiguration.class);\n        UserRepository userRepository = context.getBean(UserRepository.class);\n\n        userRepository.findAll()\n            .subscribe(user -> System.out.println("User: " + user));\n    }\n}
\n

在上面的示例中,我们创建了一个Application类,并在其中创建了一个UserRepository实例。我们调用userRepository.findAll()方法来检索所有用户记录,并在控制台上打印每个用户的名称。最后,我们使用subscribe()方法订阅Flux对象。

\n

总结

R2DBC是一种基于响应式编程的数据库访问API,它可以提高Java应用程序在高并发场景下的性能和可伸缩性。使用R2DBC可以让程序员使用异步、非阻塞的API访问关系型数据库,从而充分发挥计算机的CPU和内存资源。

\n

在使用R2DBC时,需要遵循基本步骤,包括添加R2DBC依赖项、配置数据库连接、使用连接工厂创建连接,以及执行查询或更新等操作。通过这些步骤,程序员可以编写高效、可伸缩的Java应用程序,从而更好地应对大规模数据处理和高并发访问的场景。

\n

总的来说,R2DBC是Java应用程序中非常有用的工具,可以帮助开发者提高程序的性能和可伸缩性。

\n","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"}],"tags":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/tags/java/"},{"name":"R2DBC","slug":"R2DBC","permalink":"https://hexo.huangge1199.cn/tags/R2DBC/"}]},{"title":"当分布式遇上一致性:Raft、SofaJRaft和Distro协议大比拼","slug":"raftProtocol","date":"2023-03-15T03:23:28.000Z","updated":"2023-03-15T06:21:45.787Z","comments":true,"path":"/post/raftProtocol/","link":"","excerpt":"","content":"

前言

今天,我学习nacos的源码,看到了distro协议,于是本篇博客就由此而来了,通过网上查找的资料我大体整理了下,下面是整理后的结果。

\n

引言

分布式系统是由多个计算机节点组成的系统,这些节点通过网络相互连接,并协同工作来实现一个共同的目标。在分布式系统中,数据的一致性是一个非常重要的问题。分布式一致性算法可以帮助我们解决这个问题。本文将介绍三种分布式一致性算法:distro协议、sofajraft协议、raft协议,并讨论它们的适用场景和特点。

\n

Raft协议

Raft是一种分布式一致性算法,由Stanford大学的Diego Ongaro和John Ousterhout于2013年提出。Raft算法的主要目标是提供一种易于理解和实现的分布式一致性算法。Raft算法具有良好的可读性和易于理解的特点,使得它容易被人们理解和实现。Raft算法通过领导选举、日志复制、一致性检查点等基础功能,保证了分布式系统中数据的一致性。

\n

SofaJRaft协议

SofaJRaft是一种基于Raft协议的改进版本。SofaJRaft在Raft协议的基础上增加了一些特性,例如动态配置、快照等,以适应更加复杂的场景需求。SofaJRaft算法的设计目标是提供一个高性能、高可用、易于扩展的分布式一致性算法。SofaJRaft算法在性能和可扩展性方面优于Raft协议,适用于更为复杂的分布式系统,例如分布式存储、分布式数据库等。

\n

Distro协议

Distro协议是基于SofaJRaft协议的一种改进版本。Distro协议在SofaJRaft协议的基础上进一步优化,例如增加了故障转移功能,提高了容错性能。Distro协议的设计目标是提供一个高可靠、高性能、易于扩展的分布式一致性算法。Distro协议适用于更加严苛的分布式系统环境,例如金融、电信等领域的应用。

\n

三种协议比较

Raft协议、SofaJRaft协议和Distro协议都是分布式一致性算法,它们之间有以下的不同和优势:

\n
    \n
  1. Raft协议的可读性和易于理解性更好,适用于一些小规模的分布式系统。

    \n
  2. SofaJRaft协议增加了一些特性,例如动态配置、快照等,适用于更为复杂的分布式系统,例如分布式存储、分布式数据库等。

    \n
  3. Distro协议在SofaJRaft协议的基础上增加了故障转移功能,提高了容错性能,适用于更加严苛的分布式系统环境,例如金融、电信等领域的应用。

    \n
\n

共性

    \n
  1. 都使用领导者选举机制,通过选举一个领导者来管理整个系统。

    \n
  2. 都使用日志复制机制,通过复制日志来实现数据的一致性。

    \n
  3. 都可以实现线性一致性。

    \n
  4. 都可以扩展到多个节点。

    \n
\n

总之,选择合适的分布式一致性算法需要综合考虑系统规模、复杂度、容错性要求等因素。同时,需要注意算法的实现、性能、可维护性等方面的问题。

\n","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"}],"tags":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/tags/java/"}]},{"title":"详细的Python Flask的操作","slug":"pythonFlask","date":"2023-02-14T07:47:27.000Z","updated":"2023-02-15T09:53:58.012Z","comments":true,"path":"/post/pythonFlask/","link":"","excerpt":"","content":"

本篇文章是Python Flask 建站框架入门课程_编程实战微课_w3cschool微课的学习笔记,根据课程整理而来,本人使用版本如下:

\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Python:3.10.0
Flask:2.2.2
\n
\n

简介

\n

核心函数库

Flask主要包括Werkzeug和Jinja2两个核心函数库,它们分别负责业务处理和安全方面的功能,这些基础函数为web项目开发过程提供了丰富的基础组件。

\n

Werkzeug

Werkzeug库十分强大,功能比较完善,支持URL路由请求集成,一次可以响应多个用户的访问请求;

\n

支持Cookie和会话管理,通过身份缓存数据建立长久连接关系,并提高用户访问速度;支持交互式Javascript调试,提高用户体验;

\n

可以处理HTTP基本事务,快速响应客户端推送过来的访问请求。

\n

Jinja2

Jinja2库支持自动HTML转移功能,能够很好控制外部黑客的脚本攻击;

\n

系统运行速度很快,页面加载过程会将源码进行编译形成python字节码,从而实现模板的高效运行;

\n

模板继承机制可以对模板内容进行修改和维护,为不同需求的用户提供相应的模板。

\n

安装

通过pip安装即可

\n
pip install Flask\n# pip3\npip3 install Flask
\n

目录结构

新项目创建后的结构

\"\"

\n

static文件夹:存放静态文件,比如css、js、图片等

\n

templates文件夹:模板文件目录

\n

app.py:应用启动程序

\n

获取URL参数

列出所有URL参数

request.args.__str__()

\n
from flask import Flask, request\n\napp = Flask(__name__)\n\n\n@app.route('/')\ndef hello_world():  # put application's code here\n    return request.args.__str__()\n\n\nif __name__ == '__main__':\n    app.run()
\n

在浏览器中访问http://127.0.0.1:5000/?name=Loen&age&app=ios&app=android,将显示:

\n
ImmutableMultiDict([('name', 'Loen'), ('age', ''), ('app', 'ios'), ('app', 'android')])
\n

列出浏览器传给我们的Flask服务的数据

from flask import Flask, request\n\napp = Flask(__name__)\n\n\n@app.route('/')\ndef hello_world():  # put application's code here\n\n    # 列出访问地址\n    print(request.path)\n\n    # 列出访问地址及参数\n    print(request.full_path)\n\n    return request.args.__str__()\n\n\nif __name__ == '__main__':\n    app.run()
\n

在浏览器中访问http://127.0.0.1:5000/?name=Loen&age&app=ios&app=android,控制台中显示

\n
/\n/?name=Loen&age&app=ios&app=android
\n

获取指定的参数值

from flask import Flask, request\n\napp = Flask(__name__)\n\n\n@app.route('/')\ndef hello_world():  # put application's code here\n\n    return request.args.get('name')\n\n\nif __name__ == '__main__':\n    app.run()
\n

在浏览器中访问http://127.0.0.1:5000/?name=Loen&age&app=ios&app=android,将显示:

\n
Loen
\n

处理多值

from flask import Flask, request\n\napp = Flask(__name__)\n\n\n@app.route('/')\ndef hello_world():  # put application's code here\n    r = request.args.getlist('app')  # 返回一个list\n    return r\n\n\nif __name__ == '__main__':\n    app.run()
\n

在浏览器中访问http://127.0.0.1:5000/?name=Loen&age&app=ios&app=android,将显示:

\n
[\n  "ios",\n  "android"\n]
\n

获取POST方法传送的数据

作为一种HTTP请求方法,POST用于向指定的资源提交要被处理的数据。

\n

我们在某些时候不适合将数据放到URL参数中,比如数据需要保密,或者数据太多,浏览器不一定支持太长的URL。这时,一般使用POST方法。

\n

本文章使用python的requests库模拟浏览器。

\n

安装命令:

\n
pip install requests
\n

看POST数据内容

app.py代码如下:

\n
from flask import Flask, request\n\napp = Flask(__name__)\n\n\n@app.route('/register', methods=['POST'])\ndef register():\n    print(request.headers)\n    print(request.stream.read())\n    return 'welcome'\n\n\nif __name__ == '__main__':\n    app.run()
\n

register.py代码如下:

\n
import requests\n\nif __name__ == '__main__':\n    user_info = {'name': 'Loen', 'password': 'loveyou'}\n    r = requests.post("http://127.0.0.1:5000/register", data=user_info)\n    print(r.text)
\n

运行app.py,然后运行register.py

\n

register.py将输出:

\n
welcome
\n

app.py将输出:

\n
Host: 127.0.0.1:5000\nUser-Agent: python-requests/2.28.2\nAccept-Encoding: gzip, deflate\nAccept: */*\nConnection: keep-alive\nContent-Length: 26\nContent-Type: application/x-www-form-urlencoded\n\n\nb'name=Loen&password=loveyou'\n127.0.0.1 - - [14/Feb/2023 21:12:17] "POST /register HTTP/1.1" 200 -
\n

解析POST数据

app.py代码如下:

\n
from flask import Flask, request\n\napp = Flask(__name__)\n\n\n@app.route('/register', methods=['POST'])\ndef register():\n    # print(request.stream.read()) # 不要用,否则下面的form取不到数据\n    print(request.form)\n    print(request.form['name'])\n    print(request.form.get('name'))\n    print(request.form.getlist('name'))\n    print(request.form.get('nickname', default='little apple'))\n    return 'welcome'\n\n\nif __name__ == '__main__':\n    app.run(port=5000, debug=True)
\n

register.py代码不变,运行app.py,然后运行register.py

\n

register.py将输出:

\n
welcome
\n

app.py将输出:

\n
ImmutableMultiDict([('name', 'Loen'), ('password', 'loveyou')])\nLoen\nLoen\n['Loen']\nlittle apple
\n

request.form会自动解析数据。

\n

request.form[‘name’]和request.form.get(‘name’)都可以获取name对应的值。

\n

request.form.get()可以为参数default指定值以作为默认值。

\n

获取POST中的列表数据

app.py代码如下:

\n
from flask import Flask, request\n\napp = Flask(__name__)\n\n\n@app.route('/register', methods=['POST'])\ndef register():\n    # print(request.stream.read()) # 不要用,否则下面的form取不到数据\n    print(request.form.getlist('name'))\n    return 'welcome'\n\n\nif __name__ == '__main__':\n    app.run(port=5000, debug=True)
\n

register.py代码如下:

\n
import requests\n\nif __name__ == '__main__':\n    user_info = {'name': ['Loen', 'Alan'], 'password': 'loveyou'}\n    r = requests.post("http://127.0.0.1:5000/register", data=user_info)\n    print(r.text)
\n

运行app.py,然后运行register.py

\n

register.py将输出:

\n
welcome
\n

app.py将输出:

\n
['Loen', 'Alan']
\n

处理和响应JSON数据

处理JSON数据

如果POST的数据是JSON格式,request.json会自动将json数据转换成Python类型(字典或者列表)。

\n

app.py代码如下:

\n
from flask import Flask, request\n\napp = Flask(__name__)\n\n\n@app.route('/add', methods=['POST'])\ndef add():\n    print(type(request.json))\n    print(request.json)\n    result = request.json['n1'] + request.json['n2']\n    return str(result)\n\n\nif __name__ == '__main__':\n    app.run(port=5000, debug=True)
\n

register.py代码如下:

\n
import requests\n\nif __name__ == '__main__':\n    json_data = {'n1': 5, 'n2': 3}\n    r = requests.post("http://127.0.0.1:5000/add", json=json_data)\n    print(r.text)
\n

运行app.py,然后运行register.py

\n

register.py将输出:

\n
8
\n

app.py将输出:

\n
<class 'dict'>\n{'n1': 5, 'n2': 3}
\n

响应JSON数据(Response)

app.py代码如下:

\n
import json\n\nfrom flask import Flask, request, Response\n\napp = Flask(__name__)\n\n\n@app.route('/add', methods=['POST'])\ndef add():\n    result = {'sum': request.json['n1'] + request.json['n2']}\n    return Response(json.dumps(result), mimetype='application/json')\n\n\nif __name__ == '__main__':\n    app.run(port=5000, debug=True)
\n

register.py代码如下:

\n
import requests\n\nif __name__ == '__main__':\n    json_data = {'n1': 5, 'n2': 3}\n    r = requests.post("http://127.0.0.1:5000/add", json=json_data)\n    print(r.headers)\n    print(r.text)
\n

运行app.py,然后运行register.py

\n

register.py将输出:

\n
/home/huangge1199/PycharmProjects/flaskProject/venv/bin/python /home/huangge1199/PycharmProjects/flaskProject/register.py \n{'Server': 'Werkzeug/2.2.2 Python/3.7.3', 'Date': 'Tue, 14 Feb 2023 13:37:49 GMT', 'Content-Type': 'application/json', 'Content-Length': '10', 'Connection': 'close'}\n{"sum": 8}
\n

响应JSON数据(jsonify)

app.py中app()返回时使用下面的内容,效果同之前一样

\n
return jsonify(result)
\n

上传表单

用 Flask 处理文件上传很简单,只要确保你没忘记在 HTML 表单中设置 enctype=”multipart/form-data” 属性,不然你的浏览器根本不会发送文件。

\n

安装相应的库werkzeug

\n
pip install werkzeug
\n

目录结构:

\n

\"\"

\n

app.py代码如下:

\n
from flask import Flask, request\nfrom werkzeug.utils import secure_filename\nimport os\n\napp = Flask(__name__)\n\n# 文件上传目录\napp.config['UPLOAD_FOLDER'] = 'static/uploads/'\n# 支持的文件格式\napp.config['ALLOWED_EXTENSIONS'] = {'png', 'jpg', 'jpeg', 'gif'}  # 集合类型\n\n\n# 判断文件名是否是我们支持的格式\ndef allowed_file(filename):\n    return '.' in filename and \\\n        filename.rsplit('.', 1)[1] in app.config['ALLOWED_EXTENSIONS']\n\n\n@app.route('/upload', methods=['POST'])\ndef upload():\n    upload_file = request.files['image']\n    if upload_file and allowed_file(upload_file.filename):  # 上传前文件在客户端的文件名\n        filename = secure_filename(upload_file.filename)\n        # 将文件保存到 static/uploads 目录,文件名同上传时使用的文件名\n        upload_file.save(os.path.join(app.root_path, app.config['UPLOAD_FOLDER'], filename))\n        return 'info is ' + request.form.get('info', '') + '. success'\n    else:\n        return 'failed'\n\n\nif __name__ == '__main__':\n    app.run()
\n

register.py代码如下:

\n
import requests\n\nif __name__ == "__main__":\n    file_data = {'image': open('flask.png', 'rb')}\n    user_info = {'info': 'flask'}\n    r = requests.post("http://127.0.0.1:5000/upload", data=user_info, files=file_data)\n    print(r.text)
\n

运行app.py,然后运行register.py,这时候文件已经上传到了指定目录中

\n

\"\"

\n

要控制上产文件的大小,可以设置请求实体的大小,代码如下:

\n
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024 #16MB
\n

获取上传文件的内容,代码如下:

\n
file_content = request.files['image'].stream.read()
\n

Restful URL

Restful URL可以看做是对 URL 参数的替代

\n

变量规则

写法如下:

\n
@app.route('/user/<username>/friends')
\n

转换类型

使用 Restful URL 得到的变量默认为str对象。我们可以用flask内置的转换机制,即在route中指定转换类型,写法如下:

\n
@app.route('/page/<int:num>')
\n

有3个默认的转换器:

\n\n
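
这里补一段小例子,演示 int、float、path 这几个常用内置转换器的写法(路由和函数名都是随便起的,仅作示意):

\n
from flask import Flask\n\napp = Flask(__name__)\n\n\n# int:匹配整数\n@app.route('/page/<int:num>')\ndef page(num):\n    return str(num)\n\n\n# float:匹配浮点数\n@app.route('/price/<float:price>')\ndef price(price):\n    return str(price)\n\n\n# path:匹配可以包含斜杠的路径\n@app.route('/file/<path:subpath>')\ndef show_file(subpath):\n    return subpath\n\n\nif __name__ == '__main__':\n    app.run()
\n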

自定义转换器

自定义的转换器是一个继承werkzeug.routing.BaseConverter的类,修改to_python和to_url方法即可。

\n

to_python方法用于将url中的变量转换后供被@app.route包装的函数使用,to_url方法用于flask.url_for中的参数转换。

\n

下面是一个示例:

\n
from flask import Flask, url_for\nfrom werkzeug.routing import BaseConverter\n\n\nclass MyIntConverter(BaseConverter):\n\n    def __init__(self, url_map):\n        super(MyIntConverter, self).__init__(url_map)\n\n    def to_python(self, value):\n        return int(value)\n\n    def to_url(self, value):\n        return value * 2\n\n\napp = Flask(__name__)\napp.url_map.converters['my_int'] = MyIntConverter\n\n\n@app.route('/page/<my_int:num>')\ndef page(num):\n    print(num)\n    print(url_for('page', num='145'))  # page 对应的是 page函数 ,num 对应对应`/page/<my_int:num>`中的num,必须是str\n    return 'hello world'\n\n\nif __name__ == '__main__':\n    app.run()
\n

运行app.py,浏览器访问http://127.0.0.1:5000/page/28后,app.py的输出信息是:

\n
28\n/page/145145
\n

使用url_for生成链接

工具函数url_for可以让你以软编码的形式生成url,提供开发效率。

\n

例子app.py代码如下:

\n
from flask import Flask, url_for\n\napp = Flask(__name__)\n\n\n@app.route('/')\ndef hello_world():\n    pass\n\n\n@app.route('/user/<name>')\ndef user(name):\n    pass\n\n\n@app.route('/page/<int:num>')\ndef page(num):\n    pass\n\n\n@app.route('/test')\ndef test():\n    print(url_for('test'))\n    print(url_for('user', name='loen'))\n    print(url_for('page', num=1, q='welcome to w3c 15%2'))\n    print(url_for('static', filename='uploads/flask.png'))\n    return 'Hello'\n\n\nif __name__ == '__main__':\n    app.run()
\n

运行app.py。然后在浏览器中访问http://127.0.0.1:5000/test,app.py控制台将输出以下信息:

\n
/test\n/user/loen\n/page/1?q=welcome+to+w3c+15%252\n/static/uploads/flask.png
\n

使用redirect重定向网址

在浏览器中访问http://127.0.0.1:5000/old,浏览器的url会变成http://127.0.0.1:5000/new,并显示new页面的内容,app.py代码如下:

\n
from flask import Flask, url_for, redirect\n\napp = Flask(__name__)\n\n\n@app.route('/old')\ndef old():\n    print('this is old')\n    return redirect(url_for('new'))\n\n\n@app.route('/new')\ndef new():\n    print('this is new')\n    return 'this is new'\n\n\nif __name__ == '__main__':\n    app.run()
\n

运行app.py,然后在浏览器中访问http://127.0.0.1:5000/old

\n

浏览器显示:

\n
this is new
\n

控制台显示:

\n
this is old\nthis is new
\n

自定义404

处理HTTP错误

要处理HTTP错误,可以使用flask.abort函数。

\n

app.py代码如下:

\n
from flask import Flask, abort\n\napp = Flask(__name__)\n\n\n@app.route('/user')\ndef user():\n    abort(401)  # Unauthorized 未授权\n    print('Unauthorized, 请先登录')\n\n\nif __name__ == '__main__':\n    app.run()
\n

运行app.py,然后在浏览器中访问http://127.0.0.1:5000/user

\n

浏览器显示:

\n

\"\"

\n

自定义错误页面

page_unauthorized 函数返回的是一个元组,401 代表HTTP 响应状态码。

\n

如果省略401,则响应状态码会变成默认的 200。

\n

app.py代码如下:

\n
from flask import Flask, abort, render_template_string\n\napp = Flask(__name__)\n\n\n@app.route('/user')\ndef user():\n    abort(401)  # Unauthorized\n\n\n@app.errorhandler(401)\ndef page_unauthorized(error):\n    return render_template_string('<h1> Unauthorized </h1><h2>{{ error_info }}</h2>', error_info=error), 401\n\n\nif __name__ == '__main__':\n    app.run()
\n

运行app.py,然后在浏览器中访问http://127.0.0.1:5000/user

\n

浏览器显示:

\n

\"\"

\n

用户会话

session 用来记录用户的登录状态,一般基于cookie实现。

\n

app.py代码如下:

\n
from flask import Flask, render_template_string, request, session, redirect, url_for\n\napp = Flask(__name__)\n\napp.secret_key = 'LoenDSdtj\\9bX#%@!!*(0&^%)'\n\n\n@app.route('/login')\ndef login():\n    page = '''\n    <form action="{{ url_for('do_login') }}" method="post">\n        <p>name: <input type="text" name="user_name" /></p>\n        <input type="submit" value="Submit" />\n    </form>\n    '''\n    return render_template_string(page)\n\n\n@app.route('/do_login', methods=['POST'])\ndef do_login():\n    name = request.form.get('user_name')\n    session['user_name'] = name\n    return 'success'\n\n\n@app.route('/show')\ndef show():\n    return session['user_name']\n\n\n@app.route('/logout')\ndef logout():\n    session.pop('user_name', None)\n    return redirect(url_for('login'))\n\n\nif __name__ == '__main__':\n    app.run()
\n

代码的含义

\n

app.secret_key用于给session加密。

\n

在/login中将向用户展示一个表单,要求输入一个名字,submit后将数据以post的方式传递给/do_login,/do_login将名字存放在session中。

\n

如果用户成功登录,访问/show时会显示用户的名字。此时,打开调试工具,选择session面板,会看到有一个cookie的名称为session。

\n

/logout用于登出,通过将session中的user_name字段pop即可。Flask中的session基于字典类型实现,调用pop方法时会返回pop的键对应的值;如果要pop的键并不存在,那么返回值是pop()的第二个参数。

\n

另外,使用redirect()重定向时,一定要在前面加上return。

\n

设置session的有效时间

设置session的有效时间设置为5分钟。

\n

代码如下:

\n
from datetime import timedelta\nfrom flask import session\n\n# 注意:这里的 app 指的是前面用 Flask(__name__) 创建的应用实例,不能从 flask 包里直接导入\nsession.permanent = True\napp.permanent_session_lifetime = timedelta(minutes=5)
\n

使用Cookie

Cookie是存储在客户端的记录访问者状态的数据。

\n

常用的用于记录用户登录状态的session大多是基于cookie实现的。

\n

cookie可以借助flask.Response来实现。

\n

使用Response.set_cookie添加和删除cookie。

\n

expires参数用来设置cookie有效时间,值可以是datetime对象或者unix时间戳。

\n
res.set_cookie(key='name', value='loen', expires=time.time()+6*60)
\n

上面的expire参数的值表示cookie在从现在开始的6分钟内都是有效的。

\n

要删除cookie,将expire参数的值设为0即可:

\n
res.set_cookie('name', '', expires=0)
\n

详细的app.py代码如下:

\n
import time\n\nfrom flask import Flask, request, Response\n\napp = Flask(__name__)\n\n\n@app.route('/add')\ndef login():\n    res = Response('add cookies')\n    res.set_cookie(key='name', value='loen', expires=time.time() + 6 * 60)\n    return res\n\n\n@app.route('/show')\ndef show():\n    return request.cookies.__str__()\n\n\n@app.route('/del')\ndef del_cookie():\n    res = Response('delete cookies')\n    res.set_cookie('name', '', expires=0)\n    return res\n\n\nif __name__ == '__main__':\n    app.run()\n
\n

闪存系统 flashing system

Flask 的闪存系统(flashing system)用于向用户提供反馈信息,这些反馈信息一般是对用户上一次操作的反馈。

\n

反馈信息是存储在服务器端的,当服务器向客户端返回反馈信息后,这些反馈信息会被服务器端删除。

\n

详细的app.py代码如下:

\n
import time\n\nfrom flask import Flask, get_flashed_messages, flash\n\napp = Flask(__name__)\napp.secret_key = 'some_secret'\n\n\n@app.route('/')\ndef index():\n    return 'Hello index'\n\n\n@app.route('/gen')\ndef gen():\n    info = 'access at ' + time.time().__str__()\n    flash(info)\n    return info\n\n\n@app.route('/show1')\ndef show1():\n    return get_flashed_messages().__str__()\n\n\n@app.route('/show2')\ndef show2():\n    return get_flashed_messages().__str__()\n\n\nif __name__ == '__main__':\n    app.run()\n
\n","categories":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/categories/python/"}],"tags":[{"name":"学习笔记","slug":"学习笔记","permalink":"https://hexo.huangge1199.cn/tags/%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/"},{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/tags/python/"}]},{"title":"凡人神将传游戏攻略","slug":"ftzsgl","date":"2023-02-11T09:09:51.000Z","updated":"2024-03-13T08:19:06.755Z","comments":true,"path":"/post/ftzsgl/","link":"","excerpt":"","content":"

合服活动

\"\"
注:普通召唤券往后4000给100,5000给100,6000给150,7000给150,至此封顶

\n

\"\"

\n

\"\"
注:幻神神魄使用情况,目前不全,欢迎留言补充
\"\"

\n

零氪黄道神武

\"\"

\n

摘星楼、秘境、龙珠神将出现顺序

\n

炼妖嘉年华

九星秘宝
500,500,1000,1500,1500,5000,7500,后面还有两个不知道

\n

抽卡嘉年华

九星秘宝
500,500,1000,1500,,1500,5000,7500,后面还有两个不知道

\n

归墟

\"\"

\n

财神宝轮

根据以往经验来看,到一定次数必出对应东西,具体如下:

\n\n","categories":[{"name":"游戏","slug":"游戏","permalink":"https://hexo.huangge1199.cn/categories/%E6%B8%B8%E6%88%8F/"}],"tags":[{"name":"游戏","slug":"游戏","permalink":"https://hexo.huangge1199.cn/tags/%E6%B8%B8%E6%88%8F/"}]},{"title":"docker镜像构建以及宿主机和容器间的相互拷贝","slug":"dockerBuilder","date":"2023-02-08T07:36:29.000Z","updated":"2023-02-08T08:45:03.169Z","comments":true,"path":"/post/dockerBuilder/","link":"","excerpt":"","content":"

前言

主要学习docker的相关操作,构建镜像、docker容器运行、从容器内往外拷贝文件,向容器内拷贝文件,进入容器

\n

docker构建镜像

编写Dockerfile文件:

\n
vi Dockerfile
\n

文件内输入

\n
from nginx
\n

\"\"

\n

在同目录执行构建命令:

\n
docker build -t my-nginx .
\n

\"\"

\n

docker容器运行

执行命令:

\n
# 运行命令\ndocker run --name my-nginx -d -p 40080:80 my-nginx\n# 查看所有容器信息\ndocker ps -a
\n

\"\"

\n

浏览器输入IP:40080,显示默认nginx页面

\n

\"\"

\n

从容器内往外拷贝文件

执行命令:

\n
# 拷贝文件\ndocker cp my-nginx:/usr/share/nginx/html/index.html index.html\n# 查看文件内容\ncat index.html\n# 修改文件内容\nvi index.html\n# 查看文件内容\ncat index.html
\n

\"\"

\n

向容器内拷贝文件

执行命令:

\n
# 拷贝文件\ndocker cp index.html my-nginx:/usr/share/nginx/html/index.html 
\n

\"\"

\n

浏览器输入IP:40080,显示页面已经改变

\n

\"\"

\n

进入容器

为了方便查看变化,这里拷贝了一份不一样的文件进人容器,执行命令:

\n
# 修改文件名\nmv index.html new.html\n# 修改文件内容\nvi new.html\n# 拷贝文件进容器\ndocker cp new.html my-nginx:/usr/share/nginx/html/new.html\n# 查看修改文件的内容\ncat new.html
\n

\"\"

\n

执行命令:

\n
# 从容器中拷贝nginx配置文件\ndocker cp my-nginx:/etc/nginx/conf.d/default.conf .\n# 查看配置文件\ncat default.conf
\n

\"\"

\n
# 修改配置文件\nvi default.conf\n# 查看修改后的配置文件\ncat default.conf
\n

\"\"

\n
# 再将配置文件拷贝回容器\ndocker cp default.conf my-nginx:/etc/nginx/conf.d/default.conf\n# 进入容器\ndocker exec -it my-nginx /bin/bash\n# 查看拷贝进容器的文件\ncat /usr/share/nginx/html/new.html
\n

\"\"

\n
# 查看拷贝进容器的nginx配置文件\ncat /etc/nginx/conf.d/default.conf\n# 重启nginx\nnginx -s reload\n# 退出容器\nexit
\n

\"\"

\n

浏览器输入IP:40080,显示页面已经改变

\n

\"\"

\n","categories":[{"name":"云原生2023","slug":"云原生2023","permalink":"https://hexo.huangge1199.cn/categories/%E4%BA%91%E5%8E%9F%E7%94%9F2023/"}],"tags":[{"name":"云原生2023","slug":"云原生2023","permalink":"https://hexo.huangge1199.cn/tags/%E4%BA%91%E5%8E%9F%E7%94%9F2023/"}]},{"title":"免费HTTPS证书部署","slug":"useFreeSSL","date":"2023-02-07T07:15:03.000Z","updated":"2023-02-07T08:19:59.689Z","comments":true,"path":"/post/useFreeSSL/","link":"","excerpt":"","content":"

前言

由于腾讯云限制了免费证书的使用个数,而我之前因为免费就随意了很多,现在,一个正在使用的证书过期了,没法继续使用,这样就导致了在浏览器中不能一步到位的打开网站

\n

网站介绍

使用的是FreeSSL.cn网站,该网站提供免费的HTTPS证书申请,下面是网站首页

\n

\"\"

\n

安装acme.sh

我们需要先在服务器上安装acme.sh,建议使用root用户安装

\n
curl https://get.acme.sh | sh -s email=my@example.com
\n

\"\"

\n

ACME 域名配置

在首页中输入想要申请证书的域名,点击后面的按钮

\n

\"\"

\n

点击下一步

\n

\"\"

\n

根据内容,去你的域名管理处添加信息,添加后回来点击按钮

\n

\"\"

\n

\"\"

\n

出现下面的页面,可以先直接点击完成

\n

\"\"

\n

部署证书

acme.sh 部署命令,这个就是上面图中的内容

\n
acme.sh --issue -d blog.huangge1199.cn  --dns dns_dp --server [专属 ACME 地址]
\n

\"\"

\n

生成证书,注意生成证书的路径根据自己的情况修改

\n
acme.sh --install-cert -d blog.huangge1199.cn \\\n--key-file       /www/server/panel/vhost/cert/blog.huangge1199.cn/key.pem  \\\n--fullchain-file /www/server/panel/vhost/cert/blog.huangge1199.cn/cert.pem \\\n--reloadcmd     "service nginx reload"
\n

\"\"

\n

修改nginx配置文件,添加如下的内容,证书文件的路径和名字同生成证书的路径与名字一致

\n
ssl_certificate    /www/server/panel/vhost/cert/blog.huangge1199.cn/cert.pem;\nssl_certificate_key    /www/server/panel/vhost/cert/blog.huangge1199.cn/key.pem;\nssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;\nssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;\nssl_prefer_server_ciphers on;
\n

\"\"

\n

nginx最好再重启一次

\n
service nginx reload
\n

\"\"

\n

验证

点击跳转:龙儿之家

\n

\"\"

\n","categories":[{"name":"网站建设","slug":"网站建设","permalink":"https://hexo.huangge1199.cn/categories/%E7%BD%91%E7%AB%99%E5%BB%BA%E8%AE%BE/"}],"tags":[{"name":"网站建设","slug":"网站建设","permalink":"https://hexo.huangge1199.cn/tags/%E7%BD%91%E7%AB%99%E5%BB%BA%E8%AE%BE/"}]},{"title":"群晖安装PostgreSQL","slug":"inPostgreSqlBySynology","date":"2023-01-15T06:57:00.000Z","updated":"2023-02-14T00:37:25.621Z","comments":true,"path":"/post/inPostgreSqlBySynology/","link":"","excerpt":"","content":"

确认套件中心有PostgreSQL

我这边在套件中心中搜索到PostgreSQL了,要安装就先要确认有它,我这边的环境的spk7d 系统版本

\n

\"\"

\n

我在套件中心设置的套件来源有2个

\n\n

\"\"

\n

安装

这步就简单了,直接在套件中心安装套件即可,安装过程中,需要设置用户名、密码和端口号,端口号不能重复的

\n

\"\"

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"}]},{"title":"TDengine安装使用","slug":"tgengine-1","date":"2022-11-24T07:03:40.000Z","updated":"2022-11-24T09:53:57.437Z","comments":true,"path":"/post/tgengine-1/","link":"","excerpt":"","content":"

引言

近期,听说了时序数据库TDengine,本人的好奇心又出来了,同是时序数据库的InfluxDB不也挺好的嘛?通过一些网上的资料以及一些简单的实际操作,本人得出的结论是:

\n\n

内容介绍

本文将会围绕TDengine进行简单的介绍,当然我也是初次使用,这份文档也只是初步的学习记录,如果有朋友在实际中使用了TDengine并且觉得这篇文章有什么问题,还请在下方留言,我会根据实际情况对文章进行修改,这样也是为了防止给别人留坑

\n
    \n
  1. 对TDengine做下简单介绍(摘抄自官方文档)

    \n
  2. 安装TDengine服务端的过程

    \n
  3. TDengine 数据建模

    \n
  4. DataGrip如何查看数据

    \n
  5. 使用java语言进行REST连接测试

    \n
\n

另外,我这边服务端是使用TDengine-server-2.6.0.30-Linux-x64.tar.gz进行安装的

\n

介绍

\n

注:本段内容摘自官方文档

\n
\n

TDengine 是一款高性能、分布式、支持 SQL 的时序数据库 (Database),其核心代码,包括集群功能全部开源(开源协议,AGPL v3.0)。TDengine 能被广泛运用于物联网、工业互联网、车联网、IT 运维、金融等领域。除核心的时序数据库 (Database) 功能外,TDengine 还提供缓存、数据订阅、流式计算等大数据平台所需要的系列功能,最大程度减少研发和运维的复杂度。

\n

牢骚

在对比的过程中,我发现TDengine的官方文档不是太好,怎么说呢,虽然各方面都提及到了,但是需要看很多内容之后才能完全解决好,这一点就不是太好了。比如说安装,虽然在立即开始里面有,但是详细的安装卸载是放在运维指南里面,按照我们的习惯,从上往下,从左往右,可能看到最后,才看到安装和卸载,但是在这之前却是有大量的实际操作文档在中间夹杂。

\n

\"\"

\n

安装服务端

目前 2.X 版服务端 taosd 和 taosAdapter 仅在 Linux 系统上安装和运行,应用驱动 taosc 与 TDengine CLI 可以在 Windows 或 Linux 上安装和运行。另外在 2.4 之前的版本中没有 taosAdapter,RESTful 接口是由 taosd 内置的 HTTP 服务提供的。

\n
    \n
  1. 下载安装包TDengine-server-2.6.0.30-Linux-x64.tar.gz (45 M)并上传至服务器

    \n

    链接: https://pan.baidu.com/s/1-w7O2xUuq0iaF1glh36bow?pwd=ansm 提取码: ansm

    \n
  2. 进入安装包所在目录,解压文件

    \n
    # 解压命令\ntar -zxvf TDengine-server-2.6.0.30-Linux-x64.tar.gz
    \n
  3. 进入解压目录,执行其中的 install.sh 安装脚本

    \n
    # 进入解压目录命令(目录根据自己的实际情况自行更改)\ncd /app/TDengine-server-2.6.0.30\n# 执行安装命令\n./install.sh
    \n
  6. \n
\n
\n

注:中途两次输入,直接回车就好,什么都不用输入

\n
\n

\"2022-11-24-16-07-50-image.png\"

\n
    \n
  1. 启动taosd并确认状态

    \n
    # 启动命令\nsystemctl start taosd\n# 确认状态\nsystemctl status taosd
    \n
  2. \n
\n

\"\"

\n
    \n
  1. 启动taosAdapter并确认状态

    \n
    \n

    注:TDengine 在 2.4 版本之后包含一个独立组件 taosAdapter 需要使用 systemctl 命令管理 taosAdapter 服务的启动和停止,不符合的要跳过本步骤

    \n
    \n
    # 启动命令\nsystemctl start taosadapter\n# 确认状态\nsystemctl status taosadapter
    \n
    \"2022-11-24-16-23-43-image.png\"
    \n
  1. 进入taos,确认安装成功
    \n
    # 启动命令(默认密码taosdata)\ntaos -p
    \n
  2. \n
\n

\"\"

\n

TDengine 数据建模

    \n
  1. 创建数据库

    \n
    # 创建数据库命令\nCREATE DATABASE power;\n# 切换数据库\nUSE power;
    \n
  2. \n
\n

\"\"

\n
    \n
  1. 创建表

    \n
    # 创建表\ncreate table t (ts timestamp, speed int);\n# 插入2条数据(建议插入两条记录时隔几秒)\ninsert into t values (now, 10);\ninsert into t values (now, 20);
    \n
  2. \n
\n

\"\"

\n
    \n
  1. 查询表数据

    \n
    # 查询表 t\nselect * from t;
    \n
  2. \n
\n

\"\"

\n

DataGrip查看数据

    \n
  1. 编译jar

    \n

    从 GitHub 仓库克隆 JDBC 连接器的源码,git clone https://github.com/taosdata/taos-connector-jdbc.git -b 2.0.40(此处推荐 -b 指定发布了的 Tags 版本)

    \n

克隆完源码后,若是编译 2.0.40 及以下版本,需将commons-logging 依赖包的 scope 值由 test 改为 compile

    \n

    \"\"

    \n

    在目录下执行:mvn clean package -D maven.test.skip=true

    \n
  2. \n
\n

\"\"

\n
    \n
  1. 自建驱动

    \n

    使用Driver and Data Source,自建驱动,注意红框内容,jar包是之前编译生成的

    \n
  2. \n
\n

\"\"

\n
    \n
  1. 创建数据库连接

    \n

    第一个红框Driver选择之前自建的,第二个红框URL 写jdbc:TAOS-RS://IP:6041/数据库名

    \n
  2. \n
\n

\"\"

\n

java进行REST连接测试

新建Springboot项目,maven引入jar包

\n
<dependency>\n    <groupId>com.taosdata.jdbc</groupId>\n    <artifactId>taos-jdbcdriver</artifactId>\n    <version>2.0.40</version>\n</dependency>
\n

main 方法:

\n
public static void main(String[] args) throws SQLException {\n    String jdbcUrl = "jdbc:TAOS-RS://IP:6041/数据库名?user=用户名&password=密码";\n    Connection conn = DriverManager.getConnection(jdbcUrl);\n    System.out.println("Connected");\n    conn.close();\n}
\n

测试结果:

\n

\"\"

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"}]},{"title":"群晖nas上部署gitea后修改IP地址","slug":"giteaNas","date":"2022-10-25T03:14:24.000Z","updated":"2022-10-25T04:43:45.854Z","comments":true,"path":"/post/giteaNas/","link":"","excerpt":"","content":"

事件

今天,我在nas的套件中心中发现了Gitea这个套件,想到自己的代码都是保存在GitHub或者Gitee上面的,
于是乎我便在nas上面装了这个套件,准备将代码在nas里面也备份一份

\n

我的nas所在网络没有公网IP,用内网穿透形式弄的,但是在用穿透后的IP:端口进入时,就报了下面的警告

\n

\"\"

\n

看介绍,是说地址不一样了,绿框中的地址分别是我本地地址和穿透后的公网地址,为了方便,我就想把地址换成
公网的地址,这样以后复制地址什么的也方便

\n

换IP

有两种方法:

\n
    \n
  1. 每次都将本地IP改为穿透的公网ip
  2. \n
  3. 修改配置文件conf.ini
  4. \n
\n

第一种方法需要每一次都改,太麻烦了,我这里使用的是第二种方法

\n

群晖的gitea的配置文件是在安装目录下的/var下面,我安装在/var/packages

\n

\"\"

\n

打开conf.ini文件,注意这地方需要root权限,因此执行命令

\n
sudo vi conf.ini
\n

\"\"

\n

将第12行这个地方的地址改成穿透后的公网IP,具体涉及哪些字段可以参考下面的示例

\n
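
作为参考,gitea 配置文件里一般需要关注的是 [server] 段下面这几项(字段名、行号以你自己的 conf.ini 为准,下面的域名和端口只是假设):

\n
[server]\nDOMAIN     = gitea.example.com\nSSH_DOMAIN = gitea.example.com\nROOT_URL   = http://gitea.example.com:3000/
\n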

重启gitea套件

    \n
  1. 在套件中心中找到gitea,然后停用
  2. \n
\n

\"\"

\n
    \n
  1. 启动gitea
  2. \n
\n

\"\"

\n

完成验证

为了确保成功,完成后再通过穿透后的公网地址进入,页面的红框警告已经消失

\n","categories":[{"name":"nas","slug":"nas","permalink":"https://hexo.huangge1199.cn/categories/nas/"},{"name":"安装部署","slug":"nas/安装部署","permalink":"https://hexo.huangge1199.cn/categories/nas/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"nas","slug":"nas","permalink":"https://hexo.huangge1199.cn/tags/nas/"}]},{"title":"vue下el-popover组件实现滚轴跟随功能","slug":"elPopover","date":"2022-10-19T06:13:52.000Z","updated":"2022-10-19T06:29:55.526Z","comments":true,"path":"/post/elPopover/","link":"","excerpt":"","content":"

描述

使用的是点击触发弹出内容,目标是在弹出内容的情况下,上下来回滚动鼠标,弹出内容和点击按钮不分离

\n

通过监听页面滚动来实现功能,当监听到页面有滚动时,通过组件的updatePopper()方法来更新组件的位置

\n

代码

<el-popover \n    ref="popover"\n    placement="right"\n    width="400"\n    trigger="click"\n    style="position: relative">\n    <el-table :data="gridData">\n      <el-table-column width="150" property="date" label="日期"></el-table-column>\n      <el-table-column width="100" property="name" label="姓名"></el-table-column>\n      <el-table-column width="300" property="address" label="地址"></el-table-column>\n    </el-table>\n    <el-button slot="reference">click 激活</el-button>\n</el-popover>\n\n<script>\nexport default {\n  data() {\n    return {\n      gridData: [{\n        date: '2016-05-02',\n        name: '王小虎',\n        address: '上海市普陀区金沙江路 1518 弄'\n      }, {\n        date: '2016-05-04',\n        name: '王小虎',\n        address: '上海市普陀区金沙江路 1518 弄'\n      }, {\n        date: '2016-05-01',\n        name: '王小虎',\n        address: '上海市普陀区金沙江路 1518 弄'\n      }, {\n        date: '2016-05-03',\n        name: '王小虎',\n        address: '上海市普陀区金沙江路 1518 弄'\n      }]\n    };\n  },\n  mounted() {\n    window.addEventListener('scroll', this.handleScroll, true)\n  },\n  methods: {\n    handleScroll() {\n      this.$refs.popover.updatePopper()\n    }\n  }\n};\n</script>
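\n

顺带一提,组件销毁时最好把滚动监听也移除掉,避免监听器残留。大致可以这样写(只是一个补充示意,写法以你项目的 Vue 版本为准):

\n
export default {\n  // ...\n  beforeDestroy() {\n    // 移除 mounted 中注册的滚动监听\n    window.removeEventListener('scroll', this.handleScroll, true)\n  }\n}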
\n","categories":[{"name":"前端","slug":"前端","permalink":"https://hexo.huangge1199.cn/categories/%E5%89%8D%E7%AB%AF/"},{"name":"vue","slug":"前端/vue","permalink":"https://hexo.huangge1199.cn/categories/%E5%89%8D%E7%AB%AF/vue/"}],"tags":[{"name":"前端","slug":"前端","permalink":"https://hexo.huangge1199.cn/tags/%E5%89%8D%E7%AB%AF/"},{"name":"vue","slug":"vue","permalink":"https://hexo.huangge1199.cn/tags/vue/"}]},{"title":"力扣2415. 反转二叉树的奇数层","slug":"reverse-odd-levels-of-binary-tree","date":"2022-09-19T16:28:22.000Z","updated":"2024-04-25T08:10:09.107Z","comments":true,"path":"/post/reverse-odd-levels-of-binary-tree/","link":"","excerpt":"","content":"

311周赛第三题

\n

原题链接:2415. 反转二叉树的奇数层

\n

题目

给你一棵 完美 二叉树的根节点 root ,请你反转这棵树中每个 奇数 层的节点值。

\n\n\n\n

反转后,返回树的根节点。

\n\n

完美 二叉树需满足:二叉树的所有父节点都有两个子节点,且所有叶子节点都在同一层。

\n\n

节点的 层数 等于该节点到根节点之间的边数。

\n\n

 

\n\n

示例 1:

\n\"\" \n
\n输入:root = [2,3,5,8,13,21,34]\n输出:[2,5,3,8,13,21,34]\n解释:\n这棵树只有一个奇数层。\n在第 1 层的节点分别是 3、5 ,反转后为 5、3 。\n
\n\n

示例 2:

\n\"\" \n
\n输入:root = [7,13,11]\n输出:[7,11,13]\n解释: \n在第 1 层的节点分别是 13、11 ,反转后为 11、13 。 \n
\n\n

示例 3:

\n\n
\n输入:root = [0,1,2,0,0,0,0,1,1,1,1,2,2,2,2]\n输出:[0,2,1,0,0,0,0,2,2,2,2,1,1,1,1]\n解释:奇数层由非零值组成。\n在第 1 层的节点分别是 1、2 ,反转后为 2、1 。\n在第 3 层的节点分别是 1、1、1、1、2、2、2、2 ,反转后为 2、2、2、2、1、1、1、1 。\n
\n\n

 

\n\n

提示:

\n\n\n\n

思路:

\n

看了灵神的周赛视频讲解,或多或少有影响

\n
\n

这题有两种方法,都可以做交换值:

\n\n

BFS代码

import java.util.*;\n\nclass Solution {\n    public TreeNode reverseOddLevels(TreeNode root) {\n        /*\n        如果是空节点直接返回\n         */\n        if (root == null) {\n            return null;\n        }\n        // 队列存入每层的节点\n        Queue<TreeNode> queue = new LinkedList<>();\n        queue.add(root);\n        int level = 0;\n        while (!queue.isEmpty()) {\n            /*\n            拿出每层的节点放入列表中,并将下一层的节点放入队列中\n             */\n            int size = queue.size();\n            List<TreeNode> nodeList = new ArrayList<>();\n            for (int i = 0; i < size; i++) {\n                TreeNode node = queue.poll();\n                nodeList.add(node);\n                if (node.left != null) {\n                    queue.add(node.left);\n                    queue.add(node.right);\n                }\n            }\n            /*\n            奇数层,在列表中交换收尾节点的值\n             */\n            if (level == 1) {\n                int nodeSize = nodeList.size();\n                for (int i = 0; i < nodeSize / 2; i++) {\n                    int num = nodeList.get(i).val;\n                    nodeList.get(i).val = nodeList.get(nodeSize - i - 1).val;\n                    nodeList.get(nodeSize - i - 1).val = num;\n                }\n            }\n            // 改变奇偶层\n            level = 1 - level;\n        }\n        return root;\n    }\n}
from typing import Optional\n\n\nclass Solution:\n    def reverseOddLevels(self, root: Optional[TreeNode]) -> Optional[TreeNode]:\n        queue = [root]\n        level = 1\n        while queue[0].left:\n            next = []\n            for node in queue:\n                next += [node.left, node.right]\n            queue = next\n            if level:\n                for i in range(len(queue) // 2):\n                    node1, node2 = queue[i], queue[len(queue) - 1 - i]\n                    node1.val, node2.val = node2.val, node1.val\n            level = 1 - level\n        return root
\n

DFS代码

import java.util.*;\n\nclass Solution {\n    public TreeNode reverseOddLevels(TreeNode root) {\n        if (root == null) {\n            return root;\n        }\n        dfs(root.left, root.right, 1);\n        return root;\n    }\n    \n    private void dfs(TreeNode left, TreeNode right, int level) {\n        if (left == null) {\n            return;\n        }\n        if (level == 1) {\n            // 如果是奇数层,交换值\n            int tmp = left.val;\n            left.val = right.val;\n            right.val = tmp;\n        }\n        dfs(left.left, right.right, 1 - level);\n        dfs(left.right, right.left, 1 - level);\n    }\n}
from typing import Optional\n\n\nclass Solution:\n    def reverseOddLevels(self, root: Optional[TreeNode]) -> Optional[TreeNode]:\n        def dfs(left, right, level: bool) -> None:\n            if left is None: return\n            if level: left.val, right.val = right.val, left.val\n            dfs(left.left, right.right, not level)\n            dfs(left.right, right.left, not level)\n\n        dfs(root.left, root.right, True)\n        return root
","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣2414:最长的字母序连续子字符串的长度","slug":"length-of-the-longest-alphabetical-continuous-substring","date":"2022-09-19T14:47:57.000Z","updated":"2024-04-25T08:10:09.103Z","comments":true,"path":"/post/length-of-the-longest-alphabetical-continuous-substring/","link":"","excerpt":"","content":"

311周赛第二题

\n

原题链接:2414. 最长的字母序连续子字符串的长度

\n

题目

字母序连续字符串 是由字母表中连续字母组成的字符串。换句话说,字符串 \"abcdefghijklmnopqrstuvwxyz\" 的任意子字符串都是 字母序连续字符串

\n\n\n\n

给你一个仅由小写英文字母组成的字符串 s ,返回其 最长 的 字母序连续子字符串 的长度。

\n\n

 

\n\n

示例 1:

\n\n
输入:s = \"abacaba\"\n输出:2\n解释:共有 4 个不同的字母序连续子字符串 \"a\"、\"b\"、\"c\" 和 \"ab\" 。\n\"ab\" 是最长的字母序连续子字符串。\n
\n\n

示例 2:

\n\n
输入:s = \"abcde\"\n输出:5\n解释:\"abcde\" 是最长的字母序连续子字符串。\n
\n\n

 

\n\n

提示:

\n\n\n\n

个人解法

遍历一次,判断相邻字符是否连续,找到最长的连续子字符串的长度

\n
class Solution {\n    public int longestContinuousSubstring(String s) {\n        int cnt = 0;\n        int bf = 0;\n        for (int i = 1; i < s.length(); i++) {\n            if (s.charAt(i) - s.charAt(i - 1) != 1) {\n                cnt = Math.max(cnt, i - bf);\n                bf = i;\n            }\n        }\n        return Math.max(cnt, s.length() - bf);\n    }\n}
class Solution:\n    def longestContinuousSubstring(self, s: str) -> int:\n        cnt = bf = 0\n        for i in range(1, len(s)):\n            if ord(s[i]) - ord(s[i - 1]) != 1:\n                cnt = max(cnt, i - bf)\n                bf = i\n        return max(cnt, len(s) - bf)
","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣2413:最小偶倍数","slug":"smallestEvenMultiple","date":"2022-09-19T13:47:05.000Z","updated":"2024-04-25T08:10:09.109Z","comments":true,"path":"/post/smallestEvenMultiple/","link":"","excerpt":"","content":"

311周赛第一题

\n

原题链接:2413. 最小偶倍数

\n

题目

给你一个正整数 n ,返回 2 n 的最小公倍数(正整数)。

\n

示例 1:

\n\n
输入:n = 5\n输出:10\n解释:5 和 2 的最小公倍数是 10 。\n
\n\n

示例 2:

\n\n
输入:n = 6\n输出:6\n解释:6 和 2 的最小公倍数是 6 。注意数字会是它自身的倍数。\n
\n\n

提示:

\n\n\n\n

个人解法

这题比较简单,就直接上代码

\n
class Solution {\n    public int smallestEvenMultiple(int n) {\n        return n % 2 == 0 ? n : n * 2;\n    }\n}
class Solution:\n    def smallestEvenMultiple(self, n: int) -> int:\n        return n if n % 2 == 0 else n * 2\n
from math import lcm\n\n    \nclass Solution:\n    def smallestEvenMultiple(self, n: int) -> int:\n        return lcm(n, 2)\n
","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"python3学习笔记--pairwise","slug":"pyPairwise","date":"2022-09-05T12:52:18.000Z","updated":"2022-09-22T07:39:53.196Z","comments":true,"path":"/post/pyPairwise/","link":"","excerpt":"","content":"

说明

pairwise(iterable)是itertools下的一个方法
该方法会返回由传入的可迭代对象中所有相邻元素组成的二元组,如果传入的数据少于两个,则返回空的迭代器

\n

官方文档

Return successive overlapping pairs taken from the input iterable.

\n

The number of 2-tuples in the output iterator will be one fewer than the number of inputs. It will be empty if the input iterable has fewer than two values.

\n

Roughly equivalent to:

def pairwise(iterable):\n    # pairwise('ABCDEFG') --> AB BC CD DE EF FG\n    a, b = tee(iterable)\n    next(b, None)\n    return zip(a, b)

\n

源码

itertools.py文件中

class pairwise(object):\n    """\n    Return an iterator of overlapping pairs taken from the input iterator.\n    \n        s -> (s0,s1), (s1,s2), (s2, s3), ...\n    """\n    def __getattribute__(self, *args, **kwargs): # real signature unknown\n        """ Return getattr(self, name). """\n        pass\n\n    def __init__(self, *args, **kwargs): # real signature unknown\n        pass\n\n    def __iter__(self, *args, **kwargs): # real signature unknown\n        """ Implement iter(self). """\n        pass\n\n    @staticmethod # known case of __new__\n    def __new__(*args, **kwargs): # real signature unknown\n        """ Create and return a new object.  See help(type) for accurate signature. """\n        pass\n\n    def __next__(self, *args, **kwargs): # real signature unknown\n        """ Implement next(self). """\n        pass

\n

参考代码

代码:

#!/usr/bin/env python3\n# @Time : 2022/9/5 20:23\n# @Author : 轩辕龙儿\n# @File : pyPairwise.py \n# @Software: PyCharm\nfrom itertools import pairwise\n\nif __name__ == "__main__":\n    arrs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n    print("传入数据:0, 1, 2, 3, 4, 5, 6, 7, 8, 9")\n    for arr in pairwise(arrs):\n        print(str(arr[0]) + "," + str(arr[1]))\n    print("------------------------------------")\n    print("传入数据:1")\n    for arr in pairwise([1]):\n        print(str(arr[0]) + "," + str(arr[1]))

控制台输出:
"D:\\Program Files\\Python310\\python.exe" D:/project/leet-code-python/study/pyPairwise.py \n传入数据:0, 1, 2, 3, 4, 5, 6, 7, 8, 9\n0,1\n1,2\n2,3\n3,4\n4,5\n5,6\n6,7\n7,8\n8,9\n------------------------------------\n传入数据:1\n\nProcess finished with exit code 0

\n","categories":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/categories/python/"}],"tags":[{"name":"学习笔记","slug":"学习笔记","permalink":"https://hexo.huangge1199.cn/tags/%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/"},{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/tags/python/"}]},{"title":"历史上的今天--8月17日","slug":"history0817","date":"2022-08-17T02:45:26.000Z","updated":"2022-09-22T07:39:52.892Z","comments":true,"path":"/post/history0817/","link":"","excerpt":"","content":"

2016年8月17日

\n\n

2015年8月17日

\n\n

2008年8月17日

\n\n

2005年8月17日

\n\n

2000年8月17日

\n\n

1999年8月17日

\n\n

1998年8月17日

\n\n

1996年8月17日

\n\n

1993年8月17日

\n\n

1992年8月17日

\n\n

1990年8月17日

\n\n

1988年8月17日

\n\n

1987年8月17日

\n\n

1982年8月17日

\n\n

1971年8月17日

\n\n

1969年8月17日

\n\n

1968年8月17日

\n\n

1964年8月17日

\n\n

1958年8月17日

\n\n

1952年8月17日

\n\n

1949年8月17日

\n\n

1945年8月17日

\n\n

1937年8月17日

\n\n

1931年8月17日

\n\n

1926年8月17日

\n\n

1895年8月17日

\n\n

1893年8月17日

\n\n

1877年8月17日

\n\n

1850年8月17日

\n\n

1807年8月17日

\n\n

1786年8月17日

\n\n

1740年8月17日

\n\n

1648年8月17日

\n\n

1601年8月17日

\n\n

1307年8月17日

\n\n","categories":[{"name":"历史上的今天","slug":"历史上的今天","permalink":"https://hexo.huangge1199.cn/categories/%E5%8E%86%E5%8F%B2%E4%B8%8A%E7%9A%84%E4%BB%8A%E5%A4%A9/"}],"tags":[{"name":"历史上的今天","slug":"历史上的今天","permalink":"https://hexo.huangge1199.cn/tags/%E5%8E%86%E5%8F%B2%E4%B8%8A%E7%9A%84%E4%BB%8A%E5%A4%A9/"}]},{"title":"windows server下安装zookeeper和kafka集群","slug":"dpKafkaZKCluster","date":"2022-07-08T02:42:20.000Z","updated":"2022-09-22T07:39:52.882Z","comments":true,"path":"/post/dpKafkaZKCluster/","link":"","excerpt":"","content":"

安装说明

单机部署zookeeper和kafka集群,kafka使用2.8.0版本的,该版本已经将zookeeper集成在内了,因此只需要下载kafka的包即可。

\n

安装目录:C:/kafka/

\n

三个节点都在目录下,依次为kafka1、kafka2、kafka3

\n

下载

从kafka官网下载:kafka_2.13-2.8.0.tgz 下载地址

\n

下载好后,将内容解压后,依次拷贝到kafka1、kafka2、kafka3的目录下,作为集群的3个节点

\n
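如果习惯用命令行操作,解压和拷贝也可以在 PowerShell 中完成,下面是一个简单示意(文件名、目录以实际下载和规划的为准,Windows Server 2019 及以上自带 tar 命令):

\n
# 解压下载的安装包\ntar -xzf kafka_2.13-2.8.0.tgz\n# 依次拷贝为三个节点目录\nCopy-Item -Recurse .\\kafka_2.13-2.8.0 C:\\kafka\\kafka1\nCopy-Item -Recurse .\\kafka_2.13-2.8.0 C:\\kafka\\kafka2\nCopy-Item -Recurse .\\kafka_2.13-2.8.0 C:\\kafka\\kafka3
\n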

zookeeper

1、配置文件

修改zookeeper的配置文件,conf/zookeeper.properties,以节点1为例,确保有以下内容:

\n
dataDir=C:/kafka/kafka1/zkData\ndataLogDir=C:/kafka/kafka1/zkLog\nclientPort=2187\n\ntickTime=2000\ninitLimit=10\nsyncLimit=5\n\nserver.1=192.168.0.116:2887:3887\nserver.2=192.168.0.116:2888:3888\nserver.3=192.168.0.116:2889:3889
\n

三个节点的clientPort依次设置成2187、2188、2189

\n
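以节点2为例,zookeeper.properties 与节点1的差异大致只有数据目录和 clientPort(以下为示意,节点3同理改成 kafka3 和 2189,server.1~server.3 三行保持一致):

\n
# 以下为节点2的示意配置\ndataDir=C:/kafka/kafka2/zkData\ndataLogDir=C:/kafka/kafka2/zkLog\nclientPort=2188
\n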
\n

注意:

\n\n
\n

2、创建myid文件

依次在三个节点的dataDir目录下创建myid文件,内容依次填入1、2、3。

\n
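例如,可以在 cmd 命令行里直接生成这三个 myid 文件(PowerShell 下重定向默认是 UTF-16 编码,建议用 cmd 或手动新建纯文本文件;前提是对应的 zkData 目录已经创建好):

\n
echo 1 > C:\\kafka\\kafka1\\zkData\\myid\necho 2 > C:\\kafka\\kafka2\\zkData\\myid\necho 3 > C:\\kafka\\kafka3\\zkData\\myid
\n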

3、创建启动脚本

依次在三个节点kafka的目录下添加启动脚本zkStart.bat

\n
.\\bin\\windows\\zookeeper-server-start.bat .\\config\\zookeeper.properties
\n

kafka

1、修改配置文件

修改kafka的配置文件,server.properties,以节点1为例,修改内容如下:

\n
broker.id=0\nlisteners=PLAINTEXT://192.168.0.116:9097\nadvertised.listeners=PLAINTEXT://192.168.0.116:9097\nhost.name= 192.168.0.116\nport=9097\nlog.dirs=C:/kafka/kafka1/log\nzookeeper.connect=192.168.0.116:2187,192.168.0.116:2188,192.168.0.116:2189
\n
\n

注意:

\n\n
\n
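同样以节点2为例,server.properties 里需要改动的大致是下面几项(示意,节点3依次改为 broker.id=2、端口 9099、log.dirs 用 kafka3;host.name 和 zookeeper.connect 三个节点保持一致):

\n
# 以下为节点2的示意配置\nbroker.id=1\nlisteners=PLAINTEXT://192.168.0.116:9098\nadvertised.listeners=PLAINTEXT://192.168.0.116:9098\nport=9098\nlog.dirs=C:/kafka/kafka2/log
\n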

2、创建启动、停止脚本

依次在三个节点kafka的目录下添加

\n

启动脚本start.bat:

\n
.\\bin\\windows\\kafka-server-start.bat .\\config\\server.properties
\n

停止脚本stop.bat:

\n
.\\bin\\windows\\kafka-server-stop.bat
\n

测试:发送消息

1、创建主题

在kafka的bin\\windows目录下执行下面的命令

\n
kafka-topics.bat --create --zookeeper IP:2187 --replication-factor 3 --partitions 1 --topic test7
\n

2、创建生产者

在kafka的bin\\windows目录下执行下面的命令

\n
kafka-console-producer.bat --broker-list 192.168.0.116:9097,192.168.0.116:9098,192.168.0.116:9099 --topic test7
\n

\"\"

\n

3、创建消费者

在kafka的bin\\windows目录下执行下面的命令

\n
kafka-console-consumer.bat --bootstrap-server 192.168.0.116:9097 --topic test7 --from-beginning
\n

\"\"

\n

4、生产者发送消息,消费者接收消息

生产者随意输入内容,消费者显示内容

\n

生产者:

\n

\"\"

\n

消费者:

\n

\"\"

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"deepin下安装docker-compose","slug":"inDCByOsDeepin","date":"2022-07-05T13:57:49.000Z","updated":"2022-09-22T07:39:52.923Z","comments":true,"path":"/post/inDCByOsDeepin/","link":"","excerpt":"","content":"

下载文件

sudo wget -c -t 0 https://github.com/docker/compose/releases/download/1.26.0/docker-compose-`uname -s`-`uname -m` -O /usr/local/bin/docker-compose
\n

\"\"

\n

添加执行权限

sudo chmod a+rx /usr/local/bin/docker-compose
\n

\"\"

\n

验证是否安装成功

docker-compose -v
\n

\"\"

\n

卸载

sudo rm /usr/local/bin/docker-compose
\n","categories":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/categories/docker/"},{"name":"deepin","slug":"docker/deepin","permalink":"https://hexo.huangge1199.cn/categories/docker/deepin/"}],"tags":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/tags/docker/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"}]},{"title":"Jenkins通过kubernetes plugin连接K8s集群","slug":"bindK8sToJenkins","date":"2022-06-28T14:21:47.000Z","updated":"2022-09-22T07:39:52.766Z","comments":true,"path":"/post/bindK8sToJenkins/","link":"","excerpt":"","content":"

一、Jenkins安装kubernetes plugin插件

1.1 点击左侧系统管理

\"\"

\n

1.2 点击插件管理

\"\"

\n

1.3 安装插件Kubernetes plugin

\"\"

\n

1.4 安装好后重启Jenkins

浏览器输入http://192.168.0.196:8080/restart,页面点击“是”重启Jenkins

\n

\"\"

\n

二、进入配置页

2.1 左侧点击系统管理

\"\"

\n

2.2 点击节点管理

\"\"

\n

2.3 点击Configure Clouds

\"\"

\n

三、配置

3.1 下拉框选择Kubernetes

\"\"

\n

3.2 点击Kubernetes Cloud details…进入配置详情页

\"\"

\n

3.3 填入认证信息

需要填写红框内的4个内容

\n

\"\"

\n

Kubernetes 地址

这个通过命令行 查看

\n
kubectl cluster-info
\n

\"\"

\n

红框内的就是地址

\n

Kubernetes 服务证书 key

为 /root/.kube/config 中的 certificate-authority-data 部分,该值本身是 base64 编码的,需要先解码再填入

\n

终端输入下面的命令查看certificate-authority-data:

\n
cat .kube/config
\n

\"\"

\n

再执行下面的命令进行 base64 解码:

\n
echo "certificate-authority-data冒号后面的内容" | base64 -d
\n

\"\"

\n

红框的内容填入“Kubernetes 服务证书 key”中

\n

Kubernetes 命名空间

使用default默认就好

\n

凭据

这个地方需要添加一个凭据

\n

\"\"

\n

在弹出的页面中类型选Secret text

\n

\"\"

\n

下面的Secret通过终端添加:

\n\n
kubectl create sa jenkins
\n

\"\"

\n

获取token名

\n
kubectl describe sa jenkins
\n

\"\"

\n

获取token值

\n
kubectl describe secrets jenkins-token-szvg9 -n default
\n

\"\"

\n

上图中的token即为Secret填入的内容

\n

最后的描述可以随意填写

\n

\"\"

\n

点击添加,凭据就好了

\n

四、验证

点击连接测试,左侧显示k8s集群版本

\n

\"\"

\n

下面把Jenkins地址填上,再点击保存按钮就完成了

\n

\"\"

\n","categories":[{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/categories/%E4%BA%91%E5%8E%9F%E7%94%9F/"}],"tags":[{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/tags/%E4%BA%91%E5%8E%9F%E7%94%9F/"}]},{"title":"helm不需要证书安装rancher","slug":"inRancherByHelmNoCert","date":"2022-06-26T03:16:37.000Z","updated":"2022-09-22T07:39:53.027Z","comments":true,"path":"/post/inRancherByHelmNoCert/","link":"","excerpt":"","content":"

前置

安装好k8s和helm

\n

\"\"

\n

安装命令

helm install rancher rancher-stable/rancher \\\n  --namespace cattle-system \\\n  --set hostname=rancher.my.org \\\n  --set replicas=1 \\\n  --set ingress.tls.source=secret
\n

\"\"

\n

设置域名映射

sudo vi /etc/hosts\n\n# 添加域名映射 \n127.0.0.1 rancher.my.org\n\n# cat /etc/hosts
\n

\"\"

\n

确认安装完成

kubectl -n cattle-system get deploy rancher
\n

\"\"

\n

浏览器访问

浏览器输入 https://rancher.my.org/

\n

高级—》继续访问

\n

\"\"

\n

密码查看

终端输入,查看密码

\n
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{"\\n"}}'
\n

\"\"

\n

设置自己好记的密码

输入密码进入后,选择Set a specific password to use,然后下方设置自己的密码

\n

\"\"

\n

进入的页面

\"\"

\n

至此,rancher部署完成

\n","categories":[{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/categories/%E4%BA%91%E5%8E%9F%E7%94%9F/"}],"tags":[{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/tags/%E4%BA%91%E5%8E%9F%E7%94%9F/"},{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"deepin主机下通过Kubeadm方式安装K8S","slug":"inK8sByKubeadmByDeepin","date":"2022-06-24T13:39:51.000Z","updated":"2022-09-22T07:39:52.955Z","comments":true,"path":"/post/inK8sByKubeadmByDeepin/","link":"","excerpt":"","content":"

1、关闭swap

依次执行下面的命令:

\n
# 查看分区的使用状态\nfree -mh\n# 禁用swap分区\nsudo swapoff -a\n# 查看分区的使用状态\nfree -mh
\n

\"\"

\n

2、添加k8s源

编辑文件/etc/apt/sources.list.d/kubernetes.list

\n
sudo vi /etc/apt/sources.list.d/kubernetes.list
\n

插入以下内容:

\n
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
\n

再执行命令查看:

\n
cat /etc/apt/sources.list.d/kubernetes.list
\n

\"\"

\n

3、导入k8s密钥

执行命令:

\n
sudo curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
\n

\"\"

\n

4、更新并安装kubeadm, kubelet 和 kubectl

执行命令:

\n
sudo apt-get update\nsudo apt-get install kubelet kubeadm kubectl
\n

5、设置阿里云镜像加速

编辑文件/etc/docker/daemon.json:

\n
sudo vi /etc/docker/daemon.json
\n

修改成如下内容:

\n
{\n    "registry-mirrors": [\n            "https://{阿里云分配的地址}.mirror.aliyuncs.com",\n            "https://registry-1.docker.io/v2/"\n    ]\n}
\n

再执行命令查看:

\n
cat /etc/docker/daemon.json
\n

\"\"

\n

6、拉取镜像

从阿里云拉取镜像并转换tag,执行命令如下:

\n
for  i  in  `kubeadm config images list`;  do\n    imageName=${i#k8s.gcr.io/}\n    docker pull registry.aliyuncs.com/google_containers/$imageName\n    docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName\n    docker rmi registry.aliyuncs.com/google_containers/$imageName\ndone;
\n

如果有拉取不下来的,可以再上网找找镜像然后转换tag,或者直接执行下面的命令用docker官方镜像拉取,但是官方镜像拉取速度可能会很慢

\n
for  i  in  `kubeadm config images list`;  do\n    docker pull $i\ndone;
\n

7、kubeadm初始化

执行命令:

\n
kubeadm init --pod-network-cidr=10.244.0.0/16
\n

\"\"

\n

8、执行提示的命令

mkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config
\n

\"\"

\n

9、安装网络插件

执行命令:

\n
kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
\n

\"\"

\n

10、安装Ingress

执行命令:

\n
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml
\n

查看命令:

\n
kubectl get pods --all-namespaces
\n

\"\"

\n","categories":[{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/categories/%E4%BA%91%E5%8E%9F%E7%94%9F/"},{"name":"deepin","slug":"云原生/deepin","permalink":"https://hexo.huangge1199.cn/categories/%E4%BA%91%E5%8E%9F%E7%94%9F/deepin/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"Helm安装Rancher","slug":"inRancherByHelm","date":"2022-06-18T02:44:37.000Z","updated":"2022-09-22T07:39:53.009Z","comments":true,"path":"/post/inRancherByHelm/","link":"","excerpt":"","content":"

前置

本人是直接在 deepin 系统上用 rke 安装的 k8s 集群,但是只有一个节点,rke 是 1.3.10 版本,安装好的 k8s 是 1.22.9 版本

\n

前提条件 — helm安装

安照官网说明安装就可以:官网安装步骤

\n

简单说明:

\n

我这边是二进制形式安装的

\n\n
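二进制方式大致就是下载、解压、把 helm 可执行文件放进 PATH,下面是一个简单示意(版本号仅作示意,以官网发布的版本为准):

\n
# 下载并解压 helm 二进制包\nwget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz\ntar -zxvf helm-v3.9.0-linux-amd64.tar.gz\n# 放入 PATH 并确认版本\nsudo mv linux-amd64/helm /usr/local/bin/helm\nhelm version
\n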

1、安装证书管理

\n

这里选用 Rancher 生成的 TLS 证书,因此需要 cert-manager

\n
\n

1.1 添加配置

执行命令:

\n
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.1/cert-manager.crds.yaml
\n

\"\"

\n

1.2 添加 Jetstack Helm 仓库

执行命令:

\n
helm repo add jetstack https://charts.jetstack.io
\n

\"\"

\n

1.3 更新本地 Helm chart 仓库缓存

执行命令:

\n
helm repo update
\n

\"\"

\n

1.4 安装 cert-manager Helm chart

执行命令:

\n
helm install cert-manager jetstack/cert-manager \\\n  --namespace cert-manager \\\n  --create-namespace \\\n  --version v1.5.1
\n

\"\"

\n

如果报错内容如下:

\n

\"\"

\n

可做如下操作:

\n
# 列出空间列表\nhelm ls --all-namespaces\n# 删除\nkubectl delete namespace cert-manager\n# 列出空间列表\nhelm ls --all-namespaces
\n

\"\"

\n

然后在重新执行命令即可

\n

1.5 确认安装成功

执行命令:

\n
kubectl get pods --namespace cert-manager
\n

\"\"

\n

2、安装ingress-nginx

2.1 添加ingress-nginx repo

执行命令:

\n
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
\n

\"\"

\n

2.2 安装

执行命令:

\n
helm install ingress-nginx ingress-nginx/ingress-nginx -n kube-system
\n

3、安装rancher

3.1 添加rancher repo

执行命令:

\n
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
\n

3.2 查看列表

执行命令:

\n
helm repo list
\n

\"\"

\n

3.3 安装

helm install rancher rancher-stable/rancher \\\n>   --namespace cattle-system \\\n>   --create-namespace \\\n>   --set hostname=rancher.my.org \\\n>   --no-hooks \\\n>   --version 2.6.5
\n

\"\"

\n
\n

配置的hostname=rancher.my.org,这个域名需要添加到 /etc/hosts

\n
\n
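例如,单机环境下可以在 /etc/hosts 里追加一行类似下面的映射(IP 以实际访问 Rancher 的地址为准):

\n
# Rancher 域名映射(IP 按实际情况填写)\n127.0.0.1 rancher.my.org
\n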

\"\"

\n

3.4 运行

kubectl -n cattle-system rollout status deploy/rancher
\n

\"\"

\n

3.5 查看 Rancher 运行状态

kubectl -n cattle-system get deploy rancher
\n

\"\"

\n

至此,Rancher部署完成

\n

3.6 浏览器查看

https://rancher.my.org/ ,进入后简单配置下就可以了

\n

默认密码在终端输入下面的命令,显示的就是默认密码,之后可以修改成自己好记的密码

\n
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{"\\n"}}'
\n

进入后的样子:

\n

\"\"

\n","categories":[{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/categories/%E4%BA%91%E5%8E%9F%E7%94%9F/"}],"tags":[{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/tags/%E4%BA%91%E5%8E%9F%E7%94%9F/"},{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"deepin下安装hexo","slug":"inHexoByOsDeepin","date":"2022-06-03T06:51:52.000Z","updated":"2022-11-24T07:09:04.666Z","comments":true,"path":"/post/inHexoByOsDeepin/","link":"","excerpt":"","content":"

1、前置条件

安装好nodejs

\n

参考:deepin下安装nodejs

\n

2、全局安装Hexo

执行命令:

\n
npm install -g hexo-cli
\n

\"\"

\n

3、创建软链接

# 创建hexo软链接\nsudo ln -s /home/deepin/app/node/bin/hexo /usr/local/bin/\n# 查看软链接列表\nsudo ls -l /usr/local/bin/
\n

\"\"

\n

4、查看Hexo版本

hexo -v
\n

\"\"

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"}]},{"title":"deepin下安装nodejs","slug":"inNodejsByOsDeepin","date":"2022-06-03T06:51:52.000Z","updated":"2022-09-22T07:39:52.980Z","comments":true,"path":"/post/inNodejsByOsDeepin/","link":"","excerpt":"","content":"

1、下载安装包

官网地址

\n

\"image-20220603110634896\"

\n
\n

注意版本

\n
\n

2、解压安装包

执行命令:

\n
# 解压命令\ntar -xf node-v16.15.1-linux-x64.tar.xz\n# 查看列表\nls -l
\n

\"image-20220603111656347\"

\n

3、移动文件

这步是为了方便找到自己安装的软件,可做可不做

\n

我这边是统一移动到用户的app目录下

\n
# 移动文件\nmv node-v16.15.1-linux-x64 ../app/\n# 查看列表\nls -l\n# 切换目录\ncd ../app/\n# 查看列表\nls -l\n# 更改名称(目录名过长)\nmv node-v16.15.1-linux-x64 node\n# 查看列表\nls -l
\n

\"image-20220603112001543\"

\n

\"image-20220603112153367\"

\n

4、创建软链接

# 创建node软链接\nsudo ln -s /home/deepin/app/node/bin/node /usr/local/bin/\n# 创建npm软链接\nsudo ln -s /home/deepin/app/node/bin/npm  /usr/local/bin/\n# 确认软链接建立好了\nsudo ls -l /usr/local/bin/
\n
\n

此处涉及到权限问题,因此命令前要加sudo

\n
\n

\"image-20220603112904096\"

\n

5、确认node和npm版本

node -v\nnpm -v
\n

\"image-20220603113034176\"

\n

6、设置镜像

设置国内淘宝的镜像,提高npm的下载速度

\n
# 查看npm配置列表\nnpm config list\n# 执行如下命令设置成国内的镜像\nnpm config set registry https://registry.npm.taobao.org\n# 查看npm配置列表\nnpm config list
\n

\"image-20220603113454241\"

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"}]},{"title":"deepin下安装nvm","slug":"inNvmByOsDeepin","date":"2022-06-03T06:51:52.000Z","updated":"2023-02-15T06:15:21.788Z","comments":true,"path":"/post/inNvmByOsDeepin/","link":"","excerpt":"","content":"

1、下载安装包

wget https://github.com/nvm-sh/nvm/archive/refs/tags/v0.39.2.tar.gz -O nvm-0.39.2.tar.gz
\n

\"image-20221209002248521\"

\n

2、解压

tar -zxvf nvm-0.39.2.tar.gz
\n

3、安装

# 切换目录\ncd nvm-0.39.2/\n# 执行安装命令\n./install.sh\n# 查看nvm版本,检查是否安装成功\nnvm -v
\n

\"image-20221209004412627\"

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"}]},{"title":"deepin下安装vue","slug":"inVueByOsDeepin","date":"2022-06-03T06:51:52.000Z","updated":"2022-09-22T07:39:53.067Z","comments":true,"path":"/post/inVueByOsDeepin/","link":"","excerpt":"","content":"

1、前置条件

安装好nodejs

\n

参考:deepin下安装nodejs

\n

2、全局安装Vue

执行命令:

\n
# 下面两个版本的二选一哦\nnpm install -g @vue/cli\t\t//vue3.0\nnpm install -g vue-cli\t\t//vue2.0
\n

我这边安装的是3.0版本的

\n

\"image-20220603114033774\"

\n

3、全局安装webpack

npm install -g webpack
\n

\"image-20220603114219499\"

\n

4、创建软链接

# 创建Vue软链接\nsudo ln -s /home/deepin/app/node/bin/vue /usr/local/bin/\n# 创建webpack软链接\nsudo ln -s /home/deepin/app/node/bin/webpack /usr/local/bin/\n# 查看软链接列表\nsudo ls -l /usr/local/bin/
\n

\"image-20220603114513508\"

\n

5、查看Vue版本

vue --version
\n

\"image-20220603114616673\"

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"}]},{"title":"deepin下安装Maven","slug":"inMavenByOsDeepin","date":"2022-06-03T06:46:52.000Z","updated":"2022-09-22T07:39:52.966Z","comments":true,"path":"/post/inMavenByOsDeepin/","link":"","excerpt":"","content":"

1、前置条件

安装好jdk

\n

参考:deepin下安装jdk

\n

2、下载安装包

官网地址:maven下载页面

\n

\"image-20220603115702494\"

\n

我这边下载的是3.8.5版本的,如果下载其他版本,用下面的链接:

\n

其他版本maven

\n

\"image-20220603120000596\"

\n

3、解压

tar -xf apache-maven-3.8.5-bin.tar.gz\nls -l
\n

\"image-20220603120156243\"

\n

4、移动

mv apache-maven-3.8.5 ../app/\nls -l\nls -l ../app/
\n

\"image-20220603120347751\"

\n

5、配置环境变量

sudo vi /etc/profile
\n

文件最下面加入下面的内容

\n
# configuration maven development enviroument\nexport MAVEN_HOME=/home/deepin/app/apache-maven-3.8.5\nexport PATH=$PATH:$MAVEN_HOME/bin
\n

\"image-20220603121051155\"

\n

执行命令让配置文件生效:

\n
source /etc/profile
\n

\"image-20220603121153205\"

\n

6、验证

查看maven版本做验证:

\n
mvn -v
\n

\"image-20220603121458972\"

\n

7、配置仓库文件目录

vi /home/deepin/app/apache-maven-3.8.5/conf/settings.xml
\n

找到localRepository,在下方加入下面的内容:

\n
<localRepository>/home/deepin/repo</localRepository>
\n

红框内容为新加入的

\n

\"image-20220603122200505\"

\n

8、添加阿里镜像源

vi /home/deepin/app/apache-maven-3.8.5/conf/settings.xml
\n

找到mirror添加下面的内容:

\n
<mirror>\t\n    <id>alimaven</id>\n    <mirrorOf>*</mirrorOf>\n    <name>aliyun maven</name>\n    <url>https://maven.aliyun.com/repository/public</url>\n</mirror>
\n

下面红框内容为添加的

\n

\"image-20220603122859128\"

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"}]},{"title":"nvm安装nodejs","slug":"inNodejsByNvm","date":"2022-06-03T06:46:52.000Z","updated":"2023-03-15T06:20:21.727Z","comments":true,"path":"/post/inNodejsByNvm/","link":"","excerpt":"","content":"

nvm下载

nvm的GitHub下载地址

\n

\"\"

\n

进入后下载

\n

\"\"

\n

nvm安装

下载后双击exe文件进行安装,同意后点击next

\n

\"\"

\n

设置安装目录

\n

\"\"

\n

设置nodejs目录,最好不要带空格

\n

\"\"

\n

点击install安装

\n

\"\"

\n

点击Finish安装完成

\n

nvm添加淘宝镜像

打开nvm安装目录下的settings.txt文件,添加淘宝镜像地址,红框内为新增的

\n
node_mirror: https://npm.taobao.org/mirrors/node/\nnpm_mirror: https://npm.taobao.org/mirrors/npm/
\n

\"\"

\n

nvm设置环境变量

此电脑右键点击属性

\n

\"\"

\n

点击高级系统设置

\n

\"\"

\n

点击环境变量

\n

\"\"

\n

确认环境变量中有NVM_HOME和NVM_SYMLINK

\n

\"\"

\n

确认

\n

管理员身份运行cmd,执行查看的命令确认安装成功

\n
nvm -v
\n

\"\"

\n

node安装

执行命令列出有效可下载的node版本

\n
nvm list available
\n

\"\"

\n

执行安装命令安装指定版本的node

\n
nvm install <version>
\n

\"\"

\n

执行命令查看已安装的node版本

\n
nvm list
\n

\"\"

\n

执行命令使用某个版本的node

\n
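命令格式与安装时类似,version 换成上一步已安装的版本号:

\n
nvm use <version>
\n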

\"\"

\n

执行命令查看node版本

\n
node -v
\n

\"\"

\n

node环境变量

新建目录node_global、node_cache,可以建在nodejs目录下

\n

\"\"

\n

新建环境变量,变量名:NODE_PATH,变量值:node_global路径\\node_modules

\n

\"\"

\n

在Path环境变量中增加node_global路径

\n

\"\"

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"}]},{"title":"deepin下安装jdk","slug":"inJdkByOsDeepin","date":"2022-06-03T06:43:52.000Z","updated":"2022-09-22T07:39:52.933Z","comments":true,"path":"/post/inJdkByOsDeepin/","link":"","excerpt":"","content":"

1、jdk 下载

官网下载地址如下:

\n

\"image-20220603015920009\"

\n
\n

注意区分是哪个版本的

\n
\n

2、安装deb包

终端进入到deb文件所在目录,执行安装命令:

\n
sudo dpkg -i jdk-11.0.15.1_linux-x64_bin.deb
\n

\"image-20220603020439325\"

\n

3、配置环境变量

终端执行命令:

\n
sudo vi /etc/profile
\n

然后输入密码,在文件的最后加上下面的内容

\n
#configuration java development enviroument\nexport JAVA_HOME=/usr/lib/jvm/jdk-11\nexport PATH=$JAVA_HOME/bin:$PATH \nexport CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar 
\n

4、使环境变量生效

执行命令:

\n
source /etc/profile
\n

\"image-20220603021925685\"

\n

5、检查是否成功

执行命令:

\n
java -version
\n

\"image-20220603022022217\"

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"}]},{"title":"deepin下安装git","slug":"inGitByOsDeepin","date":"2022-06-03T06:30:52.000Z","updated":"2022-09-22T07:39:52.929Z","comments":true,"path":"/post/inGitByOsDeepin/","link":"","excerpt":"","content":"
\n

个人建议直接终端安装,下面的安装也是终端命令行安装的

\n
\n

1、安装git

执行安装命令:

\n
sudo apt-get install git
\n

\"image-20220603103757605\"

\n

2、确认git安装成功

执行查看git版本的命令,以此确认安装成功

\n
git --version
\n

\"image-20220603103959058\"

\n

3、配置git全局用户名和邮箱

配置全局用户名:

\n
git config --global user.name "用户名"
\n

\"image-20220603104225094\"

\n

配置全局邮箱:

\n
git config --global user.email "邮箱"
\n

\"image-20220603104335664\"

\n

4、确认配置结果

查看配置信息确认

\n
git config --list
\n

\"image-20220603104447770\"

\n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"}]},{"title":"docker构建自定义镜像","slug":"createMyImage","date":"2022-05-31T07:07:55.000Z","updated":"2022-09-22T07:39:52.812Z","comments":true,"path":"/post/createMyImage/","link":"","excerpt":"","content":"

1、编写Dockerfile

Dockerfile

\n
FROM nginx\nRUN apt update && apt install -y vim
\n

2、构建镜像

执行命令:

\n
docker build -t vim-nginx:1 .
\n
\n

注:要在Dockerfile所在目录下执行

\n
\n

这步时间较长,多等一会儿,出现下面红框的内容表示镜像构建成功

\n

\"\"

\n

完成后,执行命令确认镜像生成:

\n
docker images
\n

\"\"

\n

3、测试镜像

启动容器:

\n
docker run -d --name new-nginx vim-nginx:1\ndocker ps -a
\n

下面红框内是执行过程,中间的部分我命令敲错了,忽略掉

\n

\"\"

\n

进入容器使用vim命令:

\n
docker exec -it new-nginx bash\nvim 123.txt\nexit
\n

\"\"

\n

停止容器:

\n
docker stop new-nginx\ndocker ps -a
\n

\"\"

\n

删除容器:

\n
docker rm new-nginx\ndocker ps -a
\n

\"\"

\n

4、docker登录

执行命令:

\n
docker login
\n

然后输入用户名和密码

\n
\n

注:用户名不是登录的邮箱

\n
\n

\"\"

\n

5、镜像修改

用 docker tag 命令把镜像改成带仓库用户名的规范名称:

\n
docker tag vim-nginx:1 huangge1199/vim-nginx:1\ndocker images
\n

\"\"

\n

6、推送镜像

docker push huangge1199/vim-nginx:1
\n

\"\"

\n

网页进入自己的docker仓库:

\n

\"\"

\n

7、删除本地镜像

docker rmi huangge1199/vim-nginx:1\ndocker images
\n

\"\"

\n

8、拉取镜像

docker pull huangge1199/vim-nginx:1\ndocker images
\n

\"\"

\n

9、重复3的步骤测试镜像

\n

注意步骤3和现在的镜像名可能不同,记得替换

\n
\n","categories":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/categories/docker/"}],"tags":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/tags/docker/"}]},{"title":"通过Kubeadm方式安装K8S","slug":"inK8sByKubeadm","date":"2022-05-31T06:10:52.000Z","updated":"2022-09-22T07:39:52.940Z","comments":true,"path":"/post/inK8sByKubeadm/","link":"","excerpt":"","content":"

前言

根据前几次的经验,这一次,运用脚本的形式安装,可以节约大部分的步骤,把一些前置的配置什么的写到shell脚本里面,随着vagrant up启动命令一起安装

\n

集群环境:

\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
IP内存CPU核数
master172.17.8.514G2
node172.17.8.524G1
node172.17.8.534G1
\n
\n

1、编写Vagrantfile文件

Vagrantfile内容:

\n
# -*- mode: ruby -*-\n# vi: set ft=ruby :\n# on win10, you need `vagrant plugin install vagrant-vbguest --plugin-version 0.21` and change synced_folder.type="virtualbox"\n# reference `https://www.dissmeyer.com/2020/02/11/issue-with-centos-7-vagrant-boxes-on-windows-10/`\n\n\nVagrant.configure("2") do |config|\n  config.vm.box_check_update = false\n  config.vm.provider 'virtualbox' do |vb|\n  vb.customize [ "guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold", 1000 ]\n  end  \n  $num_instances = 3\n  # curl https://discovery.etcd.io/new?size=3\n  (1..$num_instances).each do |i|\n    config.vm.define "node#{i}" do |node|\n      node.vm.box = "centos/7"\n      node.vm.hostname = "node#{i}"\n      ip = "172.17.8.#{i+50}"\n      node.vm.network "private_network", ip: ip\n      node.vm.provider "virtualbox" do |vb|\n        vb.memory = "4096"\n        if i==1 then\n            vb.cpus = 2\n        else\n            vb.cpus = 1\n        end\n        vb.name = "node#{i+50}"\n      end\n    end\n  end\n  config.vm.provision "shell", privileged: true, path: "./setup.sh"\nend
\n

2、编写启动后的脚本

setup.sh内容:

\n
#/bin/sh\n\n# 安装docker相关依赖\nsudo yum install -y yum-utils\n\n# 添加阿里源\nsudo yum-config-manager \\\n    --add-repo \\\n    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo\nsudo sed -i 's/gpgcheck=1/gpgcheck=0/g' /etc/yum.repos.d/docker-ce.repo \n# 安装docker\nyes y|sudo yum install docker-ce docker-ce-cli containerd.io\n\n# 启动docker\nsudo systemctl start docker\ny\ny\n\n# 更改cgroup driver以及docker镜像仓库源\nsudo bash -c 'cat > /etc/docker/daemon.json <<EOF\n{\n  "exec-opts": ["native.cgroupdriver=systemd"],\n  "log-driver": "json-file",\n  "log-opts": {\n    "max-size": "100m"\n  },\n  "storage-driver": "overlay2",\n  "storage-opts": [\n    "overlay2.override_kernel_check=true"\n  ],\n  "registry-mirrors": [\n    "https://registry.docker-cn.com",\n    "http://hub-mirror.c.163.com",\n    "https://w5a7th34.mirror.aliyuncs.com",\n    "http://f1361db2.m.daocloud.io",\n    "https://mirror.ccs.tencentyun.com"\n  ]\n}\nEOF'\n\n# 添加docker组\nif [ ! $(getent group docker) ];\nthen \n    sudo groupadd docker;\nelse\n    echo "docker user group already exists"\nfi\n\nsudo gpasswd -a $USER docker\n\n# 加载、重启docker\nsudo systemctl  daemon-reload\nsudo systemctl restart docker\n\nsudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config\nsudo systemctl restart sshd\n\nsudo bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo\n[kubernetes]\nname=Kubernetes\nbaseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64\nenabled=1\ngpgcheck=0\nrepo_gpgcheck=0\ngpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg\nhttps://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg\nEOF'\n\nsudo setenforce 0\n\n# 安装kubeadm, kubectl, kubelet\nsudo yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6 --disableexcludes=kubernetes\n\n# 设置docker和kubelet开机自启并启动\nsudo systemctl enable docker && systemctl start docker\nsudo systemctl enable kubelet && systemctl start kubelet\n\n# 设置网络桥接\nsudo bash -c 'cat <<EOF >  /etc/sysctl.d/k8s.conf\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward=1\nEOF'\nsudo sysctl --system\n\n# 关闭防火墙\nsudo systemctl stop firewalld\nsudo systemctl disable firewalld\n\n# 关闭swap\nsudo swapoff -a\n\n# 设置开机自启\nsudo systemctl enable docker.service\nsudo systemctl enable kubelet.service
\n

3、启动

注:Vagrantfile和setup.sh放在同一目录,并且在该目录下执行启动命令:

\n
vagrant up
\n

由于在启动中加入了脚本,此次启动执行的内容多,时间要比以往长些

\n

4、通过远程连接工具连接

目前,三台机器分别有两个用户,root和vagrant,密码全部是vagrant

\n
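例如,可以直接用 ssh 连接主节点(以 node1 为例,输入密码 vagrant 即可):

\n
# 连接主节点 node1,密码为 vagrant\nssh vagrant@172.17.8.51
\n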

5、部署主节点

注:这步时间较长,耐心等待

\n
kubeadm init \\\n--apiserver-advertise-address=172.17.8.51 \\\n--image-repository registry.aliyuncs.com/google_containers \\\n--kubernetes-version v1.23.6 \\\n--service-cidr=10.96.0.0/12 \\\n--pod-network-cidr=10.244.0.0/16
\n

出现下面的内容,主节点部署完成

\n

\"\"

\n

6、使用 kubectl 工具

执行命令:

\n
mkdir -p $HOME/.kube\nsudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\nsudo chown $(id -u):$(id -g) $HOME/.kube/config
\n

7、安装 Pod 网络插件(CNI)

执行命令:

\n
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
\n

注:如果连接不上,可以在当前目录新建文件kube-flannel.yml替换掉文件

\n

kube-flannel.yml内容:

\n
---\napiVersion: policy/v1beta1\nkind: PodSecurityPolicy\nmetadata:\n  name: psp.flannel.unprivileged\n  annotations:\n    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default\n    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default\n    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default\n    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default\nspec:\n  privileged: false\n  volumes:\n  - configMap\n  - secret\n  - emptyDir\n  - hostPath\n  allowedHostPaths:\n  - pathPrefix: "/etc/cni/net.d"\n  - pathPrefix: "/etc/kube-flannel"\n  - pathPrefix: "/run/flannel"\n  readOnlyRootFilesystem: false\n  # Users and groups\n  runAsUser:\n    rule: RunAsAny\n  supplementalGroups:\n    rule: RunAsAny\n  fsGroup:\n    rule: RunAsAny\n  # Privilege Escalation\n  allowPrivilegeEscalation: false\n  defaultAllowPrivilegeEscalation: false\n  # Capabilities\n  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']\n  defaultAddCapabilities: []\n  requiredDropCapabilities: []\n  # Host namespaces\n  hostPID: false\n  hostIPC: false\n  hostNetwork: true\n  hostPorts:\n  - min: 0\n    max: 65535\n  # SELinux\n  seLinux:\n    # SELinux is unused in CaaSP\n    rule: 'RunAsAny'\n---\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: flannel\nrules:\n- apiGroups: ['extensions']\n  resources: ['podsecuritypolicies']\n  verbs: ['use']\n  resourceNames: ['psp.flannel.unprivileged']\n- apiGroups:\n  - ""\n  resources:\n  - pods\n  verbs:\n  - get\n- apiGroups:\n  - ""\n  resources:\n  - nodes\n  verbs:\n  - list\n  - watch\n- apiGroups:\n  - ""\n  resources:\n  - nodes/status\n  verbs:\n  - patch\n---\nkind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: flannel\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: flannel\nsubjects:\n- kind: ServiceAccount\n  name: flannel\n  namespace: kube-system\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: flannel\n  namespace: kube-system\n---\nkind: ConfigMap\napiVersion: v1\nmetadata:\n  name: kube-flannel-cfg\n  namespace: kube-system\n  labels:\n    tier: node\n    app: flannel\ndata:\n  cni-conf.json: |\n    {\n      "name": "cbr0",\n      "cniVersion": "0.3.1",\n      "plugins": [\n        {\n          "type": "flannel",\n          "delegate": {\n            "hairpinMode": true,\n            "isDefaultGateway": true\n          }\n        },\n        {\n          "type": "portmap",\n          "capabilities": {\n            "portMappings": true\n          }\n        }\n      ]\n    }\n  net-conf.json: |\n    {\n      "Network": "10.244.0.0/16",\n      "Backend": {\n        "Type": "vxlan"\n      }\n    }\n---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: kube-flannel-ds\n  namespace: kube-system\n  labels:\n    tier: node\n    app: flannel\nspec:\n  selector:\n    matchLabels:\n      app: flannel\n  template:\n    metadata:\n      labels:\n        tier: node\n        app: flannel\n    spec:\n      affinity:\n        nodeAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            nodeSelectorTerms:\n            - matchExpressions:\n              - key: kubernetes.io/os\n                operator: In\n                values:\n                - linux\n      hostNetwork: true\n      priorityClassName: system-node-critical\n      tolerations:\n      - operator: Exists\n        effect: NoSchedule\n      serviceAccountName: flannel\n      initContainers:\n      - name: 
install-cni-plugin\n       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)\n        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\n        command:\n        - cp\n        args:\n        - -f\n        - /flannel\n        - /opt/cni/bin/flannel\n        volumeMounts:\n        - name: cni-plugin\n          mountPath: /opt/cni/bin\n      - name: install-cni\n       #image: flannelcni/flannel:v0.18.0 for ppc64le and mips64le (dockerhub limitations may apply)\n        image: rancher/mirrored-flannelcni-flannel:v0.18.0\n        command:\n        - cp\n        args:\n        - -f\n        - /etc/kube-flannel/cni-conf.json\n        - /etc/cni/net.d/10-flannel.conflist\n        volumeMounts:\n        - name: cni\n          mountPath: /etc/cni/net.d\n        - name: flannel-cfg\n          mountPath: /etc/kube-flannel/\n      containers:\n      - name: kube-flannel\n       #image: flannelcni/flannel:v0.18.0 for ppc64le and mips64le (dockerhub limitations may apply)\n        image: rancher/mirrored-flannelcni-flannel:v0.18.0\n        command:\n        - /opt/bin/flanneld\n        args:\n        - --ip-masq\n        - --kube-subnet-mgr\n        resources:\n          requests:\n            cpu: "100m"\n            memory: "50Mi"\n          limits:\n            cpu: "100m"\n            memory: "50Mi"\n        securityContext:\n          privileged: false\n          capabilities:\n            add: ["NET_ADMIN", "NET_RAW"]\n        env:\n        - name: POD_NAME\n          valueFrom:\n            fieldRef:\n              fieldPath: metadata.name\n        - name: POD_NAMESPACE\n          valueFrom:\n            fieldRef:\n              fieldPath: metadata.namespace\n        - name: EVENT_QUEUE_DEPTH\n          value: "5000"\n        volumeMounts:\n        - name: run\n          mountPath: /run/flannel\n        - name: flannel-cfg\n          mountPath: /etc/kube-flannel/\n        - name: xtables-lock\n          mountPath: /run/xtables.lock\n      volumes:\n      - name: run\n        hostPath:\n          path: /run/flannel\n      - name: cni-plugin\n        hostPath:\n          path: /opt/cni/bin\n      - name: cni\n        hostPath:\n          path: /etc/cni/net.d\n      - name: flannel-cfg\n        configMap:\n          name: kube-flannel-cfg\n      - name: xtables-lock\n        hostPath:\n          path: /run/xtables.lock\n          type: FileOrCreate
\n

8、节点加入集群

在第5步kubeadm init命令输出的日志中,最后几行有需要执行的命令,那个命令拿出来直接在node2和node3上运行就可以了(token的有效期是24小时,超过了需要重新生成)

\n

当然,如果你想我一样,忘记复制了还恰好关掉了远程,那么就有两种方式可以解决

\n\n

8.1、执行下面的命令生成新的token:

\n
kubeadm token create --print-join-command
\n

\"\"

\n

这里显示的命令拿到要加入的节点(node2和node3)执行就可以加入集群中

\n

\"\"

\n

\"\"

\n

然后回到master主节点执行命令,确认加入成功:

\n
kubectl get nodes
\n

\"\"

\n

8.2、查看token命令获取

\n
kubeadm token list
\n

\"\"

\n

主节点:

\n
# 查看节点\nkubectl get nodes
\n

\"\"

\n

节点3:

\n
kubeadm join 172.17.8.51:6443 --token o15q87.xtnzlfis6gtez1x6 --discovery-token-unsafe-skip-ca-verification
\n

\"\"

\n

主节点:

\n
# 查看节点\nkubectl get nodes
\n

\"\"

\n

9、集群中移除节点

主节点执行:

\n
# 查看节点\nkubectl get nodes\n# 移除节点3\nkubectl delete node node3\n# 查看节点\nkubectl get nodes
\n

\"\"

\n

删除的节点执行:

\n
kubeadm reset\nsystemctl stop kubelet\nsystemctl stop docker\nrm -rf /var/lib/cni/\nrm -rf /var/lib/kubelet/*\nrm -rf /etc/cni/\nsystemctl start docker\nsystemctl start kubelet
\n","categories":[{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/categories/%E4%BA%91%E5%8E%9F%E7%94%9F/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"一条命令运行rancher","slug":"inReacherByDC","date":"2022-05-26T00:49:12.000Z","updated":"2022-09-22T07:39:53.041Z","comments":true,"path":"/post/inReacherByDC/","link":"","excerpt":"","content":"

1、rancher安装

控制台中rke用户下执行docker命令:

\n
docker run --name=rancher -d --privileged --restart=unless-stopped -p 30040:80 -p 30050:443 rancher/rancher:latest
\n

\"\"

\n

2、检查是否正常启动

可通过下面两个命令查看:

\n
docker ps | grep rancher           ## 查看正在运行中的docker容器
\n
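另一个是查看容器日志,确认 Rancher 已经启动完成(Ctrl+C 可退出日志跟踪):

\n
docker logs -f rancher             ## 查看rancher容器的启动日志
\n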

3、浏览器访问

输入https://IP:PORT

\n

\"\"

\n

点击高级,然后点击继续前往

\n

\"\"

\n

4、密码

根据提示,输入并修改密码

\n

\"\"

\n

\"\"

\n

浏览器输入密码后,选择红框的,并在下方输入自己想要设置的密码

\n

\"\"

\n

进入后里面有一个默认的k3s

\n

\"\"

\n

5、加入其他存在的集群

点击Import Existing

\n

\"\"

\n

选择Generic

\n

\"\"

\n

集群名字随意输入,只要你能记住

\n

\"\"

\n

根据红框的操作执行命令注册进来

\n

\"\"

\n

执行命令

\n
kubectl apply -f https://172.17.8.51:30050/v3/import/2llq4b95zbspwqlcjrb898dtwqmqgtcxtfxjdlkgp8c79jpzf8tfn6_c-m-5ffgdfz6.yaml
\n

\"\"

\n

报了认证的问题,执行第二个命令

\n
curl --insecure -sfL https://172.17.8.51:30050/v3/import/2llq4b95zbspwqlcjrb898dtwqmqgtcxtfxjdlkgp8c79jpzf8tfn6_c-m-5ffgdfz6.yaml | kubectl apply -f -
\n

\"\"

\n

我这边是运行成功了

\n","categories":[{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/categories/%E4%BA%91%E5%8E%9F%E7%94%9F/"}],"tags":[{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/tags/%E4%BA%91%E5%8E%9F%E7%94%9F/"},{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/tags/docker/"},{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"力扣675. 为高尔夫比赛砍树","slug":"day20220523","date":"2022-05-24T01:18:23.000Z","updated":"2022-09-22T07:39:52.865Z","comments":true,"path":"/post/day20220523/","link":"","excerpt":"","content":"

2022年05月24日 力扣每日一题

\n

675. 为高尔夫比赛砍树

\n

题目

你被请来给一个要举办高尔夫比赛的树林砍树。树林由一个 m x n 的矩阵表示, 在这个矩阵中:

\n\n\n\n

每一步,你都可以向上、下、左、右四个方向之一移动一个单位,如果你站的地方有一棵树,那么你可以决定是否要砍倒它。

\n\n

你需要按照树的高度从低向高砍掉所有的树,每砍过一颗树,该单元格的值变为 1(即变为地面)。

\n\n

你将从 (0, 0) 点开始工作,返回你砍完所有树需要走的最小步数。 如果你无法砍完所有的树,返回 -1

\n\n

可以保证的是,没有两棵树的高度是相同的,并且你至少需要砍倒一棵树。

\n\n

\n\n

示例 1:

\n\"\" \n
  \n输入:forest = [[1,2,3],[0,0,4],[7,6,5]]  \n输出:6  \n解释:沿着上面的路径,你可以用 6 步,按从最矮到最高的顺序砍掉这些树。
\n\n

示例 2:

\n\"\" \n
  \n输入:forest = [[1,2,3],[0,0,0],[7,6,5]]  \n输出:-1  \n解释:由于中间一行被障碍阻塞,无法访问最下面一行中的树。  \n
\n\n

示例 3:

\n\n
  \n输入:forest = [[2,3,4],[0,0,5],[8,7,6]]  \n输出:6  \n解释:可以按与示例 1 相同的路径来砍掉所有的树。  \n(0,0) 位置的树,可以直接砍去,不用算步数。  \n
\n\n

\n\n

提示:

\n\n

\n
Related Topics
  • 广度优先搜索
  • 数组
  • 矩阵
  • 堆(优先队列)

    思路

      \n
    1. 记录每棵需要砍的树的位置,并排好序

      \n

      注意:这个需要砍的树是从2开始算的,不是1

      \n
    2. 循环计算到达下一棵被砍树的步数

      \n

      可使用广度优先搜索,从出发的树开始,依次取出并将下一步能够到达的树加入到队列,直到目标树为止

      \n
    \n

    代码

    java:

    \n
    class Solution {\n    public int cutOffTree(List<List<Integer>> forest) {\n        /*\n        起始位置不可到达的情况,即坐标(0,0)位置为0\n         */\n        if (forest.get(0).get(0) == 0) {\n            return -1;\n        }\n        int xL = forest.size();\n        int yL = forest.get(0).size();\n        /*\n        按照顺序排列需要砍的树,记录每棵树的位置\n         */\n        TreeMap<Integer, Pair<Integer, Integer>> map = new TreeMap<>();\n        for (int i = 0; i < xL; i++) {\n            List<Integer> list = forest.get(i);\n            for (int j = 0; j < yL; j++) {\n                if (list.get(j) > 1) {\n                    map.put(list.get(j), new Pair<>(i, j));\n                }\n            }\n        }\n        int step = 0;\n        Pair<Integer, Integer> pair = null;\n        Queue<Pair<Integer, Integer>> queue = new LinkedList<>();\n        queue.add(new Pair<>(0, 0));\n        boolean[][] uses = new boolean[xL][yL];\n        uses[0][0] = true;\n        int[] xs = new int[]{1, -1, 0, 0};\n        int[] ys = new int[]{0, 0, 1, -1};\n        for (int key : map.keySet()) {\n            Pair<Integer, Integer> cur = map.get(key);\n            if (queue.peek().equals(cur)) {\n                continue;\n            }\n            boolean bl = false;\n            /*\n            计算到达下一棵需要砍树的步数\n             */\n            while (!queue.isEmpty() && !bl) {\n                int nums = queue.size();\n                step++;\n                for (int i = 0; i < nums && !bl; i++) {\n                    Pair<Integer, Integer> tmp = queue.poll();\n                    for (int j = 0; j < 4; j++) {\n                        int x = tmp.getKey() + xs[j];\n                        int y = tmp.getValue() + ys[j];\n                        if (x == cur.getKey() && y == cur.getValue()) {\n                            bl = true;\n                            break;\n                        }\n                        if (x < 0 || x >= xL || y < 0 || y >= yL || uses[x][y] || forest.get(x).get(y) == 0) {\n                            continue;\n                        }\n                        queue.add(new Pair<>(x, y));\n                        uses[x][y] = true;\n                    }\n                }\n            }\n            if (!bl) {\n                return -1;\n            }\n            queue = new LinkedList<>();\n            queue.add(cur);\n            uses = new boolean[xL][yL];\n            uses[cur.getKey()][cur.getValue()] = true;\n        }\n        return step;\n    }\n}
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣周赛293题解","slug":"weekly-contest-293","date":"2022-05-19T06:52:09.000Z","updated":"2023-06-20T03:16:55.431Z","comments":true,"path":"/post/weekly-contest-293/","link":"","excerpt":"","content":"

    第一题

    力扣原题链接:

    2273. 移除字母异位词后的结果数组

    \n

    单个题解:

    力扣2273. 移除字母异位词后的结果数组

    \n

    题目:

    给你一个下标从 0 开始的字符串 words ,其中 words[i] 由小写英文字符组成。

    \n\n

    在一步操作中,需要选出任一下标 i ,从 words删除 words[i] 。其中下标 i 需要同时满足下述两个条件:

    \n\n
      \n
    1. 0 < i < words.length
    2. words[i - 1] 和 words[i] 是 字母异位词 。
    \n\n

    只要可以选出满足条件的下标,就一直执行这个操作。

    \n\n

    在执行所有操作后,返回 words 。可以证明,按任意顺序为每步操作选择下标都会得到相同的结果。

    \n\n

    字母异位词 是由重新排列源单词的字母得到的一个新单词,所有源单词中的字母通常恰好只用一次。例如,\"dacb\"\"abdc\" 的一个字母异位词。

    \n\n

    \n\n

    示例 1:

    \n\n
    输入:words = [\"abba\",\"baba\",\"bbaa\",\"cd\",\"cd\"]    \n输出:[\"abba\",\"cd\"]    \n解释:    \n获取结果数组的方法之一是执行下述步骤:  - 由于 words[2] = \"bbaa\" 和 words[1] = \"baba\" 是字母异位词,选择下标 2 并删除 words[2] 。    \n  现在 words = [\"abba\",\"baba\",\"cd\",\"cd\"] 。  - 由于 words[1] = \"baba\" 和 words[0] = \"abba\" 是字母异位词,选择下标 1 并删除 words[1] 。    \n  现在 words = [\"abba\",\"cd\",\"cd\"] 。  - 由于 words[2] = \"cd\" 和 words[1] = \"cd\" 是字母异位词,选择下标 2 并删除 words[2] 。    \n  现在 words = [\"abba\",\"cd\"] 。  无法再执行任何操作,所以 [\"abba\",\"cd\"] 是最终答案。
    \n\n

    示例 2:

    \n\n
    输入:words = [\"a\",\"b\",\"c\",\"d\",\"e\"]    \n输出:[\"a\",\"b\",\"c\",\"d\",\"e\"]    \n解释:    \nwords 中不存在互为字母异位词的两个相邻字符串,所以无需执行任何操作。
    \n\n

    \n\n

    提示:

    \n\n \n
    Related Topics
  • 数组
  • 哈希表
  • 字符串
  • 排序
    思路:

    遍历字符串数组,分别将每一个字符串转换成字符数组,对字符数组排序,如果排序后转成的字符串一样,则说明是字母异位词

    代码:

    java:
    class Solution {\n    public List<String> removeAnagrams(String[] words) {\n        char[] strs = words[0].toCharArray();\n        Arrays.sort(strs);\n        List<String> list = new ArrayList<>();\n        int index = 0;\n        for (int i = 1; i < words.length; i++) {\n            char[] strs1 = words[i].toCharArray();\n            Arrays.sort(strs1);\n            if (!String.valueOf(strs).equals(String.valueOf(strs1))) {\n                list.add(words[index]);\n                strs = strs1;\n                index = i;\n            }\n        }\n        list.add(words[index]);\n        return list;\n    }\n}
    第二题

    力扣原题链接:

    2274. 不含特殊楼层的最大连续楼层数(https://leetcode.cn/problems/maximum-consecutive-floors-without-special-floors/)

    单个题解:

    力扣2274. 不含特殊楼层的最大连续楼层数(http://192.168.0.198:5080/post/maximum-consecutive-floors-without-special-floors/)

    题目:

    Alice 管理着一家公司,并租用大楼的部分楼层作为办公空间。Alice 决定将一些楼层作为 特殊楼层 ,仅用于放松。

    \n\n

    给你两个整数 bottomtop ,表示 Alice 租用了从 bottomtop(含 bottomtop 在内)的所有楼层。另给你一个整数数组 special ,其中 special[i] 表示 Alice 指定用于放松的特殊楼层。

    \n\n

    返回不含特殊楼层的 最大 连续楼层数。

    \n\n

    \n\n

    示例 1:

    \n\n
      \n输入:bottom = 2, top = 9, special = [4,6]  \n输出:3  \n解释:下面列出的是不含特殊楼层的连续楼层范围:  \n- (2, 3) ,楼层数为 2 。  \n- (5, 5) ,楼层数为 1 。  \n- (7, 9) ,楼层数为 3 。  \n因此,返回最大连续楼层数 3 。  \n
    \n\n

    示例 2:

    \n\n
      \n输入:bottom = 6, top = 8, special = [7,6,8]  \n输出:0  \n解释:每层楼都被规划为特殊楼层,所以返回 0 。  \n
    \n\n

    \n\n

    提示

    \n\n \n
    Related Topics
  • 数组
  • 排序
    思路:

    这题相当于在bottom到top的范围内,被special的数分割了,我们需要找到分割后最长的一段

    步骤:

    1. 为了保证数据按顺序处理,先对special进行排序
    2. 遍历special对bottom~top进行分割,当bottom<=special[i]时,连续楼层数为special[i]-bottom,与之前的最大连续层数对比,得到当前的最大连续层数,同时更新bottom = special[i] + 1
    3. 遍历完,还有最后一段的连续层数top - special[special.length - 1]
    4. 至此,不包含特殊层的最大的连续层数就出来了

    代码:

    java:
    class Solution {\n    public int maxConsecutive(int bottom, int top, int[] special) {\n        Arrays.sort(special);\n        int max = 0;\n        for (int j : special) {\n            if (bottom <= j) {\n                max = Math.max(max, j - bottom);\n                bottom = j + 1;\n            }\n        }\n        max = Math.max(max, top - special[special.length - 1]);\n        return max;\n    }\n}
    第三题

    力扣原题链接:

    2275. 按位与结果大于零的最长组合(https://leetcode.cn/problems/largest-combination-with-bitwise-and-greater-than-zero/)

    单个题解:

    力扣2275. 按位与结果大于零的最长组合(http://192.168.0.198:5080/post/largest-combination-with-bitwise-and-greater-than-zero/)

    题目:

    对数组 nums 执行 按位与 相当于对数组 nums 中的所有整数执行 按位与

    \n\n\n\n

    给你一个正整数数组 candidates 。计算 candidates 中的数字每种组合下 按位与 的结果。 candidates 中的每个数字在每种组合中只能使用 一次

    \n\n

    返回按位与结果大于 0最长 组合的长度

    \n\n

    \n\n

    示例 1:

    \n\n
      \n输入:candidates = [16,17,71,62,12,24,14]  \n输出:4  \n解释:组合 [16,17,62,24] 的按位与结果是 16 & 17 & 62 & 24 = 16 > 0 。  \n组合长度是 4 。  \n可以证明不存在按位与结果大于 0 且长度大于 4 的组合。  \n注意,符合长度最大的组合可能不止一种。  \n例如,组合 [62,12,24,14] 的按位与结果是 62 & 12 & 24 & 14 = 8 > 0 。  \n
    \n\n

    示例 2:

    \n\n
      \n输入:candidates = [8,8]  \n输出:2  \n解释:最长组合是 [8,8] ,按位与结果 8 & 8 = 8 > 0 。  \n组合长度是 2 ,所以返回 2 。  \n
    \n\n

    \n\n

    提示:

    \n\n \n
    Related Topics
  • 位运算
  • 数组
  • 哈希表
  • 计数
    思路:

    这题需要找出按位与结果大于0的最长组合的长度。按位与结果大于0,说明这个组合中的每一个数的二进制都有相同的一位是1,根据这题给的数组值的范围,可以确定最多有24位,那么我们可以循环24次数组,每一次循环统计出第i位为1的个数,然后将每一次的个数做比较,得出最长组合的长度

    代码:

    java:
    class Solution {\n    public int largestCombination(int[] candidates) {\n        int max = 0;\n        for (int i = 0; i < 25; i++) {\n            int cnt = 0;\n            for (int j = 0; j < candidates.length; j++) {\n                if ((candidates[j] & (1 << i)) > 0) {\n                    cnt++;\n                }\n            }\n            max = Math.max(max, cnt);\n        }\n        return max;\n    }\n}
    第四题

    力扣原题链接:

    2276. 统计区间中的整数数目(https://leetcode.cn/problems/count-integers-in-intervals/)

    单个题解:

    力扣2276. 统计区间中的整数数目(http://192.168.0.198:5080/post/count-integers-in-intervals/)

    题目:

    给你区间的 集,请你设计并实现满足要求的数据结构:

    \n\n\n\n

    实现 CountIntervals 类:

    \n\n\n\n

    注意:区间 [left, right] 表示满足 left <= x <= right 的所有整数 x

    \n\n

    \n\n

    示例 1:

    \n\n
      \n输入  \n[\"CountIntervals\", \"add\", \"add\", \"count\", \"add\", \"count\"]  \n[[], [2, 3], [7, 10], [], [5, 8], []]  \n输出  \n[null, null, null, 6, null, 8]  \n\n解释  \nCountIntervals countIntervals = new CountIntervals(); // 用一个区间空集初始化对象  \ncountIntervals.add(2, 3);  // 将 [2, 3] 添加到区间集合中  \ncountIntervals.add(7, 10); // 将 [7, 10] 添加到区间集合中  \ncountIntervals.count();    // 返回 6  \n                           // 整数 2 和 3 出现在区间 [2, 3] 中  \n                           // 整数 7、8、9、10 出现在区间 [7, 10] 中  \ncountIntervals.add(5, 8);  // 将 [5, 8] 添加到区间集合中  \ncountIntervals.count();    // 返回 8  \n                           // 整数 2 和 3 出现在区间 [2, 3] 中  \n                           // 整数 5 和 6 出现在区间 [5, 8] 中  \n                           // 整数 7 和 8 出现在区间 [5, 8] 和区间 [7, 10] 中  \n                           // 整数 9 和 10 出现在区间 [7, 10] 中
    \n\n

    \n\n

    提示:

    \n\n\n\n

    思路:

    这题我的思路是添加一次整理一次并同时计数,利用java的TreeSet结构,可以快速的定位数据。
    典型的模板题

    \n

    代码:

    java:

    \n
    class CountIntervals {\n    TreeSet<Interval> ranges;\n    int cnt;\n    public CountIntervals() {\n        ranges = new TreeSet();\n        cnt = 0;\n    }\n    public void add(int left, int right) {\n        Iterator<Interval> itr = ranges.tailSet(new Interval(0, left - 1)).iterator();\n        while (itr.hasNext()) {\n            Interval iv = itr.next();\n            if (right < iv.left) {\n                break;\n            }\n            left = Math.min(left, iv.left);\n            right = Math.max(right, iv.right);\n            cnt -= iv.right - iv.left + 1;\n            itr.remove();\n        }\n        ranges.add(new Interval(left, right));\n        cnt += right - left + 1;\n    }\n    public int count() {\n        return cnt;\n    }\n}\npublic class Interval implements Comparable<Interval> {\n    int left;\n    int right;\n    public Interval(int left, int right) {\n        this.left = left;\n        this.right = right;\n    }\n    public int compareTo(Interval that) {\n        if (this.right == that.right) return this.left - that.left;\n        return this.right - that.right;\n    }\n}
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"周赛","slug":"算法/力扣/周赛","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E5%91%A8%E8%B5%9B/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣2276. 统计区间中的整数数目","slug":"count-integers-in-intervals","date":"2022-05-19T05:54:33.000Z","updated":"2022-09-22T07:39:52.807Z","comments":true,"path":"/post/count-integers-in-intervals/","link":"","excerpt":"","content":"

    力扣周赛293—第四题

    \n

    2276. 统计区间中的整数数目

    \n

    题目

    给你区间的 集,请你设计并实现满足要求的数据结构:

    \n\n\n\n

    实现 CountIntervals 类:

    \n\n\n\n

    注意:区间 [left, right] 表示满足 left <= x <= right 的所有整数 x

    \n\n

     

    \n\n

    示例 1:

    \n\n
    \n输入\n[\"CountIntervals\", \"add\", \"add\", \"count\", \"add\", \"count\"]\n[[], [2, 3], [7, 10], [], [5, 8], []]\n输出\n[null, null, null, 6, null, 8]\n\n解释\nCountIntervals countIntervals = new CountIntervals(); // 用一个区间空集初始化对象\ncountIntervals.add(2, 3);  // 将 [2, 3] 添加到区间集合中\ncountIntervals.add(7, 10); // 将 [7, 10] 添加到区间集合中\ncountIntervals.count();    // 返回 6\n                           // 整数 2 和 3 出现在区间 [2, 3] 中\n                           // 整数 7、8、9、10 出现在区间 [7, 10] 中\ncountIntervals.add(5, 8);  // 将 [5, 8] 添加到区间集合中\ncountIntervals.count();    // 返回 8\n                           // 整数 2 和 3 出现在区间 [2, 3] 中\n                           // 整数 5 和 6 出现在区间 [5, 8] 中\n                           // 整数 7 和 8 出现在区间 [5, 8] 和区间 [7, 10] 中\n                           // 整数 9 和 10 出现在区间 [7, 10] 中
    \n\n

     

    \n\n

    提示:

    \n\n\n\n

    思路

    这题我的思路是添加一次整理一次并同时计数,利用java的TreeSet结构,可以快速的定位数据。
    典型的模板题

    \n

    代码

    java:

    class CountIntervals {\n    TreeSet<Interval> ranges;\n    int cnt;\n    public CountIntervals() {\n        ranges = new TreeSet();\n        cnt = 0;\n    }\n    public void add(int left, int right) {\n        Iterator<Interval> itr = ranges.tailSet(new Interval(0, left - 1)).iterator();\n        while (itr.hasNext()) {\n            Interval iv = itr.next();\n            if (right < iv.left) {\n                break;\n            }\n            left = Math.min(left, iv.left);\n            right = Math.max(right, iv.right);\n            cnt -= iv.right - iv.left + 1;\n            itr.remove();\n        }\n        ranges.add(new Interval(left, right));\n        cnt += right - left + 1;\n    }\n    public int count() {\n        return cnt;\n    }\n}\npublic class Interval implements Comparable<Interval> {\n    int left;\n    int right;\n    public Interval(int left, int right) {\n        this.left = left;\n        this.right = right;\n    }\n    public int compareTo(Interval that) {\n        if (this.right == that.right) return this.left - that.left;\n        return this.right - that.right;\n    }\n}

    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣2275. 按位与结果大于零的最长组合","slug":"largest-combination-with-bitwise-and-greater-than-zero","date":"2022-05-19T05:41:13.000Z","updated":"2022-09-22T07:39:53.089Z","comments":true,"path":"/post/largest-combination-with-bitwise-and-greater-than-zero/","link":"","excerpt":"","content":"

    力扣周赛293—第三题

    \n

    2275. 按位与结果大于零的最长组合

    \n

    题目

    对数组 nums 执行 按位与 相当于对数组 nums 中的所有整数执行 按位与

    \n\n\n\n

    给你一个正整数数组 candidates 。计算 candidates 中的数字每种组合下 按位与 的结果。 candidates 中的每个数字在每种组合中只能使用 一次

    \n\n

    返回按位与结果大于 0最长 组合的长度

    \n\n

     

    \n\n

    示例 1:

    \n\n
    \n输入:candidates = [16,17,71,62,12,24,14]\n输出:4\n解释:组合 [16,17,62,24] 的按位与结果是 16 & 17 & 62 & 24 = 16 > 0 。\n组合长度是 4 。\n可以证明不存在按位与结果大于 0 且长度大于 4 的组合。\n注意,符合长度最大的组合可能不止一种。\n例如,组合 [62,12,24,14] 的按位与结果是 62 & 12 & 24 & 14 = 8 > 0 。\n
    \n\n

    示例 2:

    \n\n
    \n输入:candidates = [8,8]\n输出:2\n解释:最长组合是 [8,8] ,按位与结果 8 & 8 = 8 > 0 。\n组合长度是 2 ,所以返回 2 。\n
    \n\n

     

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 位运算
  • 数组
  • 哈希表
  • 计数
  • \n\n

    思路

    这题需要找出按位与结果大于 0 的最长组合的长度。按位与结果大于 0,
    说明组合中每一个数的二进制表示在同一位上都是 1。根据这题给的数值范围,
    可以确定最多只需要考虑 24 个二进制位,那么我们可以循环 24 次数组,每一次循环统计出第 i 位为 1 的数的个数,
    然后将每一次的个数做比较,取最大值就是最长组合的长度

    \n

    代码

    java:

    class Solution {\n    public int largestCombination(int[] candidates) {\n        int max = 0;\n        for (int i = 0; i < 25; i++) {\n            int cnt = 0;\n            for (int j = 0; j < candidates.length; j++) {\n                if ((candidates[j] & (1 << i)) > 0) {\n                    cnt++;\n                }\n            }\n            max = Math.max(max, cnt);\n        }\n        return max;\n    }\n}

    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣2274. 不含特殊楼层的最大连续楼层数","slug":"maximum-consecutive-floors-without-special-floors","date":"2022-05-19T01:57:58.000Z","updated":"2022-09-22T07:39:53.096Z","comments":true,"path":"/post/maximum-consecutive-floors-without-special-floors/","link":"","excerpt":"","content":"

    力扣周赛293—第二题

    \n

    2274. 不含特殊楼层的最大连续楼层数

    \n

    题目

    Alice 管理着一家公司,并租用大楼的部分楼层作为办公空间。Alice 决定将一些楼层作为 特殊楼层 ,仅用于放松。

    \n\n

    给你两个整数 bottomtop ,表示 Alice 租用了从 bottomtop(含 bottomtop 在内)的所有楼层。另给你一个整数数组 special ,其中 special[i] 表示  Alice 指定用于放松的特殊楼层。

    \n\n

    返回不含特殊楼层的 最大 连续楼层数。

    \n\n

     

    \n\n

    示例 1:

    \n\n
    \n输入:bottom = 2, top = 9, special = [4,6]\n输出:3\n解释:下面列出的是不含特殊楼层的连续楼层范围:\n- (2, 3) ,楼层数为 2 。\n- (5, 5) ,楼层数为 1 。\n- (7, 9) ,楼层数为 3 。\n因此,返回最大连续楼层数 3 。\n
    \n\n

    示例 2:

    \n\n
    \n输入:bottom = 6, top = 8, special = [7,6,8]\n输出:0\n解释:每层楼都被规划为特殊楼层,所以返回 0 。\n
    \n\n

     

    \n\n

    提示

    \n\n

    \n
    Related Topics
  • 数组
  • 排序
  • \n\n

    思路

    这题相当于在bottom到top的范围内,被special的数分割了,我们需要找到分割后最长的一段

    \n

    步骤:

    \n
      \n
    1. 为了保证数据的顺序进行,对special进行排序
    2. 遍历special对bottom~top进行分割,当bottom<=special[i]时,
      连续楼层数为special[i]-bottom,与之前的最大连续层数对比,得到当前的最大连续层数,
      同时更新bottom = special[i] + 1
    3. 遍历完,还有最后一段的连续层数top - special[special.length - 1]
    4. 至此,不包含特殊层的最大的连续层数就出来了
    \n

    代码

    java:

    \n
    class Solution {\n    public int maxConsecutive(int bottom, int top, int[] special) {\n        Arrays.sort(special);\n        int max = 0;\n        for (int j : special) {\n            if (bottom <= j) {\n                max = Math.max(max, j - bottom);\n                bottom = j + 1;\n            }\n        }\n        max = Math.max(max, top - special[special.length - 1]);\n        return max;\n    }\n}
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣2273. 移除字母异位词后的结果数组","slug":"find-resultant-array-after-removing-anagrams","date":"2022-05-19T01:27:33.000Z","updated":"2022-09-22T07:39:52.888Z","comments":true,"path":"/post/find-resultant-array-after-removing-anagrams/","link":"","excerpt":"","content":"

    力扣周赛293—第一题

    \n

    2273. 移除字母异位词后的结果数组

    \n

    题目

    给你一个下标从 0 开始的字符串 words ,其中 words[i] 由小写英文字符组成。

    \n\n

    在一步操作中,需要选出任一下标 i ,从 words删除 words[i] 。其中下标 i 需要同时满足下述两个条件:

    \n\n
      \n
    1. 0 < i < words.length
    2. words[i - 1] 和 words[i] 是 字母异位词 。
    \n\n

    只要可以选出满足条件的下标,就一直执行这个操作。

    \n\n

    在执行所有操作后,返回 words 。可以证明,按任意顺序为每步操作选择下标都会得到相同的结果。

    \n\n

    字母异位词 是由重新排列源单词的字母得到的一个新单词,所有源单词中的字母通常恰好只用一次。例如,\"dacb\"\"abdc\" 的一个字母异位词。

    \n\n

    \n\n

    示例 1:

    \n\n
    输入:words = [\"abba\",\"baba\",\"bbaa\",\"cd\",\"cd\"]  \n输出:[\"abba\",\"cd\"]  \n解释:  \n获取结果数组的方法之一是执行下述步骤:  \n- 由于 words[2] = \"bbaa\" 和 words[1] = \"baba\" 是字母异位词,选择下标 2 并删除 words[2] 。  \n  现在 words = [\"abba\",\"baba\",\"cd\",\"cd\"] 。  \n- 由于 words[1] = \"baba\" 和 words[0] = \"abba\" 是字母异位词,选择下标 1 并删除 words[1] 。  \n  现在 words = [\"abba\",\"cd\",\"cd\"] 。  \n- 由于 words[2] = \"cd\" 和 words[1] = \"cd\" 是字母异位词,选择下标 2 并删除 words[2] 。  \n  现在 words = [\"abba\",\"cd\"] 。  \n无法再执行任何操作,所以 [\"abba\",\"cd\"] 是最终答案。
    \n\n

    示例 2:

    \n\n
    输入:words = [\"a\",\"b\",\"c\",\"d\",\"e\"]  \n输出:[\"a\",\"b\",\"c\",\"d\",\"e\"]  \n解释:  \nwords 中不存在互为字母异位词的两个相邻字符串,所以无需执行任何操作。
    \n\n

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 数组
  • 哈希表
  • 字符串
  • 排序
  • \n\n

    思路

    遍历字符串数组,分别将每一个字符串转换成字符数组并排序,如果相邻两个字符串排序后得到的字符串一样,则说明它们是字母异位词,跳过后一个即可

    \n

    代码

    java:

    \n
    class Solution {\n    public List<String> removeAnagrams(String[] words) {\n        char[] strs = words[0].toCharArray();\n        Arrays.sort(strs);\n        List<String> list = new ArrayList<>();\n        int index = 0;\n        for (int i = 1; i < words.length; i++) {\n            char[] strs1 = words[i].toCharArray();\n            Arrays.sort(strs1);\n            if (!String.valueOf(strs).equals(String.valueOf(strs1))) {\n                list.add(words[index]);\n                strs = strs1;\n                index = i;\n            }\n        }\n        list.add(words[index]);\n        return list;\n    }\n}
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣周赛292题解","slug":"weekly-contest-292","date":"2022-05-10T06:01:37.000Z","updated":"2022-09-22T07:39:53.235Z","comments":true,"path":"/post/weekly-contest-292/","link":"","excerpt":"","content":"

    第一题

    力扣原题链接:

    2264. 字符串中最大的 3 位相同数字

    \n

    单个题解:

    力扣2264. 字符串中最大的 3 位相同数字

    \n

    题解:

    这题是要找最大的3个相同数并且3个数是相连的,因为数字的话只有0~9这10个数字,找最大的,那我就从999开始,然后依次888、777。。。000,只要字符串中存在,那就是它了。

    \n

    java代码:

    public String largestGoodInteger(String num) {\n    String str;\n    for (int i = 9; i >= 0; i--) {\n        str = "" + i + i + i;\n        if (num.contains(str)) {\n            return str;\n        }\n    }\n    return "";\n}
    \n

    第二题

    力扣原题链接:

    6057. 统计值等于子树平均值的节点数

    \n

    单个题解:

    力扣6057. 统计值等于子树平均值的节点数

    \n

    题解:

    这题的思路:

    先用一次递归统计出每个节点的子树节点个数并依次入队,再用一次递归按同样的顺序求出每个子树的节点和,出队对应的个数做整除,商等于当前节点值时计数加一。\n\n

    java代码:

    class Solution {\n    public int averageOfSubtree(TreeNode root) {\n        counts(root);\n        sums(root);\n        return count;\n    }\n    Queue<Integer> queue = new LinkedList<>();\n    int count = 0;\n    private int counts(TreeNode root) {\n        if (root == null) {\n            return 0;\n        }\n        int cnt = counts(root.left) + counts(root.right) + 1;\n        queue.add(cnt);\n        return cnt;\n    }\n    private int sums(TreeNode root) {\n        if (root == null) {\n            return 0;\n        }\n        int sum = root.val;\n        sum += sums(root.left);\n        sum += sums(root.right);\n        if (sum / queue.poll() == root.val) {\n            count++;\n        }\n        return sum;\n    }\n}
    \n

    第三题

    力扣原题链接:

    2266. 统计打字方案数

    \n

    单个题解:

    力扣2266. 统计打字方案数

    \n

    题解:

    这题标的是中等题,个人觉得解题方法有点取巧。怎么取巧呢?因为重复的数最多 4 个,我完全可以嵌套 3 层 if 来处理,当然我也是这么干的,只要遍历一遍就可以了。

    \n

    在遍历到索引i时,有如下情况:

    \n
      \n
    1. 当前数字不和前面的组合,自己单独成一个新的

      索引i的种数 = 索引i-1的种数

    2. 当前数字与前一个相等,那么该数字的组合就有两种情况
    \n

    同时,为了保证数据没有超过 int 的最大值,这里对于每一次的结果都对 10^9 + 7 取余

    \n

    java代码:

    class Solution {\n    public int countTexts(String pressedKeys) {\n        int[] cnts = new int[pressedKeys.length() + 1];\n        cnts[0] = 1;\n        cnts[1] = 1;\n        int mod = 1000000007;\n        for (int i = 1; i < pressedKeys.length(); i++) {\n            cnts[i + 1] = cnts[i];\n            if (pressedKeys.charAt(i) == pressedKeys.charAt(i - 1)) {\n                cnts[i + 1] += cnts[i - 1];\n                cnts[i + 1] %= mod;\n                if (i > 1 && pressedKeys.charAt(i) == pressedKeys.charAt(i - 2)) {\n                    cnts[i + 1] += cnts[i - 2];\n                    cnts[i + 1] %= mod;\n                    // 只有 7 和 9 对应 4 个字母,才可能 4 个相同按键连成一个字母\n                    if (i > 2 && pressedKeys.charAt(i) == pressedKeys.charAt(i - 3) && (pressedKeys.charAt(i) == '7' || pressedKeys.charAt(i) == '9')) {\n                        cnts[i + 1] += cnts[i - 3];\n                        cnts[i + 1] %= mod;\n                    }\n                }\n            }\n        }\n        return cnts[pressedKeys.length()];\n    }\n}
    \n

    第四题

    力扣原题链接:

    2267. 检查是否有合法括号字符串路径

    \n

    单个题解:

    力扣2267. 检查是否有合法括号字符串路径

    \n

    题解:

    从左上角到右下角,依次路过,下一个坐标一定是该坐标的右侧或者下侧的坐标,同时我们记录下走到该坐标时未配对的 '(' 的个数,如果是负数则这条路不对,就不用继续下去了。然后一直走到右下角的时候,如果未配对的个数为 1(最后一个 ')' 正好把它配对掉),则满足条件

    \n

    java代码:

    class Solution {\n    public boolean hasValidPath(char[][] grid) {\n        xl = grid.length;\n        yl = grid[0].length;\n        use = new boolean[xl][yl][xl * yl];\n        // 路径长度为奇数、起点是 ')' 或终点是 '(' 时,必然不合法\n        if ((xl + yl) % 2 == 0 || grid[0][0] == ')' || grid[xl - 1][yl - 1] == '(') {\n            return false;\n        }\n        dfs(grid, 0, 0, 0);\n        return bl;\n    }\n    int xl;\n    int yl;\n    boolean bl = false;\n    boolean[][][] use;\n    private void dfs(char[][] grid, int x, int y, int cnt) {\n        if (x >= xl || y >= yl || cnt > xl - x + yl - y - 1) {\n            return;\n        }\n        if (x == xl - 1 && y == yl - 1) {\n            bl = cnt == 1;\n        }\n        if (use[x][y][cnt]) {\n            return;\n        }\n        use[x][y][cnt] = true;\n        cnt += grid[x][y] == '(' ? 1 : -1;\n        if (cnt < 0) {\n            return;\n        }\n        if (!bl) {\n            dfs(grid, x + 1, y, cnt);\n        }\n        if (!bl) {\n            dfs(grid, x, y + 1, cnt);\n        }\n    }\n}
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"周赛","slug":"算法/力扣/周赛","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E5%91%A8%E8%B5%9B/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣2267. 检查是否有合法括号字符串路径","slug":"check-if-there-is-a-valid-parentheses-string-path","date":"2022-05-10T02:50:21.000Z","updated":"2022-09-22T07:39:52.806Z","comments":true,"path":"/post/check-if-there-is-a-valid-parentheses-string-path/","link":"","excerpt":"","content":"

    力扣周赛292—第四题

    \n

    2267. 检查是否有合法括号字符串路径

    \n

    题目

    一个括号字符串是一个 非空 且只包含 '('')' 的字符串。如果下面 任意 条件为 ,那么这个括号字符串就是 合法的

    \n\n\n\n

    给你一个 m x n 的括号网格图矩阵 grid 。网格图中一个 合法括号路径 是满足以下所有条件的一条路径:

    \n\n\n\n

    如果网格图中存在一条 合法括号路径 ,请返回 true ,否则返回 false

    \n\n

    \n\n

    示例 1:

    \n\n

    \"\"

    \n\n
      \n输入:grid = [[\"(\",\"(\",\"(\"],[\")\",\"(\",\")\"],[\"(\",\"(\",\")\"],[\"(\",\"(\",\")\"]]  \n输出:true  \n解释:上图展示了两条路径,它们都是合法括号字符串路径。  \n第一条路径得到的合法字符串是 \"()(())\" 。  \n第二条路径得到的合法字符串是 \"((()))\" 。  \n注意可能有其他的合法括号字符串路径。  \n
    \n\n

    示例 2:

    \n\n

    \"\"

    \n\n
      \n输入:grid = [[\")\",\")\"],[\"(\",\"(\"]]  \n输出:false  \n解释:两条可行路径分别得到 \"))(\" 和 \")((\" 。由于它们都不是合法括号字符串,我们返回 false 。  \n
    \n\n

    \n\n

    提示:

    \n\n\n\n

    思路

    从左上角到右下角,依次路过,下一个坐标一定是该坐标的右侧或者下侧的坐标,同时我们记录下走到该坐标时未配对的 '(' 的个数,如果是负数则这条路不对,就不用继续下去了。然后一直走到右下角的时候,如果未配对的个数为 1(最后一个 ')' 正好把它配对掉),则满足条件

    \n

    代码

    Java

    \n
    class Solution {\n    public boolean hasValidPath(char[][] grid) {\n        xl = grid.length;\n        yl = grid[0].length;\n        use = new boolean[xl][yl][xl * yl];\n        // 路径长度为奇数、起点是 ')' 或终点是 '(' 时,必然不合法\n        if ((xl + yl) % 2 == 0 || grid[0][0] == ')' || grid[xl - 1][yl - 1] == '(') {\n            return false;\n        }\n        dfs(grid, 0, 0, 0);\n        return bl;\n    }\n    int xl;\n    int yl;\n    boolean bl = false;\n    boolean[][][] use;\n    private void dfs(char[][] grid, int x, int y, int cnt) {\n        if (x >= xl || y >= yl || cnt > xl - x + yl - y - 1) {\n            return;\n        }\n        if (x == xl - 1 && y == yl - 1) {\n            bl = cnt == 1;\n        }\n        if (use[x][y][cnt]) {\n            return;\n        }\n        use[x][y][cnt] = true;\n        cnt += grid[x][y] == '(' ? 1 : -1;\n        if (cnt < 0) {\n            return;\n        }\n        if (!bl) {\n            dfs(grid, x + 1, y, cnt);\n        }\n        if (!bl) {\n            dfs(grid, x, y + 1, cnt);\n        }\n    }\n}
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣2266. 统计打字方案数","slug":"count-number-of-texts","date":"2022-05-10T01:32:53.000Z","updated":"2022-09-22T07:39:52.810Z","comments":true,"path":"/post/count-number-of-texts/","link":"","excerpt":"","content":"

    力扣周赛292—第三题

    \n

    2266. 统计打字方案数

    \n

    题目

    Alice 在给 Bob 用手机打字。数字到字母的 对应 如下图所示。

    \n\n

    \"\"

    \n\n

    为了 打出 一个字母,Alice 需要 按 对应字母所在的按键 i 次,i 是该字母在这个按键上所处的位置。

    \n\n\n\n

    但是,由于传输的错误,Bob 没有收到 Alice 打字的字母信息,反而收到了 按键的字符串信息

    \n\n\n\n

    给你一个字符串 pressedKeys ,表示 Bob 收到的字符串,请你返回 Alice 总共可能发出多少种文字信息

    \n\n

    由于答案可能很大,将它对 10^9 + 7 取余 后返回。

    \n\n

    \n\n

    示例 1:

    \n\n
    输入:pressedKeys = \"22233\"  \n输出:8  \n解释:  \nAlice 可能发出的文字信息包括:  \n\"aaadd\", \"abdd\", \"badd\", \"cdd\", \"aaae\", \"abe\", \"bae\" 和 \"ce\" 。  \n由于总共有 8 种可能的信息,所以我们返回 8 。  \n
    \n\n

    示例 2:

    \n\n
    输入:pressedKeys = \"222222222222222222222222222222222222\"  \n输出:82876089  \n解释:  \n总共有 2082876103 种 Alice 可能发出的文字信息。  \n由于我们需要将答案对 10^9 + 7 取余,所以我们返回 2082876103 % (10^9 + 7) = 82876089 。  \n
    \n\n

    \n\n

    提示:

    \n\n\n\n

    思路

    这题标的是中等题,个人觉得解题方法有点取巧。怎么取巧呢?因为重复的数最多 4 个,我完全可以嵌套 3 层 if 来处理,当然我也是这么干的,只要遍历一遍就可以了。

    \n

    在遍历到索引i时,有如下情况:

    \n
      \n
    1. 当前数字不和前面的组合,自己单独成一个新的

      索引i的种数 = 索引i-1的种数

    2. 当前数字与前一个相等,那么该数字的组合就有两种情况
    \n

    同时,为了保证数据没有超过 int 的最大值,这里对于每一次的结果都对 10^9 + 7 取余

    \n

    代码

    Java:

    \n
    class Solution {\n    public int countTexts(String pressedKeys) {\n        int[] cnts = new int[pressedKeys.length() + 1];\n        cnts[0] = 1;\n        cnts[1] = 1;\n        int mod = 1000000007;\n        for (int i = 1; i < pressedKeys.length(); i++) {\n            cnts[i + 1] = cnts[i];\n            if (pressedKeys.charAt(i) == pressedKeys.charAt(i - 1)) {\n                cnts[i + 1] += cnts[i - 1];\n                cnts[i + 1] %= mod;\n                if (i > 1 && pressedKeys.charAt(i) == pressedKeys.charAt(i - 2)) {\n                    cnts[i + 1] += cnts[i - 2];\n                    cnts[i + 1] %= mod;\n                    // 只有 7 和 9 对应 4 个字母,才可能 4 个相同按键连成一个字母\n                    if (i > 2 && pressedKeys.charAt(i) == pressedKeys.charAt(i - 3) && (pressedKeys.charAt(i) == '7' || pressedKeys.charAt(i) == '9')) {\n                        cnts[i + 1] += cnts[i - 3];\n                        cnts[i + 1] %= mod;\n                    }\n                }\n            }\n        }\n        return cnts[pressedKeys.length()];\n    }\n}
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣6057. 统计值等于子树平均值的节点数","slug":"count-nodes-equal-to-average-of-subtree","date":"2022-05-09T07:57:48.000Z","updated":"2022-09-22T07:39:52.810Z","comments":true,"path":"/post/count-nodes-equal-to-average-of-subtree/","link":"","excerpt":"","content":"

    力扣周赛292—第二题

    \n

    6057. 统计值等于子树平均值的节点数

    \n

    题目

    给你一棵二叉树的根节点 root ,找出并返回满足要求的节点数,要求节点的值等于其 子树 中值的 平均值

    \n\n

    注意:

    \n\n\n\n

    \n\n

    示例 1:

    \n \n
    输入:root = [4,8,5,0,1,null,6]  \n输出:5  \n解释:  \n对值为 4 的节点:子树的平均值 (4 + 8 + 5 + 0 + 1 + 6) / 6 = 24 / 6 = 4 。  \n对值为 5 的节点:子树的平均值 (5 + 6) / 2 = 11 / 2 = 5 。  \n对值为 0 的节点:子树的平均值 0 / 1 = 0 。  \n对值为 1 的节点:子树的平均值 1 / 1 = 1 。  \n对值为 6 的节点:子树的平均值 6 / 1 = 6 。  \n
    \n\n

    示例 2:

    \n \n
    输入:root = [1]  \n输出:1  \n解释:对值为 1 的节点:子树的平均值 1 / 1 = 1。  \n
    \n\n

    \n\n

    提示:

    \n\n\n\n

    思路

    这题的思路:

    先用一次递归统计出每个节点的子树节点个数并依次入队,再用一次递归按同样的顺序求出每个子树的节点和,出队对应的个数做整除,商等于当前节点值时计数加一。\n\n

    代码

    Java:

    \n
    class Solution {\n    public int averageOfSubtree(TreeNode root) {\n        counts(root);\n        sums(root);\n        return count;\n    }\n    Queue<Integer> queue = new LinkedList<>();\n    int count = 0;\n    private int counts(TreeNode root) {\n        if (root == null) {\n            return 0;\n        }\n        int cnt = counts(root.left) + counts(root.right) + 1;\n        queue.add(cnt);\n        return cnt;\n    }\n    private int sums(TreeNode root) {\n        if (root == null) {\n            return 0;\n        }\n        int sum = root.val;\n        sum += sums(root.left);\n        sum += sums(root.right);\n        if (sum / queue.poll() == root.val) {\n            count++;\n        }\n        return sum;\n    }\n}
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣2264. 字符串中最大的 3 位相同数字","slug":"largest-3-same-digit-number-in-string","date":"2022-05-09T07:24:16.000Z","updated":"2022-09-22T07:39:53.088Z","comments":true,"path":"/post/largest-3-same-digit-number-in-string/","link":"","excerpt":"","content":"

    力扣周赛292—第一题

    \n

    2264. 字符串中最大的 3 位相同数字

    \n

    题目

    给你一个字符串 num ,表示一个大整数。如果一个整数满足下述所有条件,则认为该整数是一个 优质整数

    \n\n\n\n

    以字符串形式返回 最大的优质整数 。如果不存在满足要求的整数,则返回一个空字符串 \"\"

    \n\n

    注意:

    \n\n\n\n

    示例 1:

    \n\n
      \n输入:num = \"6777133339\"  \n输出:\"777\"  \n解释:num 中存在两个优质整数:\"777\" 和 \"333\" 。  \n\"777\" 是最大的那个,所以返回 \"777\" 。  \n
    \n\n

    示例 2:

    \n\n
      \n输入:num = \"2300019\"  \n输出:\"000\"  \n解释:\"000\" 是唯一一个优质整数。  \n
    \n\n

    示例 3:

    \n\n
      \n输入:num = \"42352338\"  \n输出:\"\"  \n解释:不存在长度为 3 且仅由一个唯一数字组成的整数。因此,不存在优质整数。  \n
    \n\n

    \n\n

    提示:

    \n\n\n\n

    思路

    这题是要找最大的3个相同数并且3个数是相连的,因为数字的话只有0~9这10个数字,找最大的,那我就从999开始,然后依次888、777。。。000,只要字符串中存在,那就是它了。

    \n

    代码

    java:

    \n
    public String largestGoodInteger(String num) {\n    String str;\n    for (int i = 9; i >= 0; i--) {\n        str = "" + i + i + i;\n        if (num.contains(str)) {\n            return str;\n        }\n    }\n    return "";\n}
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"RKE方式安装k8s集群和Dashboard","slug":"inRKE","date":"2022-05-03T08:44:17.000Z","updated":"2022-09-22T07:39:52.990Z","comments":true,"path":"/post/inRKE/","link":"","excerpt":"","content":"

    前言

    需要在电脑上安装好VirtualBox和Vagrant

    \n

    构建3台虚拟机

    1、编写Vagrantfile文件

    内容如下:

    \n
    Vagrant.configure("2") do |config|\n  config.vm.box_check_update = false\n  config.vm.provider 'virtualbox' do |vb|\n  vb.customize [ "guestproperty", "set", :id, "/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold", 1000 ]\n  end  \n  $num_instances = 3\n  # curl https://discovery.etcd.io/new?size=3\n  (1..$num_instances).each do |i|\n    config.vm.define "node#{i}" do |node|\n      node.vm.box = "centos/7"\n      node.vm.hostname = "node#{i}"\n      ip = "172.17.8.#{i+100}"\n      node.vm.network "private_network", ip: ip\n      node.vm.provider "virtualbox" do |vb|\n        vb.memory = "8192"\n        if i==1 then\n            vb.cpus = 2\n        else\n            vb.cpus = 1\n        end\n        vb.name = "node#{i}"\n      end\n    end\n  end\nend
    \n

    2、启动3台虚拟机

    在Vagrantfile文件所在目录的控制台下执行命令:

    \n
    vagrant up
    \n

    等待完成,完成后,在VirtualBox主页:

    \n

    \"\"

    \n

    虚拟机配置用户名密码ssh连接

    3台虚拟机都需要安装

    \n

    配置参考:windows下VirtualBox和vagrant组合安装centos 中的“用户名密码ssh”

    \n

    虚拟机docker安装

    3台虚拟机都需要安装

    \n

    安装教程:docker安装教程

    \n

    安装 Kubernetes 命令行工具 kubectl

    3台虚拟机都需要安装

    \n

    执行命令:

    \n
    yum install wget\nwget https://dl.k8s.io/release/v1.24.0/bin/linux/amd64/kubectl && chmod +x kubectl && cp kubectl /usr/bin/
    \n

    如果报错:curl: (1) Protocol “https not supported or disabled in libcurl

    \n

    安装RKE命令行工具

    只有主节点做即可

    \n
    wget https://rancher-mirror.rancher.cn/rke/v1.3.10/rke_linux-amd64 && mv rke_linux-amd64 rke && chmod +x rke && ./rke --version && cp rke /usr/bin/
    \n

    进行机器配置


    \n

    1、禁用 SELinux

    vi /etc/selinux/config
    \n

    将第七行SELINUX=enforcing改为SELINUX=disabled

    \n

    \"\"

    \n

    2、禁用 swap

    vi /etc/fstab
    \n

    使用 # 注释掉有 swap 的一行

    \n

    \"\"

    \n

    3、关闭防火墙

    systemctl stop firewalld.service\nsystemctl disable firewalld.service
    \n

    4、重启查看效果

    reboot\n/usr/sbin/sestatus -v\nfree -h
    \n

    \"\"

    \n

    5、设置用户

    CentOS7不能使用root用户安装

    \n

    添加用户:

    \n
    adduser rke -G docker
    \n

    给新添加的用户设置密码:

    \n
    passwd rke
    \n

    中途需要输入2次密码

    \n

    \"\"

    \n

    确认新用户是否有权限:

    \n
    su rke\ndocker ps -a
    \n

    \"\"

    \n

    6、设置SSH

    这个地方要给全部的机器配置ssh(包括自己)注意在新用户下操作:

    \n
    ssh-keygen\nssh-copy-id rke@172.17.8.101\nssh-copy-id rke@172.17.8.102\nssh-copy-id rke@172.17.8.103
    \n

    第一个红框位置输入yes,第二个红框位置输入密码

    \n

    \"\"

    \n

    编辑rke.yaml

    仅在主节点,在新用户下操作

    \n
    vi rke.yaml
    \n

    rke.yaml内容(里面的IP换成各自的IP哦):

    \n
    nodes:\n  - address: 172.17.8.101\n    user: rke\n    role: [controlplane, worker, etcd]\n  - address: 172.17.8.102\n    user: rke\n    role: [controlplane, worker, etcd]\n  - address: 172.17.8.103\n    user: rke\n    role: [worker]\n\nservices:\n  etcd:\n    snapshot: true\n    creation: 6h\n    retention: 24h\n\n# 当使用外部 TLS 终止,并且使用 ingress-nginx v0.22或以上版本时,必须。\ningress:\n  provider: nginx\n  options:\n    use-forwarded-headers: "true"
    \n

    安装集群

    也是在新用户下操作:

    \n
    rke up --config rke.yaml
    \n

    这步执行时间较长,多等一会,需要下载很多镜像~~~~

    \n

    运行完成后执行 :

    \n
    mkdir ~/.kube && mv kube_config_rke.yaml ~/.kube/config
    \n

    最后,执行下面的命令确认集群安装完成

    \n
    kubectl get node
    \n

    安装kubernetes Dashboard

    依然是在新用户下:

    \n

    切换到~目录下

    \n

    1、获取dashboard的yaml文件

    wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
    \n

    2、修改文件

    修改service部分,默认service是ClusterIP类型,这里改成NodePort类型,使集群外部能够访问

    \n

    下面标红框的地方为新增加的:

    \n

    \"\"

    \n

    3、执行yaml文件

    kubectl apply -f recommended.yaml
    \n

    4、查看服务状态

    kubectl get all -n kubernetes-dashboard
    \n

    下面红框的可以看出服务已经运行了

    \n

    \"\"

    \n

    5、接下来浏览器访问

    IP:30010,端口就是你在第二步中添加的,输入网址后,点击高级继续访问就出现下面的页面了

    \n

    \"\"

    \n

    6、创建登录用户信息

    创建文件admin-role.yaml,内容如下:

    \n
    kind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  name: admin\n  annotations:\n    rbac.authorization.kubernetes.io/autoupdate: "true"\nroleRef:\n  kind: ClusterRole\n  name: cluster-admin\n  apiGroup: rbac.authorization.k8s.io\nsubjects:\n- kind: ServiceAccount\n  name: admin\n  namespace: kube-system\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: admin\n  namespace: kube-system\n  labels:\n    kubernetes.io/cluster-service: "true"\n    addonmanager.kubernetes.io/mode: Reconcile
    \n

    将其执行到集群中:

    \n
    kubectl apply -f admin-role.yaml
    \n

    7、获取token

    查看kubernetes-dashboard下面的secret

    \n

    \"\"

    \n

    在执行下面的命令:

    \n
    kubectl -n kube-system describe secret 红框的名字
    \n

    红框内就是token

    \n

    \"\"

    \n","categories":[{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/categories/%E4%BA%91%E5%8E%9F%E7%94%9F/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"PhpStorm自动上传修改的内容到服务器","slug":"phpDeploy","date":"2022-04-30T08:40:05.000Z","updated":"2022-09-22T07:39:53.156Z","comments":true,"path":"/post/phpDeploy/","link":"","excerpt":"","content":"

    前言

    今天,在修改WordPress时,发现利用宝塔的在线编辑好麻烦,找到方法,却无法直接跳过去。于是乎,我把代码下载到本地了,本来想着利用编辑器来修改就可以跳转了,没想到呀,PhpStorm给了我一个大惊喜:原来它只要配置好,就可以直接在本地修改,WordPress刷新后就能直接看到效果。

    \n

    接下来,我就详细的说明一下配置的步骤

    \n

    配置步骤

    1、设置连接

    打开File—>Setting

    \n

    \"\"

    \n

    左侧Build,Execution,Deployment—>Deployment,然后右侧加号添加配置选择SFTP

    \n

    \"\"

    \n

    弹出的窗口内输入配置的名称,可随意输入,方便记住就好

    \n

    \"\"

    \n

    点击红框的位置添加ssh连接

    \n

    \"\"

    \n

    在弹出的窗口点击 加号,右边配置

    \n

    \"\"

    \n

    点击OK后,ssh会自动添加上,同时再把IP加入到下面的红框内

    \n

    \"\"

    \n

    2、设置文件映射关系

    点击Mappings,将服务器上项目的根目录添加到Deployment Path中,然后点击OK

    \n

    \"\"

    \n

    3、设置自动上传

    在PhpStorm中依次点击Tool—>Deployment—>Options…

    \n

    \"\"

    \n

    在弹出的窗口中,将红框下拉框设置成第二个,之后只要按Ctrl+S就可将修改的代码上传到服务器上

    \n

    \"\"

    \n","categories":[{"name":"PHP","slug":"PHP","permalink":"https://hexo.huangge1199.cn/categories/PHP/"}],"tags":[{"name":"PHP","slug":"PHP","permalink":"https://hexo.huangge1199.cn/tags/PHP/"}]},{"title":"设计模式总结与对比(作业)","slug":"designPattern","date":"2022-04-28T02:13:09.000Z","updated":"2022-09-22T07:39:52.866Z","comments":true,"path":"/post/designPattern/","link":"","excerpt":"","content":"

    1、设计模式的初衷是什么?有哪些设计原则?

    \n

    2、列举至少4种单例模式被破坏的场景并给出解决方案

    \n

    3、一句话总结单例模式、原型模式、建造者模式、代理模式、策略模式和责任链模式

    \n","categories":[{"name":"设计模式","slug":"设计模式","permalink":"https://hexo.huangge1199.cn/categories/%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F/"}],"tags":[{"name":"学习笔记","slug":"学习笔记","permalink":"https://hexo.huangge1199.cn/tags/%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/"}]},{"title":"建造者模式","slug":"builder","date":"2022-04-26T09:10:11.000Z","updated":"2022-09-22T07:39:52.805Z","comments":true,"path":"/post/builder/","link":"","excerpt":"","content":"

    定义

    建造者模式是将一个复杂对象的构建与它的表示分离,使得同样的构建过程可以创建不同的表示

    \n

    特征:用户只需指定需要建造的类型就可以获得对象,建造过程及细节不需要了解

    \n

    属于创建型模式

    \n

    设计中四个角色

    \n

    适用场景

    \n

    优点

    \n

    建造者模式和工厂模式的区别

      \n
    1. 建造者模式更加注重方法的调用顺序(可参考文末的链式调用示例),工厂模式注重于创建对象。
    2. 创建对象的粒度不同:建造者模式创建的是由各种复杂部件组成的复杂对象,工厂模式创建出来的对象都一样。
    3. 关注点:工厂模式只需要把对象创建出来就可以了,而建造者模式不仅要创建出这个对象,还要知道这个对象由哪些部件组成。
    4. 建造者模式根据建造过程中的顺序不一样,最终的对象部件组成也不一样。
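    下面是一个极简的建造者示例草稿(其中 Computer、ComputerBuilder 均为本文为演示而假设的类名,并非某个框架的真实 API),用来说明"链式调用、最后一次性 build 出复杂对象"的思路:

    // 演示用的极简建造者示例(类名为假设)
    public class Computer {
        private final String cpu;
        private final String memory;

        private Computer(ComputerBuilder builder) {
            this.cpu = builder.cpu;
            this.memory = builder.memory;
        }

        @Override
        public String toString() {
            return "Computer{cpu=" + cpu + ", memory=" + memory + "}";
        }

        public static class ComputerBuilder {
            private String cpu;
            private String memory;

            public ComputerBuilder cpu(String cpu) {
                this.cpu = cpu;
                return this;
            }

            public ComputerBuilder memory(String memory) {
                this.memory = memory;
                return this;
            }

            public Computer build() {
                return new Computer(this);
            }
        }

        public static void main(String[] args) {
            // 调用方只指定要装配哪些部件,不关心内部构建细节
            Computer pc = new ComputerBuilder().cpu("i7").memory("16G").build();
            System.out.println(pc);
        }
    }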
    \n","categories":[{"name":"设计模式","slug":"设计模式","permalink":"https://hexo.huangge1199.cn/categories/%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F/"}],"tags":[{"name":"学习笔记","slug":"学习笔记","permalink":"https://hexo.huangge1199.cn/tags/%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/"},{"name":"设计模式","slug":"设计模式","permalink":"https://hexo.huangge1199.cn/tags/%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F/"}]},{"title":"原型模式","slug":"prototype","date":"2022-04-26T07:12:44.000Z","updated":"2022-09-22T07:39:53.179Z","comments":true,"path":"/post/prototype/","link":"","excerpt":"","content":"

    定义

    原型模式是指用原型实例指定创建对象的种类,并且通过拷贝这些原型创建新的对象,属于创建型模式

    \n

    应用场景

    \n

    优点

    \n

    缺点

    \n

    克隆破坏单例模式

    如果我们克隆的目标对象是单例的对象,深克隆就会破坏单例。
    解决办法:可以禁止深克隆。要么你的单例类不实现Cloneable接口;要么我们重写
    clone()方法,在clone方法中返回单例对象即可
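    下面是按这个思路写的一个极简示意(类名 CloneSafeSingleton 为本文假设的演示用名称):

    // 演示用:重写 clone(),克隆时直接返回已有的单例实例(类名为假设)
    public class CloneSafeSingleton implements Cloneable {

        private static final CloneSafeSingleton INSTANCE = new CloneSafeSingleton();

        private CloneSafeSingleton() {
        }

        public static CloneSafeSingleton getInstance() {
            return INSTANCE;
        }

        @Override
        protected Object clone() {
            // 不调用 super.clone(),直接返回单例,克隆无法再产生新对象
            return INSTANCE;
        }
    }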

    \n","categories":[{"name":"设计模式","slug":"设计模式","permalink":"https://hexo.huangge1199.cn/categories/%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F/"}],"tags":[{"name":"学习笔记","slug":"学习笔记","permalink":"https://hexo.huangge1199.cn/tags/%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/"},{"name":"设计模式","slug":"设计模式","permalink":"https://hexo.huangge1199.cn/tags/%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F/"}]},{"title":"单例模式","slug":"singleton","date":"2022-04-26T06:44:05.000Z","updated":"2022-09-22T07:39:53.205Z","comments":true,"path":"/post/singleton/","link":"","excerpt":"","content":"

    定义

    确保一个类在任何情况下都绝对只有一个实例,并提供一个全局访问点

    \n

    饿汉式单例

    优点:执行效率高、性能高、没有加任何锁

    \n

    缺点:某些情况下,可能会造成内存浪费

    \n

    常规写法

    public class HungrySingleton {\n\n    private static final HungrySingleton hungrySingleton = new HungrySingleton();\n\n    private HungrySingleton() {\n    }\n\n    public static HungrySingleton getInstance() {\n        return hungrySingleton;\n    }\n}
    \n

    利用静态代码块的写法

    public class HungryStaticSingleton {\n    private static final HungryStaticSingleton hungrySingleton;\n\n    static {\n        hungrySingleton = new HungryStaticSingleton();\n    }\n\n    private HungryStaticSingleton() {\n    }\n\n    public static HungryStaticSingleton getInstance() {\n        return hungrySingleton;\n    }\n}
    \n

    懒汉式单例

    常规写法

    优点:节省了内存,线程安全

    \n

    缺点:性能低

    \n
    public class LazySimpleSingletion {\n    private static LazySimpleSingletion instance;\n    private LazySimpleSingletion(){}\n\n    public synchronized static LazySimpleSingletion getInstance(){\n        if(instance == null){\n            instance = new LazySimpleSingletion();\n        }\n        return instance;\n    }\n}
    \n

    双重检查

    优点:性能高了,线程安全了
    缺点:可读性难度加大,不够优雅

    \n
    public class LazyDoubleCheckSingleton {\n    private volatile static LazyDoubleCheckSingleton instance;\n\n    private LazyDoubleCheckSingleton() {\n    }\n\n    public static LazyDoubleCheckSingleton getInstance() {\n        //检查是否要阻塞\n        if (instance == null) {\n            synchronized (LazyDoubleCheckSingleton.class) {\n                //检查是否要重新创建实例\n                if (instance == null) {\n                    instance = new LazyDoubleCheckSingleton();\n                    //指令重排序的问题\n                }\n            }\n        }\n        return instance;\n    }\n}
    \n

    静态内部类单例

    优点:写法优雅,利用了Java本身语法特点,性能高,避免了内存浪费,不能被反射破坏

    \n
    public class LazyStaticInnerClassSingleton {\n\n    private LazyStaticInnerClassSingleton() {\n        if (LazyHolder.INSTANCE != null) {\n            throw new RuntimeException("不允许非法访问");\n        }\n    }\n\n    private static LazyStaticInnerClassSingleton getInstance() {\n        return LazyHolder.INSTANCE;\n    }\n\n    private static class LazyHolder {\n        private static final LazyStaticInnerClassSingleton INSTANCE = new LazyStaticInnerClassSingleton();\n    }\n\n}
    \n

    注册式单例

    枚举单例

    public enum EnumSingleton {\n    INSTANCE;\n\n    private Object data;\n\n    public Object getData() {\n        return data;\n    }\n\n    public void setData(Object data) {\n        this.data = data;\n    }\n\n    public static EnumSingleton getInstance() {\n        return INSTANCE;\n    }\n}
    \n

    容器化单例

    public class ContainerSingleton {\n\n    private ContainerSingleton() {\n    }\n\n    private static Map<String, Object> ioc = new ConcurrentHashMap<String, Object>();\n\n    public static Object getInstance(String className) {\n        Object instance = null;\n        if (!ioc.containsKey(className)) {\n            try {\n                instance = Class.forName(className).newInstance();\n                ioc.put(className, instance);\n            } catch (Exception e) {\n                e.printStackTrace();\n            }\n            return instance;\n        } else {\n            return ioc.get(className);\n        }\n    }\n\n}
    \n

    序列化单例

    public class SeriableSingleton implements Serializable {\n    \n    public final static SeriableSingleton INSTANCE = new SeriableSingleton();\n\n    private SeriableSingleton() {\n    }\n\n    public static SeriableSingleton getInstance() {\n        return INSTANCE;\n    }\n\n    private Object readResolve() {\n        return INSTANCE;\n    }\n\n}
    \n

    ThreadLocal单例

    public class ThreadLocalSingleton {\n    private static final ThreadLocal<ThreadLocalSingleton> threadLocalInstance =\n            new ThreadLocal<ThreadLocalSingleton>() {\n                @Override\n                protected ThreadLocalSingleton initialValue() {\n                    return new ThreadLocalSingleton();\n                }\n            };\n\n    private ThreadLocalSingleton() {\n    }\n\n    public static ThreadLocalSingleton getInstance() {\n        return threadLocalInstance.get();\n    }\n}
    \n

    破坏单例模式的场景和解决方案

    1、指令重排使懒汉式模式失效

    解决办法:加volatile关键字

    \n

    2、反射

    解决办法:弄一个全局变量标记是否实例化过,如果实例化过,抛异常

    \n
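    下面是按"全局标记 + 构造器抛异常"这个思路写的示意片段(类名 ReflectProofSingleton 为本文假设的演示用名称):

    // 演示用:用静态标记阻止反射重复实例化(类名为假设)
    public class ReflectProofSingleton {

        private static boolean initialized = false;

        private static final ReflectProofSingleton INSTANCE = new ReflectProofSingleton();

        private ReflectProofSingleton() {
            synchronized (ReflectProofSingleton.class) {
                if (initialized) {
                    // 反射再次调用私有构造器时直接抛异常
                    throw new RuntimeException("单例已存在,禁止通过反射创建实例");
                }
                initialized = true;
            }
        }

        public static ReflectProofSingleton getInstance() {
            return INSTANCE;
        }
    }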

    3、克隆

    解决办法:重写克隆方法 clone(),调用时直接返回已经实例化的对象

    \n

    4、序列化

    解决办法:在反序列化时的回调方法 readResolve()中返回单例对象

    \n","categories":[{"name":"设计模式","slug":"设计模式","permalink":"https://hexo.huangge1199.cn/categories/%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F/"}],"tags":[{"name":"学习笔记","slug":"学习笔记","permalink":"https://hexo.huangge1199.cn/tags/%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/"},{"name":"设计模式","slug":"设计模式","permalink":"https://hexo.huangge1199.cn/tags/%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F/"}]},{"title":"docker-compose安装Redis","slug":"iRedisByDC","date":"2022-04-24T08:53:43.000Z","updated":"2022-09-22T07:39:52.915Z","comments":true,"path":"/post/iRedisByDC/","link":"","excerpt":"","content":"

    1、拉取镜像

    执行下面的命令拉取redis的docker镜像

    \n
    docker pull redis
    \n

    \"\"

    \n

    2、编写docker-compose.yml文件

    内容如下:

    \n
    version: '3'\nservices:\n  redis:\n    restart: always\n    image: redis\n    container_name: redis\n    ports:\n      - 50020:6379\n    environment:\n      TZ: Asia/Shanghai\n    volumes:\n      - ./data:/data\n      - ./conf/redis.conf:/etc/redis.conf\n    privileged: true
    \n

    3、创建目录文件

    根据docker-compose.yml文件创建对应目录文件

    \n
    pwd\nmkdir data\nmkdir conf\nll
    \n

    \"\"

    \n

    4、编写Redis的配置文件

    在conf目录下创建redis.conf文件,文件内容如下:

    \n
    # Redis configuration file example.\n#\n# Note that in order to read the configuration file, Redis must be\n# started with the file path as first argument:\n#\n# ./redis-server /path/to/redis.conf\n\n# Note on units: when memory size is needed, it is possible to specify\n# it in the usual form of 1k 5GB 4M and so forth:\n#\n# 1k => 1000 bytes\n# 1kb => 1024 bytes\n# 1m => 1000000 bytes\n# 1mb => 1024*1024 bytes\n# 1g => 1000000000 bytes\n# 1gb => 1024*1024*1024 bytes\n#\n# units are case insensitive so 1GB 1Gb 1gB are all the same.\n\n################################## INCLUDES ###################################\n\n# Include one or more other config files here.  This is useful if you\n# have a standard template that goes to all Redis servers but also need\n# to customize a few per-server settings.  Include files can include\n# other files, so use this wisely.\n#\n# Note that option "include" won't be rewritten by command "CONFIG REWRITE"\n# from admin or Redis Sentinel. Since Redis always uses the last processed\n# line as value of a configuration directive, you'd better put includes\n# at the beginning of this file to avoid overwriting config change at runtime.\n#\n# If instead you are interested in using includes to override configuration\n# options, it is better to use include as the last line.\n#\n# include /path/to/local.conf\n# include /path/to/other.conf\n\n################################## MODULES #####################################\n\n# Load modules at startup. If the server is not able to load modules\n# it will abort. It is possible to use multiple loadmodule directives.\n#\n# loadmodule /path/to/my_module.so\n# loadmodule /path/to/other_module.so\n\n################################## NETWORK #####################################\n\n# By default, if no "bind" configuration directive is specified, Redis listens\n# for connections from all available network interfaces on the host machine.\n# It is possible to listen to just one or multiple selected interfaces using\n# the "bind" configuration directive, followed by one or more IP addresses.\n# Each address can be prefixed by "-", which means that redis will not fail to\n# start if the address is not available. Being not available only refers to\n# addresses that does not correspond to any network interfece. Addresses that\n# are already in use will always fail, and unsupported protocols will always BE\n# silently skipped.\n#\n# Examples:\n#\n# bind 192.168.1.100 10.0.0.1     # listens on two specific IPv4 addresses\n# bind 127.0.0.1 ::1              # listens on loopback IPv4 and IPv6\n# bind * -::*                     # like the default, all available interfaces\n#\n# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the\n# internet, binding to all the interfaces is dangerous and will expose the\n# instance to everybody on the internet. 
So by default we uncomment the\n# following bind directive, that will force Redis to listen only on the\n# IPv4 and IPv6 (if available) loopback interface addresses (this means Redis\n# will only be able to accept client connections from the same host that it is\n# running on).\n#\n# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES\n# JUST COMMENT OUT THE FOLLOWING LINE.\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n# bind 127.0.0.1 -::1\n\n# Protected mode is a layer of security protection, in order to avoid that\n# Redis instances left open on the internet are accessed and exploited.\n#\n# When protected mode is on and if:\n#\n# 1) The server is not binding explicitly to a set of addresses using the\n#    "bind" directive.\n# 2) No password is configured.\n#\n# The server only accepts connections from clients connecting from the\n# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain\n# sockets.\n#\n# By default protected mode is enabled. You should disable it only if\n# you are sure you want clients from other hosts to connect to Redis\n# even if no authentication is configured, nor a specific set of interfaces\n# are explicitly listed using the "bind" directive.\nprotected-mode no\n\n# Accept connections on the specified port, default is 6379 (IANA #815344).\n# If port 0 is specified Redis will not listen on a TCP socket.\nport 6379\n\n# TCP listen() backlog.\n#\n# In high requests-per-second environments you need a high backlog in order\n# to avoid slow clients connection issues. Note that the Linux kernel\n# will silently truncate it to the value of /proc/sys/net/core/somaxconn so\n# make sure to raise both the value of somaxconn and tcp_max_syn_backlog\n# in order to get the desired effect.\ntcp-backlog 511\n\n# Unix socket.\n#\n# Specify the path for the Unix socket that will be used to listen for\n# incoming connections. There is no default, so Redis will not listen\n# on a unix socket when not specified.\n#\n# unixsocket /run/redis.sock\n# unixsocketperm 700\n\n# Close the connection after a client is idle for N seconds (0 to disable)\ntimeout 0\n\n# TCP keepalive.\n#\n# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence\n# of communication. This is useful for two reasons:\n#\n# 1) Detect dead peers.\n# 2) Force network equipment in the middle to consider the connection to be\n#    alive.\n#\n# On Linux, the specified value (in seconds) is the period used to send ACKs.\n# Note that to close the connection the double of the time is needed.\n# On other kernels the period depends on the kernel configuration.\n#\n# A reasonable value for this option is 300 seconds, which is the new\n# Redis default starting with Redis 3.2.1.\ntcp-keepalive 300\n\n################################# TLS/SSL #####################################\n\n# By default, TLS/SSL is disabled. To enable it, the "tls-port" configuration\n# directive can be used to define TLS-listening ports. To enable TLS on the\n# default port, use:\n#\n# port 0\n# tls-port 6379\n\n# Configure a X.509 certificate and private key to use for authenticating the\n# server to connected clients, masters or cluster peers.  
These files should be\n# PEM formatted.\n#\n# tls-cert-file redis.crt \n# tls-key-file redis.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-key-file-pass secret\n\n# Normally Redis uses the same certificate for both server functions (accepting\n# connections) and client functions (replicating from a master, establishing\n# cluster bus connections, etc.).\n#\n# Sometimes certificates are issued with attributes that designate them as\n# client-only or server-only certificates. In that case it may be desired to use\n# different certificates for incoming (server) and outgoing (client)\n# connections. To do that, use the following directives:\n#\n# tls-client-cert-file client.crt\n# tls-client-key-file client.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-client-key-file-pass secret\n\n# Configure a DH parameters file to enable Diffie-Hellman (DH) key exchange:\n#\n# tls-dh-params-file redis.dh\n\n# Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL\n# clients and peers.  Redis requires an explicit configuration of at least one\n# of these, and will not implicitly use the system wide configuration.\n#\n# tls-ca-cert-file ca.crt\n# tls-ca-cert-dir /etc/ssl/certs\n\n# By default, clients (including replica servers) on a TLS port are required\n# to authenticate using valid client side certificates.\n#\n# If "no" is specified, client certificates are not required and not accepted.\n# If "optional" is specified, client certificates are accepted and must be\n# valid if provided, but are not required.\n#\n# tls-auth-clients no\n# tls-auth-clients optional\n\n# By default, a Redis replica does not attempt to establish a TLS connection\n# with its master.\n#\n# Use the following directive to enable TLS on replication links.\n#\n# tls-replication yes\n\n# By default, the Redis Cluster bus uses a plain TCP connection. To enable\n# TLS for the bus protocol, use the following directive:\n#\n# tls-cluster yes\n\n# By default, only TLSv1.2 and TLSv1.3 are enabled and it is highly recommended\n# that older formally deprecated versions are kept disabled to reduce the attack surface.\n# You can explicitly specify TLS versions to support.\n# Allowed values are case insensitive and include "TLSv1", "TLSv1.1", "TLSv1.2",\n# "TLSv1.3" (OpenSSL >= 1.1.1) or any combination.\n# To enable only TLSv1.2 and TLSv1.3, use:\n#\n# tls-protocols "TLSv1.2 TLSv1.3"\n\n# Configure allowed ciphers.  See the ciphers(1ssl) manpage for more information\n# about the syntax of this string.\n#\n# Note: this configuration applies only to <= TLSv1.2.\n#\n# tls-ciphers DEFAULT:!MEDIUM\n\n# Configure allowed TLSv1.3 ciphersuites.  See the ciphers(1ssl) manpage for more\n# information about the syntax of this string, and specifically for TLSv1.3\n# ciphersuites.\n#\n# tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256\n\n# When choosing a cipher, use the server's preference instead of the client\n# preference. By default, the server follows the client's preference.\n#\n# tls-prefer-server-ciphers yes\n\n# By default, TLS session caching is enabled to allow faster and less expensive\n# reconnections by clients that support it. Use the following directive to disable\n# caching.\n#\n# tls-session-caching no\n\n# Change the default number of TLS sessions cached. A zero value sets the cache\n# to unlimited size. 
The default size is 20480.\n#\n# tls-session-cache-size 5000\n\n# Change the default timeout of cached TLS sessions. The default timeout is 300\n# seconds.\n#\n# tls-session-cache-timeout 60\n\n################################# GENERAL #####################################\n\n# By default Redis does not run as a daemon. Use 'yes' if you need it.\n# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.\n# When Redis is supervised by upstart or systemd, this parameter has no impact.\ndaemonize no\n\n# If you run Redis from upstart or systemd, Redis can interact with your\n# supervision tree. Options:\n#   supervised no      - no supervision interaction\n#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode\n#                        requires "expect stop" in your upstart job config\n#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET\n#                        on startup, and updating Redis status on a regular\n#                        basis.\n#   supervised auto    - detect upstart or systemd method based on\n#                        UPSTART_JOB or NOTIFY_SOCKET environment variables\n# Note: these supervision methods only signal "process is ready."\n#       They do not enable continuous pings back to your supervisor.\n#\n# The default is "no". To run under upstart/systemd, you can simply uncomment\n# the line below:\n#\n# supervised auto\n\n# If a pid file is specified, Redis writes it where specified at startup\n# and removes it at exit.\n#\n# When the server runs non daemonized, no pid file is created if none is\n# specified in the configuration. When the server is daemonized, the pid file\n# is used even if not specified, defaulting to "/var/run/redis.pid".\n#\n# Creating a pid file is best effort: if Redis is not able to create it\n# nothing bad happens, the server will start and run normally.\n#\n# Note that on modern Linux systems "/run/redis.pid" is more conforming\n# and should be used instead.\npidfile /var/run/redis_6379.pid\n\n# Specify the server verbosity level.\n# This can be one of:\n# debug (a lot of information, useful for development/testing)\n# verbose (many rarely useful info, but not a mess like the debug level)\n# notice (moderately verbose, what you want in production probably)\n# warning (only very important / critical messages are logged)\nloglevel notice\n\n# Specify the log file name. Also the empty string can be used to force\n# Redis to log on the standard output. Note that if you use standard\n# output for logging but daemonize, logs will be sent to /dev/null\nlogfile ""\n\n# To enable logging to the system logger, just set 'syslog-enabled' to yes,\n# and optionally update the other syslog parameters to suit your needs.\n# syslog-enabled no\n\n# Specify the syslog identity.\n# syslog-ident redis\n\n# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.\n# syslog-facility local0\n\n# To disable the built in crash log, which will possibly produce cleaner core\n# dumps when they are needed, uncomment the following:\n#\n# crash-log-enabled no\n\n# To disable the fast memory check that's run as part of the crash log, which\n# will possibly let redis terminate sooner, uncomment the following:\n#\n# crash-memcheck-enabled no\n\n# Set the number of databases. 
The default database is DB 0, you can select\n# a different one on a per-connection basis using SELECT <dbid> where\n# dbid is a number between 0 and 'databases'-1\ndatabases 16\n\n# By default Redis shows an ASCII art logo only when started to log to the\n# standard output and if the standard output is a TTY and syslog logging is\n# disabled. Basically this means that normally a logo is displayed only in\n# interactive sessions.\n#\n# However it is possible to force the pre-4.0 behavior and always show a\n# ASCII art logo in startup logs by setting the following option to yes.\nalways-show-logo no\n\n# By default, Redis modifies the process title (as seen in 'top' and 'ps') to\n# provide some runtime information. It is possible to disable this and leave\n# the process name as executed by setting the following to no.\nset-proc-title yes\n\n# When changing the process title, Redis uses the following template to construct\n# the modified title.\n#\n# Template variables are specified in curly brackets. The following variables are\n# supported:\n#\n# {title}           Name of process as executed if parent, or type of child process.\n# {listen-addr}     Bind address or '*' followed by TCP or TLS port listening on, or\n#                   Unix socket if only that's available.\n# {server-mode}     Special mode, i.e. "[sentinel]" or "[cluster]".\n# {port}            TCP port listening on, or 0.\n# {tls-port}        TLS port listening on, or 0.\n# {unixsocket}      Unix domain socket listening on, or "".\n# {config-file}     Name of configuration file used.\n#\nproc-title-template "{title} {listen-addr} {server-mode}"\n\n################################ SNAPSHOTTING  ################################\n\n# Save the DB to disk.\n#\n# save <seconds> <changes>\n#\n# Redis will save the DB if both the given number of seconds and the given\n# number of write operations against the DB occurred.\n#\n# Snapshotting can be completely disabled with a single empty string argument\n# as in following example:\n#\n# save ""\n#\n# Unless specified otherwise, by default Redis will save the DB:\n#   * After 3600 seconds (an hour) if at least 1 key changed\n#   * After 300 seconds (5 minutes) if at least 100 keys changed\n#   * After 60 seconds if at least 10000 keys changed\n#\n# You can set these explicitly by uncommenting the three following lines.\n#\n# save 3600 1\n# save 300 100\n# save 60 10000\n\n# By default Redis will stop accepting writes if RDB snapshots are enabled\n# (at least one save point) and the latest background save failed.\n# This will make the user aware (in a hard way) that data is not persisting\n# on disk properly, otherwise chances are that no one will notice and some\n# disaster will happen.\n#\n# If the background saving process will start working again Redis will\n# automatically allow writes again.\n#\n# However if you have setup your proper monitoring of the Redis server\n# and persistence, you may want to disable this feature so that Redis will\n# continue to work as usual even if there are problems with disk,\n# permissions, and so forth.\nstop-writes-on-bgsave-error yes\n\n# Compress string objects using LZF when dump .rdb databases?\n# By default compression is enabled as it's almost always a win.\n# If you want to save some CPU in the saving child set it to 'no' but\n# the dataset will likely be bigger if you have compressible values or keys.\nrdbcompression yes\n\n# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.\n# This makes the format more resistant to 
corruption but there is a performance\n# hit to pay (around 10%) when saving and loading RDB files, so you can disable it\n# for maximum performances.\n#\n# RDB files created with checksum disabled have a checksum of zero that will\n# tell the loading code to skip the check.\nrdbchecksum yes\n\n# Enables or disables full sanitation checks for ziplist and listpack etc when\n# loading an RDB or RESTORE payload. This reduces the chances of a assertion or\n# crash later on while processing commands.\n# Options:\n#   no         - Never perform full sanitation\n#   yes        - Always perform full sanitation\n#   clients    - Perform full sanitation only for user connections.\n#                Excludes: RDB files, RESTORE commands received from the master\n#                connection, and client connections which have the\n#                skip-sanitize-payload ACL flag.\n# The default should be 'clients' but since it currently affects cluster\n# resharding via MIGRATE, it is temporarily set to 'no' by default.\n#\n# sanitize-dump-payload no\n\n# The filename where to dump the DB\ndbfilename dump.rdb\n\n# Remove RDB files used by replication in instances without persistence\n# enabled. By default this option is disabled, however there are environments\n# where for regulations or other security concerns, RDB files persisted on\n# disk by masters in order to feed replicas, or stored on disk by replicas\n# in order to load them for the initial synchronization, should be deleted\n# ASAP. Note that this option ONLY WORKS in instances that have both AOF\n# and RDB persistence disabled, otherwise is completely ignored.\n#\n# An alternative (and sometimes better) way to obtain the same effect is\n# to use diskless replication on both master and replicas instances. However\n# in the case of replicas, diskless is not always an option.\nrdb-del-sync-files no\n\n# The working directory.\n#\n# The DB will be written inside this directory, with the filename specified\n# above using the 'dbfilename' configuration directive.\n#\n# The Append Only File will also be created inside this directory.\n#\n# Note that you must specify a directory here, not a file name.\ndir ./\n\n################################# REPLICATION #################################\n\n# Master-Replica replication. Use replicaof to make a Redis instance a copy of\n# another Redis server. A few things to understand ASAP about Redis replication.\n#\n#   +------------------+      +---------------+\n#   |      Master      | ---> |    Replica    |\n#   | (receive writes) |      |  (exact copy) |\n#   +------------------+      +---------------+\n#\n# 1) Redis replication is asynchronous, but you can configure a master to\n#    stop accepting writes if it appears to be not connected with at least\n#    a given number of replicas.\n# 2) Redis replicas are able to perform a partial resynchronization with the\n#    master if the replication link is lost for a relatively small amount of\n#    time. You may want to configure the replication backlog size (see the next\n#    sections of this file) with a sensible value depending on your needs.\n# 3) Replication is automatic and does not need user intervention. 
After a\n#    network partition replicas automatically try to reconnect to masters\n#    and resynchronize with them.\n#\n# replicaof <masterip> <masterport>\n\n# If the master is password protected (using the "requirepass" configuration\n# directive below) it is possible to tell the replica to authenticate before\n# starting the replication synchronization process, otherwise the master will\n# refuse the replica request.\n#\n# masterauth <master-password>\n#\n# However this is not enough if you are using Redis ACLs (for Redis version\n# 6 or greater), and the default user is not capable of running the PSYNC\n# command and/or other commands needed for replication. In this case it's\n# better to configure a special user to use with replication, and specify the\n# masteruser configuration as such:\n#\n# masteruser <username>\n#\n# When masteruser is specified, the replica will authenticate against its\n# master using the new AUTH form: AUTH <username> <password>.\n\n# When a replica loses its connection with the master, or when the replication\n# is still in progress, the replica can act in two different ways:\n#\n# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will\n#    still reply to client requests, possibly with out of date data, or the\n#    data set may just be empty if this is the first synchronization.\n#\n# 2) If replica-serve-stale-data is set to 'no' the replica will reply with\n#    an error "SYNC with master in progress" to all commands except:\n#    INFO, REPLICAOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,\n#    UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,\n#    HOST and LATENCY.\n#\nreplica-serve-stale-data yes\n\n# You can configure a replica instance to accept writes or not. Writing against\n# a replica instance may be useful to store some ephemeral data (because data\n# written on a replica will be easily deleted after resync with the master) but\n# may also cause problems if clients are writing to it because of a\n# misconfiguration.\n#\n# Since Redis 2.6 by default replicas are read-only.\n#\n# Note: read only replicas are not designed to be exposed to untrusted clients\n# on the internet. It's just a protection layer against misuse of the instance.\n# Still a read only replica exports by default all the administrative commands\n# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve\n# security of read only replicas using 'rename-command' to shadow all the\n# administrative / dangerous commands.\nreplica-read-only yes\n\n# Replication SYNC strategy: disk or socket.\n#\n# New replicas and reconnecting replicas that are not able to continue the\n# replication process just receiving differences, need to do what is called a\n# "full synchronization". An RDB file is transmitted from the master to the\n# replicas.\n#\n# The transmission can happen in two different ways:\n#\n# 1) Disk-backed: The Redis master creates a new process that writes the RDB\n#                 file on disk. Later the file is transferred by the parent\n#                 process to the replicas incrementally.\n# 2) Diskless: The Redis master creates a new process that directly writes the\n#              RDB file to replica sockets, without touching the disk at all.\n#\n# With disk-backed replication, while the RDB file is generated, more replicas\n# can be queued and served with the RDB file as soon as the current child\n# producing the RDB file finishes its work. 
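\n#\n# As a quick illustration of the replication settings above (the address and\n# password below are placeholders, not values taken from this file), the same\n# setup can also be applied to a running instance with redis-cli:\n#\n#   redis-cli REPLICAOF 192.168.1.10 6379        # start replicating that master\n#   redis-cli CONFIG SET masterauth "masterpass" # if the master sets requirepass\n#   redis-cli INFO replication                   # check role, link status and offsets\n#   redis-cli REPLICAOF NO ONE                   # promote this replica back to a master\n# 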
With diskless replication instead\n# once the transfer starts, new replicas arriving will be queued and a new\n# transfer will start when the current one terminates.\n#\n# When diskless replication is used, the master waits a configurable amount of\n# time (in seconds) before starting the transfer in the hope that multiple\n# replicas will arrive and the transfer can be parallelized.\n#\n# With slow disks and fast (large bandwidth) networks, diskless replication\n# works better.\nrepl-diskless-sync no\n\n# When diskless replication is enabled, it is possible to configure the delay\n# the server waits in order to spawn the child that transfers the RDB via socket\n# to the replicas.\n#\n# This is important since once the transfer starts, it is not possible to serve\n# new replicas arriving, that will be queued for the next RDB transfer, so the\n# server waits a delay in order to let more replicas arrive.\n#\n# The delay is specified in seconds, and by default is 5 seconds. To disable\n# it entirely just set it to 0 seconds and the transfer will start ASAP.\nrepl-diskless-sync-delay 5\n\n# -----------------------------------------------------------------------------\n# WARNING: RDB diskless load is experimental. Since in this setup the replica\n# does not immediately store an RDB on disk, it may cause data loss during\n# failovers. RDB diskless load + Redis modules not handling I/O reads may also\n# cause Redis to abort in case of I/O errors during the initial synchronization\n# stage with the master. Use only if you know what you are doing.\n# -----------------------------------------------------------------------------\n#\n# Replica can load the RDB it reads from the replication link directly from the\n# socket, or store the RDB to a file and read that file after it was completely\n# received from the master.\n#\n# In many cases the disk is slower than the network, and storing and loading\n# the RDB file may increase replication time (and even increase the master's\n# Copy on Write memory and salve buffers).\n# However, parsing the RDB file directly from the socket may mean that we have\n# to flush the contents of the current database before the full rdb was\n# received. For this reason we have the following options:\n#\n# "disabled"    - Don't use diskless load (store the rdb file to the disk first)\n# "on-empty-db" - Use diskless load only when it is completely safe.\n# "swapdb"      - Keep a copy of the current db contents in RAM while parsing\n#                 the data directly from the socket. note that this requires\n#                 sufficient memory, if you don't have it, you risk an OOM kill.\nrepl-diskless-load disabled\n\n# Replicas send PINGs to server in a predefined interval. It's possible to\n# change this interval with the repl_ping_replica_period option. The default\n# value is 10 seconds.\n#\n# repl-ping-replica-period 10\n\n# The following option sets the replication timeout for:\n#\n# 1) Bulk transfer I/O during SYNC, from the point of view of replica.\n# 2) Master timeout from the point of view of replicas (data, pings).\n# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).\n#\n# It is important to make sure that this value is greater than the value\n# specified for repl-ping-replica-period otherwise a timeout will be detected\n# every time there is low traffic between the master and the replica. 
The default\n# value is 60 seconds.\n#\n# repl-timeout 60\n\n# Disable TCP_NODELAY on the replica socket after SYNC?\n#\n# If you select "yes" Redis will use a smaller number of TCP packets and\n# less bandwidth to send data to replicas. But this can add a delay for\n# the data to appear on the replica side, up to 40 milliseconds with\n# Linux kernels using a default configuration.\n#\n# If you select "no" the delay for data to appear on the replica side will\n# be reduced but more bandwidth will be used for replication.\n#\n# By default we optimize for low latency, but in very high traffic conditions\n# or when the master and replicas are many hops away, turning this to "yes" may\n# be a good idea.\nrepl-disable-tcp-nodelay no\n\n# Set the replication backlog size. The backlog is a buffer that accumulates\n# replica data when replicas are disconnected for some time, so that when a\n# replica wants to reconnect again, often a full resync is not needed, but a\n# partial resync is enough, just passing the portion of data the replica\n# missed while disconnected.\n#\n# The bigger the replication backlog, the longer the replica can endure the\n# disconnect and later be able to perform a partial resynchronization.\n#\n# The backlog is only allocated if there is at least one replica connected.\n#\n# repl-backlog-size 1mb\n\n# After a master has no connected replicas for some time, the backlog will be\n# freed. The following option configures the amount of seconds that need to\n# elapse, starting from the time the last replica disconnected, for the backlog\n# buffer to be freed.\n#\n# Note that replicas never free the backlog for timeout, since they may be\n# promoted to masters later, and should be able to correctly "partially\n# resynchronize" with other replicas: hence they should always accumulate backlog.\n#\n# A value of 0 means to never release the backlog.\n#\n# repl-backlog-ttl 3600\n\n# The replica priority is an integer number published by Redis in the INFO\n# output. It is used by Redis Sentinel in order to select a replica to promote\n# into a master if the master is no longer working correctly.\n#\n# A replica with a low priority number is considered better for promotion, so\n# for instance if there are three replicas with priority 10, 100, 25 Sentinel\n# will pick the one with priority 10, that is the lowest.\n#\n# However a special priority of 0 marks the replica as not able to perform the\n# role of master, so a replica with priority of 0 will never be selected by\n# Redis Sentinel for promotion.\n#\n# By default the priority is 100.\nreplica-priority 100\n\n# -----------------------------------------------------------------------------\n# By default, Redis Sentinel includes all replicas in its reports. A replica\n# can be excluded from Redis Sentinel's announcements. An unannounced replica\n# will be ignored by the 'sentinel replicas <master>' command and won't be\n# exposed to Redis Sentinel's clients.\n#\n# This option does not change the behavior of replica-priority. Even with\n# replica-announced set to 'no', the replica can be promoted to master. 
To\n# prevent this behavior, set replica-priority to 0.\n#\n# replica-announced yes\n\n# It is possible for a master to stop accepting writes if there are less than\n# N replicas connected, having a lag less or equal than M seconds.\n#\n# The N replicas need to be in "online" state.\n#\n# The lag in seconds, that must be <= the specified value, is calculated from\n# the last ping received from the replica, that is usually sent every second.\n#\n# This option does not GUARANTEE that N replicas will accept the write, but\n# will limit the window of exposure for lost writes in case not enough replicas\n# are available, to the specified number of seconds.\n#\n# For example to require at least 3 replicas with a lag <= 10 seconds use:\n#\n# min-replicas-to-write 3\n# min-replicas-max-lag 10\n#\n# Setting one or the other to 0 disables the feature.\n#\n# By default min-replicas-to-write is set to 0 (feature disabled) and\n# min-replicas-max-lag is set to 10.\n\n# A Redis master is able to list the address and port of the attached\n# replicas in different ways. For example the "INFO replication" section\n# offers this information, which is used, among other tools, by\n# Redis Sentinel in order to discover replica instances.\n# Another place where this info is available is in the output of the\n# "ROLE" command of a master.\n#\n# The listed IP address and port normally reported by a replica is\n# obtained in the following way:\n#\n#   IP: The address is auto detected by checking the peer address\n#   of the socket used by the replica to connect with the master.\n#\n#   Port: The port is communicated by the replica during the replication\n#   handshake, and is normally the port that the replica is using to\n#   listen for connections.\n#\n# However when port forwarding or Network Address Translation (NAT) is\n# used, the replica may actually be reachable via different IP and port\n# pairs. The following two options can be used by a replica in order to\n# report to its master a specific set of IP and port, so that both INFO\n# and ROLE will report those values.\n#\n# There is no need to use both the options if you need to override just\n# the port or the IP address.\n#\n# replica-announce-ip 5.5.5.5\n# replica-announce-port 1234\n\n############################### KEYS TRACKING #################################\n\n# Redis implements server assisted support for client side caching of values.\n# This is implemented using an invalidation table that remembers, using\n# a radix key indexed by key name, what clients have which keys. In turn\n# this is used in order to send invalidation messages to clients. Please\n# check this page to understand more about the feature:\n#\n#   https://redis.io/topics/client-side-caching\n#\n# When tracking is enabled for a client, all the read only queries are assumed\n# to be cached: this will force Redis to store information in the invalidation\n# table. When keys are modified, such information is flushed away, and\n# invalidation messages are sent to the clients. However if the workload is\n# heavily dominated by reads, Redis could use more and more memory in order\n# to track the keys fetched by many clients.\n#\n# For this reason it is possible to configure a maximum fill value for the\n# invalidation table. By default it is set to 1M of keys, and once this limit\n# is reached, Redis will start to evict keys in the invalidation table\n# even if they were not modified, just to reclaim memory: this will in turn\n# force the clients to invalidate the cached values. 
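\n#\n# A minimal sketch of how a client opts in to this tracking (the key name is\n# only an example): the connection switches to RESP3, enables tracking, and is\n# then notified when a key it has read gets modified:\n#\n#   HELLO 3\n#   CLIENT TRACKING ON\n#   GET user:1000    # the server now tracks user:1000 for this connection;\n#                    # a later write to it triggers an invalidation push\n#\n# The tracking-table-max-keys directive below caps how many tracked keys the\n# server remembers before it starts evicting entries from that table.\n# 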
Basically the table\n# maximum size is a trade off between the memory you want to spend server\n# side to track information about who cached what, and the ability of clients\n# to retain cached objects in memory.\n#\n# If you set the value to 0, it means there are no limits, and Redis will\n# retain as many keys as needed in the invalidation table.\n# In the "stats" INFO section, you can find information about the number of\n# keys in the invalidation table at every given moment.\n#\n# Note: when key tracking is used in broadcasting mode, no memory is used\n# in the server side so this setting is useless.\n#\n# tracking-table-max-keys 1000000\n\n################################## SECURITY ###################################\n\n# Warning: since Redis is pretty fast, an outside user can try up to\n# 1 million passwords per second against a modern box. This means that you\n# should use very strong passwords, otherwise they will be very easy to break.\n# Note that because the password is really a shared secret between the client\n# and the server, and should not be memorized by any human, the password\n# can be easily a long string from /dev/urandom or whatever, so by using a\n# long and unguessable password no brute force attack will be possible.\n\n# Redis ACL users are defined in the following format:\n#\n#   user <username> ... acl rules ...\n#\n# For example:\n#\n#   user worker +@list +@connection ~jobs:* on >ffa9203c493aa99\n#\n# The special username "default" is used for new connections. If this user\n# has the "nopass" rule, then new connections will be immediately authenticated\n# as the "default" user without the need of any password provided via the\n# AUTH command. Otherwise if the "default" user is not flagged with "nopass"\n# the connections will start in not authenticated state, and will require\n# AUTH (or the HELLO command AUTH option) in order to be authenticated and\n# start to work.\n#\n# The ACL rules that describe what a user can do are the following:\n#\n#  on           Enable the user: it is possible to authenticate as this user.\n#  off          Disable the user: it's no longer possible to authenticate\n#               with this user, however the already authenticated connections\n#               will still work.\n#  skip-sanitize-payload    RESTORE dump-payload sanitation is skipped.\n#  sanitize-payload         RESTORE dump-payload is sanitized (default).\n#  +<command>   Allow the execution of that command\n#  -<command>   Disallow the execution of that command\n#  +@<category> Allow the execution of all the commands in such category\n#               with valid categories are like @admin, @set, @sortedset, ...\n#               and so forth, see the full list in the server.c file where\n#               the Redis command table is described and defined.\n#               The special category @all means all the commands, but currently\n#               present in the server, and that will be loaded in the future\n#               via modules.\n#  +<command>|subcommand    Allow a specific subcommand of an otherwise\n#                           disabled command. Note that this form is not\n#                           allowed as negative like -DEBUG|SEGFAULT, but\n#                           only additive starting with "+".\n#  allcommands  Alias for +@all. 
Note that it implies the ability to execute\n#               all the future commands loaded via the modules system.\n#  nocommands   Alias for -@all.\n#  ~<pattern>   Add a pattern of keys that can be mentioned as part of\n#               commands. For instance ~* allows all the keys. The pattern\n#               is a glob-style pattern like the one of KEYS.\n#               It is possible to specify multiple patterns.\n#  allkeys      Alias for ~*\n#  resetkeys    Flush the list of allowed keys patterns.\n#  &<pattern>   Add a glob-style pattern of Pub/Sub channels that can be\n#               accessed by the user. It is possible to specify multiple channel\n#               patterns.\n#  allchannels  Alias for &*\n#  resetchannels            Flush the list of allowed channel patterns.\n#  ><password>  Add this password to the list of valid password for the user.\n#               For example >mypass will add "mypass" to the list.\n#               This directive clears the "nopass" flag (see later).\n#  <<password>  Remove this password from the list of valid passwords.\n#  nopass       All the set passwords of the user are removed, and the user\n#               is flagged as requiring no password: it means that every\n#               password will work against this user. If this directive is\n#               used for the default user, every new connection will be\n#               immediately authenticated with the default user without\n#               any explicit AUTH command required. Note that the "resetpass"\n#               directive will clear this condition.\n#  resetpass    Flush the list of allowed passwords. Moreover removes the\n#               "nopass" status. After "resetpass" the user has no associated\n#               passwords and there is no way to authenticate without adding\n#               some password (or setting it as "nopass" later).\n#  reset        Performs the following actions: resetpass, resetkeys, off,\n#               -@all. The user returns to the same state it has immediately\n#               after its creation.\n#\n# ACL rules can be specified in any order: for instance you can start with\n# passwords, then flags, or key patterns. However note that the additive\n# and subtractive rules will CHANGE MEANING depending on the ordering.\n# For instance see the following example:\n#\n#   user alice on +@all -DEBUG ~* >somepassword\n#\n# This will allow "alice" to use all the commands with the exception of the\n# DEBUG command, since +@all added all the commands to the set of the commands\n# alice can use, and later DEBUG was removed. However if we invert the order\n# of two ACL rules the result will be different:\n#\n#   user alice on -DEBUG +@all ~* >somepassword\n#\n# Now DEBUG was removed when alice had yet no commands in the set of allowed\n# commands, later all the commands are added, so the user will be able to\n# execute everything.\n#\n# Basically ACL rules are processed left-to-right.\n#\n# For more information about ACL configuration please refer to\n# the Redis web site at https://redis.io/topics/acl\n\n# ACL LOG\n#\n# The ACL Log tracks failed commands and authentication events associated\n# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked \n# by ACLs. The ACL Log is stored in memory. You can reclaim memory with \n# ACL LOG RESET. 
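\n#\n# For illustration, here is how the rules above combine in practice (user name,\n# password and key pattern are placeholders): a read-only reporting user that\n# can only touch keys starting with "report:" could be declared either in this\n# file or at runtime:\n#\n#   user reporter on >reportpass ~report:* +@read\n#\n#   redis-cli ACL SETUSER reporter on '>reportpass' '~report:*' +@read\n#   redis-cli ACL GETUSER reporter      # inspect the resulting rules\n#   redis-cli AUTH reporter reportpass  # authenticate as that user\n# 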
Define the maximum entry length of the ACL Log below.\nacllog-max-len 128\n\n# Using an external ACL file\n#\n# Instead of configuring users here in this file, it is possible to use\n# a stand-alone file just listing users. The two methods cannot be mixed:\n# if you configure users here and at the same time you activate the external\n# ACL file, the server will refuse to start.\n#\n# The format of the external ACL user file is exactly the same as the\n# format that is used inside redis.conf to describe users.\n#\n# aclfile /etc/redis/users.acl\n\n# IMPORTANT NOTE: starting with Redis 6 "requirepass" is just a compatibility\n# layer on top of the new ACL system. The option effect will be just setting\n# the password for the default user. Clients will still authenticate using\n# AUTH <password> as usually, or more explicitly with AUTH default <password>\n# if they follow the new protocol: both will work.\n#\n# The requirepass is not compatable with aclfile option and the ACL LOAD\n# command, these will cause requirepass to be ignored.\n#\nrequirepass huangge1199\n\n# New users are initialized with restrictive permissions by default, via the\n# equivalent of this ACL rule 'off resetkeys -@all'. Starting with Redis 6.2, it\n# is possible to manage access to Pub/Sub channels with ACL rules as well. The\n# default Pub/Sub channels permission if new users is controlled by the \n# acl-pubsub-default configuration directive, which accepts one of these values:\n#\n# allchannels: grants access to all Pub/Sub channels\n# resetchannels: revokes access to all Pub/Sub channels\n#\n# To ensure backward compatibility while upgrading Redis 6.0, acl-pubsub-default\n# defaults to the 'allchannels' permission.\n#\n# Future compatibility note: it is very likely that in a future version of Redis\n# the directive's default of 'allchannels' will be changed to 'resetchannels' in\n# order to provide better out-of-the-box Pub/Sub security. Therefore, it is\n# recommended that you explicitly define Pub/Sub permissions for all users\n# rather then rely on implicit default values. Once you've set explicit\n# Pub/Sub for all existing users, you should uncomment the following line.\n#\n# acl-pubsub-default resetchannels\n\n# Command renaming (DEPRECATED).\n#\n# ------------------------------------------------------------------------\n# WARNING: avoid using this option if possible. Instead use ACLs to remove\n# commands from the default user, and put them only in some admin user you\n# create for administrative purposes.\n# ------------------------------------------------------------------------\n#\n# It is possible to change the name of dangerous commands in a shared\n# environment. For instance the CONFIG command may be renamed into something\n# hard to guess so that it will still be available for internal-use tools\n# but not available for general clients.\n#\n# Example:\n#\n# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52\n#\n# It is also possible to completely kill a command by renaming it into\n# an empty string:\n#\n# rename-command CONFIG ""\n#\n# Please note that changing the name of commands that are logged into the\n# AOF file or transmitted to replicas may cause problems.\n\n################################### CLIENTS ####################################\n\n# Set the max number of connected clients at the same time. 
By default\n# this limit is set to 10000 clients, however if the Redis server is not\n# able to configure the process file limit to allow for the specified limit\n# the max number of allowed clients is set to the current file limit\n# minus 32 (as Redis reserves a few file descriptors for internal uses).\n#\n# Once the limit is reached Redis will close all the new connections sending\n# an error 'max number of clients reached'.\n#\n# IMPORTANT: When Redis Cluster is used, the max number of connections is also\n# shared with the cluster bus: every node in the cluster will use two\n# connections, one incoming and another outgoing. It is important to size the\n# limit accordingly in case of very large clusters.\n#\n# maxclients 10000\n\n############################## MEMORY MANAGEMENT ################################\n\n# Set a memory usage limit to the specified amount of bytes.\n# When the memory limit is reached Redis will try to remove keys\n# according to the eviction policy selected (see maxmemory-policy).\n#\n# If Redis can't remove keys according to the policy, or if the policy is\n# set to 'noeviction', Redis will start to reply with errors to commands\n# that would use more memory, like SET, LPUSH, and so on, and will continue\n# to reply to read-only commands like GET.\n#\n# This option is usually useful when using Redis as an LRU or LFU cache, or to\n# set a hard memory limit for an instance (using the 'noeviction' policy).\n#\n# WARNING: If you have replicas attached to an instance with maxmemory on,\n# the size of the output buffers needed to feed the replicas are subtracted\n# from the used memory count, so that network problems / resyncs will\n# not trigger a loop where keys are evicted, and in turn the output\n# buffer of replicas is full with DELs of keys evicted triggering the deletion\n# of more keys, and so forth until the database is completely emptied.\n#\n# In short... if you have replicas attached it is suggested that you set a lower\n# limit for maxmemory so that there is some free RAM on the system for replica\n# output buffers (but this is not needed if the policy is 'noeviction').\n#\n# maxmemory <bytes>\n\n# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory\n# is reached. You can select one from the following behaviors:\n#\n# volatile-lru -> Evict using approximated LRU, only keys with an expire set.\n# allkeys-lru -> Evict any key using approximated LRU.\n# volatile-lfu -> Evict using approximated LFU, only keys with an expire set.\n# allkeys-lfu -> Evict any key using approximated LFU.\n# volatile-random -> Remove a random key having an expire set.\n# allkeys-random -> Remove a random key, any key.\n# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)\n# noeviction -> Don't evict anything, just return an error on write operations.\n#\n# LRU means Least Recently Used\n# LFU means Least Frequently Used\n#\n# Both LRU, LFU and volatile-ttl are implemented using approximated\n# randomized algorithms.\n#\n# Note: with any of the above policies, when there are no suitable keys for\n# eviction, Redis will return an error on write operations that require\n# more memory. These are usually commands that create new keys, add data or\n# modify existing keys. 
A few examples are: SET, INCR, HSET, LPUSH, SUNIONSTORE,\n# SORT (due to the STORE argument), and EXEC (if the transaction includes any\n# command that requires memory).\n#\n# The default is:\n#\n# maxmemory-policy noeviction\n\n# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated\n# algorithms (in order to save memory), so you can tune it for speed or\n# accuracy. By default Redis will check five keys and pick the one that was\n# used least recently, you can change the sample size using the following\n# configuration directive.\n#\n# The default of 5 produces good enough results. 10 Approximates very closely\n# true LRU but costs more CPU. 3 is faster but not very accurate.\n#\n# maxmemory-samples 5\n\n# Eviction processing is designed to function well with the default setting.\n# If there is an unusually large amount of write traffic, this value may need to\n# be increased.  Decreasing this value may reduce latency at the risk of \n# eviction processing effectiveness\n#   0 = minimum latency, 10 = default, 100 = process without regard to latency\n#\n# maxmemory-eviction-tenacity 10\n\n# Starting from Redis 5, by default a replica will ignore its maxmemory setting\n# (unless it is promoted to master after a failover or manually). It means\n# that the eviction of keys will be just handled by the master, sending the\n# DEL commands to the replica as keys evict in the master side.\n#\n# This behavior ensures that masters and replicas stay consistent, and is usually\n# what you want, however if your replica is writable, or you want the replica\n# to have a different memory setting, and you are sure all the writes performed\n# to the replica are idempotent, then you may change this default (but be sure\n# to understand what you are doing).\n#\n# Note that since the replica by default does not evict, it may end using more\n# memory than the one set via maxmemory (there are certain buffers that may\n# be larger on the replica, or data structures may sometimes take more memory\n# and so forth). So make sure you monitor your replicas and make sure they\n# have enough memory to never hit a real out-of-memory condition before the\n# master hits the configured maxmemory setting.\n#\n# replica-ignore-maxmemory yes\n\n# Redis reclaims expired keys in two ways: upon access when those keys are\n# found to be expired, and also in background, in what is called the\n# "active expire key". The key space is slowly and interactively scanned\n# looking for expired keys to reclaim, so that it is possible to free memory\n# of keys that are expired and will never be accessed again in a short time.\n#\n# The default effort of the expire cycle will try to avoid having more than\n# ten percent of expired keys still in memory, and will try to avoid consuming\n# more than 25% of total memory and to add latency to the system. However\n# it is possible to increase the expire "effort" that is normally set to\n# "1", to a greater value, up to the value "10". At its maximum value the\n# system will use more CPU, longer cycles (and technically may introduce\n# more latency), and will tolerate less already expired keys still present\n# in the system. It's a tradeoff between memory, CPU and latency.\n#\n# active-expire-effort 1\n\n############################# LAZY FREEING ####################################\n\n# Redis has two primitives to delete keys. One is called DEL and is a blocking\n# deletion of the object. 
It means that the server stops processing new commands\n# in order to reclaim all the memory associated with an object in a synchronous\n# way. If the key deleted is associated with a small object, the time needed\n# in order to execute the DEL command is very small and comparable to most other\n# O(1) or O(log_N) commands in Redis. However if the key is associated with an\n# aggregated value containing millions of elements, the server can block for\n# a long time (even seconds) in order to complete the operation.\n#\n# For the above reasons Redis also offers non blocking deletion primitives\n# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and\n# FLUSHDB commands, in order to reclaim memory in background. Those commands\n# are executed in constant time. Another thread will incrementally free the\n# object in the background as fast as possible.\n#\n# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.\n# It's up to the design of the application to understand when it is a good\n# idea to use one or the other. However the Redis server sometimes has to\n# delete keys or flush the whole database as a side effect of other operations.\n# Specifically Redis deletes objects independently of a user call in the\n# following scenarios:\n#\n# 1) On eviction, because of the maxmemory and maxmemory policy configurations,\n#    in order to make room for new data, without going over the specified\n#    memory limit.\n# 2) Because of expire: when a key with an associated time to live (see the\n#    EXPIRE command) must be deleted from memory.\n# 3) Because of a side effect of a command that stores data on a key that may\n#    already exist. For example the RENAME command may delete the old key\n#    content when it is replaced with another one. Similarly SUNIONSTORE\n#    or SORT with STORE option may delete existing keys. The SET command\n#    itself removes any old content of the specified key in order to replace\n#    it with the specified string.\n# 4) During replication, when a replica performs a full resynchronization with\n#    its master, the content of the whole database is removed in order to\n#    load the RDB file just transferred.\n#\n# In all the above cases the default is to delete objects in a blocking way,\n# like if DEL was called. However you can configure each case specifically\n# in order to instead release memory in a non-blocking way like if UNLINK\n# was called, using the following configuration directives.\n\nlazyfree-lazy-eviction no\nlazyfree-lazy-expire no\nlazyfree-lazy-server-del no\nreplica-lazy-flush no\n\n# It is also possible, for the case when to replace the user code DEL calls\n# with UNLINK calls is not easy, to modify the default behavior of the DEL\n# command to act exactly like UNLINK, using the following configuration\n# directive:\n\nlazyfree-lazy-user-del no\n\n# FLUSHDB, FLUSHALL, and SCRIPT FLUSH support both asynchronous and synchronous\n# deletion, which can be controlled by passing the [SYNC|ASYNC] flags into the\n# commands. 
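\n#\n# As a short illustration of the directives above (the key name is a placeholder):\n#\n#   DEL bigset        # blocking: the reply arrives only after the value is freed\n#   UNLINK bigset     # non-blocking: the key disappears at once, memory is\n#                     # reclaimed by a background thread\n#   FLUSHALL ASYNC    # flush everything, freeing memory in the background\n#\n# Setting lazyfree-lazy-user-del to yes above makes a plain DEL behave like\n# UNLINK, which helps when client code cannot easily be changed.\n# 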
When neither flag is passed, this directive will be used to determine\n# if the data should be deleted asynchronously.\n\nlazyfree-lazy-user-flush no\n\n################################ THREADED I/O #################################\n\n# Redis is mostly single threaded, however there are certain threaded\n# operations such as UNLINK, slow I/O accesses and other things that are\n# performed on side threads.\n#\n# Now it is also possible to handle Redis clients socket reads and writes\n# in different I/O threads. Since especially writing is so slow, normally\n# Redis users use pipelining in order to speed up the Redis performances per\n# core, and spawn multiple instances in order to scale more. Using I/O\n# threads it is possible to easily speedup two times Redis without resorting\n# to pipelining nor sharding of the instance.\n#\n# By default threading is disabled, we suggest enabling it only in machines\n# that have at least 4 or more cores, leaving at least one spare core.\n# Using more than 8 threads is unlikely to help much. We also recommend using\n# threaded I/O only if you actually have performance problems, with Redis\n# instances being able to use a quite big percentage of CPU time, otherwise\n# there is no point in using this feature.\n#\n# So for instance if you have a four cores boxes, try to use 2 or 3 I/O\n# threads, if you have a 8 cores, try to use 6 threads. In order to\n# enable I/O threads use the following configuration directive:\n#\n# io-threads 4\n#\n# Setting io-threads to 1 will just use the main thread as usual.\n# When I/O threads are enabled, we only use threads for writes, that is\n# to thread the write(2) syscall and transfer the client buffers to the\n# socket. However it is also possible to enable threading of reads and\n# protocol parsing using the following configuration directive, by setting\n# it to yes:\n#\n# io-threads-do-reads no\n#\n# Usually threading reads doesn't help much.\n#\n# NOTE 1: This configuration directive cannot be changed at runtime via\n# CONFIG SET. Aso this feature currently does not work when SSL is\n# enabled.\n#\n# NOTE 2: If you want to test the Redis speedup using redis-benchmark, make\n# sure you also run the benchmark itself in threaded mode, using the\n# --threads option to match the number of Redis threads, otherwise you'll not\n# be able to notice the improvements.\n\n############################ KERNEL OOM CONTROL ##############################\n\n# On Linux, it is possible to hint the kernel OOM killer on what processes\n# should be killed first when out of memory.\n#\n# Enabling this feature makes Redis actively control the oom_score_adj value\n# for all its processes, depending on their role. The default scores will\n# attempt to have background child processes killed before all others, and\n# replicas killed before masters.\n#\n# Redis supports three options:\n#\n# no:       Don't make changes to oom-score-adj (default).\n# yes:      Alias to "relative" see below.\n# absolute: Values in oom-score-adj-values are written as is to the kernel.\n# relative: Values are used relative to the initial value of oom_score_adj when\n#           the server starts and are then clamped to a range of -1000 to 1000.\n#           Because typically the initial value is 0, they will often match the\n#           absolute values.\noom-score-adj no\n\n# When oom-score-adj is used, this directive controls the specific values used\n# for master, replica and background child processes. 
Values range -2000 to\n# 2000 (higher means more likely to be killed).\n#\n# Unprivileged processes (not root, and without CAP_SYS_RESOURCE capabilities)\n# can freely increase their value, but not decrease it below its initial\n# settings. This means that setting oom-score-adj to "relative" and setting the\n# oom-score-adj-values to positive values will always succeed.\noom-score-adj-values 0 200 800\n\n\n#################### KERNEL transparent hugepage CONTROL ######################\n\n# Usually the kernel Transparent Huge Pages control is set to "madvise" or\n# or "never" by default (/sys/kernel/mm/transparent_hugepage/enabled), in which\n# case this config has no effect. On systems in which it is set to "always",\n# redis will attempt to disable it specifically for the redis process in order\n# to avoid latency problems specifically with fork(2) and CoW.\n# If for some reason you prefer to keep it enabled, you can set this config to\n# "no" and the kernel global to "always".\n\ndisable-thp yes\n\n############################## APPEND ONLY MODE ###############################\n\n# By default Redis asynchronously dumps the dataset on disk. This mode is\n# good enough in many applications, but an issue with the Redis process or\n# a power outage may result into a few minutes of writes lost (depending on\n# the configured save points).\n#\n# The Append Only File is an alternative persistence mode that provides\n# much better durability. For instance using the default data fsync policy\n# (see later in the config file) Redis can lose just one second of writes in a\n# dramatic event like a server power outage, or a single write if something\n# wrong with the Redis process itself happens, but the operating system is\n# still running correctly.\n#\n# AOF and RDB persistence can be enabled at the same time without problems.\n# If the AOF is enabled on startup Redis will load the AOF, that is the file\n# with the better durability guarantees.\n#\n# Please check https://redis.io/topics/persistence for more information.\n\nappendonly no\n\n# The name of the append only file (default: "appendonly.aof")\n\nappendfilename "appendonly.aof"\n\n# The fsync() call tells the Operating System to actually write data on disk\n# instead of waiting for more data in the output buffer. Some OS will really flush\n# data on disk, some other OS will just try to do it ASAP.\n#\n# Redis supports three different modes:\n#\n# no: don't fsync, just let the OS flush the data when it wants. Faster.\n# always: fsync after every write to the append only log. Slow, Safest.\n# everysec: fsync only one time every second. Compromise.\n#\n# The default is "everysec", as that's usually the right compromise between\n# speed and data safety. 
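\n#\n# A small sketch of enabling AOF on a running instance without a restart; the\n# equivalent static setting is the appendonly directive above:\n#\n#   redis-cli CONFIG SET appendonly yes    # triggers a rewrite to seed the AOF\n#   redis-cli CONFIG SET appendfsync everysec\n#   redis-cli CONFIG REWRITE               # persist the change back into this file\n# 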
It's up to you to understand if you can relax this to\n# "no" that will let the operating system flush the output buffer when\n# it wants, for better performances (but if you can live with the idea of\n# some data loss consider the default persistence mode that's snapshotting),\n# or on the contrary, use "always" that's very slow but a bit safer than\n# everysec.\n#\n# More details please check the following article:\n# http://antirez.com/post/redis-persistence-demystified.html\n#\n# If unsure, use "everysec".\n\n# appendfsync always\nappendfsync everysec\n# appendfsync no\n\n# When the AOF fsync policy is set to always or everysec, and a background\n# saving process (a background save or AOF log background rewriting) is\n# performing a lot of I/O against the disk, in some Linux configurations\n# Redis may block too long on the fsync() call. Note that there is no fix for\n# this currently, as even performing fsync in a different thread will block\n# our synchronous write(2) call.\n#\n# In order to mitigate this problem it's possible to use the following option\n# that will prevent fsync() from being called in the main process while a\n# BGSAVE or BGREWRITEAOF is in progress.\n#\n# This means that while another child is saving, the durability of Redis is\n# the same as "appendfsync none". In practical terms, this means that it is\n# possible to lose up to 30 seconds of log in the worst scenario (with the\n# default Linux settings).\n#\n# If you have latency problems turn this to "yes". Otherwise leave it as\n# "no" that is the safest pick from the point of view of durability.\n\nno-appendfsync-on-rewrite no\n\n# Automatic rewrite of the append only file.\n# Redis is able to automatically rewrite the log file implicitly calling\n# BGREWRITEAOF when the AOF log size grows by the specified percentage.\n#\n# This is how it works: Redis remembers the size of the AOF file after the\n# latest rewrite (if no rewrite has happened since the restart, the size of\n# the AOF at startup is used).\n#\n# This base size is compared to the current size. If the current size is\n# bigger than the specified percentage, the rewrite is triggered. Also\n# you need to specify a minimal size for the AOF file to be rewritten, this\n# is useful to avoid rewriting the AOF file even if the percentage increase\n# is reached but it is still pretty small.\n#\n# Specify a percentage of zero in order to disable the automatic AOF\n# rewrite feature.\n\nauto-aof-rewrite-percentage 100\nauto-aof-rewrite-min-size 64mb\n\n# An AOF file may be found to be truncated at the end during the Redis\n# startup process, when the AOF data gets loaded back into memory.\n# This may happen when the system where Redis is running\n# crashes, especially when an ext4 filesystem is mounted without the\n# data=ordered option (however this can't happen when Redis itself\n# crashes or aborts but the operating system still works correctly).\n#\n# Redis can either exit with an error when this happens, or load as much\n# data as possible (the default now) and start if the AOF file is found\n# to be truncated at the end. The following option controls this behavior.\n#\n# If aof-load-truncated is set to yes, a truncated AOF file is loaded and\n# the Redis server starts emitting a log to inform the user of the event.\n# Otherwise if the option is set to no, the server aborts with an error\n# and refuses to start. 
When the option is set to no, the user requires\n# to fix the AOF file using the "redis-check-aof" utility before to restart\n# the server.\n#\n# Note that if the AOF file will be found to be corrupted in the middle\n# the server will still exit with an error. This option only applies when\n# Redis will try to read more data from the AOF file but not enough bytes\n# will be found.\naof-load-truncated yes\n\n# When rewriting the AOF file, Redis is able to use an RDB preamble in the\n# AOF file for faster rewrites and recoveries. When this option is turned\n# on the rewritten AOF file is composed of two different stanzas:\n#\n#   [RDB file][AOF tail]\n#\n# When loading, Redis recognizes that the AOF file starts with the "REDIS"\n# string and loads the prefixed RDB file, then continues loading the AOF\n# tail.\naof-use-rdb-preamble yes\n\n################################ LUA SCRIPTING  ###############################\n\n# Max execution time of a Lua script in milliseconds.\n#\n# If the maximum execution time is reached Redis will log that a script is\n# still in execution after the maximum allowed time and will start to\n# reply to queries with an error.\n#\n# When a long running script exceeds the maximum execution time only the\n# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be\n# used to stop a script that did not yet call any write commands. The second\n# is the only way to shut down the server in the case a write command was\n# already issued by the script but the user doesn't want to wait for the natural\n# termination of the script.\n#\n# Set it to 0 or a negative value for unlimited execution without warnings.\nlua-time-limit 5000\n\n################################ REDIS CLUSTER  ###############################\n\n# Normal Redis instances can't be part of a Redis Cluster; only nodes that are\n# started as cluster nodes can. In order to start a Redis instance as a\n# cluster node enable the cluster support uncommenting the following:\n#\n# cluster-enabled yes\n\n# Every cluster node has a cluster configuration file. This file is not\n# intended to be edited by hand. It is created and updated by Redis nodes.\n# Every Redis Cluster node requires a different cluster configuration file.\n# Make sure that instances running in the same system do not have\n# overlapping cluster configuration file names.\n#\n# cluster-config-file nodes-6379.conf\n\n# Cluster node timeout is the amount of milliseconds a node must be unreachable\n# for it to be considered in failure state.\n# Most other internal time limits are a multiple of the node timeout.\n#\n# cluster-node-timeout 15000\n\n# A replica of a failing master will avoid to start a failover if its data\n# looks too old.\n#\n# There is no simple way for a replica to actually have an exact measure of\n# its "data age", so the following two checks are performed:\n#\n# 1) If there are multiple replicas able to failover, they exchange messages\n#    in order to try to give an advantage to the replica with the best\n#    replication offset (more data from the master processed).\n#    Replicas will try to get their rank by offset, and apply to the start\n#    of the failover a delay proportional to their rank.\n#\n# 2) Every single replica computes the time of the last interaction with\n#    its master. 
This can be the last ping or command received (if the master\n#    is still in the "connected" state), or the time that elapsed since the\n#    disconnection with the master (if the replication link is currently down).\n#    If the last interaction is too old, the replica will not try to failover\n#    at all.\n#\n# The point "2" can be tuned by user. Specifically a replica will not perform\n# the failover if, since the last interaction with the master, the time\n# elapsed is greater than:\n#\n#   (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period\n#\n# So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor\n# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the\n# replica will not try to failover if it was not able to talk with the master\n# for longer than 310 seconds.\n#\n# A large cluster-replica-validity-factor may allow replicas with too old data to failover\n# a master, while a too small value may prevent the cluster from being able to\n# elect a replica at all.\n#\n# For maximum availability, it is possible to set the cluster-replica-validity-factor\n# to a value of 0, which means, that replicas will always try to failover the\n# master regardless of the last time they interacted with the master.\n# (However they'll always try to apply a delay proportional to their\n# offset rank).\n#\n# Zero is the only value able to guarantee that when all the partitions heal\n# the cluster will always be able to continue.\n#\n# cluster-replica-validity-factor 10\n\n# Cluster replicas are able to migrate to orphaned masters, that are masters\n# that are left without working replicas. This improves the cluster ability\n# to resist to failures as otherwise an orphaned master can't be failed over\n# in case of failure if it has no working replicas.\n#\n# Replicas migrate to orphaned masters only if there are still at least a\n# given number of other working replicas for their old master. This number\n# is the "migration barrier". A migration barrier of 1 means that a replica\n# will migrate only if there is at least 1 other working replica for its master\n# and so forth. It usually reflects the number of replicas you want for every\n# master in your cluster.\n#\n# Default is 1 (replicas migrate only if their masters remain with at least\n# one replica). To disable migration just set it to a very large value or\n# set cluster-allow-replica-migration to 'no'.\n# A value of 0 can be set but is useful only for debugging and dangerous\n# in production.\n#\n# cluster-migration-barrier 1\n\n# Turning off this option allows to use less automatic cluster configuration.\n# It both disables migration to orphaned masters and migration from masters\n# that became empty.\n#\n# Default is 'yes' (allow automatic migrations).\n#\n# cluster-allow-replica-migration yes\n\n# By default Redis Cluster nodes stop accepting queries if they detect there\n# is at least a hash slot uncovered (no available node is serving it).\n# This way if the cluster is partially down (for example a range of hash slots\n# are no longer covered) all the cluster becomes, eventually, unavailable.\n# It automatically returns available as soon as all the slots are covered again.\n#\n# However sometimes you want the subset of the cluster which is working,\n# to continue to accept queries for the part of the key space that is still\n# covered. 
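\n#\n# To tie the cluster options above together: a minimal three-master cluster can\n# be built from nodes that each enable cluster mode (ports and addresses below\n# are placeholders), then joined with redis-cli:\n#\n#   port 7000\n#   cluster-enabled yes\n#   cluster-config-file nodes-7000.conf\n#   cluster-node-timeout 5000\n#\n#   redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \\\n#     --cluster-replicas 0\n# 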
In order to do so, just set the cluster-require-full-coverage\n# option to no.\n#\n# cluster-require-full-coverage yes\n\n# This option, when set to yes, prevents replicas from trying to failover its\n# master during master failures. However the replica can still perform a\n# manual failover, if forced to do so.\n#\n# This is useful in different scenarios, especially in the case of multiple\n# data center operations, where we want one side to never be promoted if not\n# in the case of a total DC failure.\n#\n# cluster-replica-no-failover no\n\n# This option, when set to yes, allows nodes to serve read traffic while the\n# the cluster is in a down state, as long as it believes it owns the slots. \n#\n# This is useful for two cases.  The first case is for when an application \n# doesn't require consistency of data during node failures or network partitions.\n# One example of this is a cache, where as long as the node has the data it\n# should be able to serve it. \n#\n# The second use case is for configurations that don't meet the recommended  \n# three shards but want to enable cluster mode and scale later. A \n# master outage in a 1 or 2 shard configuration causes a read/write outage to the\n# entire cluster without this option set, with it set there is only a write outage.\n# Without a quorum of masters, slot ownership will not change automatically. \n#\n# cluster-allow-reads-when-down no\n\n# In order to setup your cluster make sure to read the documentation\n# available at https://redis.io web site.\n\n########################## CLUSTER DOCKER/NAT support  ########################\n\n# In certain deployments, Redis Cluster nodes address discovery fails, because\n# addresses are NAT-ted or because ports are forwarded (the typical case is\n# Docker and other containers).\n#\n# In order to make Redis Cluster working in such environments, a static\n# configuration where each node knows its public address is needed. The\n# following four options are used for this scope, and are:\n#\n# * cluster-announce-ip\n# * cluster-announce-port\n# * cluster-announce-tls-port\n# * cluster-announce-bus-port\n#\n# Each instructs the node about its address, client ports (for connections\n# without and with TLS) and cluster message bus port. The information is then\n# published in the header of the bus packets so that other nodes will be able to\n# correctly map the address of the node publishing the information.\n#\n# If cluster-tls is set to yes and cluster-announce-tls-port is omitted or set\n# to zero, then cluster-announce-port refers to the TLS port. Note also that\n# cluster-announce-tls-port has no effect if cluster-tls is set to no.\n#\n# If the above options are not used, the normal Redis Cluster auto-detection\n# will be used instead.\n#\n# Note that when remapped, the bus port may not be at the fixed offset of\n# clients port + 10000, so you can specify any port and bus-port depending\n# on how they get remapped. If the bus-port is not set, a fixed offset of\n# 10000 will be used as usual.\n#\n# Example:\n#\n# cluster-announce-ip 10.1.1.5\n# cluster-announce-tls-port 6379\n# cluster-announce-port 0\n# cluster-announce-bus-port 6380\n\n################################## SLOW LOG ###################################\n\n# The Redis Slow Log is a system to log queries that exceeded a specified\n# execution time. 
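\n#\n# Typical usage of the slow log once the thresholds below are set:\n#\n#   redis-cli SLOWLOG GET 10    # show the 10 most recent slow commands\n#   redis-cli SLOWLOG LEN       # number of entries currently stored\n#   redis-cli SLOWLOG RESET     # empty the slow log and reclaim its memory\n# 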
The execution time does not include the I/O operations\n# like talking with the client, sending the reply and so forth,\n# but just the time needed to actually execute the command (this is the only\n# stage of command execution where the thread is blocked and can not serve\n# other requests in the meantime).\n#\n# You can configure the slow log with two parameters: one tells Redis\n# what is the execution time, in microseconds, to exceed in order for the\n# command to get logged, and the other parameter is the length of the\n# slow log. When a new command is logged the oldest one is removed from the\n# queue of logged commands.\n\n# The following time is expressed in microseconds, so 1000000 is equivalent\n# to one second. Note that a negative number disables the slow log, while\n# a value of zero forces the logging of every command.\nslowlog-log-slower-than 10000\n\n# There is no limit to this length. Just be aware that it will consume memory.\n# You can reclaim memory used by the slow log with SLOWLOG RESET.\nslowlog-max-len 128\n\n################################ LATENCY MONITOR ##############################\n\n# The Redis latency monitoring subsystem samples different operations\n# at runtime in order to collect data related to possible sources of\n# latency of a Redis instance.\n#\n# Via the LATENCY command this information is available to the user that can\n# print graphs and obtain reports.\n#\n# The system only logs operations that were performed in a time equal or\n# greater than the amount of milliseconds specified via the\n# latency-monitor-threshold configuration directive. When its value is set\n# to zero, the latency monitor is turned off.\n#\n# By default latency monitoring is disabled since it is mostly not needed\n# if you don't have latency issues, and collecting data has a performance\n# impact, that while very small, can be measured under big load. Latency\n# monitoring can easily be enabled at runtime using the command\n# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.\nlatency-monitor-threshold 0\n\n############################# EVENT NOTIFICATION ##############################\n\n# Redis can notify Pub/Sub clients about events happening in the key space.\n# This feature is documented at https://redis.io/topics/notifications\n#\n# For instance if keyspace events notification is enabled, and a client\n# performs a DEL operation on key "foo" stored in the Database 0, two\n# messages will be published via Pub/Sub:\n#\n# PUBLISH __keyspace@0__:foo del\n# PUBLISH __keyevent@0__:del foo\n#\n# It is possible to select the events that Redis will notify among a set\n# of classes. 
Every class is identified by a single character:\n#\n#  K     Keyspace events, published with __keyspace@<db>__ prefix.\n#  E     Keyevent events, published with __keyevent@<db>__ prefix.\n#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...\n#  $     String commands\n#  l     List commands\n#  s     Set commands\n#  h     Hash commands\n#  z     Sorted set commands\n#  x     Expired events (events generated every time a key expires)\n#  e     Evicted events (events generated when a key is evicted for maxmemory)\n#  t     Stream commands\n#  d     Module key type events\n#  m     Key-miss events (Note: It is not included in the 'A' class)\n#  A     Alias for g$lshzxetd, so that the "AKE" string means all the events\n#        (Except key-miss events which are excluded from 'A' due to their\n#         unique nature).\n#\n#  The "notify-keyspace-events" takes as argument a string that is composed\n#  of zero or multiple characters. The empty string means that notifications\n#  are disabled.\n#\n#  Example: to enable list and generic events, from the point of view of the\n#           event name, use:\n#\n#  notify-keyspace-events Elg\n#\n#  Example 2: to get the stream of the expired keys subscribing to channel\n#             name __keyevent@0__:expired use:\n#\n#  notify-keyspace-events Ex\n#\n#  By default all notifications are disabled because most users don't need\n#  this feature and the feature has some overhead. Note that if you don't\n#  specify at least one of K or E, no events will be delivered.\nnotify-keyspace-events ""\n\n############################### GOPHER SERVER #################################\n\n# Redis contains an implementation of the Gopher protocol, as specified in\n# the RFC 1436 (https://www.ietf.org/rfc/rfc1436.txt).\n#\n# The Gopher protocol was very popular in the late '90s. It is an alternative\n# to the web, and the implementation both server and client side is so simple\n# that the Redis server has just 100 lines of code in order to implement this\n# support.\n#\n# What do you do with Gopher nowadays? Well Gopher never *really* died, and\n# lately there is a movement in order for the Gopher more hierarchical content\n# composed of just plain text documents to be resurrected. Some want a simpler\n# internet, others believe that the mainstream internet became too much\n# controlled, and it's cool to create an alternative space for people that\n# want a bit of fresh air.\n#\n# Anyway for the 10nth birthday of the Redis, we gave it the Gopher protocol\n# as a gift.\n#\n# --- HOW IT WORKS? ---\n#\n# The Redis Gopher support uses the inline protocol of Redis, and specifically\n# two kind of inline requests that were anyway illegal: an empty request\n# or any request that starts with "/" (there are no Redis commands starting\n# with such a slash). 
Normal RESP2/RESP3 requests are completely out of the\n# path of the Gopher protocol implementation and are served as usual as well.\n#\n# If you open a connection to Redis when Gopher is enabled and send it\n# a string like "/foo", if there is a key named "/foo" it is served via the\n# Gopher protocol.\n#\n# In order to create a real Gopher "hole" (the name of a Gopher site in Gopher\n# talking), you likely need a script like the following:\n#\n#   https://github.com/antirez/gopher2redis\n#\n# --- SECURITY WARNING ---\n#\n# If you plan to put Redis on the internet in a publicly accessible address\n# to server Gopher pages MAKE SURE TO SET A PASSWORD to the instance.\n# Once a password is set:\n#\n#   1. The Gopher server (when enabled, not by default) will still serve\n#      content via Gopher.\n#   2. However other commands cannot be called before the client will\n#      authenticate.\n#\n# So use the 'requirepass' option to protect your instance.\n#\n# Note that Gopher is not currently supported when 'io-threads-do-reads'\n# is enabled.\n#\n# To enable Gopher support, uncomment the following line and set the option\n# from no (the default) to yes.\n#\n# gopher-enabled no\n\n############################### ADVANCED CONFIG ###############################\n\n# Hashes are encoded using a memory efficient data structure when they have a\n# small number of entries, and the biggest entry does not exceed a given\n# threshold. These thresholds can be configured using the following directives.\nhash-max-ziplist-entries 512\nhash-max-ziplist-value 64\n\n# Lists are also encoded in a special way to save a lot of space.\n# The number of entries allowed per internal list node can be specified\n# as a fixed maximum size or a maximum number of elements.\n# For a fixed maximum size, use -5 through -1, meaning:\n# -5: max size: 64 Kb  <-- not recommended for normal workloads\n# -4: max size: 32 Kb  <-- not recommended\n# -3: max size: 16 Kb  <-- probably not recommended\n# -2: max size: 8 Kb   <-- good\n# -1: max size: 4 Kb   <-- good\n# Positive numbers mean store up to _exactly_ that number of elements\n# per list node.\n# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),\n# but if your use case is unique, adjust the settings as necessary.\nlist-max-ziplist-size -2\n\n# Lists may also be compressed.\n# Compress depth is the number of quicklist ziplist nodes from *each* side of\n# the list to *exclude* from compression.  The head and tail of the list\n# are always uncompressed for fast push/pop operations.  
Settings are:\n# 0: disable all list compression\n# 1: depth 1 means "don't start compressing until after 1 node into the list,\n#    going from either the head or tail"\n#    So: [head]->node->node->...->node->[tail]\n#    [head], [tail] will always be uncompressed; inner nodes will compress.\n# 2: [head]->[next]->node->node->...->node->[prev]->[tail]\n#    2 here means: don't compress head or head->next or tail->prev or tail,\n#    but compress all nodes between them.\n# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]\n# etc.\nlist-compress-depth 0\n\n# Sets have a special encoding in just one case: when a set is composed\n# of just strings that happen to be integers in radix 10 in the range\n# of 64 bit signed integers.\n# The following configuration setting sets the limit in the size of the\n# set in order to use this special memory saving encoding.\nset-max-intset-entries 512\n\n# Similarly to hashes and lists, sorted sets are also specially encoded in\n# order to save a lot of space. This encoding is only used when the length and\n# elements of a sorted set are below the following limits:\nzset-max-ziplist-entries 128\nzset-max-ziplist-value 64\n\n# HyperLogLog sparse representation bytes limit. The limit includes the\n# 16 bytes header. When an HyperLogLog using the sparse representation crosses\n# this limit, it is converted into the dense representation.\n#\n# A value greater than 16000 is totally useless, since at that point the\n# dense representation is more memory efficient.\n#\n# The suggested value is ~ 3000 in order to have the benefits of\n# the space efficient encoding without slowing down too much PFADD,\n# which is O(N) with the sparse encoding. The value can be raised to\n# ~ 10000 when CPU is not a concern, but space is, and the data set is\n# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.\nhll-sparse-max-bytes 3000\n\n# Streams macro node max size / items. The stream data structure is a radix\n# tree of big nodes that encode multiple items inside. Using this configuration\n# it is possible to configure how big a single node can be in bytes, and the\n# maximum number of items it may contain before switching to a new node when\n# appending new stream entries. If any of the following settings are set to\n# zero, the limit is ignored, so for instance it is possible to set just a\n# max entries limit by setting max-bytes to 0 and max-entries to the desired\n# value.\nstream-node-max-bytes 4096\nstream-node-max-entries 100\n\n# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in\n# order to help rehashing the main Redis hash table (the one mapping top-level\n# keys to values). 
The hash table implementation Redis uses (see dict.c)\n# performs a lazy rehashing: the more operation you run into a hash table\n# that is rehashing, the more rehashing "steps" are performed, so if the\n# server is idle the rehashing is never complete and some more memory is used\n# by the hash table.\n#\n# The default is to use this millisecond 10 times every second in order to\n# actively rehash the main dictionaries, freeing memory when possible.\n#\n# If unsure:\n# use "activerehashing no" if you have hard latency requirements and it is\n# not a good thing in your environment that Redis can reply from time to time\n# to queries with 2 milliseconds delay.\n#\n# use "activerehashing yes" if you don't have such hard requirements but\n# want to free memory asap when possible.\nactiverehashing yes\n\n# The client output buffer limits can be used to force disconnection of clients\n# that are not reading data from the server fast enough for some reason (a\n# common reason is that a Pub/Sub client can't consume messages as fast as the\n# publisher can produce them).\n#\n# The limit can be set differently for the three different classes of clients:\n#\n# normal -> normal clients including MONITOR clients\n# replica  -> replica clients\n# pubsub -> clients subscribed to at least one pubsub channel or pattern\n#\n# The syntax of every client-output-buffer-limit directive is the following:\n#\n# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>\n#\n# A client is immediately disconnected once the hard limit is reached, or if\n# the soft limit is reached and remains reached for the specified number of\n# seconds (continuously).\n# So for instance if the hard limit is 32 megabytes and the soft limit is\n# 16 megabytes / 10 seconds, the client will get disconnected immediately\n# if the size of the output buffers reach 32 megabytes, but will also get\n# disconnected if the client reaches 16 megabytes and continuously overcomes\n# the limit for 10 seconds.\n#\n# By default normal clients are not limited because they don't receive data\n# without asking (in a push way), but just after a request, so only\n# asynchronous clients may create a scenario where data is requested faster\n# than it can read.\n#\n# Instead there is a default limit for pubsub and replica clients, since\n# subscribers and replicas receive data in a push fashion.\n#\n# Both the hard or the soft limit can be disabled by setting them to zero.\nclient-output-buffer-limit normal 0 0 0\nclient-output-buffer-limit replica 256mb 64mb 60\nclient-output-buffer-limit pubsub 32mb 8mb 60\n\n# Client query buffers accumulate new commands. They are limited to a fixed\n# amount by default in order to avoid that a protocol desynchronization (for\n# instance due to a bug in the client) will lead to unbound memory usage in\n# the query buffer. However you can configure it here if you have very special\n# needs, such us huge multi/exec requests or alike.\n#\n# client-query-buffer-limit 1gb\n\n# In the Redis protocol, bulk requests, that are, elements representing single\n# strings, are normally limited to 512 mb. 
However you can change this limit\n# here, but must be 1mb or greater\n#\n# proto-max-bulk-len 512mb\n\n# Redis calls an internal function to perform many background tasks, like\n# closing connections of clients in timeout, purging expired keys that are\n# never requested, and so forth.\n#\n# Not all tasks are performed with the same frequency, but Redis checks for\n# tasks to perform according to the specified "hz" value.\n#\n# By default "hz" is set to 10. Raising the value will use more CPU when\n# Redis is idle, but at the same time will make Redis more responsive when\n# there are many keys expiring at the same time, and timeouts may be\n# handled with more precision.\n#\n# The range is between 1 and 500, however a value over 100 is usually not\n# a good idea. Most users should use the default of 10 and raise this up to\n# 100 only in environments where very low latency is required.\nhz 10\n\n# Normally it is useful to have an HZ value which is proportional to the\n# number of clients connected. This is useful in order, for instance, to\n# avoid too many clients are processed for each background task invocation\n# in order to avoid latency spikes.\n#\n# Since the default HZ value by default is conservatively set to 10, Redis\n# offers, and enables by default, the ability to use an adaptive HZ value\n# which will temporarily raise when there are many connected clients.\n#\n# When dynamic HZ is enabled, the actual configured HZ will be used\n# as a baseline, but multiples of the configured HZ value will be actually\n# used as needed once more clients are connected. In this way an idle\n# instance will use very little CPU time while a busy instance will be\n# more responsive.\ndynamic-hz yes\n\n# When a child rewrites the AOF file, if the following option is enabled\n# the file will be fsync-ed every 32 MB of data generated. This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\naof-rewrite-incremental-fsync yes\n\n# When redis saves RDB file, if the following option is enabled\n# the file will be fsync-ed every 32 MB of data generated. This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\nrdb-save-incremental-fsync yes\n\n# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good\n# idea to start with the default settings and only change them after investigating\n# how to improve the performances and how the keys LFU change over time, which\n# is possible to inspect via the OBJECT FREQ command.\n#\n# There are two tunable parameters in the Redis LFU implementation: the\n# counter logarithm factor and the counter decay time. It is important to\n# understand what the two parameters mean before changing them.\n#\n# The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis\n# uses a probabilistic increment with logarithmic behavior. Given the value\n# of the old counter, when a key is accessed, the counter is incremented in\n# this way:\n#\n# 1. A random number R between 0 and 1 is extracted.\n# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).\n# 3. The counter is incremented only if R < P.\n#\n# The default lfu-log-factor is 10. 
This is a table of how the frequency\n# counter changes with a different number of accesses with different\n# logarithmic factors:\n#\n# +--------+------------+------------+------------+------------+------------+\n# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |\n# +--------+------------+------------+------------+------------+------------+\n# | 0      | 104        | 255        | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 1      | 18         | 49         | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 10     | 10         | 18         | 142        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 100    | 8          | 11         | 49         | 143        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n#\n# NOTE: The above table was obtained by running the following commands:\n#\n#   redis-benchmark -n 1000000 incr foo\n#   redis-cli object freq foo\n#\n# NOTE 2: The counter initial value is 5 in order to give new objects a chance\n# to accumulate hits.\n#\n# The counter decay time is the time, in minutes, that must elapse in order\n# for the key counter to be divided by two (or decremented if it has a value\n# less <= 10).\n#\n# The default value for the lfu-decay-time is 1. A special value of 0 means to\n# decay the counter every time it happens to be scanned.\n#\n# lfu-log-factor 10\n# lfu-decay-time 1\n\n########################### ACTIVE DEFRAGMENTATION #######################\n#\n# What is active defragmentation?\n# -------------------------------\n#\n# Active (online) defragmentation allows a Redis server to compact the\n# spaces left between small allocations and deallocations of data in memory,\n# thus allowing to reclaim back memory.\n#\n# Fragmentation is a natural process that happens with every allocator (but\n# less so with Jemalloc, fortunately) and certain workloads. Normally a server\n# restart is needed in order to lower the fragmentation, or at least to flush\n# away all the data and create it again. However thanks to this feature\n# implemented by Oran Agra for Redis 4.0 this process can happen at runtime\n# in a "hot" way, while the server is running.\n#\n# Basically when the fragmentation is over a certain level (see the\n# configuration options below) Redis will start to create new copies of the\n# values in contiguous memory regions by exploiting certain specific Jemalloc\n# features (in order to understand if an allocation is causing fragmentation\n# and to allocate it in a better place), and at the same time, will release the\n# old copies of the data. This process, repeated incrementally for all the keys\n# will cause the fragmentation to drop back to normal values.\n#\n# Important things to understand:\n#\n# 1. This feature is disabled by default, and only works if you compiled Redis\n#    to use the copy of Jemalloc we ship with the source code of Redis.\n#    This is the default with Linux builds.\n#\n# 2. You never need to enable this feature if you don't have fragmentation\n#    issues.\n#\n# 3. Once you experience fragmentation, you can enable this feature when\n#    needed with the command "CONFIG SET activedefrag yes".\n#\n# The configuration parameters are able to fine tune the behavior of the\n# defragmentation process. 
If you are not sure about what they mean it is\n# a good idea to leave the defaults untouched.\n\n# Enabled active defragmentation\n# activedefrag no\n\n# Minimum amount of fragmentation waste to start active defrag\n# active-defrag-ignore-bytes 100mb\n\n# Minimum percentage of fragmentation to start active defrag\n# active-defrag-threshold-lower 10\n\n# Maximum percentage of fragmentation at which we use maximum effort\n# active-defrag-threshold-upper 100\n\n# Minimal effort for defrag in CPU percentage, to be used when the lower\n# threshold is reached\n# active-defrag-cycle-min 1\n\n# Maximal effort for defrag in CPU percentage, to be used when the upper\n# threshold is reached\n# active-defrag-cycle-max 25\n\n# Maximum number of set/hash/zset/list fields that will be processed from\n# the main dictionary scan\n# active-defrag-max-scan-fields 1000\n\n# Jemalloc background thread for purging will be enabled by default\njemalloc-bg-thread yes\n\n# It is possible to pin different threads and processes of Redis to specific\n# CPUs in your system, in order to maximize the performances of the server.\n# This is useful both in order to pin different Redis threads in different\n# CPUs, but also in order to make sure that multiple Redis instances running\n# in the same host will be pinned to different CPUs.\n#\n# Normally you can do this using the "taskset" command, however it is also\n# possible to this via Redis configuration directly, both in Linux and FreeBSD.\n#\n# You can pin the server/IO threads, bio threads, aof rewrite child process, and\n# the bgsave child process. The syntax to specify the cpu list is the same as\n# the taskset command:\n#\n# Set redis server/io threads to cpu affinity 0,2,4,6:\n# server_cpulist 0-7:2\n#\n# Set bio threads to cpu affinity 1,3:\n# bio_cpulist 1,3\n#\n# Set aof rewrite child process to cpu affinity 8,9,10,11:\n# aof_rewrite_cpulist 8-11\n#\n# Set bgsave child process to cpu affinity 1,10,11\n# bgsave_cpulist 1,10-11\n\n# In some cases redis will emit warnings and even refuse to start if it detects\n# that the system is in bad state, it is possible to suppress these warnings\n# by setting the following config which takes a space delimited list of warnings\n# to suppress\n#\n# ignore-warnings ARM64-COW-BUG
    \n
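    上面这份默认配置里的大多数参数,在 Redis 跑起来之后也可以通过 redis-cli 动态查看或调整,下面是一个简单的示意(仅供参考):

    \n
    redis-cli CONFIG GET slowlog-log-slower-than\nredis-cli CONFIG SET notify-keyspace-events "Ex"\nredis-cli SLOWLOG GET 10
    \n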

    5、启动Redis容器

    执行命令启动redis容器:

    \n
    docker-compose up -d
    \n

    \"\"

    \n
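    容器启动后,可以先在服务器上确认一下容器状态和日志(示意,容器名以 docker-compose.yml 中的 container_name 为准):

    \n
    docker ps\ndocker logs redis
    \n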

    6、远程连接验证结果

    信息填完,点击OK

    \n

    \"\"

    \n

    点击左侧对应的连接,右侧出现redis服务器信息则为安装成功

    \n

    \"\"

    \n","categories":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/categories/docker/"}],"tags":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/tags/docker/"},{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"docker-compose安装MySQL","slug":"iMySQLByDC","date":"2022-04-24T07:57:49.000Z","updated":"2022-09-22T07:39:52.893Z","comments":true,"path":"/post/iMySQLByDC/","link":"","excerpt":"","content":"

    docker中安装MySQL

    \n

    本教程以MySQL5.7版本为例编写,如需其他版本,可自行前往docker hub网站查找对应的镜像,安装可能会和本教程有一定出入,请自行处理。
    如遇问题也可以在评论中回复,本人会尽快给予回复

    \n

    1、拉取镜像

    docker pull mysql:5.7
    \n

    \"img.png\"

    \n

    2、编写docker-compose.yml文件

    内容如下:

    \n
    version: '3'\nservices:\n    mysql:\n        container_name: mysql\n        image: mysql:5.7\n        environment:\n            - MYSQL_ROOT_PASSWORD=此处为root密码自行设置\n            - TZ=Asia/Shanghai\n        volumes: \n            - ./conf:/etc/mysql\n            - ./data:/var/lib/mysql\n            - ./init:/docker-entrypoint-initdb.d/\n        ports: \n            - 50010:3306\n        restart: always
    \n

    3、创建目录文件

    根据docker-compose.yml文件创建对应目录文件

    \n
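    这几个挂载目录可以在 docker-compose.yml 所在目录下用命令创建(示意):

    \n
    mkdir -p conf data init
    \n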

    \"\"

    \n

    4、编写MySQL的配置文件

    在conf目录下创建my.cnf文件,文件内容如下:

    \n
    [mysqld]\nlower_case_table_names=1\ninnodb_force_recovery = 0\n\nlog-bin=/var/lib/mysql/mysql-bin\nbinlog-format=ROW\nserver_id=1
    \n

    5、启动MySQL容器

    docker-compose up -d
    \n

    \"\"

    \n

    6、远程连接验证结果

    \"\"

    \n","categories":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/categories/docker/"}],"tags":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/tags/docker/"}]},{"title":"用docker-compose安装nginx","slug":"iNginxByDC","date":"2022-04-24T07:36:52.000Z","updated":"2022-09-22T07:39:52.901Z","comments":true,"path":"/post/iNginxByDC/","link":"","excerpt":"","content":"

    docker中安装nginx

    \n

    1、查找nginx镜像

    通过Docker Hub网站查询nginx镜像,选择下面的官方镜像

    \n

    \"\"

    \n

    2、下载镜像

    上一步的镜像页面点进去后,在右上方有docker拉取命令

    \n

    \"\"

    \n
    docker pull nginx
    \n

    \"\"

    \n

    3、编写docker-compose.yml

    docker-compose.yml内容如下:

    \n
    version: '3'\nservices:\n    nginx: \n        container_name: nginx  #生成的容器名\n        image: nginx:latest #镜像\n        environment:\n            - TZ=Asia/Shanghai #时间\n        volumes: \n            - ./html:/usr/share/nginx/html              #nginx静态页位置\n            - ./conf/nginx.conf:/etc/nginx/nginx.conf   #配置文件\n            - ./conf.d:/etc/nginx/conf.d                #配置文件\n            - ./logs:/var/log/nginx                     #日志\n        ports: \n            - 80:80\n            - 443:443\n        restart: always
    \n

    4、创建目录以及nginx配置文件

    根据docker-compose.yml建立文件目录,并编写相关文件

    \n

    目录:

    \n

    \"\"

    \n

    conf/nginx.conf:

    \n
    user  nginx;\nworker_processes  auto;\n\nerror_log  /var/log/nginx/error.log notice;\npid        /var/run/nginx.pid;\n\n\nevents {\n    worker_connections  1024;\n}\n\n\nhttp {\n    include       /etc/nginx/mime.types;\n    default_type  application/octet-stream;\n\n    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '\n                      '$status $body_bytes_sent "$http_referer" '\n                      '"$http_user_agent" "$http_x_forwarded_for"';\n\n    access_log  /var/log/nginx/access.log  main;\n\n    sendfile        on;\n    #tcp_nopush     on;\n\n    keepalive_timeout  65;\n\n    #gzip  on;\n\n    include /etc/nginx/conf.d/*.conf;\n}
    \n

    conf.d/default.conf

    \n
    server {\n    listen       80;\n    listen  [::]:80;\n    server_name  localhost;\n\n    #access_log  /var/log/nginx/host.access.log  main;\n\n    location / {\n        root   /usr/share/nginx/html;\n        index  index.html index.htm;\n    }\n\n    #error_page  404              /404.html;\n\n    # redirect server error pages to the static page /50x.html\n    #\n    error_page   500 502 503 504  /50x.html;\n    location = /50x.html {\n        root   /usr/share/nginx/html;\n    }\n\n    # proxy the PHP scripts to Apache listening on 127.0.0.1:80\n    #\n    #location ~ \\.php$ {\n    #    proxy_pass   http://127.0.0.1;\n    #}\n\n    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000\n    #\n    #location ~ \\.php$ {\n    #    root           html;\n    #    fastcgi_pass   127.0.0.1:9000;\n    #    fastcgi_index  index.php;\n    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;\n    #    include        fastcgi_params;\n    #}\n\n    # deny access to .htaccess files, if Apache's document root\n    # concurs with nginx's one\n    #\n    #location ~ /\\.ht {\n    #    deny  all;\n    #}\n}
    \n

    html/50x.html

    \n
    <!DOCTYPE html>\n<html>\n<head>\n<title>Error</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>An error occurred.</h1>\n<p>Sorry, the page you are looking for is currently unavailable.<br/>\nPlease try again later.</p>\n<p>If you are the system administrator of this resource then you should check\nthe error log for details.</p>\n<p><em>Faithfully yours, nginx.</em></p>\n</body>\n</html>
    \n

    html/index.html

    \n
    <!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\nhtml { color-scheme: light dark; }\nbody { width: 35em; margin: 0 auto;\nfont-family: Tahoma, Verdana, Arial, sans-serif; }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\n<a href="http://nginx.org/">nginx.org</a>.<br/>\nCommercial support is available at\n<a href="http://nginx.com/">nginx.com</a>.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>
    \n

    5、docker-compose启动nginx

    cd nginx/\nll\ndocker-compose up -d
    \n

    \"\"

    \n

    6、验证nginx正常启动

    执行命令:

    \n
    docker ps -a
    \n

    \"\"

    \n

    然后在浏览器中输入IP,出现欢迎界面,安装完成

    \n

    \"\"

    \n","categories":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/categories/docker/"}],"tags":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/tags/docker/"},{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"docker-compose安装","slug":"dockerComposeInstall","date":"2022-04-24T07:12:03.000Z","updated":"2022-09-22T07:39:52.867Z","comments":true,"path":"/post/dockerComposeInstall/","link":"","excerpt":"","content":"

    docker-compose安装

    \n

    按照官方来即可,docker-compose安装文档

    \n

    按照自己的系统来安装:

    \n

    \"\"

    \n

    1、下载docker-compose

    下面两个二选一,建议国内源,速度快

    \n

    官方:

    \n
    sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    \n

    国内源:

    \n
    curl -L https://get.daocloud.io/docker/compose/releases/download/1.29.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    \n

    \"\"

    \n

    2、授予权限

    sudo chmod +x /usr/local/bin/docker-compose\nsudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
    \n

    3、验证

    docker-compose --version
    \n

    输入命令后,出现版本号,则为安装成功

    \n

    \"\"

    \n","categories":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/categories/docker/"}],"tags":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/tags/docker/"}]},{"title":"docker安装","slug":"dockerInstall","date":"2022-04-24T05:59:34.000Z","updated":"2022-09-22T07:39:52.872Z","comments":true,"path":"/post/dockerInstall/","link":"","excerpt":"","content":"

    安装docker

    \n

    这部分基本就是按照docker官网的来,centos安装docker文档

    \n

    1、卸载旧版本docker

    yum remove docker \\\n                  docker-client \\\n                  docker-client-latest \\\n                  docker-common \\\n                  docker-latest \\\n                  docker-latest-logrotate \\\n                  docker-logrotate \\\n                  docker-engine
    \n

    \"\"

    \n

    2、设置docker软件源

    下面官网软件源和阿里软件源二选一,个人建议用阿里的,国内的速度快

    \n

    官网软件源 :速度慢,可以考虑阿里的

    \n
    yum install -y yum-utils\nyum-config-manager \\\n    --add-repo \\\n    https://download.docker.com/linux/centos/docker-ce.repo
    \n

    \"\"

    \n

    阿里软件源:

    \n

    \"\"

    \n

    3、安装docker

    yum install docker-ce docker-ce-cli containerd.io
    \n

    命令输入后,中途出现下面的内容,输入y,然后按回车确认

    \n

    \"\"

    \n

    中途出现下面的内容,输入y,然后按回车确认

    \n

    \"\"

    \n

    4、更改docker仓库地址,替换为Docker中国区官方镜像源,要不然之后拉取镜像速度太慢了

    vi /etc/docker/daemon.json
    \n

    daemon.json内容:

    \n
    {\n "registry-mirrors": ["https://registry.docker-cn.com"]\n}
    \n
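    如果此时 docker 已经在运行,修改 daemon.json 后还需要重新加载配置并重启服务才会生效(示意):

    \n
    systemctl daemon-reload\nsystemctl restart docker
    \n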

    5、启动docker

    systemctl start docker
    \n

    \"\"

    \n

    6、设置开机启动docker

    systemctl enable docker
    \n

    \"\"

    \n

    7、验证

    通过查询docker版本确认docker是否正常启动

    \n
    docker -v
    \n

    执行命令后正常显示docker版本则为安装启动成功

    \n","categories":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/categories/docker/"}],"tags":[{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/tags/docker/"}]},{"title":"力扣459:重复的子字符串","slug":"repeatedSubstringPattern","date":"2022-04-18T07:53:15.000Z","updated":"2024-04-25T08:10:09.106Z","comments":true,"path":"/post/repeatedSubstringPattern/","link":"","excerpt":"","content":"

    今天刷力扣发现一道有趣的题,这道题目很普通,但是解法却可以偷懒

    \n

    原题链接:力扣459:重复的子字符串

    \n

    题目

    给定一个非空的字符串 s ,检查是否可以通过由它的一个子串重复多次构成。

    \n\n

     

    \n\n

    示例 1:

    \n\n
    \n输入: s = \"abab\"\n输出: true\n解释: 可由子串 \"ab\" 重复两次构成。\n
    \n\n

    示例 2:

    \n\n
    \n输入: s = \"aba\"\n输出: false\n
    \n\n

    示例 3:

    \n\n
    \n输入: s = \"abcabcabcabc\"\n输出: true\n解释: 可由子串 \"abc\" 重复四次构成。 (或子串 \"abcabc\" 重复两次构成。)\n
    \n\n

     

    \n\n

    提示:

    \n\n

    \n\n

    \n
    Related Topics
  • 字符串
  • 字符串匹配

  • \n\n个人解法\n\n想法:既然要判断字符串是否由一个子串重复多次构成,那么如果结果是肯定的,子串的长度一定能够整除这个字符串的长度。\n\n所以我首先做一个循环,枚举可能作为重复子串的前缀长度,在其基础上判断是否满足,循环结束后都没有找到满足的,那么结果肯定就是false了。\n\n接下来我们考虑循环内部的逻辑,如果一个子串重复多次可以组成当前的字符串,那么按照子串的长度分割,每一部分都是相同的。接下来就是重点了!!!重点!!!怎么判断这些部分都相同??\n\n
    \n假设满足条件:
    \n        s = \"abdfs\"
    \n        parent = s1+s2+s3+s4+....sn(s1...sn都是s)
    \n根据上面的字符串以及子串作说明
    \n可以分为两步判断:\n
      \n
    1. s1和sn相同
    2. s2s3s4...sn和s1s2s3....s(n-1)相同
    \n2中s2=s1,s3=s2.....sn=s(n-1),这样一来s1,s2,s3....sn就都相同了\n
    \n\n
    class Solution {\n    public boolean repeatedSubstringPattern(String s) {\n        int lens = s.length();\n        for (int i = 1; i < lens; i++) {\n            if (lens % i == 0) {\n                if (s.substring(0, i).equals(s.substring(lens - i))\n                        && s.substring(i).equals(s.substring(0, lens - i))) {\n                    return true;\n                }\n            }\n        }\n        return false;\n    }\n}
    class Solution:\n    def repeatedSubstringPattern(self, s: str) -> bool:\n        for i in range(1, len(s)):\n            if len(s) % i == 0:\n                if s[0:i] == s[len(s)-i:len(s)] and s[0:len(s)-i] == s[i:len(s)]:\n                    return True\n        return False
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣204:计数质数","slug":"leetcode204","date":"2022-04-12T03:04:35.000Z","updated":"2022-09-22T07:39:53.089Z","comments":true,"path":"/post/leetcode204/","link":"","excerpt":"","content":"

    今天遇到一个有趣的题目,求小于给定非负整数的质数的数量

    \n

    原题链接:力扣204. 计数质数

    \n

    题目

    给定整数 n ,返回 所有小于非负整数 n 的质数的数量

    \n\n

    \n\n

    示例 1:

    \n\n
    \n输入:n = 10\n输出:4\n解释:小于 10 的质数一共有 4 个, 它们是 2, 3, 5, 7 。\n
    \n\n

    示例 2:

    \n\n
    \n输入:n = 0\n输出:0\n
    \n\n

    示例 3:

    \n\n
    \n输入:n = 1\n输出:0\n
    \n\n

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 数组
  • 数学
  • 枚举
  • 数论
  • \n\n

    个人解法

    思路:

    \n

    这题我最开始想的比较简单,直接从0开始遍历到给定数字,遍历过程中判断是否是质数

    \n

    java代码如下:

    \n
    class Solution {\n    public int countPrimes(int n) {\n        if (n <= 2) {\n            return 0;\n        }\n        int count = 1;\n        for (int i = 3; i < n; i++) {\n            if (isPrime(i)) {\n                count++;\n            }\n        }\n        return count;\n    }\n    /**\n     * 判断是否是质数\n     *\n     * @param num 数字\n     * @return true:质数、false:不是质数\n     */\n    private boolean isPrime(int num) {\n        if (num < 2) {\n            return false;\n        }\n        if (num == 2) {\n            return true;\n        }\n        for (int i = 2; i * i <= num; i++) {\n            if (num % i == 0) {\n                return false;\n            }\n        }\n        return true;\n    }\n}
    \n

    这种办法虽然例子过了,但是最后提交时却是超时了

    \n

    接下来,我又仔细的想了想,之后想到了一种办法,通过了,然后看了看题解,发现这完全就是埃拉托斯特尼筛法,简称埃氏筛,也称素数筛,是一种简单且历史悠久的筛法,用来找出一定范围内所有的素数。

    \n

    这种算法就是给出要筛数值的范围 n,从 2 开始遍历直到 $\sqrt{n}$ 。先从 2 开始,把小于 n 并且是 2 的倍数的数全部标记上;然后按顺序轮到 3,执行和 2 一样的步骤;依此类推,不过每次要先判断当前的数是否已经被标记过,因为被标记过的数不是质数,不需要再用它去筛

    \n

    我在维基百科上看到了这个小动画,就是这个算法的整体步骤了

    \n\n\n

    下面是我的java代码:

    \n
    class Solution {\n    public int countPrimes(int n) {\n        if (n <= 2) {\n            return 0;\n        }\n        boolean[] nums = new boolean[n + 1];\n        Arrays.fill(nums, true);\n        nums[0] = false;\n        nums[1] = false;\n        int count = 0;\n        int max = (int) Math.sqrt(n);\n        for (int i = 2; i < n; i++) {\n            if (nums[i]) {\n                count++;\n                if (i > max) {\n                    continue;\n                }\n                for (int j = i; j * i < n; j++) {\n                    nums[j * i] = false;\n                }\n            }\n        }\n        return count;\n    }\n}
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣357:统计各位数字都不同的数字个数","slug":"day20220411","date":"2022-04-11T06:54:43.000Z","updated":"2024-04-25T08:10:09.101Z","comments":true,"path":"/post/day20220411/","link":"","excerpt":"","content":"

    2022年04月11日 力扣每日一题

    \n

    357:统计各位数字都不同的数字个数

    \n

    题目

    给你一个整数 n ,统计并返回各位数字都不同的数字 x 的个数,其中 0 <= x < 10^n 

    \n
    \n
    \n

    \n\n

    示例 1:

    \n\n
    \n输入:n = 2\n输出:91\n解释:答案应为除去 11、22、33、44、55、66、77、88、99 外,在 0 ≤ x < 100 范围内的所有数字。 \n
    \n\n

    示例 2:

    \n\n
    \n输入:n = 0\n输出:1\n
    \n\n
    \n
    \n\n

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 数学
  • 动态规划
  • 回溯
  • \n\n

    个人解法

    思路:

    \n

    今天这题在我看来就是一个排列组合的问题。首先考虑边界:n = 0 时只有数字 0 本身,结果为 1;n = 1 时是 0~9,共 10 个。当 n >= 2 时,恰好 k 位(2 <= k <= n)且各位都不同的数,最高位有 9 种选择(1~9),之后每一位依次有 9、8、7…… 种选择,把每种位数的个数依次累加到 10 上即可,对应下面代码中的 count += mul * sub。

    \n\n
    class Solution {\n    public int countNumbersWithUniqueDigits(int n) {\n        if (n == 0) {\n            return 1;\n        }\n        if (n == 1) {\n            return 10;\n        }\n        int sub = 9;\n        int count = 10;\n        int mul = 9;\n        for (int i = 2; i <= n; i++) {\n            count += mul * sub;\n            mul *= sub;\n            sub--;\n        }\n        return count;\n    }\n}
    class Solution:\n    def countNumbersWithUniqueDigits(self, n: int) -> int:\n        if n == 0:\n            return 1\n        if n == 1:\n            return 10\n        sub = 9\n        count = 10\n        mul = 9\n        for i in range(2, n + 1):\n            count += mul * sub\n            mul *= sub\n            sub -= 1\n        return count
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"darwin是什么?","slug":"darwin","date":"2022-03-30T01:16:32.000Z","updated":"2022-09-22T07:39:52.829Z","comments":true,"path":"/post/darwin/","link":"","excerpt":"","content":"

    今天,在学习NPS时,看到服务端启动命令时,它的分类是linux|darwin和windows两种,之前没有见过darwin,实在是好奇。
    通过网络的查找,学习到了以下知识:

    \n\n","categories":[{"name":"it百科","slug":"it百科","permalink":"https://hexo.huangge1199.cn/categories/it%E7%99%BE%E7%A7%91/"}],"tags":[{"name":"it百科","slug":"it百科","permalink":"https://hexo.huangge1199.cn/tags/it%E7%99%BE%E7%A7%91/"}]},{"title":"Autowired注解警告的解决办法","slug":"autowiredWaring","date":"2022-03-28T03:20:43.000Z","updated":"2022-09-22T07:39:52.763Z","comments":true,"path":"/post/autowiredWaring/","link":"","excerpt":"","content":"

    @Autowired 在idea报警告

    近期,发现@Autowired注解在idea中总是报警告

    \n

    java代码

    如下:

    \n
    @Controller\npublic class UserController {\n\n    @Autowired\n    private UserService userService;\n\n}
    \n

    警告内容

    如下:

    \n

    \"\".png)

    \n

    解决办法

    于是乎,针对这个警告在网上找了找资料,用以下的写法不会报警告,同时这种写法也是spring官方推荐的写法,代码如下:

    \n
    @Controller\npublic class UserController {\n\n    private final UserService userService;\n\n    public UserController(UserService userService){\n        this.userService = userService;\n    }\n\n}
    \n

    Lombok优雅写法

    @Controller\n@RequiredArgsConstructor(onConstructor = @__(@Autowired))\npublic class UserController {\n    // 这里必须是 final;若不使用 final,在字段上加 Lombok 的 @NonNull 注解也是可以的\n    private final UserService userService;\n\n}
    \n

    拓展学习

    由此,我这边拓展到了spring的三种依赖注入方式:

    \n\n

    Field Injection

    @Autowired注解的一大使用场景就是Field Injection

    \n

    具体形式如下:

    \n
    @Controller\npublic class UserController {\n\n    @Autowired\n    private UserService userService;\n\n}
    \n

    这种注入方式通过Java的反射机制实现,所以private的成员也可以被注入具体的对象。

    \n

    Constructor Injection

    Constructor Injection是构造器注入,是我们日常最为推荐的一种使用方式。

    \n

    具体形式如下:

    \n
    @Controller\npublic class UserController {\n\n    private final UserService userService;\n\n    public UserController(UserService userService){\n        this.userService = userService;\n    }\n\n}
    \n

    这种注入方式很直接,通过对象构建的时候建立关系,所以这种方式对对象创建的顺序会有要求,当然Spring会为你搞定这样的先后顺序,除非你出现循环依赖,然后就会抛出异常。

    \n

    Setter Injection

    Setter Injection也会用到@Autowired注解,但使用方式与Field Injection有所不同,Field Injection是用在成员变量上,而Setter Injection的时候,是用在成员变量的Setter函数上。

    \n

    具体形式如下:

    \n
    @Controller\npublic class UserController {\n\n    private UserService userService;\n\n    @Autowired\n    public void setUserService(UserService userService){\n        this.userService = userService;\n    }\n}
    \n

    这种注入方式也很好理解,就是通过调用成员变量的set方法来注入想要使用的依赖对象。

    \n

    三种依赖注入方式比较

    注入方式 | 可靠性 | 可维护性 | 可测试性 | 灵活性 | 循环关系的检测 | 性能影响
    Field | 不可靠 | - | - | 很灵活 | 不检测 | 启动快
    Constructor | 可靠 | - | - | 不灵活 | 自动检测 | 启动慢
    Setter | 不可靠 | - | - | 很灵活 | 不检测 | 启动快
    \n
    \n

    参考:

    1. https://docs.spring.io/spring-framework/docs/current/reference/html/core.html#beans-constructor-injection
    2. https://docs.spring.io/spring-framework/docs/current/reference/html/core.html#beans-setter-injection
    3. 利用Lombok编写优雅的spring依赖注入代码,去掉繁人的@Autowired_路遥知码农的博客-CSDN博客_lombok 依赖注入
    4. https://segmentfault.com/a/1190000040914633
    \n","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"}],"tags":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/tags/java/"}]},{"title":"influxdb安装(centos7)","slug":"influxdbInstall","date":"2022-03-12T10:14:53.000Z","updated":"2022-09-22T07:39:53.074Z","comments":true,"path":"/post/influxdbInstall/","link":"","excerpt":"","content":"

    1、获取安装包

    wget https://dl.influxdata.com/influxdb/releases/influxdb-1.8.10.x86_64.rpm
    \n

    \"\"

    \n

    2、安装

    yum localinstall influxdb-1.8.10.x86_64.rpm
    \n

    3、配置

    vim /etc/influxdb/influxdb.conf
    \n

    用户名密码(非必须)

    \n

    \"\"

    \n

    开启influx功能

    \n

    \"\"

    \n
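    图中这两处改动大致对应 influxdb.conf 中 [http] 段的以下几项(示意,键名以自己安装版本的配置文件为准):

    \n
    [http]\n  # 开启 HTTP 接口\n  enabled = true\n  bind-address = ":8086"\n  # 如配置了用户名密码,开启认证\n  auth-enabled = true
    \n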

    4、启动服务

    systemctl start influxdb
    \n

    5、启动

    influx
    \n

    在客户端工具窗口中执行以下语句设置用户名和密码(非必须):

    \n
    # 创建管理员权限的用户\nCREATE USER root WITH PASSWORD 'root' WITH ALL PRIVILEGES
    \n
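    设置好用户后,也可以顺手建一个库确认写入正常(示意,库名随意):

    \n
    # 创建并查看数据库\nCREATE DATABASE mydb\nSHOW DATABASES
    \n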

    6、验证

    用其他机器远程连接:

    \n
    influx -host ip地址 -port 端口号
    \n

    \"\"

    \n

    这里创建数据库时报错,是因为我这边配置了用户名和密码,需要连接时带上用户名和密码才行

    \n
    influx -host ip地址 -port 端口号 -username 用户名 -password 密码
    \n

    \"\"

    \n","categories":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"}]},{"title":"力扣590:N 叉树的后序遍历","slug":"day20220312","date":"2022-03-12T02:10:45.000Z","updated":"2024-04-25T08:10:09.100Z","comments":true,"path":"/post/day20220312/","link":"","excerpt":"","content":"

    2022年03月12日 力扣每日一题

    \n

    题目

    给定一个 n 叉树的根节点 root ,返回 其节点值的 后序遍历

    \n\n

    n 叉树 在输入中按层序遍历进行序列化表示,每组子节点由空值 null 分隔(请参见示例)。

    \n\n

     

    \n\n

    示例 1:

    \n\n

    \n\n
    \n输入:root = [1,null,3,2,4,null,5,6]\n输出:[5,6,3,2,4,1]\n
    \n\n

    示例 2:

    \n\n

    \"\"

    \n\n
    \n输入:root = [1,null,2,3,4,5,null,null,6,7,null,8,null,9,10,null,null,11,null,12,null,13,null,null,14]\n输出:[2,6,14,11,7,3,12,8,4,13,9,10,5,1]\n
    \n\n

     

    \n\n

    提示:

    \n\n\n\n

     

    \n\n

    进阶:递归法很简单,你可以使用迭代法完成此题吗?

    \n
    Related Topics
  • 深度优先搜索
  • \n\n

    个人解法

    思路:

    \n

      这题简单,只需要递归做就好了:对于每一个节点,先递归遍历它的所有子节点,把它们的值存进结果列表,最后再存当前节点的值,这正是后序遍历的顺序

    \n
    import java.util.ArrayList;\nimport java.util.List;\n\n/*\n// Definition for a Node.\nclass Node {\n    public int val;\n    public List<Node> children;\n\n    public Node() {}\n\n    public Node(int _val) {\n        val = _val;\n    }\n\n    public Node(int _val, List<Node> _children) {\n        val = _val;\n        children = _children;\n    }\n};\n*/\n\nclass Solution {\n    public List<Integer> postorder(Node root) {\n        list = new ArrayList<>();\n        dfs(root);\n        return list;\n    }\n    List<Integer> list;\n    private void dfs(Node root) {\n        if (root == null) {\n            return;\n        }\n        if (root.children.size() == 0) {\n            list.add(root.val);\n            return;\n        }\n        for (Node node : root.children) {\n            dfs(node);\n        }\n        list.add(root.val);\n    }\n}
    """\n# Definition for a Node.\nclass Node:\n    def __init__(self, val=None, children=None):\n        self.val = val\n        self.children = children\n"""\nfrom typing import List\n\n\nclass Solution:\n    def postorder(self, root: 'Node') -> List[int]:\n        arr = []\n\n        def dfs(root1: 'Node'):\n            if root1 is None:\n                return\n            if len(root1.children)==0:\n                arr.append(root1.val)\n                return\n            for node in root1.children:\n                dfs(node)\n            arr.append(root1.val)\n        dfs(root)\n        return arr
    ","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"推理界的3月11号","slug":"mystery0311","date":"2022-03-11T06:33:56.000Z","updated":"2022-09-22T07:39:53.100Z","comments":true,"path":"/post/mystery0311/","link":"","excerpt":"","content":"

    今天是3月11日,在推理界,今天在历史上的意义:

    \n\n

    克里斯蒂安娜·布兰德

      克里斯蒂安娜·布兰德(Christianna Brand,1907.3.11-1988.12.17),英国侦探小说作家,儿童文学作家。

    \n

      克里斯蒂安娜·布兰德1907年出生于马来亚,原名为玛丽·克里斯蒂安娜·刘易斯(Mary Christianna Lewis),早年在印度生活。她从事过很多工作,包括模特、舞蹈演员、店员和家庭教师。

    \n

      1941年,她创作了第一本以查尔斯·沃斯为主角的侦探小说《高跟鞋之死》(Death in High Heels),当时她还只是一个销售员。同年,她笔下的英国著名探长考克瑞尔在《晕头转向》(Heads You Lose)一书中初次登场,之后考克瑞尔先后七次出现在布兰德的作品中,考克瑞尔探长是她塑造最成功的侦探形象,以他为主角的侦探小说《绿色危机》(Green for Danger)也是布兰德最有名的小说。这部作品描写的是二次大战中一所医院中发生的故事,一名邮递员被送往手术室,不料却因麻醉过度而死。考克瑞尔探长亲自赶来调查,却不料护士长玛丽恩·贝茨也惨遭杀害……《绿色危机》自1944年出版之后,至今仍不断再版。1946年,《绿色危机》被Eagle-Lion公司拍成电影,由阿拉斯泰尔·希姆饰演探长,获得巨大成功。

    \n

      由于《绿色危机》的成功,1946年,克里斯蒂安娜·布兰德加入了英国侦探作家俱乐部,自此她的创作生涯一发不可收拾,接连发表了多部小说。

    \n

      上世纪50年代末开始,克里斯蒂安娜·布兰德开始专注于撰写各种不同类型的作品和短篇小说。她曾获得三次埃德加奖提名:短篇小说《杯中的毒药》(Poison in the Cup)(1969年2月,《埃勒里·奎因神秘杂志》)、《Twist for Twist》(1967年5月,《埃勒里·奎因神秘杂志》)以及一个有关苏格兰谋杀案的《天堂知道谁》(Heaven Knows Who)(1960年)。

    \n

      1972到1973年间,克里斯蒂安娜·布兰德以其杰出的成就,被推选为英国犯罪作家协会主席。

    \n

      克里斯蒂安娜·布兰德曾经使用过的笔名还有玛丽·安·阿希、安娜贝尔·琼斯、玛丽·罗兰和查娜·汤姆森。她的作品被称为“黄金时代最后的侦探小说”,克里斯蒂安娜·布兰德的作品善于在活泼、幽默的情节以及吸引人的诡计中寻求平衡,她在1988年去世,享年81岁。

    \n

    梦野久作

      日本著名幻想文学作家、变格派推理大师。

    \n

      本名杉山直树,后改名杉山泰道。

    \n

      曾用笔名有海若蓝平、香俱土三鸟、土原耕作、萌圆、杉山萌圆、沙门萌圆、萌圆山人、萌圆生、萌圆泰道、朴平、白木朴平、三鸟山人、香椎村人、青杉居士、外人某氏、钝骨生、TS生、T生等。

    \n

      一九二六年,梦野久作在《妖鼓》投稿前,曾拿给父亲过目,父亲看过后说“就像梦野久作所写的小说”。 所谓“梦野久作”是博多地区的方言,意指精神恍惚、成天做白日梦的人。曾有数十个笔名的他自此以后便固定使用了这四个字为其笔名。

    \n

      梦野久作有“妖怪作家”之称,其所属的“变格派”讲究人性的怪奇、丑恶、战栗心理的唯美面,使得推理小说充满了文学艺术气息。其代表作《脑髓地狱》(1935年)被称为日本推理小说的四大奇书之一。

    \n

    生平年表

      一八八九年一月四日生于九州福冈市。父亲杉山茂丸是右派教父、玄洋社头目头山满的盟友。直树出生後就由祖父母养育。

    \n

      一八九一年开始学习四书诵读。亲生母亲与父亲离婚另嫁高桥家。

    \n

      一八九二年开始学习能乐。熟读四书,遂有神童之称。

    \n

      一八九五年,进入小学就读,身体虚弱瘦小,多由祖父教授学习。求知欲旺盛,具有绘画方面的天赋。

    \n

      一八九九年,大名寻常小学毕业。进入高等小学就学。

    \n

      一九零二年三月二十日,祖父因中风并发肺炎去世。

    \n

      一九零三年三月,高等小学毕业,四月进入福冈县立中学修猷馆就读。

    \n

      一九零八年三月福冈县立修猷馆中学毕业,十二月一日,以一年志愿兵身份入近卫步兵第一连队,在部队中担任小队长,颇受士兵们的信赖。

    \n

      一九一零年,退伍之后,进入中央大学附属补习班,准备入学考。
      
      一九一一年,进入庆应大学文科系就读。

    \n

      一九一二年,同父异母之弟五郎去世。二月二十六日,奉命成为陆军步兵少尉。十一月八日,继祖母去世。

    \n

      一九一三年,因为弟弟的猝死,父亲遂令其从庆应大学休学。三月时,依父亲之命前往福冈县糟屋郡香椎村唐原经营果园。

    \n

      一九一五年,于东京本乡的喜福寺剃发为僧。将直树改名为泰道。

    \n

      一九一六年,以行脚僧身份从京都走到吉野山。

    \n

      一九一七年被父亲叫回农园,还俗,继承杉山家业。从本年起,在父亲所组织的右派团体台华社机关杂志《黑白》发表有关谣曲与时事的评论之外,还撰写小说。这段期间使用的笔名有沙门萌圆、杉山萌圆、萌圆泰道等。

    \n

      一九一八年二月二十五日,与镰田昌一的女儿阿仓结婚。连载《冀望日本青年》等。

    \n

      一九一九年,长男龙丸出生。成为九州岛日报记者,开始于家庭专栏发表童话至一九二六年。

    \n

      一九二零年,三十一岁,在父亲所投资之九州日报社当社会新闻记者,一九二二年在该报家庭版陆续发表童谣,所使用的笔名有梅若蓝平、香具上三鸟、上原耕作、三鸟山人等。并以萌圆泰道之笔名,将《吴井娘次》改名为《蜡人偶》连载。

    \n

      一九二一年,移居福冈市荒户町。次男铁儿出生。

    \n

      一九二二年,以杉山萌圆为笔名出版《白发小僧》长篇童话集。

    \n

      一九二三年,九月因关东大地震,以九州岛日报社震灾特派记者身份发表《火烧后细见记》与《东京震灾素描》。

    \n

      一九二四年,辞去九州岛日报社的工作,十月,以杉山泰道名义之《侏儒》,应徵博文馆的推理小说征选活动,获得佳作奖(没出版)。

    \n

      一九二五年四月,再度任职九州日报社。三男参绿出生。

    \n

      一九二六年是梦野久作生涯的转换年,正月开始撰写《脑髓地狱》初稿《狂人的解放治疗》,五月辞去报社工作,十月以初次使用梦野久作之笔名,《新青年》之侦探小说徵文之《妖鼓》,入选二等奖(没有一等奖),由此篇被公认之迟来的处女作,梦野久作登上推理文坛。其笔名是取自福冈博多地区的方言,指精神恍惚,经常寻找梦幻的人。

    \n

      一九二七年二月,停止创作《狂人的解放治疗》初稿,创作短篇连载《乡村事件》。

    \n

      一九二八年,陆续发表《人脸》、《死后之恋》、《瓶装地狱》等。

    \n

      一九二九年,出版《梦野久作集》。陆续发表《押绘的奇迹》、《铁锤》、《飞翔于空中的洋伞》。

    \n

      一九三零年五月,奉命担任妻子老家的福冈市黑门邮局局长。陆续发表《复仇》、《童贞》。

    \n

      一九三一年,陆续发表《椰果》、《犬神博士》、《自白心理》等。

    \n

      一九三二年,出版《押绘的奇迹》。陆续发表《斜坑》、《幽灵与推进机》、《狂气地狱》。

    \n

      一九三三年,一月出版《暗黑公使》,四月出版《冰涯》,五月出版《瓶装地狱》,陆续发表《不冒烟的烟囱》、《爆弹太平记》、《白菊》等。

    \n

      一九三四年,八月辞去黑门邮局局长一职。陆续发表《名君臣之》、《山羊胡编辑长》、《难船小僧》、《杀人直播》、《木魂》、《少女地狱》等。

    \n

      一九三五年,一月出版《脑髓地狱》。三月出版《梅津只园翁传》。七月十九日,父亲茂丸因脑溢血猝死于曲町自宅(享年七十二岁)。十月,借帮父亲举行葬礼之便,携妻子至日本各地旅行。十二月,出版《近世快人传》。陆续发表《微笑哑女》、《超人胡夜博士》、《二重心脏》。

    \n

      一九三六年,二月上京整理父亲遗物,遭遇“二二六”事件。陆续发表《人肉香肠》、《恶魔祈祷书》。三月出版《少女地狱》,十一日与访客谈话中猝死于东京(死因不详),得年四十七岁。

    \n

      梦野久作有“妖怪作家”之称,其所属的“变格派”讲究人性的怪奇、丑恶、战栗心理的唯美面,使得推理小说充满了文学艺术气息。其代表作《脑髓地狱》(1935年)被称为日本推理小说的四大奇书之一。

    \n

    厄尔·斯坦利·加德纳

    Erle Stanley Gardner(1889年7月17日美国马萨诸塞州马尔登-1970年3月11日加州Temecula)

    \n

    其他署名:

    \n\n

    类型:

    \n\n

    主要系列:

    \n\n

      加德纳是查尔斯·华尔特·加德纳(Charles Walter Gardner)和格蕾丝·阿德尔玛·加德纳(Grace Adelma Gardner)之子。他的父亲是一位工程师,因为工作需要到处出差,他将全家搬到西海岸,在加德纳十岁的时候先是搬到了俄勒冈州,1902年又搬到加州奥维尔(Oroville)。加德纳对加州十分喜欢,虽然成年之后他游历四方,但是他还是将加州作为自己的家,并且作为自己笔下人物的背景。

    \n

      加德纳个性独立,勤奋,有想象力,二十一岁时他成为了一名律师,但是他没有进入法律学校而是在律师事务所自学以及担任律师助手,最后通过律师考试。他在洛杉矶西北部的文图拉县开业,很快他因为精明、足智多谋而赢得了声誉,他帮助很多看似不可能打赢官司的委托人获胜。

    \n

      加德纳喜欢户外活动,比如打猎、钓鱼、射箭,成为作家之前他试过许多不同的生意,三十四岁的时候,他将自己的第一篇小说卖给了一家廉价杂志。他并不是一位有天赋的作家,但是他通过研究那些成功作家的作品以及编辑的意见而进步神速。二三十年代,他为廉价杂志创作了大量短篇小说,并且塑造了一大堆人物。1933年,他出版了第一部长篇小说,主角便是日后著名的律师侦探佩里·梅森。那时,他创作速度惊人,每三天便能完成一部一万单词的中篇小说,以至于他无法只靠自己打字,而要使用口述机,因此他雇请了几位秘书,轮班根据他的口述完成稿件。

    \n

      二次大战之后,加德纳的名声让他的创作数量减少了,因为他涉足其他的事务,包括“最高上诉法院”(Court of Last Resort),这是加德纳和他人创立的一家组织,主要是为了增加美国法律的公正性,还有佩里·梅森系列电视片。

    \n

      1912年,加德纳与纳塔利·塔伯特(Natalie Talbert)结婚,1913年他们生下女儿纳塔利·格蕾丝·加德纳(Natalie Grace Gardner)。1935年,两人分居,不过他们还是朋友,也并未离婚。加德纳一直赡养他的妻子,直到1968年妻子去世。同年,加德纳与他长期以来的秘书艾格尼斯·简·贝斯尔(Agnes Jean Bethell)结婚,贝斯尔被认为是梅森的秘书德拉·斯特里特(Della Street)的原型。1962年,他获得美国侦探作家协会(MWA)大师奖。1970年,加德纳因为癌症去世。

    \n

    弗瑞德里克·布朗

      弗瑞德里克·布朗:Fredric Brown(1906年10月29日美国俄亥俄州辛辛那提-1972年3月11日亚利桑那州图森)

    \n

      类型:私人侦探;硬汉

    \n

      主要系列:Ed and Am Hunter, 1947-1963

    \n

      布朗十多岁的时候父母相继去世,不得不自谋生路。二十年代,他进入汉诺威学院(Hanover College)和辛辛那提大学(Cincinnati University)学习。他于1929年结婚,并且搬到威斯康星州密尔沃基,在那里他为几家出版社担任校对,直到在《密尔沃基期刊》(Milwaukee Journal)找到一份固定的工作。他呆在这家杂志一直到1947年,接着搬到纽约,在一家廉价杂志集团担任编辑。

    \n

      1938年,布朗发表了第一篇小说《镍币之月》(The Moon for a Nickel),刊登在《斯崔特和史密斯侦探小说杂志》(Street and Smith’s Detective Story Magazine)。从那时开始,布朗成为廉价杂志的固定投稿者,在不同类型的杂志上发表,包括《一角侦探》(Dime Mystery)、《星球故事》(Planet Stories)、《怪异故事》(Weird Tales)。他在廉价小说读者中拥有一大堆拥趸。

    \n

      布朗第一次普遍的成功是因为发表了第一部长篇小说《传说中的高级夜总会》(The Fabulous Clipjoint,1947),这部作品的主角是一对叔侄组合艾德和阿姆·亨特(Ed and Am Hunter)。他还因此赢得了1984年的埃德加奖。布朗的的经济状况得到改善,他搬到纽约成为一名高级编辑。同时他和第一任妻子海伦(Helen)离婚。

    \n

      他接下来的侦探小说也非常成功,包括《死亡套环》(The Dead Ringer,1948)、《尖叫的米米》(The Screaming Mimi,1949)。1949年末,他遇到了伊丽莎白·查利尔(Elizabeth Charlier),二人结婚后搬到新墨西哥州的陶斯。廉价小说集团倒闭之后,布朗成为了一名受欢迎的犯罪小说家。随着电视在娱乐业中所占的份量越来越重,布朗也将他的小说改编为电视片。

    \n

      布朗的身体一直不好,而且他偶尔酗酒对身体健康更是无益。因为呼吸疾病,布朗和妻子在1954年搬到亚利桑那州图森。尽管他为一些报酬很高的杂志写作,诸如《花花公子》(Playboy),但是他已经在走下坡。他的最后一部长篇小说《墨菲太太的内衣裤》(Mrs. Murphy’s Underpants,1963)已经不是水准之作。他还写了一些短篇小说,但是他全职写作的时代已经过去。1972年,布朗因为肺气肿去世。(ellry)

    \n","categories":[{"name":"推理","slug":"推理","permalink":"https://hexo.huangge1199.cn/categories/%E6%8E%A8%E7%90%86/"}],"tags":[{"name":"推理界的今天","slug":"推理界的今天","permalink":"https://hexo.huangge1199.cn/tags/%E6%8E%A8%E7%90%86%E7%95%8C%E7%9A%84%E4%BB%8A%E5%A4%A9/"}]},{"title":"力扣2049:统计最高分的节点数目","slug":"day20220311","date":"2022-03-11T04:05:16.000Z","updated":"2022-09-22T07:39:52.862Z","comments":true,"path":"/post/day20220311/","link":"","excerpt":"","content":"

    2022年03月11日 力扣每日一题

    \n

    题目

    给你一棵根节点为 0二叉树 ,它总共有 n 个节点,节点编号为 0n - 1 。同时给你一个下标从 0 开始的整数数组 parents 表示这棵树,其中 parents[i] 是节点 i 的父节点。由于节点 0 是根,所以 parents[0] == -1

    \n\n

    一个子树的 大小 为这个子树内节点的数目。每个节点都有一个与之关联的 分数 。求出某个节点分数的方法是,将这个节点和与它相连的边全部 删除 ,剩余部分是若干个 非空 子树,这个节点的 分数 为所有这些子树 大小的乘积

    \n\n

    请你返回有 最高得分 节点的 数目

    \n\n

    \n\n

    示例 1:

    \n\n

    \"example-1\"

    \n\n
    输入:parents = [-1,2,0,2,0]\n输出:3\n解释:\n- 节点 0 的分数为:3 * 1 = 3\n- 节点 1 的分数为:4 = 4\n- 节点 2 的分数为:1 * 1 * 2 = 2\n- 节点 3 的分数为:4 = 4\n- 节点 4 的分数为:4 = 4\n最高得分为 4 ,有三个节点得分为 4 (分别是节点 1,3 和 4 )。\n
    \n\n

    示例 2:

    \n\n

    \"example-2\"

    \n\n
    输入:parents = [-1,2,0]\n输出:2\n解释:\n- 节点 0 的分数为:2 = 2\n- 节点 1 的分数为:2 = 2\n- 节点 2 的分数为:1 * 1 = 1\n最高分数为 2 ,有两个节点分数为 2 (分别为节点 0 和 1 )。\n
    \n\n

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 深度优先搜索
  • 数组
  • 二叉树
  • \n\n

    个人解法

    思路:

    \n

      这题是要返回有 最高得分节点的 数目,那么就要将每一个节点的分数都算一遍,而每一个节点的分数,是以下几个数的乘积:该节点下左子树中节点的数目、该节点下右子树中节点的数目,以及总节点数减去以该节点为根节点的子树的节点数。

    \n

      那么,我的解题步骤如下:

    \n
      \n
    1. 我先根据题目给的parents数组分别统计每个节点的直连子节点,将其存放进map中。
    2. 根据map运用递归求出每一个节点作为根节点的子树中的节点数,将其存入counts数组中。
    3. 接下来遍历求每一个节点的分数,并且记录最大得分以及取得最大得分的节点数量。
    \n

    下面是java的代码解法:

    \n
    class Solution {\n    // 记录每一个节点作为根节点的子树中节点的数量\n    int[] counts;\n    public int countHighestScoreNodes(int[] parents) {\n        int size = parents.length;\n\n        // 记录每个节点的直接子节点\n        Map<Integer, List<Integer>> map = new HashMap<>();\n        for (int i = 0; i < size; i++) {\n            map.put(i, new ArrayList<>());\n        }\n        for (int i = 1; i < size; i++) {\n            map.get(parents[i]).add(i);\n        }\n\n        // 记录每个子节点为根节点的树中节点数\n        counts = new int[size];\n        for (int i = 0; i < size; i++) {\n            if (counts[i] > 0) {\n                continue;\n            }\n            counts[i] = dfs(map.get(i), map);\n        }\n\n        // 遍历计算每个节点的得分并统计结果\n        long mul = 1;\n        for (int num : map.get(0)) {\n            mul *= counts[num];\n        }\n        int count = 1;\n        for (int i = 1; i < size; i++) {\n            long temp = 1;\n            for (int num : map.get(i)) {\n                temp *= counts[num];\n            }\n            temp *= (size - counts[i]);\n            if (temp > mul) {\n                mul = temp;\n                count = 1;\n            } else if (temp == mul) {\n                count++;\n            }\n        }\n        return count;\n    }\n    /**\n     * 计算每个节点为根节点的树中节点数\n     */\n    private int dfs(List<Integer> list, Map<Integer, List<Integer>> map) {\n        if (list.size() == 0) {\n            return 1;\n        }\n        int count = 1;\n        for (int i : list) {\n            if (counts[i] > 0) {\n                count += counts[i];\n            } else {\n                count += dfs(map.get(i), map);\n            }\n        }\n        return count;\n    }\n}
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"maven打jar包时本地依赖包未在其中?","slug":"question20220310-1","date":"2022-03-10T06:18:41.000Z","updated":"2022-09-22T07:39:53.198Z","comments":true,"path":"/post/question20220310-1/","link":"","excerpt":"","content":"

    今天运行jar包时报错了,报错内容是找不到某个依赖包中的类。经过一番排查,发现这个类所在的依赖包是以下面这种形式引入的

    <dependency>\n\t<groupId>com.oracle</groupId>\n\t<artifactId>ojdbc6</artifactId>\n\t<version>11.2.0.4</version>\n\t<scope>system</scope>\n\t<systemPath>D:/work/ojdbc6-11.2.0.4.jar</systemPath>\n</dependency>

    针对依赖包在本地(scope 为 system)的这种情况,需要在pom的spring-boot-maven-plugin配置中添加includeSystemScope=true,参考如下:
    <build>\n\t<plugins>\n\t\t<plugin>\n\t\t\t<groupId>org.springframework.boot</groupId>\n\t\t\t<artifactId>spring-boot-maven-plugin</artifactId>\n\t\t\t<version>2.1.7.RELEASE</version>\n\t\t\t<configuration>\n\t\t\t\t<includeSystemScope>true</includeSystemScope>\n\t\t\t</configuration>\n\t\t</plugin>\n\t</plugins>\n</build>

    \n","categories":[{"name":"问题记录","slug":"问题记录","permalink":"https://hexo.huangge1199.cn/categories/%E9%97%AE%E9%A2%98%E8%AE%B0%E5%BD%95/"}],"tags":[{"name":"maven","slug":"maven","permalink":"https://hexo.huangge1199.cn/tags/maven/"}]},{"title":"推理界的3月10号","slug":"mystery0310","date":"2022-03-10T02:40:38.000Z","updated":"2022-09-22T07:39:53.100Z","comments":true,"path":"/post/mystery0310/","link":"","excerpt":"","content":"

    今天是3月10日,在推理界,历史的今天有如下事件:

    \n\n

    古处诚二

      1970年出生于褔冈县、并曾经参与航空自卫队长达六年的古处诚二,2000年以自卫队基地为舞台的推理小说《Unknown》获得第十四回梅菲斯特奖,其后同年再发表以地震灾难为主题的《少年们的密室》、及于翌年(2001)再以自卫队组织为主题创作了《未完成》,接着更以战争为题材发表其他类型的非推理小说。2005年以《七月七日》入选直木奖候选

    \n","categories":[{"name":"推理","slug":"推理","permalink":"https://hexo.huangge1199.cn/categories/%E6%8E%A8%E7%90%86/"}],"tags":[{"name":"推理界的今天","slug":"推理界的今天","permalink":"https://hexo.huangge1199.cn/tags/%E6%8E%A8%E7%90%86%E7%95%8C%E7%9A%84%E4%BB%8A%E5%A4%A9/"}]},{"title":"力扣589:N 叉树的前序遍历","slug":"day20220310","date":"2022-03-10T01:51:36.000Z","updated":"2024-04-25T08:10:09.098Z","comments":true,"path":"/post/day20220310/","link":"","excerpt":"","content":"

    2022年03月10日 力扣每日一题

    \n

    题目

    给定一个 n 叉树的根节点  root ,返回 其节点值的 前序遍历

    \n\n

    n 叉树 在输入中按层序遍历进行序列化表示,每组子节点由空值 null 分隔(请参见示例)。

    \n\n


    \n示例 1:

    \n\n

    \n\n
    \n输入:root = [1,null,3,2,4,null,5,6]\n输出:[1,3,5,6,2,4]\n
    \n\n

    示例 2:

    \n\n

    \"\"

    \n\n
    \n输入:root = [1,null,2,3,4,5,null,null,6,7,null,8,null,9,10,null,null,11,null,12,null,13,null,null,14]\n输出:[1,2,3,6,7,11,14,4,8,12,5,9,13,10]\n
    \n\n

     

    \n\n

    提示:

    \n\n\n\n

     

    \n\n

    进阶:递归法很简单,你可以使用迭代法完成此题吗?

    \n
    Related Topics
  • 深度优先搜索
  • \n\n

    个人解法

    class Solution {\n    List<Integer> list = new ArrayList<>();\n    public List<Integer> preorder(Node root) {\n        dfs(root);\n        return list;\n    }\n    void dfs(Node root) {\n        if (root == null) {\n            return;\n        }\n        list.add(root.val);\n        for (Node node : root.children) {\n            dfs(node);\n        }\n    }\n}
    """\n# Definition for a Node.\nclass Node:\n    def __init__(self, val=None, children=None):\n        self.val = val\n        self.children = children\n"""\nfrom typing import List\n\n\nclass Solution:\n    def preorder(self, root: 'Node') -> List[int]:\n        result = []\n\n        def dfs(node):\n            if node:\n                result.append(node.val)\n                for child in node.children:\n                    dfs(child)\n\n        dfs(root)\n        return result
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣798:得分最高的最小轮调","slug":"day20220309","date":"2022-03-09T08:42:38.000Z","updated":"2024-04-25T08:10:09.096Z","comments":true,"path":"/post/day20220309/","link":"","excerpt":"","content":"

    2022年03月09日 力扣每日一题

    \n

    题目

    给你一个数组 nums,我们可以将它按一个非负整数 k 进行轮调,这样可以使数组变为 [nums[k], nums[k + 1], ... nums[nums.length - 1], nums[0], nums[1], ..., nums[k-1]] 的形式。此后,任何值小于或等于其索引的项都可以记作一分。

    \n\n\n\n

    在所有可能的轮调中,返回我们所能得到的最高分数对应的轮调下标 k 。如果有多个答案,返回满足条件的最小的下标 k

    \n\n

    \n\n

    示例 1:

    \n\n
    \n输入:nums = [2,3,1,4,0]\n输出:3\n解释:\n下面列出了每个 k 的得分:\nk = 0,  nums = [2,3,1,4,0],    score 2\nk = 1,  nums = [3,1,4,0,2],    score 3\nk = 2,  nums = [1,4,0,2,3],    score 3\nk = 3,  nums = [4,0,2,3,1],    score 4\nk = 4,  nums = [0,2,3,1,4],    score 3\n所以我们应当选择 k = 3,得分最高。
    \n\n

    示例 2:

    \n\n
    \n输入:nums = [1,3,0,2,4]\n输出:0\n解释:\nnums 无论怎么变化总是有 3 分。\n所以我们将选择最小的 k,即 0。\n
    \n\n

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 数组
  • 前缀和
  • \n\n

    个人解法

    思路:

    \n

    arrs[k]代表轮调下标为k时的总分数。对于每个元素nums[i],[left,right]区间内的k值就是能让它得分的轮调下标,那么,

    \n
    left = i + 1\nright = i - nums[i]
    \n

    考虑到超出数组范围的问题,因此,修改为

    \n
    // size为数组长度\nleft = (i + 1) % size;\nright = (i - nums[i] + size) % size;
    \n

    接下来,我们要考虑[left,right]是否是一个正常区间:当 left <= right 时,直接在差分数组上给 [left,right] 整体加1;当 left > right 时,区间发生了回绕,相当于 [left,size-1] 和 [0,right] 两段,对应代码中额外的 arrs[0]++ 和 arrs[size]--

    \n\n

    最后我们对数组进行设置,这部分可以使用差分实现

    \n
    class Solution {\n    public int bestRotation(int[] nums) {\n        int size = nums.length;\n        int[] arrs = new int[size + 1];\n        for (int i = 0; i < size; i++) {\n            int left = (i + 1) % size;\n            int right = (i - nums[i] + size) % size;\n            if (left > right) {\n                arrs[0]++;\n                arrs[size]--;\n            }\n            arrs[left]++;\n            arrs[right + 1]--;\n        }\n        for (int i = 1; i < size + 1; i++) {\n            arrs[i] += arrs[i - 1];\n        }\n        int result = 0;\n        for (int i = 1; i < size + 1; i++) {\n            if (arrs[i] > arrs[result]) {\n                result = i;\n            }\n        }\n        return result;\n    }\n}
    from typing import List\n\n\nclass Solution:\n    def bestRotation(self, nums: List[int]) -> int:\n        size = len(nums)\n        arrs = [0] * (size + 1)\n        for i in range(size):\n            left = (i + 1) % size\n            right = (i - nums[i] + size) % size\n            if left > right:\n                arrs[0] += 1\n                arrs[size] -= 1\n            arrs[left] += 1\n            arrs[right + 1] -= 1\n        for i in range(1, size + 1):\n            arrs[i] += arrs[i - 1]\n        result = 0\n        for i in range(1, size + 1):\n            if arrs[i] > arrs[result]:\n                result = i\n        return result
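    \n

    作为补充,可以用题目中的两个示例简单验证一下上面的 Python 实现(个人添加的小测试,需与上面的 Solution 类放在同一个文件里运行):

    \n
    s = Solution()\nprint(s.bestRotation([2, 3, 1, 4, 0]))\n# 打印 3,与示例 1 一致\nprint(s.bestRotation([1, 3, 0, 2, 4]))\n# 打印 0,与示例 2 一致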
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣504:七进制数","slug":"day20220307","date":"2022-03-07T06:15:07.000Z","updated":"2024-04-25T08:10:09.095Z","comments":true,"path":"/post/day20220307/","link":"","excerpt":"","content":"

    2022年03月07日 力扣每日一题

    \n

    题目

    给定一个整数 num,将其转化为 7 进制,并以字符串形式输出。

    \n\n

     

    \n\n

    示例 1:

    \n\n
    \n输入: num = 100\n输出: \"202\"\n
    \n\n

    示例 2:

    \n\n
    \n输入: num = -7\n输出: \"-10\"\n
    \n\n

     

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 数学
  • \n\n

    个人解法

    class Solution {\n    public String convertToBase7(int num) {\n        boolean bl = num < 0;\n        num = Math.abs(num);\n        StringBuilder str = new StringBuilder();\n        while (num >= 7) {\n            str.insert(0, num % 7);\n            num /= 7;\n        }\n        str.insert(0, num);\n        if (bl) {\n            str.insert(0, '-');\n        }\n        return str.toString();\n    }\n}
    class Solution:\n    def convertToBase7(self, num: int) -> str:\n        bl = num < 0\n        s = ''\n        num = abs(num)\n        while num >= 7:\n            s = str(num % 7) + s\n            num //= 7\n        s = str(num) + s\n        if bl:\n            s = '-' + s\n        return s
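    \n

    作为补充,用几组输入简单验证一下上面的 Python 实现(个人添加的小测试,需与上面的 Solution 类放在同一个文件里运行):

    \n
    s = Solution()\nprint(s.convertToBase7(100))\n# 打印 202\nprint(s.convertToBase7(-7))\n# 打印 -10\nprint(s.convertToBase7(0))\n# 打印 0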
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"推理界的3月7号","slug":"mystery0307","date":"2022-03-07T01:21:47.000Z","updated":"2022-09-22T07:39:53.099Z","comments":true,"path":"/post/mystery0307/","link":"","excerpt":"","content":"

    今天是3月7日,在推理界,历史的今天有如下事件:

    \n\n

    仁木悦子

      日本名女推理小说作家。

    \n

      仁木悦子的经历尤其令人注目:1928年生于东京,原名大井三重子。她幼年无忧无虑,但四岁那年患结核性胸椎骨疽病,以致下肢瘫痪,半身不遂。七岁那年父亲去世,不久,母亲也亡故。疾病缠身的仁木悦子幸亏有哥哥大井羲光照顾,他每天教她读书。第二次世界大战爆发后,16岁的仁木悦子由哥哥背着来到富山乡下居住。她只读到小学三年级,但却看了不少书,并从18岁起开始写作。她先练习写童话,发表在《儿童俱乐部》和《母亲之友》杂志上,她的30多篇童话小说还结集出版。后来她又成了“克里斯蒂小说迷”,并写出推理小说《猫知道》。这部小说的主角是一对兄妹侦探,哥哥雄太郎是植物系大学生,妹妹悦子是音乐系学生,这对兄妹通过一只猫的经历,侦破了一起谋杀案。作品中渗入作者与她哥哥的影子,推理手法十分细腻,许多伏线埋在紧张的情节之中,把粗心的读者引入迷途,在作品中可见女作家的风格。故事的进展采用侦探的助手叙述的方式,叙述者仁木悦子与作者同名的形式在日本就是由仁木悦子创下的成功先例。之后在日本,作者与叙述者同名的作品不少。

    \n

      以仁木兄妹为侦探,作者之后继续撰写了《林中之家》、《有刺之树》、《黑色的飘带》等三部长篇和《黄色的花》等若干短篇。

    \n

      仁木悦子幼时卧病在床,玩伴就是猫,所以她一直喜欢猫,不但让猫在《猫知道》里扮演重要的角色,她所出版的许多推理小说的封面,也都请画家画描,晚年时还主编了一本以“猫”为主题的小说集。她家中的猫则是女佣外出时,从外面捡回来的遭人遗弃的小猫。

    \n

      《猫知道》写于1957年,参加了江户川乱步侦探小说奖的评选。经过评委投票,《猫知道》在96篇征文作品中名列第一,并获第三届江户川乱步奖。

    \n

      由于评委都不认识作者,当仁木悦子由她哥哥大井羲光和亲友抬着参加颁奖仪式时,全场引起了轰动。人们意想不到,一个半身不遂、不能走动的女性竟有如此聪颖的智慧与坚韧的毅力,她赢得了热烈的掌声。在闪光灯中,第一次见到仁木悦子的江户川乱步亲自给她发奖,奖品是一座“福尔摩斯座像”,还有五万日元的奖金。评委木木高太郎则发表了一段讲话:“《呼啸山庄》在英国文学史上占有不朽的地位;女作家艾米莉·勃朗蒂病魔缠身,仍能写出这样的杰作。仁木悦子君也是有病在身,相信她也能写出与勃朗蒂媲美的好书。”29岁的仁木悦子激动得热泪盈眶,她是20多年来第一次离开家门。事后她回忆道:“我走进豪华的会场大厅,看见闪闪发光的水晶吊灯,以为自己走进了童话王国。”

    \n

      仁木悦子获奖后,《猫知道》印数剧增,15万册销售一空,后来又拍成电影。丰厚的稿酬收入改善了仁木悦子艰难的处境,她住院进行了5次手术,终于能在家中行走,并坐着轮椅车上街观光。一位翻译家同她结了婚,婚后两人和谐美满。仁木悦子不仅成为丈夫的助手,而且又先后写出了7部长篇推理小说:《林中小屋》(1959年)、《杀人线路图》(1960年)、《有刺的树》(1961年)、《黑色缎带》(1962年)、《两张底片》(1964年)、《枯叶色的街》(1966年)、《冰冷的街道》(1973年)。其中有5部小说仍以兄妹侦探为主角,《两张底片》则是以一对夫妇联手破案,《枯叶色的街》则写一个贫穷的青年与书店女职员被卷入凶案,成为破案主角。这些推理小说都得到了读者的好评。

    \n

      1980年的《赤的猫》获得第三十四届日本推理作家协会短篇赏。仁木悦子最后于1986年因肾病逝世,享年58岁。

    \n

      《猫知道》被日本评论家誉为推理小说史上的“第二次浪潮”。在同一年,松本清张也发表了推理名篇《点与线》。这两部小说一扫日本侦探小说中阴森诡秘的文风,代之以清新简朴的风格。仁木悦子以女性细腻的文笔,写出了社会推理小说,尽管她身患重疾,但她的小说却给人乐观健康的感受。她注重细节的挖掘,留给读者深刻的印象。继仁木悦子之后,许多推理小说家都自觉地摆脱“变格派”的风格,推崇社会推理小说的写实手法。从这一点上说,仁木悦子对日本推理小说的发展有着重要的贡献。

    \n

    种村直树

      种村直树(1936年3月7日-),日本作家、随笔家、评论家。

    \n

      1973年开始创作,从事与铁路有关的创作,发表过很多铁路相关的报告文学、时评、游记、推理小说。

    \n

      出生于滋贺县大津市。滋贺县立大津东高中(现滋贺县立膳所高中)、京都大学法学系毕业。

    \n

      1972年在每日新闻当记者。在此期间掌握了丰富的铁道知识和创作能力,在当时“铁路杂志”总编辑竹岛纪元的鼓动下,执笔创作了《列车追迹》并开始连载。成为自由撰稿人之后,成为“社会派”推理小说的主要创作作家之一。

    \n

      代表作《铁道旅行术》、《日本国有铁道最后的事件》、《“青春18车票”之旅》等。

    \n

    佐飞通俊

      佐飞通俊(1960年3月7日-),日本作家、文艺评论家。

    \n

      出身于福井县。中央大学文学系哲学科毕业,在新闻社工作。1991年《静音系统》(静かなるシステム)(刊登于“群像”1991年6月号)获第34届群像新人文学奖(评论部门)优秀作品。

    \n

      2006年开始创作小说,2月出版《孤独通告》(円環の孤独,讲谈社小说),同年8月出版《爱因斯坦游戏》(アインシュタイン·ゲーム,讲谈社小说),2007年4月又推出“宴の果て 死は独裁者に”(讲谈社小说)。

    \n

    贾德森·菲利普斯

      贾德森·菲利普斯,全名贾德森·潘特寇斯特·菲利普斯(Judson Pentecost Philips,1903年8月10号- 1989年3月7日),美国侦探小说作家,他以休·潘特寇斯特、菲利普·欧文的笔名和他的本名发表了100多部侦探小说,上世纪30年代他还写了为数众多的体育运动类小说。

    \n

      他出生在美国马萨诸塞州诺斯菲尔德,1925年从哥伦比亚大学毕业。

    \n

      20世纪的20年代到30年代,菲利普斯开始为“纸浆”杂志撰写短篇小说,他还同时撰写剧本和一家报纸的专栏。1950年,他进入沙龙剧场负责剧本写作和宣传。

    \n

      1973年,他获得美国侦探作家协会(MWA)颁发的最高荣誉奖项——大师奖。

    \n

      1989年,菲利普斯因肺气肿引起并发症,在康涅狄格州迦南去世,享年85岁。他留下妻子诺玛·伯顿·菲利普斯、三个儿子(大卫、约翰、丹尼尔)和一个女儿(卡罗琳·诺伍德)。

    \n","categories":[{"name":"推理","slug":"推理","permalink":"https://hexo.huangge1199.cn/categories/%E6%8E%A8%E7%90%86/"}],"tags":[{"name":"推理界的今天","slug":"推理界的今天","permalink":"https://hexo.huangge1199.cn/tags/%E6%8E%A8%E7%90%86%E7%95%8C%E7%9A%84%E4%BB%8A%E5%A4%A9/"}]},{"title":"推理界的3月5号","slug":"mystery0305","date":"2022-03-05T02:45:18.000Z","updated":"2022-09-22T07:39:53.098Z","comments":true,"path":"/post/mystery0305/","link":"","excerpt":"","content":"

    今天是3月5日,在推理界,历史的今天有如下事件:

    \n\n

    水谷准

      水谷准(1904年3月5日-2001年3月20日),日本小说家、推理作家、翻译家、编辑。

    \n

      出生于北海道函馆市。旧制函馆中学(现北海道函馆中部高中)中途退学后,进入东京早稻田高中读书。读书期间,1922年以《好敌手》参加“新青年”的有奖征稿第一等入选。早稻田大学文学部法国文学系毕业。1929年接替“新青年”总编辑的职务。1938年一度离职,1939年到1945年再次担任“新青年”的总编辑。

    \n

      1952年《决斗》(ある決闘)获第5届侦探作家俱乐部奖短篇奖。

    \n

      二战之后较多创作与高尔夫球有关的作品。

    \n

    谷克二

      谷克二(1941年3月5日-),日本小说家,被称为“狩猎冒险小说之王(狩猎冒险小说第一人者)”。出生于宫崎县延冈市。本名谷正胜。

    \n

      1963年毕业于早稻田大学商学系。在德国大众汽车公司工作过,之后去了英国,在伦敦大学主修历史经济学。回国后,开始创作生涯。

    \n

      1974年,凭借处女作《追うもの》获得第1届野性时代新人奖。1978年又以《狙击者》获得第5届角川小说奖。他的作品《サバンナ》(又译作《西班牙的短暂夏天》)以及《越境线》先后成为直木奖候补作品。

    \n","categories":[{"name":"推理","slug":"推理","permalink":"https://hexo.huangge1199.cn/categories/%E6%8E%A8%E7%90%86/"}],"tags":[{"name":"推理界的今天","slug":"推理界的今天","permalink":"https://hexo.huangge1199.cn/tags/%E6%8E%A8%E7%90%86%E7%95%8C%E7%9A%84%E4%BB%8A%E5%A4%A9/"}]},{"title":"推理界的3月4号","slug":"mystery0304","date":"2022-03-04T03:28:18.000Z","updated":"2022-09-22T07:39:53.097Z","comments":true,"path":"/post/mystery0304/","link":"","excerpt":"","content":"

    今天是3月4日,在推理界,历史的今天有如下事件:

    \n\n

    程小青

      程小青(1893—1976)【原名程青心,又名程辉斋】

    \n

      籍贯:江苏吴县人。

    \n

      生平介绍:少年家贫,曾在钟表店当学徒,自学外语,热爱看书。他18岁时开始从事文学写作,先是与周瘦鹃合作翻译柯南·道尔作品,后来创作《霍桑探案》,一举成名。

    \n

      据史料介绍,程小青在21岁时发表的《灯光人影》,被《新闻报》举行的征文大赛选中,他小说中的侦探原名霍森,因排字工人误排,于是便成了霍桑。《霍桑探案》发表之后,程小青不断收到读者大量来信。是读者的鼓励,促使程小青先后写出了《江南燕》、《珠项圈》、《黄浦江中》、《八十四》、《轮下血》、《裹棉刀》、《恐怖的话剧》、《雨夜枪声》、《白衣怪》、《催命符》、《索命钱》、《新婚劫》、《活尸》、《逃犯》、《血手印》、《黑地牢》、《无头案》等30余部侦探小说。著名报人郑逸梅曾称赞他:“毕生精力,尽瘁于此,也就成为侦探小说的巨擘。”

    \n

      程小青的创作,据另一位著名报人范烟桥称“模仿了柯南道尔的写法”,但他又塑造了“中国的福尔摩斯”。为了达到这一目的,程小青作为函授生,受业于美国大学函授科,进修犯罪心理学与侦探学的学习,他从理论上学习西欧侦探理论,在实践中又把中国旧社会发生的案例加以改造。他在谈到创作时,多次谈到自己如何设计侦探小说的名字,怎样取材与裁剪,怎样构思开头与结尾,他把美国作家韦尔斯的专著《侦探小说技艺论》和美国心理学家聂克逊博士的专著《著作人应知的心理学》作为教科书。在小说中,程小青设计了霍桑与包朗一对搭档,类似福尔摩斯与华生医生,但在案件的取材上,程小青着重描写旧中国社会弊病引发的凶杀案,注重人物的心理分析,把凶杀与现实生活的投影结合起来,因此形成了自己的特点与风格。

    \n

    妹尾韶夫

      妹尾韶夫(1892年3月4日-1962年4月19日),日本翻译家、侦探小说作家。出生于冈山县津山市。

    \n

      早稻田大学英文系毕业后,1922年为“新青年”等杂志翻译英美侦探小说,其中多数是阿加莎·克里斯蒂的作品。1925年以后以妹尾安艺夫名义创作,发表了30到40个短篇小说。

    \n

      在“新青年”担当每月评论的胡铁梅、“宝石”杂志每月评论者小原俊一,据说都是妹尾的笔名。

    \n

      1962年因脑溢血去世,终年70岁。

    \n

    詹姆斯·艾尔罗伊

      詹姆斯·艾尔罗伊 (詹姆斯·艾尔罗瓦) James Ellroy(1948年3月4日美国加州洛杉矶-)

    \n

      类型:硬汉;警察程序;私人侦探;倒叙

    \n

      主要系列:

    \n\n

      艾尔罗伊本名李·厄尔·艾尔罗伊(Lee Earle Ellroy)。父亲阿曼德·艾尔罗伊(Armand Ellroy)是反犹太主义者,副业是会计师,母亲杰尼瓦·奥德丽·“简”·希利克·艾尔罗伊(Geneva Odelia “Jean” Hilliker Ellroy)是注册护士。艾尔罗伊的父母于1940年结婚,1954年离婚。艾尔罗伊被判给母亲,接着搬到了埃尔蒙特市。据艾尔罗伊回忆,母亲经常在周六晚上酗酒。1958年6月22日发生了一件对于艾尔罗伊一生影响深远的事情,那天他的母亲被人谋杀。之后,艾尔罗伊和父亲一起居住。十一岁生日,父亲送给他一本洛杉矶警察局历史的书籍,他仔细阅读了这本书,立志将来当一名作家。艾尔罗伊是一个有强迫症的读者,他常常去图书馆借书,还从书店里偷犯罪小说。

    \n

      艾尔罗伊进入犹太费尔法克斯高中(Jewish Fairfax High School),1965年校方知道他父亲的纳粹观点之后将他开除。接着他进入美国陆军,但是很快认识到自己不是当兵的材料。他假装口吃,于是很快退伍。他回家之后不久父亲去世了。

    \n

      那段时间,艾尔罗伊就住在街上,靠着入店行窃和入室盗窃为生。他喝酒,有时候还嗑药,占据着无人的房子。1965年到1977年间,艾尔罗伊因为醉酒、偷窃和非法入室而多次被捕。最后被判入狱八个月。刑满释放之后,他做过一些低等的工作,诸如散发传单,递送邮件,色情书店出纳等等。他继续喝酒,滥用鼻用吸入器。因为患上肺炎和妄想症,艾尔罗伊被送去治疗,1975年治愈。接着他找了一些稳定的工作,比如高尔夫和乡村俱乐部的球童,参加戒酒互助协会(Alcoholics Anonymous)之后他开始创作小说。

    \n

      艾尔罗伊的第一部长篇小说《布朗的安魂曲》(Brown’s Requiem)是一部半自传性质的犯罪小说,风格类似雷蒙德·钱德勒,小说的主人公弗里兹·布朗(Fritz Brown)曾经是一名警官,他戒酒之后变成了一名私人侦探。第二部《秘密行事》(Clandestine,1982)讲述了一名前警官追踪杀害以前爱人的凶手的故事,获得埃德加奖提名。

    \n

      此后,艾尔罗伊的创作速度保持稳健。他先是发表了“洛依·霍普金斯”三部曲,包括《染血之夜》(Blood on the Moon,1984)、《因起此夜》(Because the Night,1984)、《自杀坡》(Suicide Hill,1986)。1984年他辞去球童工作,全职写作。他又发表了“洛杉矶四部曲”,包括《黑色大丽花》(The Black Dahlia,1987)、《无处藏身》(The Big Nowhere,1988)、《洛杉矶的秘密》(L.A. Confidential,1990)、《白色爵士舞》(White Jazz,1992),因此在国内和国际上获得了声誉。《无处藏身》获得1990年侦探小说奖(Prix Mystere Award)。《洛杉矶的秘密》被改编成电影,获得奥斯卡奖提名。2006年《黑色大丽花》被搬上大银幕。

    \n

      1993年到2004年间,艾尔罗伊在《GQ》杂志上发表了小说和非小说。这些作品结集为《好莱坞夜曲》(Hollywood Nocturnes,1994)、《犯罪之波:来自洛杉矶地下社会的报道和小说》(Crime Wave: Reportage and Fiction from the Underside of L.A.,1999)和《危险的步调》(Breakneck Pace,2000)。

    \n

      艾尔罗伊结过两次婚,第一任妻子是玛丽·多赫尔蒂(Mary Doherty),第二任是海伦·诺德(Helen Knode),均离婚。2005年他从康涅狄格州纽卡纳安搬到洛杉矶。(ellry)

    \n

    黑川博行

      黑川博行(1949年3月4日-),日本小说家。出生于爱媛县。京都市立艺术大学美术学系雕刻科毕业。妻子是日本画家黑川雅子。

    \n

      毕业后在高中担任美术教师,1984年以《第二次告别》(二度のお别れ)获三得利推理大奖佳作。1986年《猫眼宝石》(キャッツアイころがった)获第4届三得利推理大奖。1996年《伯爵计划》(カウント·プラン)获第49届日本推理作家协会奖(短篇部门)。

    \n

      获得直木奖候补的作品有《伯爵计划》(カウント·プラン)、《疫病神》、《文福茶釜》、《国境》、《恶果》等。

    \n

      他还是由船越荣一郎主演的电视连续剧《刑事吉永诚一·泪的事件簿》的原作。

    \n

    半村良

      半村良(1933年10月27日-2002年3月4日),日本小说家。本名清野平太郎。出生于东京府(现东京都),在东京都立两国高中毕业后,先后做过酒吧侍者等多种职业,在广告公司任职期间与广播公司等建立了密切的关系,后开始这方面的工作。

    \n

      1962年短篇小说《收获》(収穫)获得第2届早川科幻小说大赛(ハヤカワ·SFコンテスト)第三名,正式成为作家,上世纪六十年代在《SF杂志》(SFマガジン)上发表若干个短篇小说后,突然不再发表作品,据说和当时《SF杂志》总编辑福岛正实关系不佳,后转入自由创作。

    \n

      1971年出版《石之血脉》(石の血脈)开创了浪漫传奇小说流派,这种风格对后世很多作家产生了影响。

    \n

      1975年《雨やどり》获得直木奖。

    \n","categories":[{"name":"推理","slug":"推理","permalink":"https://hexo.huangge1199.cn/categories/%E6%8E%A8%E7%90%86/"}],"tags":[{"name":"推理界的今天","slug":"推理界的今天","permalink":"https://hexo.huangge1199.cn/tags/%E6%8E%A8%E7%90%86%E7%95%8C%E7%9A%84%E4%BB%8A%E5%A4%A9/"}]},{"title":"python3学习笔记--集合、元组、字典、列表对比","slug":"pyMulCom","date":"2022-03-03T13:55:20.000Z","updated":"2022-09-22T07:39:53.194Z","comments":true,"path":"/post/pyMulCom/","link":"","excerpt":"","content":"

    数据结构

    Python支持以下数据结构:列表,字典,元组,集合。

    \n

    何时使用字典:

    \n\n

    何时使用其他类型:

    \n\n
    \n

    很多时候,元组与字典结合使用,例如元组可以用作字典的键,因为它是不可变的。

    \n
    \n

    1、列表

    使用方括号创建

    \n
    words = ["Hello", "world", "!"]
    \n
    \n

    使用空的方括号创建空列表

    \n

    可以通过索引来访问

    \n

    大多数情况下,列表中的最后一项不会带逗号。然而,在最后一项放置一个逗号是完全有效的,在某些情况下是鼓励的。

    \n

    列表的索引是从0开始的,而不是从1开始的

    \n
    \n
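
    补充一个简单的例子(个人添加,演示上面提到的索引访问、从0开始的下标以及末尾逗号的写法):

    \n
    words = ["Hello", "world", "!",]\nprint(words[0])\n# 打印 "Hello"\nprint(words[2])\n# 打印 "!"
    \n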

    2、集合

    使用花括号或 set 函数创建

    \n
    num_set = {1, 2, 3, 4, 5}\nword_set = set(["spam", "eggs", "sausage"])
    \n
    \n

    要创建一个空集,必须使用 set(),因为 {} 创建的是一个空字典。

    \n

    集合是无序的,这意味着他们不能被索引。

    \n

    集合不能包含重复的元素。

    \n

    由于存储的方式,检查一个项目是否是一个集合的一部分比检查是不是列表的一部分更快

    \n

    集合使用 add 添加元素 。

    \n

    remove 方法从集合中删除特定的元素; pop 删除随机的元素。

    \n
    \n
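
    补充一个简单的例子(个人添加,演示 add、remove 以及用 in 做快速的成员检查):

    \n
    nums = {1, 2, 3}\nnums.add(4)\nnums.remove(2)\nprint(nums)\n# 打印 {1, 3, 4}(集合无序,显示顺序不保证)\nprint(3 in nums)\n# 打印 True
    \n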

    3、元组

    元组 使用圆括号创建 ,也可以在没有圆括号的情况下创建

    \n
    words = ("spam", "eggs", "sausages",)\nmy_tuple = "one", "two", "three"
    \n
    \n

    使用空括号对创建空元组。

    \n

    元组比列表快,但是元组不能改变。

    \n

    可以使用索引访问元组中的值。

    \n
    \n
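
    补充一个简单的例子(个人添加,演示索引访问,以及元组不可修改):

    \n
    words = ("spam", "eggs", "sausages")\nprint(words[1])\n# 打印 "eggs"\n# words[1] = "bacon"  # 取消注释会抛出 TypeError,元组不支持修改
    \n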

    4、字典

    字典是用于将任意键映射到值的数据结构

    \n
    ages = {"Dave": 24, "Mary": 42, "John": 58}
    \n
    \n

    空字典被定义为{}。

    \n

    字典 中的每个元素都由一个 键:值 对来表示。

    \n

    使用 字典[“键名”] 可以获取对应的值。

    \n
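
    补充一个简单的例子(个人添加,演示 键:值 的读取与新增):

    \n
    ages = {"Dave": 24, "Mary": 42, "John": 58}\nprint(ages["Dave"])\n# 打印 24\nages["Lily"] = 18\nprint(ages["Lily"])\n# 打印 18
    \n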
    \n","categories":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/categories/python/"}],"tags":[{"name":"学习笔记","slug":"学习笔记","permalink":"https://hexo.huangge1199.cn/tags/%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/"},{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/tags/python/"}]},{"title":"python3学习笔记--列表切片","slug":"pyListSlice","date":"2022-03-01T01:47:34.000Z","updated":"2022-09-22T07:39:53.187Z","comments":true,"path":"/post/pyListSlice/","link":"","excerpt":"","content":"

    列表切片(List slices)提供了从列表中检索值的更高级的方法。

    \n
    \n

    列表名[num1 : num2 : num3]

    \n

    从索引num1到num2(不包括num2)间隔为num3的元素

    \n

    num1或num2为负值代表从末尾开始算起的

    \n

    num3为负值代表切片进行逆序截取

    \n
    \n

    以下为具体说明

    \n

    基本用法

    用两个以冒号分隔的整数索引列表。

    \n

    列表切片返回一个包含索引之间旧列表中所有值的新列表。

    \n

    例如:

    \n
    squares = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\nprint(squares[2:6])\nprint(squares[3:8])\nprint(squares[0:1])
    \n

    结果:

    \n
    [4, 9, 16, 25]\n[9, 16, 25, 36, 49]\n[0]
    \n
    \n

    和Range参数一样,在一个 slice 中提供的第一个索引被包含在结果中,但是第二个索引没有。

    \n
    \n

    省略一个数字

    如果省略了切片中的第一个数字,则将从列表第一个元素开始。

    \n

    如果第二个数字被省略,则认为是到列表结束。

    \n

    例如:

    \n
    squares = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\nprint(squares[:7])\nprint(squares[7:])
    \n

    结果:

    \n
    [0, 1, 4, 9, 16, 25, 36]\n[49, 64, 81]
    \n
    \n

    切片也可以用在元组上

    \n
    \n
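
    例如(个人补充的示例,切片语法与列表相同,结果仍然是元组):

    \n
    squares = (0, 1, 4, 9, 16, 25)\nprint(squares[2:5])\nprint(squares[::-1])
    \n

    结果:

    \n
    (4, 9, 16)\n(25, 16, 9, 4, 1, 0)
    \n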

    带间隔的切片

    列表切片还可以有第三个数字,表示间隔。

    \n

    例如:

    \n
    squares = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\nprint(squares[::2])\nprint(squares[2:8:3])
    \n

    结果:

    \n
    [0, 4, 16, 36, 64]\n[4, 25]
    \n
    \n

    [2:8:3] 包含从索引2到8间隔3的元素。

    \n
    \n

    带负值

    负值也可用于列表切片(和正常列表索引)。当切片(或普通索引)中的第一个和第二个值使用负值时,它们将从列表的末尾算起。

    \n

    例如:

    \n
    squares = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\nprint(squares[1:-1])\nprint(squares[-3:-1])\nprint(squares[::-1])
    \n

    结果:

    \n
    [1, 4, 9, 16, 25, 36, 49, 64]\n[49, 64]\n[81, 64, 49, 36, 25, 16, 9, 4, 1, 0]
    \n
    \n

    如果切片第三个数值使用负值,则切片进行逆序截取。
    使用[::-1]作为切片是反转列表的常用方法。

    \n
    \n","categories":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/categories/python/"}],"tags":[{"name":"学习笔记","slug":"学习笔记","permalink":"https://hexo.huangge1199.cn/tags/%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/"},{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/tags/python/"}]},{"title":"python3学习笔记--常用的函数","slug":"pyUsefulFun","date":"2022-03-01T01:02:19.000Z","updated":"2024-04-25T08:10:09.104Z","comments":true,"path":"/post/pyUsefulFun/","link":"","excerpt":"","content":"

    本篇博客内容为学习整理笔记,学习地址为:
    https://www.w3cschool.cn/minicourse/play/python3course?cp=427&gid=0

    \n
    \n

    字符串函数

    1、join

    以另一个字符串作为分隔符连接字符串列表。

    \n

    例如:

    \n
    print(", ".join(["spam", "eggs", "ham"]))\n# 打印 "spam, eggs, ham"
    \n

    2、replace

    用另一个替换字符串中的一个子字符串。

    \n

    例如:

    \n
    print("Hello ME".replace("ME", "world"))\n# 打印 "Hello world"
    \n

    3、startswith

    确定是否在字符串的开始处有一个子字符串。

    \n

    例如:

    \n
    print("This is a sentence.".startswith("This"))\n# 打印 "True"
    \n

    4、endswith

    确定是否在字符串的结尾处有一个子字符串。

    \n

    例如:

    \n
    print("This is a sentence.".endswith("sentence."))\n# 打印 "True"
    \n

    5、lower

    将字符串全部转为小写。

    \n

    例如:

    \n
    print("AN ALL CAPS SENTENCE".lower())\n# 打印  "an all caps sentence"
    \n

    6、upper

    将字符串全部转为大写。

    \n

    例如:

    \n
    print("This is a sentence.".upper())\n# 打印 "THIS IS A SENTENCE."
    \n

    7、split

    把一个字符串转换成一个列表。

    \n

    例如:

    \n
    print("spam, eggs, ham".split(", "))\n# 打印  "['spam', 'eggs', 'ham']"
    \n

    数字函数

    1、max

    查找某些数字或列表的最大值。

    \n

    例如:

    \n
    print(max([1, 2, 9, 2, 4, 7, 8]))\n# 打印 9
    \n

    2、min

    查找某些数字或列表的最小值。

    \n

    例如:

    \n
    print(min(1, 6, 3, 4, 0, 7, 1))\n# 打印 0
    \n

    3、abs

    将数字转成绝对值(该数字与0的距离)。

    \n

    例如:

    \n
    print(abs(-93))\n# 打印 93\nprint(abs(22))\n# 打印 22
    \n

    4、round

    要将数字四舍五入到一定的小数位数。

    \n
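
    例如(个人补充的示例;另外注意 Python 3 的 round 对正好一半的情况采用“银行家舍入”,例如 round(2.5) 的结果是 2):

    \n
    print(round(3.14159, 2))\n# 打印 3.14\nprint(round(2.5))\n# 打印 2
    \n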

    5、sum

    计算一个列表数字的总和。

    \n

    例如:

    \n
    print(sum([1, 2, 3, 4, 5, 6]))\n# 打印 21
    \n

    列表函数

    1、all

    列表中所有值均为 True 时,结果为 True,否则结果为 False。

    \n

    例如:

    \n
    nums = [55, 44, 33, 22, 11]\n\nif all([i > 5 for i in nums]):\n    print("All larger than 5")\n\n# 打印 All larger than 5
    \n

    2、any

    列表中只要有一个为 True,结果为 True,反之结果为 False。

    \n

    例如:

    \n
    nums = [55, 44, 33, 22, 11]\n\nif any([i % 2 == 0 for i in nums]):\n    print("At least one is even")\n\n# 打印 At least one is even
    \n

    3、enumerate

    用来同时迭代列表的下标和值。

    \n

    例如:

    \n
    nums = [55, 44, 33, 22, 11]\n\nfor v in enumerate(nums):\n   print(v)\n\n# 打印\n# (0, 55)\n# (1, 44)\n# (2, 33)\n# (3, 22)\n# (4, 11)
    \n","categories":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/categories/python/"}],"tags":[{"name":"学习笔记","slug":"学习笔记","permalink":"https://hexo.huangge1199.cn/tags/%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/"},{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/tags/python/"}]},{"title":"力扣540:有序数组中的单一元素","slug":"day20220214","date":"2022-02-14T01:49:24.000Z","updated":"2024-04-25T08:10:09.093Z","comments":true,"path":"/post/day20220214/","link":"","excerpt":"","content":"

    2022年02月14日 力扣每日一题

    \n

    题目

    给定一个只包含整数的有序数组,每个元素都会出现两次,唯有一个数只会出现一次,找出这个数。

    \n\n

    \n\n

    示例 1:

    \n\n
    \n输入: nums = [1,1,2,3,3,4,4,8,8]\n输出: 2\n
    \n\n

    示例 2:

    \n\n
    \n输入: nums =  [3,3,7,7,10,11,11]\n输出: 10\n
    \n\n

    \n\n

    \n\n

    提示:

    \n\n\n\n

    \n\n

    进阶: 采用的方案可以在 O(log n) 时间复杂度和 O(1) 空间复杂度中运行吗?

    \n
    Related Topics
  • 数组
  • 二分查找
  • \n\n

    个人解法

    根据异或的规则,相同为0,不同为1,这样把所有数都异或一遍,结果就是唯一的只出现一次的数

    \n
    public int singleNonDuplicate(int[] nums) {\n    int result = nums[0];\n    for (int i = 1; i < nums.length; i++) {\n        result ^= nums[i];\n    }\n    return result;\n}
    import operator\nfrom functools import reduce\nfrom typing import List\n\n\nclass Solution:\n    def singleNonDuplicate(self, nums: List[int]) -> int:\n        return reduce(operator.xor, nums)
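    \n

    进阶中要求 O(log n) 时间复杂度,这里补充一个基于二分查找的写法草稿(个人添加):利用“单一元素左侧的成对元素从偶数下标开始、右侧的成对元素从奇数下标开始”这一性质,只在偶数下标上做二分。

    \n
    from typing import List\n\n\nclass Solution:\n    def singleNonDuplicate(self, nums: List[int]) -> int:\n        lo, hi = 0, len(nums) - 1\n        while lo < hi:\n            mid = (lo + hi) // 2\n            # 把 mid 调整为偶数,便于与右侧相邻元素配对比较\n            mid -= mid % 2\n            if nums[mid] == nums[mid + 1]:\n                # 成对关系未被破坏,单一元素在 mid 右侧\n                lo = mid + 2\n            else:\n                # 单一元素在 mid 或其左侧\n                hi = mid\n        return nums[lo]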
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣1189:“气球” 的最大数量","slug":"day20220213","date":"2022-02-13T14:32:48.000Z","updated":"2024-04-25T08:10:09.092Z","comments":true,"path":"/post/day20220213/","link":"","excerpt":"","content":"

    2022年02月13日 力扣每日一题

    \n

    题目

    给你一个字符串 text,你需要使用 text 中的字母来拼凑尽可能多的单词 "balloon"(气球)

    \n\n

    字符串 text 中的每个字母最多只能被使用一次。请你返回最多可以拼凑出多少个单词 "balloon"

    \n\n

     

    \n\n

    示例 1:

    \n\n

    \"\"

    \n\n
    输入:text = "nlaebolko"\n输出:1\n
    \n\n

    示例 2:

    \n\n

    \"\"

    \n\n
    输入:text = "loonbalxballpoon"\n输出:2\n
    \n\n

    示例 3:

    \n\n
    输入:text = "leetcode"\n输出:0\n
    \n\n

     

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 哈希表
  • 字符串
  • 计数
  • \n\n

    个人解法

    一个单词”balloon”分别需要一个’b’、’a’、’n’,以及两个’l’、’o’
    首先我们统计给定字符串中每个字母的个数
    然后取’b’、’a’、’n’的数量以及’l’、’o’的数量除以2之后的最小值

    \n
    class Solution {\n    public int maxNumberOfBalloons(String text) {\n        int[] arrs = new int[26];\n        for (char ch : text.toCharArray()) {\n            arrs[ch - 'a']++;\n        }\n        int count = Math.min(arrs[0], arrs[1]);\n        count = Math.min(count, arrs['l' - 'a'] / 2);\n        count = Math.min(count, arrs['o' - 'a'] / 2);\n        count = Math.min(count, arrs['n' - 'a']);\n        return count;\n    }\n}
    from collections import Counter\n\n\nclass Solution:\n    def maxNumberOfBalloons(self, text: str) -> int:\n        return min(cnts[ch] // 2 if ch in "lo" else cnts[ch] for ch in "balon") if (cnts := Counter(text)) else 0
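    \n

    也可以先用 collections.Counter 统计每个字母的个数,再按需要的数量取最小值,下面是一个展开写法的草稿(个人添加,函数名 max_number_of_balloons 仅作示意):

    \n
    from collections import Counter\n\n\ndef max_number_of_balloons(text: str) -> int:\n    cnt = Counter(text)\n    # "balloon" 需要 1 个 b、a、n,以及 2 个 l、o\n    return min(cnt['b'], cnt['a'], cnt['n'], cnt['l'] // 2, cnt['o'] // 2)\n\n\nprint(max_number_of_balloons("nlaebolko"))\n# 打印 1\nprint(max_number_of_balloons("loonbalxballpoon"))\n# 打印 2\nprint(max_number_of_balloons("leetcode"))\n# 打印 0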
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣1020:飞地的数量","slug":"day20220212","date":"2022-02-12T14:22:26.000Z","updated":"2024-04-25T08:10:09.090Z","comments":true,"path":"/post/day20220212/","link":"","excerpt":"","content":"

    2022年02月12日 力扣每日一题

    \n

    题目

    给你一个大小为 m x n 的二进制矩阵 grid ,其中 0 表示一个海洋单元格、1 表示一个陆地单元格。

    \n\n

    一次 移动 是指从一个陆地单元格走到另一个相邻(上、下、左、右)的陆地单元格或跨过 grid 的边界。

    \n\n

    返回网格中 无法 在任意次数的移动中离开网格边界的陆地单元格的数量。

    \n\n

     

    \n\n

    示例 1:

    \n\"\"\n
    \n输入:grid = [[0,0,0,0],[1,0,1,0],[0,1,1,0],[0,0,0,0]]\n输出:3\n解释:有三个 1 被 0 包围。一个 1 没有被包围,因为它在边界上。\n
    \n\n

    示例 2:

    \n\"\"\n
    \n输入:grid = [[0,1,1,0],[0,0,1,0],[0,0,1,0],[0,0,0,0]]\n输出:0\n解释:所有 1 都在边界上或可以到达边界。\n
    \n\n

     

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 深度优先搜索
  • 广度优先搜索
  • 并查集
  • 数组
  • 矩阵
  • \n\n

    个人解法

    解题方法:广度优先算法

    \n

    这道题是统计无法离开网格边界的陆地单元格数量,我的思路是反过来统计,用总陆地数量减去能离开的陆地数量

    \n

    这样的话,我就可以用广度优先算法来进行解决,步骤如下:

    \n
      \n
    1. 将边界的陆地单元格坐标加入到队列,并计数
    2. 依次从队列中取出队首的陆地
    3. 将取出陆地的相邻陆地加入到队列中,并计数
    4. 当队列为空时,遍历数组获取总陆地数,并减去能离开的陆地数量
    \n
    import java.util.LinkedList;\nimport java.util.Queue;\n\nclass Solution {\n    public int numEnclaves(int[][] grid) {\n        boolean[][] use = new boolean[grid.length][grid[0].length];\n        Queue<int[]> queue = new LinkedList<>();\n        int xl = grid.length;\n        int yl = grid[0].length;\n        int count = 0;\n        for (int i = 0; i < xl; i++) {\n            if (grid[i][0] == 1) {\n                queue.add(new int[]{i, 0});\n                use[i][0] = true;\n                count++;\n            }\n            if (grid[i][yl - 1] == 1 && !use[i][yl - 1]) {\n                queue.add(new int[]{i, yl - 1});\n                use[i][yl - 1] = true;\n                count++;\n            }\n        }\n        for (int i = 1; i < yl - 1; i++) {\n            if (grid[0][i] == 1 && !use[0][i]) {\n                queue.add(new int[]{0, i});\n                use[0][i] = true;\n                count++;\n            }\n            if (grid[xl - 1][i] == 1 && !use[xl - 1][i]) {\n                queue.add(new int[]{xl - 1, i});\n                use[xl - 1][i] = true;\n                count++;\n            }\n        }\n        int[] xp = new int[]{1, -1, 0, 0};\n        int[] yp = new int[]{0, 0, 1, -1};\n        while (!queue.isEmpty()) {\n            int[] arr = queue.poll();\n            int x = arr[0];\n            int y = arr[1];\n            for (int k = 0; k < 4; k++) {\n                int nx = x + xp[k];\n                int ny = y + yp[k];\n                if (nx >= 0 && nx < grid.length && ny >= 0 && ny < grid[0].length && grid[nx][ny] == 1 && !use[nx][ny]) {\n                    queue.add(new int[]{nx, ny});\n                    use[nx][ny] = true;\n                    count++;\n                }\n            }\n        }\n        int sum = 0;\n        for (int[] ints : grid) {\n            for (int j = 0; j < yl; j++) {\n                if (ints[j] == 1) {\n                    sum++;\n                }\n            }\n        }\n        return sum - count;\n    }\n}
    from collections import deque\nfrom typing import List\n\n\nclass Solution:\n    def numEnclaves(self, grid: List[List[int]]) -> int:\n        use = [[False] * len(grid[0]) for _ in range(len(grid))]\n        queue = deque()\n        xl = len(grid)\n        yl = len(grid[0])\n        count = 0\n        for i in range(xl):\n            if grid[i][0] == 1:\n                queue.append((i, 0))\n                use[i][0] = True\n                count += 1\n            if grid[i][yl - 1] == 1 and not use[i][yl - 1]:\n                queue.append((i, yl - 1))\n                use[i][yl - 1] = True\n                count += 1\n        for i in range(1, yl - 1):\n            if grid[0][i] == 1 and not use[0][i]:\n                queue.append((0, i))\n                use[0][i] = True\n                count += 1\n            if grid[xl - 1][i] == 1 and not use[xl - 1][i]:\n                queue.append((xl - 1, i))\n                use[xl - 1][i] = True\n                count += 1\n        while queue:\n            x, y = queue.pop()\n            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):\n                if nx < 0 or nx >= len(grid) or ny < 0 or ny >= len(grid[0]) or grid[nx][ny] == 0 or use[nx][ny]:\n                    continue\n                queue.append((nx, ny))\n                use[nx][ny] = True\n                count += 1\n        sc = sum([sum(row) for row in grid])\n        return sc - count
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"代码提交到多个git仓库","slug":"gitPushMoreRepo","date":"2022-02-11T09:34:25.000Z","updated":"2022-09-22T07:39:52.889Z","comments":true,"path":"/post/gitPushMoreRepo/","link":"","excerpt":"","content":"

    现在我们都习惯于把自己的代码放到远程仓库中,毫无疑问GitHub是首选,但由于国内的网络等各种原因,会导致我们连接不上,这时候我们会考虑放到自建的代码管理仓库或者是gitee上面。

    \n

    我们还不想放弃GitHub,那么我们就要考虑将代码提交到多个仓库中。

    \n

    比如,我分别在GitHub和gitee上都有自己的仓库:

    \n\n

    那么,我可以通过以下命令来进行添加仓库:

    \n

    先添加第一个GitHub的仓库地址:

    git remote add origin https://github.com/huangge1199/my-blog.git

    \n

    再添加gitee的仓库地址

    git remote set-url --add origin https://gitee.com/huangge1199_admin/my-blog.git

    这样的话我们push时,就会将代码同时推送到两个仓库了。

    \n

    当然不想用命令的形式操作,也可以直接修改项目目录下隐藏目录.git中的config文件,在[remote “origin”]中添加多个仓库地址就可以了,参考如下:

    \n
    [remote "origin"]\n\turl = https://gitee.com/huangge1199_admin/my-blog.git\n\tfetch = +refs/heads/*:refs/remotes/origin/*\n\turl = https://github.com/huangge1199/my-blog.git
    \n","categories":[{"name":"git","slug":"git","permalink":"https://hexo.huangge1199.cn/categories/git/"}],"tags":[{"name":"git","slug":"git","permalink":"https://hexo.huangge1199.cn/tags/git/"}]},{"title":"力扣1984:学生分数的最小差值","slug":"day20220211","date":"2022-02-11T05:35:01.000Z","updated":"2024-04-25T08:10:09.089Z","comments":true,"path":"/post/day20220211/","link":"","excerpt":"","content":"

    2022年02月11日 力扣每日一题

    \n

    题目

    给你一个 下标从 0 开始 的整数数组 nums ,其中 nums[i] 表示第 i 名学生的分数。另给你一个整数 k

    \n\n

    从数组中选出任意 k 名学生的分数,使这 k 个分数间 最高分最低分差值 达到 最小化

    \n\n

    返回可能的 最小差值

    \n\n

     

    \n\n

    示例 1:

    \n\n
    输入:nums = [90], k = 1\n输出:0\n解释:选出 1 名学生的分数,仅有 1 种方法:\n- [90] 最高分和最低分之间的差值是 90 - 90 = 0\n可能的最小差值是 0\n
    \n\n

    示例 2:

    \n\n
    输入:nums = [9,4,1,7], k = 2\n输出:2\n解释:选出 2 名学生的分数,有 6 种方法:\n- [9,4,1,7] 最高分和最低分之间的差值是 9 - 4 = 5\n- [9,4,1,7] 最高分和最低分之间的差值是 9 - 1 = 8\n- [9,4,1,7] 最高分和最低分之间的差值是 9 - 7 = 2\n- [9,4,1,7] 最高分和最低分之间的差值是 4 - 1 = 3\n- [9,4,1,7] 最高分和最低分之间的差值是 7 - 4 = 3\n- [9,4,1,7] 最高分和最低分之间的差值是 7 - 1 = 6\n可能的最小差值是 2
    \n\n

     

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 数组
  • 排序
  • 滑动窗口
  • \n\n

    个人解法

    排序后,使用滑动窗口

    \n
    import java.util.Arrays;\n\nclass Solution {\n    public int minimumDifference(int[] nums, int k) {\n        Arrays.sort(nums);\n        int min = Integer.MAX_VALUE;\n        for (int i = 0; i <= nums.length - k; i++) {\n            min = Math.min(min, nums[i + k - 1] - nums[i]);\n        }\n        return min;\n    }\n}
    from typing import List\n\n\nclass Solution:\n    def minimumDifference(self, nums: List[int], k: int) -> int:\n        if k > 1:\n            num = sorted(nums)\n            return min(num[i + k - 1] - num[i] for i in range(len(num) - k + 1))\n        else:\n            return 0
    ","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"个人网站加入到搜索引擎中","slug":"putWebsiteToSearchEngine","date":"2022-02-08T10:26:58.000Z","updated":"2023-02-07T07:18:09.140Z","comments":true,"path":"/post/putWebsiteToSearchEngine/","link":"","excerpt":"","content":"

    一般来说,搜索引擎是不会主动收录你的个人网站的,你可以试试用谷歌或者百度等搜索引擎搜一下,看能不能搜到你个人网站的相关页面?
    如果搜索不到,你可以申请加入搜索引擎,这个是免费的,下面提供一些搜索引擎的提交地址:

    \n\n

    目前,我所知道的就只有这些。如果你有其他搜索引擎的提交地址,可以在评论区中留下搜索引擎名称和地址,万分感谢!

    \n","categories":[{"name":"网站建设","slug":"网站建设","permalink":"https://hexo.huangge1199.cn/categories/%E7%BD%91%E7%AB%99%E5%BB%BA%E8%AE%BE/"}],"tags":[{"name":"网站建设","slug":"网站建设","permalink":"https://hexo.huangge1199.cn/tags/%E7%BD%91%E7%AB%99%E5%BB%BA%E8%AE%BE/"}]},{"title":"力扣1219:黄金矿工","slug":"day20220205","date":"2022-02-06T04:37:26.000Z","updated":"2024-04-25T08:10:09.087Z","comments":true,"path":"/post/day20220205/","link":"","excerpt":"","content":"

    2022年02月05日 力扣每日一题

    \n

    题目

    你要开发一座金矿,地质勘测学家已经探明了这座金矿中的资源分布,并用大小为 m * n 的网格 grid 进行了标注。每个单元格中的整数就表示这一单元格中的黄金数量;如果该单元格是空的,那么就是 0

    \n\n

    为了使收益最大化,矿工需要按以下规则来开采黄金:

    \n\n\n\n

     

    \n\n

    示例 1:

    \n\n
    输入:grid = [[0,6,0],[5,8,7],[0,9,0]]\n输出:24\n解释:\n[[0,6,0],\n [5,8,7],\n [0,9,0]]\n一种收集最多黄金的路线是:9 -> 8 -> 7。\n
    \n\n

    示例 2:

    \n\n
    输入:grid = [[1,0,7],[2,0,6],[3,4,5],[0,3,0],[9,0,20]]\n输出:28\n解释:\n[[1,0,7],\n [2,0,6],\n [3,4,5],\n [0,3,0],\n [9,0,20]]\n一种收集最多黄金的路线是:1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7。\n
    \n\n

     

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 数组
  • 回溯
  • 矩阵
  • \n\n

    个人解法

    class Solution {\n    int[] xl = new int[]{1, -1, 0, 0};\n    int[] yl = new int[]{0, 0, 1, -1};\n    public int getMaximumGold(int[][] grid) {\n        int counts = 0;\n        boolean[][] use = new boolean[grid.length][grid[0].length];\n        for (int i = 0; i < grid.length; i++) {\n            for (int j = 0; j < grid[0].length; j++) {\n                use[i][j] = true;\n                counts = Math.max(counts, dfs(i, j, grid, use));\n                use[i][j] = false;\n            }\n        }\n        return counts;\n    }\n    private int dfs(int x, int y, int[][] grid, boolean[][] use) {\n        int counts = grid[x][y];\n        for (int i = 0; i < 4; i++) {\n            int nx = x + xl[i];\n            int ny = y + yl[i];\n            if (nx < 0 || nx >= grid.length || ny < 0 || ny >= grid[0].length || grid[nx][ny] == 0 || use[nx][ny]) {\n                continue;\n            }\n            use[nx][ny] = true;\n            counts = Math.max(counts, grid[x][y] + dfs(nx, ny, grid, use));\n            use[nx][ny] = false;\n        }\n        return counts;\n    }\n}
    class Solution:\n    def getMaximumGold(self, grid: List[List[int]]) -> int:\n        def dfs(x: int, y: int) -> int:\n            count = grid[x][y]\n            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):\n                if nx < 0 or nx >= len(grid) or ny < 0 or ny >= len(grid[0]) or grid[nx][ny] == 0 or use[nx][ny]:\n                    continue\n                use[nx][ny] = True\n                count = max(count, grid[x][y] + dfs(nx, ny))\n                use[nx][ny] = False\n            return count\n\n        counts = 0\n        # 这种形式下,给一个元素赋值,对应的所有行相同列都会赋值\n        # use = [[False] * len(grid[0])] * len(grid)\n        use = [[False] * len(grid[0]) for _ in range(len(grid))]\n        for i in range(len(grid)):\n            for j in range(len(grid[0])):\n                if grid[i][j] != 0:\n                    use[i][j] = True\n                    counts = max(counts, dfs(i, j))\n                    use[i][j] = False\n        return counts
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣1725:可以形成最大正方形的矩形数目","slug":"day20220204","date":"2022-02-04T14:50:47.000Z","updated":"2024-04-25T08:10:09.086Z","comments":true,"path":"/post/day20220204/","link":"","excerpt":"","content":"

    2022年02月04日 力扣每日一题

    \n

    题目

    给你一个数组 rectangles ,其中 rectangles[i] = [li, wi] 表示第 i 个矩形的长度为 li 、宽度为 wi

    \n\n

    如果存在 k 同时满足 k <= lik <= wi ,就可以将第 i 个矩形切成边长为 k 的正方形。例如,矩形 [4,6] 可以切成边长最大为 4 的正方形。

    \n\n

    maxLen 为可以从矩形数组 rectangles 切分得到的 最大正方形 的边长。

    \n\n

    请你统计有多少个矩形能够切出边长为 maxLen 的正方形,并返回矩形 数目

    \n\n

    \n\n

    示例 1:

    \n\n
    \n输入:rectangles = [[5,8],[3,9],[5,12],[16,5]]\n输出:3\n解释:能从每个矩形中切出的最大正方形边长分别是 [5,3,5,5] 。\n最大正方形的边长为 5 ,可以由 3 个矩形切分得到。\n
    \n\n

    示例 2:

    \n\n
    \n输入:rectangles = [[2,3],[3,7],[4,3],[3,7]]\n输出:3\n
    \n\n

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 数组
  • \n\n

    个人解法

    class Solution {\n    public int countGoodRectangles(int[][] rectangles) {\n        int maxLength = 0;\n        int count = 0;\n        for (int[] rectangle : rectangles) {\n            int temp = Math.min(rectangle[0], rectangle[1]);\n            if (temp == maxLength) {\n                count++;\n            } else if (temp > maxLength) {\n                count = 1;\n                maxLength = temp;\n            }\n        }\n        return count;\n    }\n}
    from typing import List\n\n\nclass Solution:\n    def countGoodRectangles(self, rectangles: List[List[int]]) -> int:\n        maxLength = 0\n        count = 0\n        for rectangle in rectangles:\n            temp = min(rectangle[0], rectangle[1])\n            if temp == maxLength:\n                count = count + 1\n            elif temp > maxLength:\n                count = 1\n                maxLength = temp\n        return count
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"seata1.4.1服务端部署及应用","slug":"seata141demo","date":"2022-02-02T01:37:59.000Z","updated":"2022-09-22T07:39:53.204Z","comments":true,"path":"/post/seata141demo/","link":"","excerpt":"","content":"

    seata1.4.1服务端部署及应用

    springcloud-nacos-seata

    \n

    分布式事务组件seata的使用demo,AT模式,集成nacos、springboot、springcloud、mybatis-plus、feign,数据库采用mysql

    \n

    demo中使用的相关版本号请查看代码。如果自己搭建的demo不成功,请先确认是否由版本差异导致:目前这几个项目更新比较频繁,版本稍有变化便可能出现许多奇怪的问题

    \n\n
    \n

    1. 服务端配置

    seata-server为release版本1.4.1,采用docker部署方式

    \n

    https://github.com/seata/seata/releases/tag/v1.4.1

    \n

    1.1 docker拉取镜像

    docker pull seataio/seata-server:1.4.1
    \n

    1.2 启动临时容器

    docker run --rm --name seata-server -d -p 8091:8091 seataio/seata-server:1.4.1
    \n

    \"\"

    \n

    1.3 将配置文件拷贝出来

    docker cp d5cd81d60189:/seata-server/resources/ ./conf/
    \n

    1.4 修改conf/registry.conf文件

    修改文件,用nacos做注册中心和配置中心

    \n
    vi ./conf/registry.conf
    \n

    原始内容:

    \n
    registry {\n  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa\n  type = "file"\n  loadBalance = "RandomLoadBalance"\n  loadBalanceVirtualNodes = 10\n\n  nacos {\n    application = "seata-server"\n    serverAddr = "127.0.0.1:8848"\n    group = "SEATA_GROUP"\n    namespace = ""\n    cluster = "default"\n    username = ""\n    password = ""\n  }\n  eureka {\n    serviceUrl = "http://localhost:8761/eureka"\n    application = "default"\n    weight = "1"\n  }\n  redis {\n    serverAddr = "localhost:6379"\n    db = 0\n    password = ""\n    cluster = "default"\n    timeout = 0\n  }\n  zk {\n    cluster = "default"\n    serverAddr = "127.0.0.1:2181"\n    sessionTimeout = 6000\n    connectTimeout = 2000\n    username = ""\n    password = ""\n  }\n  consul {\n    cluster = "default"\n    serverAddr = "127.0.0.1:8500"\n  }\n  etcd3 {\n    cluster = "default"\n    serverAddr = "http://localhost:2379"\n  }\n  sofa {\n    serverAddr = "127.0.0.1:9603"\n    application = "default"\n    region = "DEFAULT_ZONE"\n    datacenter = "DefaultDataCenter"\n    cluster = "default"\n    group = "SEATA_GROUP"\n    addressWaitTime = "3000"\n  }\n  file {\n    name = "file.conf"\n  }\n}\n\nconfig {\n  # file、nacos 、apollo、zk、consul、etcd3\n  type = "file"\n\n  nacos {\n    serverAddr = "127.0.0.1:8848"\n    namespace = ""\n    group = "SEATA_GROUP"\n    username = ""\n    password = ""\n  }\n  consul {\n    serverAddr = "127.0.0.1:8500"\n  }\n  apollo {\n    appId = "seata-server"\n    apolloMeta = "http://192.168.1.204:8801"\n    namespace = "application"\n    apolloAccesskeySecret = ""\n  }\n  zk {\n    serverAddr = "127.0.0.1:2181"\n    sessionTimeout = 6000\n    connectTimeout = 2000\n    username = ""\n    password = ""\n  }\n  etcd3 {\n    serverAddr = "http://localhost:2379"\n  }\n  file {\n    name = "file.conf"\n  }\n}\n
    \n

    修改后的内容:

    \n
    registry {\n  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa\n  type = "nacos" # 改为nacos\n  loadBalance = "RandomLoadBalance"\n  loadBalanceVirtualNodes = 10\n\n  nacos {\n    application = "seata-server"\n    serverAddr = "IP:端口" # 改为nacos实际的IP:端口\n    group = "SEATA_GROUP"\n    namespace = ""\n    cluster = "default"\n    username = "nacos" # 改为nacos的账号\n    password = "nacos" # 改为nacos的密码\n  }\n  eureka {\n    serviceUrl = "http://localhost:8761/eureka"\n    application = "default"\n    weight = "1"\n  }\n  redis {\n    serverAddr = "localhost:6379"\n    db = 0\n    password = ""\n    cluster = "default"\n    timeout = 0\n  }\n  zk {\n    cluster = "default"\n    serverAddr = "127.0.0.1:2181"\n    sessionTimeout = 6000\n    connectTimeout = 2000\n    username = ""\n    password = ""\n  }\n  consul {\n    cluster = "default"\n    serverAddr = "127.0.0.1:8500"\n  }\n  etcd3 {\n    cluster = "default"\n    serverAddr = "http://localhost:2379"\n  }\n  sofa {\n    serverAddr = "127.0.0.1:9603"\n    application = "default"\n    region = "DEFAULT_ZONE"\n    datacenter = "DefaultDataCenter"\n    cluster = "default"\n    group = "SEATA_GROUP"\n    addressWaitTime = "3000"\n  }\n  file {\n    name = "file.conf"\n  }\n}\n\nconfig {\n  # file、nacos 、apollo、zk、consul、etcd3\n  type = "nacos" # 改为nacos\n\n  nacos {\n    serverAddr = "IP:端口" # 改为nacos实际的IP:端口\n    namespace = ""\n    group = "SEATA_GROUP"\n    username = "nacos" # 改为nacos的账号\n    password = "nacos" # 改为nacos的密码\n  }\n  consul {\n    serverAddr = "127.0.0.1:8500"\n  }\n  apollo {\n    appId = "seata-server"\n    apolloMeta = "http://192.168.1.204:8801"\n    namespace = "application"\n    apolloAccesskeySecret = ""\n  }\n  zk {\n    serverAddr = "127.0.0.1:2181"\n    sessionTimeout = 6000\n    connectTimeout = 2000\n    username = ""\n    password = ""\n  }\n  etcd3 {\n    serverAddr = "http://localhost:2379"\n  }\n  file {\n    name = "file.conf"\n  }\n}\n
    \n

    1.5 执行SQL语句

    seata配置使用db事务日志存储方式

    \n

    SQL文件下载地址:seata/script/server/db at develop · seata/seata (github.com)

    \n

    1.6 创建config.txt并修改

    config.txt文件地址:seata/config.txt at develop · seata/seata (github.com)

    \n

    config.txt原件:

    \n
    transport.type=TCP\ntransport.server=NIO\ntransport.heartbeat=true\ntransport.enableClientBatchSendRequest=true\ntransport.threadFactory.bossThreadPrefix=NettyBoss\ntransport.threadFactory.workerThreadPrefix=NettyServerNIOWorker\ntransport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler\ntransport.threadFactory.shareBossWorker=false\ntransport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector\ntransport.threadFactory.clientSelectorThreadSize=1\ntransport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread\ntransport.threadFactory.bossThreadSize=1\ntransport.threadFactory.workerThreadSize=default\ntransport.shutdown.wait=3\nservice.vgroupMapping.my_test_tx_group=default\nservice.default.grouplist=127.0.0.1:8091\nservice.enableDegrade=false\nservice.disableGlobalTransaction=false\nclient.rm.asyncCommitBufferLimit=10000\nclient.rm.lock.retryInterval=10\nclient.rm.lock.retryTimes=30\nclient.rm.lock.retryPolicyBranchRollbackOnConflict=true\nclient.rm.reportRetryCount=5\nclient.rm.tableMetaCheckEnable=false\nclient.rm.tableMetaCheckerInterval=60000\nclient.rm.sqlParserType=druid\nclient.rm.reportSuccessEnable=false\nclient.rm.sagaBranchRegisterEnable=false\nclient.rm.tccActionInterceptorOrder=-2147482648\nclient.tm.commitRetryCount=5\nclient.tm.rollbackRetryCount=5\nclient.tm.defaultGlobalTransactionTimeout=60000\nclient.tm.degradeCheck=false\nclient.tm.degradeCheckAllowTimes=10\nclient.tm.degradeCheckPeriod=2000\nclient.tm.interceptorOrder=-2147482648\nstore.mode=file\nstore.lock.mode=file\nstore.session.mode=file\nstore.publicKey=\nstore.file.dir=file_store/data\nstore.file.maxBranchSessionSize=16384\nstore.file.maxGlobalSessionSize=512\nstore.file.fileWriteBufferCacheSize=16384\nstore.file.flushDiskMode=async\nstore.file.sessionReloadReadSize=100\nstore.db.datasource=druid\nstore.db.dbType=mysql\nstore.db.driverClassName=com.mysql.jdbc.Driver\nstore.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true\nstore.db.user=username\nstore.db.password=password\nstore.db.minConn=5\nstore.db.maxConn=30\nstore.db.globalTable=global_table\nstore.db.branchTable=branch_table\nstore.db.queryLimit=100\nstore.db.lockTable=lock_table\nstore.db.maxWait=5000\nstore.redis.mode=single\nstore.redis.single.host=127.0.0.1\nstore.redis.single.port=6379\nstore.redis.sentinel.masterName=\nstore.redis.sentinel.sentinelHosts=\nstore.redis.maxConn=10\nstore.redis.minConn=1\nstore.redis.maxTotal=100\nstore.redis.database=0\nstore.redis.password=\nstore.redis.queryLimit=100\nserver.recovery.committingRetryPeriod=1000\nserver.recovery.asynCommittingRetryPeriod=1000\nserver.recovery.rollbackingRetryPeriod=1000\nserver.recovery.timeoutRetryPeriod=1000\nserver.maxCommitRetryTimeout=-1\nserver.maxRollbackRetryTimeout=-1\nserver.rollbackRetryTimeoutUnlockEnable=false\nserver.distributedLockExpireTime=10000\nclient.undo.dataValidation=true\nclient.undo.logSerialization=jackson\nclient.undo.onlyCareUpdateColumns=true\nserver.undo.logSaveDays=7\nserver.undo.logDeletePeriod=86400000\nclient.undo.logTable=undo_log\nclient.undo.compress.enable=true\nclient.undo.compress.type=zip\nclient.undo.compress.threshold=64k\nlog.exceptionRate=100\ntransport.serialization=seata\ntransport.compressor=none\nmetrics.enabled=false\nmetrics.registryType=compact\nmetrics.exporterList=prometheus\nmetrics.exporterPrometheusPort=9898
    \n

    这里根据自己需求做调整,我这里的配置如下:

    \n
    service.vgroupMapping.order-service-group=default\nservice.vgroupMapping.storage-service-group=default\nservice.enableDegrade=false\nservice.disableGlobalTransaction=false\nstore.mode=db\nstore.db.datasource=druid\nstore.db.dbType=mysql\n#store.db.driverClassName=com.mysql.jdbc.Driver 这个是mysql8以下的驱动\nstore.db.driverClassName=com.mysql.cj.jdbc.Driver #这个是mysql8的驱动\nstore.db.url=jdbc:mysql://192.168.0.1:3306/seata?useUnicode=true #这个是mysql的连接信息\nstore.db.user=root #这个是mysql的用户名\nstore.db.password=123456 #这个是mysql的密码\nstore.db.minConn=5\nstore.db.maxConn=30\nstore.db.globalTable=global_table\nstore.db.branchTable=branch_table\nstore.db.queryLimit=100\nstore.db.lockTable=lock_table\nstore.db.maxWait=5000
    \n

    1.7 创建nacos-config.sh

    在conf中

    \n

    nacos-config.sh获取地址:seata/script/config-center/nacos at develop · seata/seata (github.com)

    \n

    1.8 上传seata配置信息到nacos

    先确认目录结构正确

    \n

    \"\"

    \n
    ./nacos-config.sh -h docker所在机器IP -p 8848 -g SEATA_GROUP  -u nacos -w nacos
    \n

    1.9 修改conf/nacos-config.txt 配置

    service.vgroup_mapping.${your-service-group}=default,中间的${your-service-group}为自己定义的服务组名称,服务中的application.properties文件里配置服务组名称。

    \n

    demo中有两个服务,分别是storage-service和order-service,所以配置如下

    \n
    service.vgroup_mapping.storage-service-group=defaultservice.vgroup_mapping.order-service-group=default
    \n

    注意:高版本中配置项应写作 service.vgroupMapping.xxx(驼峰形式),同时后面的服务组名如 order-service-group 不能定义为 order_service_group

    \n

    1.10 启动seata-server

    分两步,如下

    \n
    # 初始化seata的nacos配置\ncd conf\nsh nacos-config.sh 192.168.21.89\n# 启动seata-server\ncd bin\nsh seata-server.sh -p 8091 -m file
    \n
    \n

    2. 应用配置

    2.1 数据库初始化

    -- 创建 order库、业务表、undo_log表\ncreate database seata_order;\nuse seata_order;\nDROP TABLE IF EXISTS `order_tbl`;\nCREATE TABLE `order_tbl` (\n  `id` int(11) NOT NULL AUTO_INCREMENT,\n  `user_id` varchar(255) DEFAULT NULL,\n  `commodity_code` varchar(255) DEFAULT NULL,\n  `count` int(11) DEFAULT 0,\n  `money` int(11) DEFAULT 0,\n  PRIMARY KEY (`id`)\n) ENGINE=InnoDB DEFAULT CHARSET=utf8;\nCREATE TABLE `undo_log` (\n  `id`            BIGINT(20)   NOT NULL AUTO_INCREMENT,\n  `branch_id`     BIGINT(20)   NOT NULL,\n  `xid`           VARCHAR(100) NOT NULL,\n  `context`       VARCHAR(128) NOT NULL,\n  `rollback_info` LONGBLOB     NOT NULL,\n  `log_status`    INT(11)      NOT NULL,\n  `log_created`   DATETIME     NOT NULL,\n  `log_modified`  DATETIME     NOT NULL,\n  `ext`           VARCHAR(100) DEFAULT NULL,\n  PRIMARY KEY (`id`),\n  UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)\n) ENGINE = InnoDB AUTO_INCREMENT = 1 DEFAULT CHARSET = utf8;\n-- 创建 storage库、业务表、undo_log表\ncreate database seata_storage;\nuse seata_storage;\nDROP TABLE IF EXISTS `storage_tbl`;\nCREATE TABLE `storage_tbl` (\n  `id` int(11) NOT NULL AUTO_INCREMENT,\n  `commodity_code` varchar(255) DEFAULT NULL,\n  `count` int(11) DEFAULT 0,\n  PRIMARY KEY (`id`),\n  UNIQUE KEY (`commodity_code`)\n) ENGINE=InnoDB DEFAULT CHARSET=utf8;\nCREATE TABLE `undo_log` (\n  `id`            BIGINT(20)   NOT NULL AUTO_INCREMENT,\n  `branch_id`     BIGINT(20)   NOT NULL,\n  `xid`           VARCHAR(100) NOT NULL,\n  `context`       VARCHAR(128) NOT NULL,\n  `rollback_info` LONGBLOB     NOT NULL,\n  `log_status`    INT(11)      NOT NULL,\n  `log_created`   DATETIME     NOT NULL,\n  `log_modified`  DATETIME     NOT NULL,\n  `ext`           VARCHAR(100) DEFAULT NULL,\n  PRIMARY KEY (`id`),\n  UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)\n) ENGINE = InnoDB AUTO_INCREMENT = 1 DEFAULT CHARSET = utf8;\n-- 初始化库存模拟数据\nINSERT INTO seata_storage.storage_tbl (id, commodity_code, count) VALUES (1, 'product-1', 9999999);\nINSERT INTO seata_storage.storage_tbl (id, commodity_code, count) VALUES (2, 'product-2', 0);
    \n

    2.2 应用配置

    见代码

    \n

    几个重要的配置

    \n
      \n
    1. 每个应用的resources里需要配置一个registry.conf,demo中与seata-server里的配置相同
    2. application.properties 的各个配置项,注意spring.cloud.alibaba.seata.tx-service-group 是服务组名称,与nacos-config.txt 配置的service.vgroup_mapping.${your-service-group}具有对应关系
    \n
    \n

    3. 测试

      \n
    1. 分布式事务成功,模拟正常下单、扣库存

      \n

      localhost:9091/order/placeOrder/commit

      \n
    2. 分布式事务失败,模拟下单成功、扣库存失败,最终同时回滚

      \n

      localhost:9091/order/placeOrder/rollback

      \n
    \n","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"},{"name":"seata","slug":"java/seata","permalink":"https://hexo.huangge1199.cn/categories/java/seata/"}],"tags":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/tags/java/"},{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"seata","slug":"seata","permalink":"https://hexo.huangge1199.cn/tags/seata/"}]},{"title":"力扣1688:比赛中的配对次数","slug":"day20220125","date":"2022-01-25T05:57:48.000Z","updated":"2024-04-25T08:10:09.084Z","comments":true,"path":"/post/day20220125/","link":"","excerpt":"","content":"

    2022年01月25日 力扣每日一题

    \n

    题目

    给你一个整数 n ,表示比赛中的队伍数。比赛遵循一种独特的赛制:

    \n\n\n\n

    返回在比赛中进行的配对次数,直到决出获胜队伍为止。

    \n\n

    \n\n

    示例 1:

    \n\n
    输入:n = 7\n输出:6\n解释:比赛详情:\n- 第 1 轮:队伍数 = 7 ,配对次数 = 3 ,4 支队伍晋级。\n- 第 2 轮:队伍数 = 4 ,配对次数 = 2 ,2 支队伍晋级。\n- 第 3 轮:队伍数 = 2 ,配对次数 = 1 ,决出 1 支获胜队伍。\n总配对次数 = 3 + 2 + 1 = 6\n
    \n\n

    示例 2:

    \n\n
    输入:n = 14\n输出:13\n解释:比赛详情:\n- 第 1 轮:队伍数 = 14 ,配对次数 = 7 ,7 支队伍晋级。\n- 第 2 轮:队伍数 = 7 ,配对次数 = 3 ,4 支队伍晋级。 \n- 第 3 轮:队伍数 = 4 ,配对次数 = 2 ,2 支队伍晋级。\n- 第 4 轮:队伍数 = 2 ,配对次数 = 1 ,决出 1 支获胜队伍。\n总配对次数 = 7 + 3 + 2 + 1 = 13\n
    \n\n

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 数学
  • 模拟
  • \n\n

    个人解法

    class Solution {\n    public int numberOfMatches(int n) {\n        // 总配对次数\n        int sum = 0;\n        while (n > 1) {\n            if (n % 2 == 1) {\n                // 奇数队伍\n                // 配对次数:(n - 1) / 2\n                sum += (n - 1) / 2;\n                // 剩余队伍数:(n - 1) / 2 + 1\n                n = (n - 1) / 2 + 1;\n            } else {\n                // 偶数队伍\n                // 配对次数:n / 2\n                sum += n / 2;\n                // 剩余队伍数:n / 2\n                n /= 2;\n            }\n        }\n        return sum;\n    }\n}
    class Solution:\n    def numberOfMatches(self, n: int) -> int:\n        # 总配对次数\n        sums = 0\n        while n > 1:\n            if n % 2 == 1:\n                # 奇数队伍\n                # 配对次数:(n - 1) / 2\n                sums += (n - 1) / 2\n                # 剩余队伍数:(n - 1) / 2 + 1\n                n = (n - 1) / 2 + 1\n            else:\n                # 偶数队伍\n                # 配对次数:n / 2\n                sums += n / 2\n                # 剩余队伍数:n / 2\n                n /= 2\n        return int(sums)
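    \n

    补充一点:每进行一场配对都会淘汰一支队伍,决出冠军总共需要淘汰 n - 1 支队伍,所以答案其实就是 n - 1。下面的小草稿(个人添加,函数名 number_of_matches 仅作示意)可以和上面的模拟解法互相印证:

    \n
    def number_of_matches(n: int) -> int:\n    # 每场比赛淘汰一支队伍,共需淘汰 n - 1 支\n    return n - 1\n\n\nprint(number_of_matches(7))\n# 打印 6\nprint(number_of_matches(14))\n# 打印 13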
    ","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣2045:到达目的地的第二短时间","slug":"day20220124","date":"2022-01-24T07:22:58.000Z","updated":"2024-04-25T08:10:09.083Z","comments":true,"path":"/post/day20220124/","link":"","excerpt":"","content":"

    2022年01月24日 力扣每日一题

    \n

    题目

    城市用一个 双向连通 图表示,图中有 n 个节点,从 1n 编号(包含 1n)。图中的边用一个二维整数数组 edges 表示,其中每个 edges[i] = [ui, vi] 表示一条节点 ui 和节点 vi 之间的双向连通边。每组节点对由 最多一条 边连通,顶点不存在连接到自身的边。穿过任意一条边的时间是 time 分钟。

    \n\n

    每个节点都有一个交通信号灯,每 change 分钟改变一次,从绿色变成红色,再由红色变成绿色,循环往复。所有信号灯都 同时 改变。你可以在 任何时候 进入某个节点,但是 只能 在节点 信号灯是绿色时 才能离开。如果信号灯是  绿色 ,你 不能 在节点等待,必须离开。

    \n\n

    第二小的值 是 严格大于 最小值的所有值中最小的值。

    \n\n\n\n

    给你 nedgestimechange ,返回从节点 1 到节点 n 需要的 第二短时间

    \n\n

    注意:

    \n\n\n\n

     

    \n\n

    示例 1:

    \n\n

    \"\"        \"\"

    \n\n
    \n输入:n = 5, edges = [[1,2],[1,3],[1,4],[3,4],[4,5]], time = 3, change = 5\n输出:13\n解释:\n上面的左图展现了给出的城市交通图。\n右图中的蓝色路径是最短时间路径。\n花费的时间是:\n- 从节点 1 开始,总花费时间=0\n- 1 -> 4:3 分钟,总花费时间=3\n- 4 -> 5:3 分钟,总花费时间=6\n因此需要的最小时间是 6 分钟。\n\n右图中的红色路径是第二短时间路径。\n- 从节点 1 开始,总花费时间=0\n- 1 -> 3:3 分钟,总花费时间=3\n- 3 -> 4:3 分钟,总花费时间=6\n- 在节点 4 等待 4 分钟,总花费时间=10\n- 4 -> 5:3 分钟,总花费时间=13\n因此第二短时间是 13 分钟。      \n
    \n\n

    示例 2:

    \n\n

    \"\"

    \n\n
    \n输入:n = 2, edges = [[1,2]], time = 3, change = 2\n输出:11\n解释:\n最短时间路径是 1 -> 2 ,总花费时间 = 3 分钟\n最短时间路径是 1 -> 2 -> 1 -> 2 ,总花费时间 = 11 分钟
    \n\n

     

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 广度优先搜索
  • 最短路
  • \n\n

    个人解法

    import java.util.*;\n\nclass Solution {\n    public int secondMinimum(int n, int[][] edges, int time, int change) {\n        // 统计所有节点的联通节点,并将其存入map中留着后面使用\n        Map<Integer, List<Integer>> map = new HashMap<>(n);\n        for (int i = 1; i <= n; i++) {\n            map.put(i, new ArrayList<>());\n        }\n        for (int[] edge : edges) {\n            map.get(edge[0]).add(edge[1]);\n            map.get(edge[1]).add(edge[0]);\n        }\n        Queue<Integer> queue = new LinkedList<>();\n        queue.add(1);\n        // 记录节点到达的次数\n        int[] counts = new int[n + 1];\n        // 记录到达节点的时间\n        int free = 0;\n        while (!queue.isEmpty()) {\n            // 红灯情况下加上需要等待的时间\n            if (free % (2 * change) >= change) {\n                free += change - free % change;\n            }\n            free += time;\n            // 同一时间可以到达的节点数量\n            int size = queue.size();\n            // 同一时间节点是否已经到达\n            boolean[] use = new boolean[n + 1];\n            for (int i = 0; i < size; i++) {\n                // 获取该节点接下来可以到达的节点\n                List<Integer> list = map.get(queue.poll());\n                for (int num : list) {\n                    // 同一时间未到达,并且到达该节点的总次数小于2\n                    if (!use[num] && counts[num] < 2) {\n                        queue.add(num);\n                        use[num] = true;\n                        counts[num]++;\n                    }\n                    // 如果是第二次到达最后一个节点,直接返回需要到达的诗句\n                    if (num == n && counts[num] == 2) {\n                        return free;\n                    }\n                }\n            }\n        }\n        return 0;\n    }\n}
    from collections import deque\nfrom typing import List\n\n\nclass Solution:\n    def secondMinimum(self, n: int, edges: List[List[int]], time: int, change: int) -> int:\n        # 统计所有节点的联通节点,并将其存入map中留着后面使用\n        maps = [[] for _ in range(n + 1)]\n        for edge in edges:\n            maps[edge[0]].append(edge[1])\n            maps[edge[1]].append(edge[0])\n        queue = deque()\n        queue.append(1)\n        # 记录节点到达的次数\n        counts = [0] * (n + 1)\n        # 记录到达节点的时间\n        free = 0\n        while len(queue):\n            # 红灯情况下加上需要等待的时间\n            if free % (2 * change) >= change:\n                free += change - free % change\n            free += time\n            # 同一时间可以到达的节点数量\n            size = len(queue)\n            # 同一时间节点是否已经到达\n            use = [False] * (n + 1)\n            for i in range(size):\n                for num in maps[queue.popleft()]:\n                    # 同一时间未到达,并且到达该节点的总次数小于2\n                    if use[num] is False and counts[num] < 2:\n                        queue.append(num)\n                        use[num] = True\n                        counts[num] += 1\n\n                    # 如果是第二次到达最后一个节点,直接返回需要的时间\n                    if num == n and counts[num] == 2:\n                        return free\n        return 0
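    \n

    补充一个小草稿(个人添加,函数名 pass_one_edge 仅作示意),单独演示上面“红灯时等待”这一步的计算方式,以示例 1 中 time = 3、change = 5,沿第二短路径 1 -> 3 -> 4 -> 5 依次经过三条边为例:

    \n
    def pass_one_edge(free: int, time: int, change: int) -> int:\n    # 信号灯以 change 为周期在绿、红之间交替,余数落在 [change, 2*change) 时是红灯\n    if free % (2 * change) >= change:\n        # 等到下一次变绿\n        free += change - free % change\n    return free + time\n\n\nfree = 0\nfor _ in range(3):\n    free = pass_one_edge(free, 3, 5)\n    print(free)\n# 依次打印 3、6、13,与示例 1 中第二短路径的时间线一致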
    ","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣1345:跳跃游戏 IV","slug":"day20220121","date":"2022-01-21T08:26:26.000Z","updated":"2024-04-25T08:10:09.081Z","comments":true,"path":"/post/day20220121/","link":"","excerpt":"","content":"

    2022年01月21日 力扣每日一题

    \n

    题目

    给你一个整数数组 arr ,你一开始在数组的第一个元素处(下标为 0)。

    \n\n

    每一步,你可以从下标 i 跳到下标:

  • i + 1 满足:i + 1 < arr.length
  • i - 1 满足:i - 1 >= 0
  • j 满足:arr[i] == arr[j] 且 i != j

    请你返回到达数组最后一个元素的下标处所需的 最少操作次数 。

    \n\n

    注意:任何时候你都不能跳到数组外面。

    \n\n

     

    \n\n

    示例 1:

    \n\n
    输入:arr = [100,-23,-23,404,100,23,23,23,3,404]\n输出:3\n解释:你需要跳跃 3 次,下标依次为 0 --> 4 --> 3 --> 9 。下标 9 为数组的最后一个元素的下标。\n
    \n\n

    示例 2:

    \n\n
    输入:arr = [7]\n输出:0\n解释:一开始就在最后一个元素处,所以你不需要跳跃。\n
    \n\n

    示例 3:

    \n\n
    输入:arr = [7,6,9,6,9,6,9,7]\n输出:1\n解释:你可以直接从下标 0 处跳到下标 7 处,也就是数组的最后一个元素处。\n
    \n\n

    示例 4:

    \n\n
    输入:arr = [6,1,9]\n输出:2\n
    \n\n

    示例 5:

    \n\n
    输入:arr = [11,22,7,7,7,7,7,7,7,22,13]\n输出:3\n
    \n\n

     

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 广度优先搜索
  • 数组
  • 哈希表
  • \n\n

    个人解法

    import java.util.*;\n\nclass Solution {\n    public int minJumps(int[] arr) {\n        if (arr.length == 1) {\n            return 0;\n        }\n        boolean[] use = new boolean[arr.length];\n        // 相同值对应的全部下标,方便做等值跳跃\n        Map<Integer, List<Integer>> map = new HashMap<>();\n        for (int i = 0; i < arr.length; i++) {\n            map.computeIfAbsent(arr[i], k -> new ArrayList<>()).add(i);\n        }\n        use[0] = true;\n        Queue<Integer> queue = new ArrayDeque<>();\n        queue.add(0);\n        // BFS 按层扩展,count 即跳跃次数\n        int count = 0;\n        while (!queue.isEmpty()) {\n            int size = queue.size();\n            count++;\n            for (int i = 0; i < size; i++) {\n                int index = queue.poll();\n                if (index - 1 >= 0 && !use[index - 1]) {\n                    queue.add(index - 1);\n                    use[index - 1] = true;\n                }\n                if (index + 1 == arr.length - 1) {\n                    return count;\n                }\n                if (index + 1 < arr.length && !use[index + 1]) {\n                    queue.add(index + 1);\n                    use[index + 1] = true;\n                }\n                if (map.containsKey(arr[index])) {\n                    List<Integer> list = map.get(arr[index]);\n                    // 等值下标整体只会用一次,用完即删,避免重复遍历\n                    map.remove(arr[index]);\n                    for (int ind : list) {\n                        if (ind == arr.length - 1) {\n                            return count;\n                        }\n                        if (!use[ind]) {\n                            queue.add(ind);\n                            use[ind] = true;\n                        }\n                    }\n                }\n            }\n        }\n        return 0;\n    }\n}
    from collections import defaultdict, deque\nfrom typing import List\n\n\nclass Solution:\n    def minJumps(self, arr: List[int]) -> int:\n        if len(arr) == 1:\n            return 0\n        map = defaultdict(list)\n        for i, a in enumerate(arr):\n            map[a].append(i)\n        use = set()\n        queue = deque()\n        queue.append(0)\n        use.add(0)\n        count = 0\n        while queue:\n            count += 1\n            for i in range(len(queue)):\n                index = queue.popleft()\n                if index - 1 >= 0 and (index - 1) not in use:\n                    use.add(index - 1)\n                    queue.append(index - 1)\n                if index + 1 == len(arr) - 1:\n                    return count\n                if index + 1 < len(arr) and (index + 1) not in use:\n                    use.add(index + 1)\n                    queue.append(index + 1)\n                v = arr[index]\n                for i in map[v]:\n                    if i == len(arr) - 1:\n                        return count\n                    if i not in use:\n                        use.add(i)\n                        queue.append(i)\n                del map[v]
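    \n
    补充:用上面的 Java 解法把题目中的 5 个示例都跑一遍做个简单验证(个人补充的示意代码),预期依次输出 3、0、1、2、3。
    \n
    class Verify1345 {\n    public static void main(String[] args) {\n        Solution solution = new Solution();\n        System.out.println(solution.minJumps(new int[]{100, -23, -23, 404, 100, 23, 23, 23, 3, 404})); // 预期 3\n        System.out.println(solution.minJumps(new int[]{7})); // 预期 0\n        System.out.println(solution.minJumps(new int[]{7, 6, 9, 6, 9, 6, 9, 7})); // 预期 1\n        System.out.println(solution.minJumps(new int[]{6, 1, 9})); // 预期 2\n        System.out.println(solution.minJumps(new int[]{11, 22, 7, 7, 7, 7, 7, 7, 7, 22, 13})); // 预期 3\n    }\n}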
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣2029:石子游戏 IX","slug":"day20220120","date":"2022-01-20T02:56:54.000Z","updated":"2024-04-25T08:10:09.080Z","comments":true,"path":"/post/day20220120/","link":"","excerpt":"","content":"

    2022年01月20日 力扣每日一题

    \n

    题目

    Alice 和 Bob 再次设计了一款新的石子游戏。现有一行 n 个石子,每个石子都有一个关联的数字表示它的价值。给你一个整数数组 stones ,其中 stones[i] 是第 i 个石子的价值。

    \n\n

    Alice 和 Bob 轮流进行自己的回合,Alice 先手。每一回合,玩家需要从 stones 中移除任一石子。

  • 如果玩家移除某个石子后,所有已移除石子的价值总和可以被 3 整除,那么该玩家就输掉游戏。
  • 如果不满足上一条,且移除后没有任何剩余的石子,那么 Bob 获胜。

    假设两位玩家均采用 最佳 决策。如果 Alice 获胜,返回 true ;如果 Bob 获胜,返回 false

    \n\n

     

    \n\n

    示例 1:

    \n\n
    \n输入:stones = [2,1]\n输出:true\n解释:游戏进行如下:\n- 回合 1:Alice 可以移除任意一个石子。\n- 回合 2:Bob 移除剩下的石子。 \n已移除的石子的值总和为 1 + 2 = 3 且可以被 3 整除。因此,Bob 输,Alice 获胜。\n
    \n\n

    示例 2:

    \n\n
    \n输入:stones = [2]\n输出:false\n解释:Alice 会移除唯一一个石子,已移除石子的值总和为 2 。 \n由于所有石子都已移除,且值总和无法被 3 整除,Bob 获胜。\n
    \n\n

    示例 3:

    \n\n
    \n输入:stones = [5,1,2,4,3]\n输出:false\n解释:Bob 总会获胜。其中一种可能的游戏进行方式如下:\n- 回合 1:Alice 可以移除值为 1 的第 2 个石子。已移除石子值总和为 1 。\n- 回合 2:Bob 可以移除值为 3 的第 5 个石子。已移除石子值总和为 1 + 3 = 4 。\n- 回合 3:Alice 可以移除值为 4 的第 4 个石子。已移除石子值总和为 1 + 3 + 4 = 8 。\n- 回合 4:Bob 可以移除值为 2 的第 3 个石子。已移除石子值总和为 1 + 3 + 4 + 2 = 10 。\n- 回合 5:Alice 可以移除值为 5 的第 1 个石子。已移除石子值总和为 1 + 3 + 4 + 2 + 5 = 15 。\nAlice 输掉游戏,因为已移除石子值总和(15)可以被 3 整除,Bob 获胜。\n
    \n\n

     

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 贪心
  • 数组
  • 数学
  • 计数
  • 博弈
  • \n\n

    个人解法

    class Solution {\n    public boolean stoneGameIX(int[] stones) {\n        // 胜负只和石子价值对 3 取余后各类的数量有关,按余数分类计数即可\n        int[] counts = new int[3];\n        for (int stone : stones) {\n            counts[stone % 3]++;\n        }\n        // 余 0 的数量为偶数:只要余 1 和余 2 的石子都存在,Alice 获胜\n        // 余 0 的数量为奇数:余 1 和余 2 的数量差大于 2 时,Alice 才能获胜\n        return counts[0] % 2 == 0 ? counts[1] > 0 && counts[2] > 0 : Math.abs(counts[1] - counts[2]) > 2;\n    }\n}
    from typing import List\n\n\nclass Solution:\n    def stoneGameIX(self, stones: List[int]) -> bool:\n        counts = [0] * 3\n        for stone in stones:\n            counts[stone % 3] += 1\n        if counts[0] % 2 == 0:\n            return counts[1] > 0 and counts[2] > 0\n        else:\n            return abs(counts[1] - counts[2]) > 2
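    \n
    补充:用上面的 Java 解法对题目给出的三个示例做个简单验证(个人补充的示意代码),预期依次输出 true、false、false。
    \n
    class Verify2029 {\n    public static void main(String[] args) {\n        Solution solution = new Solution();\n        System.out.println(solution.stoneGameIX(new int[]{2, 1})); // 预期 true\n        System.out.println(solution.stoneGameIX(new int[]{2})); // 预期 false\n        System.out.println(solution.stoneGameIX(new int[]{5, 1, 2, 4, 3})); // 预期 false\n    }\n}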
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣219:存在重复元素 II","slug":"day20220119","date":"2022-01-19T03:24:37.000Z","updated":"2024-04-25T08:10:09.079Z","comments":true,"path":"/post/day20220119/","link":"","excerpt":"","content":"

    2022年01月19日 力扣每日一题

    \n

    题目

    给你一个整数数组 nums 和一个整数 k ,判断数组中是否存在两个 不同的索引 i 和 j ,满足 nums[i] == nums[j]abs(i - j) <= k 。如果存在,返回 true ;否则,返回 false

    \n\n

     

    \n\n

    示例 1:

    \n\n
    \n输入:nums = [1,2,3,1], k = 3\n输出:true
    \n\n

    示例 2:

    \n\n
    \n输入:nums = [1,0,1,1], k = 1\n输出:true
    \n\n

    示例 3:

    \n\n
    \n输入:nums = [1,2,3,1,2,3], k = 2\n输出:false
    \n\n

     

    \n\n

     

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 数组
  • 哈希表
  • 滑动窗口
  • \n\n

    个人解法

    import java.util.HashMap;\nimport java.util.Map;\n\nclass Solution {\n    public boolean containsNearbyDuplicate(int[] nums, int k) {\n        if (k <= 0) {\n            return false;\n        }\n        Map<Integer, Integer> map = new HashMap<>();\n        for (int i = 0; i < nums.length; i++) {\n            if (map.containsKey(nums[i]) && i - map.get(nums[i]) <= k) {\n                return true;\n            }\n            map.put(nums[i], i);\n        }\n        return false;\n    }\n}
    from typing import List\n\n\nclass Solution:\n    def containsNearbyDuplicate(self, nums: List[int], k: int) -> bool:\n        map = {}\n        for i, num in enumerate(nums):\n            if num in map and i - map[num] <= k:\n                return True\n            map[num] = i\n        return False
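    \n
    补充:用上面的 Java 解法对题目给出的三个示例做个简单验证(个人补充的示意代码),预期依次输出 true、true、false。
    \n
    class Verify219 {\n    public static void main(String[] args) {\n        Solution solution = new Solution();\n        System.out.println(solution.containsNearbyDuplicate(new int[]{1, 2, 3, 1}, 3)); // 预期 true\n        System.out.println(solution.containsNearbyDuplicate(new int[]{1, 0, 1, 1}, 1)); // 预期 true\n        System.out.println(solution.containsNearbyDuplicate(new int[]{1, 2, 3, 1, 2, 3}, 2)); // 预期 false\n    }\n}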
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"力扣539:最小时间差","slug":"day20220118","date":"2022-01-18T06:26:41.000Z","updated":"2024-04-25T08:10:09.078Z","comments":true,"path":"/post/day20220118/","link":"","excerpt":"","content":"

    2022年01月18日 力扣每日一题

    \n

    题目

    给定一个 24 小时制(小时:分钟 \"HH:MM\")的时间列表,找出列表中任意两个时间的最小时间差并以分钟数表示。

    \n\n

    \n\n

    示例 1:

    \n\n
    \n输入:timePoints = [\"23:59\",\"00:00\"]\n输出:1\n
    \n\n

    示例 2:

    \n\n
    \n输入:timePoints = [\"00:00\",\"23:59\",\"00:00\"]\n输出:0\n
    \n\n

    \n\n

    提示:

    \n\n

    \n
    Related Topics
  • 数组
  • 数学
  • 字符串
  • 排序
  • \n\n

    个人解法

    import java.util.List;\n\nclass Solution {\n    public int findMinDifference(List<String> timePoints) {\n        // 一天共 1440 分钟,再向后复制一份(1440~2879),用来处理跨零点的情况\n        int[] times = new int[2880];\n        for (String timePoint : timePoints) {\n            String[] strs = timePoint.split(":");\n            int time = Integer.parseInt(strs[0]) * 60 + Integer.parseInt(strs[1]);\n            // 出现重复的时间点,最小差值直接为 0\n            if (times[time] == 1) {\n                return 0;\n            }\n            times[time] = 1;\n            times[time + 1440] = 1;\n        }\n        if (times[0] == 1 && times[1439] == 1) {\n            return 1;\n        }\n        int min = 1440;\n        // bef 记录上一个出现的时间点,00:00 对应下标 0,所以用 -1 表示还没有遇到过时间点\n        int bef = -1;\n        for (int i = 0; i < 2880; i++) {\n            if (times[i] == 1) {\n                if (bef >= 0) {\n                    min = Math.min(min, i - bef);\n                }\n                if (i > 1439) {\n                    break;\n                }\n                bef = i;\n            }\n        }\n        return min;\n    }\n}
    from typing import List\n\n\nclass Solution:\n    def findMinDifference(self, timePoints: List[str]) -> int:\n        # 一天共 1440 分钟,再向后复制一份(1440~2879),用来处理跨零点的情况\n        times = [0] * 2880\n        for timePoint in timePoints:\n            time = int(timePoint[:2]) * 60 + int(timePoint[-2:])\n            if times[time] == 1:\n                return 0\n            times[time] = 1\n            times[time + 1440] = 1\n        result = 1440\n        # 00:00 对应下标 0,所以用 -1 表示还没有遇到过时间点\n        bef = -1\n        for i in range(2880):\n            if times[i] == 1:\n                if bef >= 0:\n                    result = min(result, i - bef)\n                if i > 1439:\n                    break\n                bef = i\n        return result
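    \n
    补充:用上面的 Java 解法对题目给出的两个示例做个简单验证(个人补充的示意代码),预期依次输出 1、0。
    \n
    import java.util.Arrays;\n\nclass Verify539 {\n    public static void main(String[] args) {\n        Solution solution = new Solution();\n        System.out.println(solution.findMinDifference(Arrays.asList("23:59", "00:00"))); // 预期 1\n        System.out.println(solution.findMinDifference(Arrays.asList("00:00", "23:59", "00:00"))); // 预期 0\n    }\n}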
    \n","categories":[{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"}],"tags":[{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"}]},{"title":"Sublime Text 4 破解","slug":"sublimeText4Purchase","date":"2022-01-14T07:24:18.000Z","updated":"2022-09-22T07:39:53.207Z","comments":true,"path":"/post/sublimeText4Purchase/","link":"","excerpt":"","content":"

    下载地址

    https://www.sublimetext.com/download

    \n

    激活方法

    打开在线十六进制编辑器

    地址:hexed

    \n

    \"image-20220114153619254\"

    \n

    打开sublime_text.exe文件

    \"img.png\"

    \n

    替换

    根据版本不同替换不同:

    \n\n

    按下 Ctrl+F 打开查找替换,我这边是 64 位系统:在搜索框中输入 4157415656575553B828210000,在替换为中输入 33C0FEC0C3AC200000。如果替换为一栏无法输入,记得先把替换为上一行的启用替换勾选上,然后先查找一次,再点击替换

    \n

    \"image-20220114154528250\"

    \n

    替换后点击另存为,替换掉原来的文件,保存

    \"image-20220114154843108\"

    \n

    输入激活码激活

    \n","categories":[{"name":"开发工具","slug":"开发工具","permalink":"https://hexo.huangge1199.cn/categories/%E5%BC%80%E5%8F%91%E5%B7%A5%E5%85%B7/"}],"tags":[{"name":"破解","slug":"破解","permalink":"https://hexo.huangge1199.cn/tags/%E7%A0%B4%E8%A7%A3/"}]},{"title":"用nexus部署maven私服","slug":"nexusCreate","date":"2022-01-12T11:52:40.000Z","updated":"2022-09-22T07:39:53.101Z","comments":true,"path":"/post/nexusCreate/","link":"","excerpt":"","content":"

    nexus 服务部署

    由于本人习惯问题,本次继续用docker部署

    \n

    查找docker镜像

    通过https://hub.docker.com/ 网站查找,选用了官方的sonatype/nexus3

    \n

    拉取镜像

    docker pull sonatype/nexus3
    \n

    \"image-20220112202513617\"

    \n

    创建宿主机挂载目录并编写docker-compose.yml

    执行命令:

    \n
    vi docker-compose.yml\nmkdir nexus-data
    \n

    docker-compose.yml内容:

    \n
    version: '3'\nservices:\n    nexus3:\n        container_name: nexus3\n        image: sonatype/nexus3:latest\n        environment:\n            - TZ=Asia/Shanghai\n        volumes: \n            - ./nexus-data:/var/nexus-data\n        ports: \n            - 8081:8081\n        restart: always
    \n

    \"image-20220112203927784\"

    \n

    启动容器

    docker-compose up -d
    \n

    \"image-20220112204407509\"

    \n

    浏览器验证

    浏览器中输入http://IP:8081/,出现下面的页面即表示启动完成

    \n

    \"image-20220112204921432\"

    \n

    Nexus 服务的配置

    浏览器中点击右上角的登录

    \"image-20220112205557025\"

    \n

    登录

    首次登录会提示密码保存在/nexus-data/admin.password(位置可能会变,看提示)

    \n

    \"image-20220112214506743\"

    \n

    由于这个目录我们的 docker 并没有挂载出来,所以需要进入 docker 容器内查看

    \n
    docker exec -it nexus3 /bin/bash\ncat /nexus-data/admin.password
    \n

    这地方注意下:cat 输出后不会换行,密码会和后面的命令提示符连在一起,查看时注意区分。用户名是 admin,文件中存的就是密码

    \n

    \"image-20220112214220341\"

    \n

    设置密码

    登录后:

    \n

    \"image-20220112214637528\"

    \n

    点击next设置新密码

    \n

    \"image-20220112214717867\"

    \n

    \"image-20220112214820826\"

    \n

    \"image-20220112214831857\"

    \n

    增加阿里云公共仓库

    由于默认的里面没有阿里云仓库,用maven的仓库速度慢,所以增加一个阿里云仓库

    \n

    \"image-20220112215708898\"

    \n

    \"image-20220112215809631\"

    \n

    \"image-20220112215951755\"

    \n

    \"image-20220112220131151\"

    \n

    接下来填写信息:name 随意填,为了方便记忆我填的是aliyun-public-proxy;下面的远程仓库地址填阿里云地址https://maven.aliyun.com/repository/public。两个填好后点击最下方的Create repository

    \n

    \"image-20220112220511830\"

    \n

    \"image-20220112220741095\"

    \n

    统一私服

    \n

    \"image-20220112221401367\"

    \n\n

    \"image-20220112221517343\"

    \n

    \"image-20220112221637473\"

    \n

    查看私服地址

    回到上一个页面,点击copy,弹出来的地址就是私服地址

    \n

    \"image-20220112221849871\"

    \n

    使用私服

    注:maven地址:E:\\maven\\apache-maven-3.6.3

    \n

    maven中setting.xml 文件配置

    \n
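    原文这里没有贴出具体内容,下面给出一份示意的 settings.xml 片段(个人补充,账号、密码、IP 按自己的 Nexus 实际情况修改):server 的 id 要和后面 pom 里 distributionManagement 的 repository id 保持一致;mirror 的 url 就是前面查看私服地址一节复制出来的聚合仓库地址(Nexus 默认叫 maven-public)。

    \n
    <settings>\n    <!-- 发布依赖时用的私服账号密码,id 与 pom 中的 releases / snapshots 对应 -->\n    <servers>\n        <server>\n            <id>releases</id>\n            <username>admin</username>\n            <password>自己设置的密码</password>\n        </server>\n        <server>\n            <id>snapshots</id>\n            <username>admin</username>\n            <password>自己设置的密码</password>\n        </server>\n    </servers>\n    <!-- 下载依赖统一走私服的聚合仓库 -->\n    <mirrors>\n        <mirror>\n            <id>nexus</id>\n            <mirrorOf>*</mirrorOf>\n            <url>http://192.168.1.187:8081/repository/maven-public/</url>\n        </mirror>\n    </mirrors>\n</settings>
    \n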

    新建maven项目

    我这边建了一个Springboot项目

    \n

    \"image-20220112223312451\"

    \n

    设置maven路径

    \n

    \"image-20220112224909515\"

    \n

    发布依赖

      \n
    1. 项目pom中添加 distributionManagement 节点

      \n
      <distributionManagement>\n    <repository>\n        <id>releases</id>\n        <name>Releases</name>\n        <url>http://192.168.1.187:8081/repository/maven-releases/</url>\n    </repository>\n    <snapshotRepository>\n        <id>snapshots</id>\n        <name>Snapshot</name>\n        <url>http://192.168.1.187:8081/repository/maven-snapshots/</url>\n    </snapshotRepository>\n</distributionManagement>
      \n

      注:repository 里的 id 需要和上一步里的 server id 名称保持一致。

      \n
    2. 执行 mvn deploy 命令发布:

      \n

      \"image-20220112230213213\"

      \n
    3. 查看网页,是否部署成功

      \n

      注:

      \n\n

      \"image-20220112230645860\"

      \n
    \n","categories":[{"name":"nexus","slug":"nexus","permalink":"https://hexo.huangge1199.cn/categories/nexus/"}],"tags":[{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"maven","slug":"maven","permalink":"https://hexo.huangge1199.cn/tags/maven/"},{"name":"nexus","slug":"nexus","permalink":"https://hexo.huangge1199.cn/tags/nexus/"}]},{"title":"JPA复合主键使用","slug":"jpaCompositePK","date":"2022-01-05T07:14:53.000Z","updated":"2022-09-22T07:39:53.084Z","comments":true,"path":"/post/jpaCompositePK/","link":"","excerpt":"","content":"

    1、建立带有复合主键的表User

    该表使用 username+phone 作为复合主键

    \n
    create table user\n(\n    username varchar(50) not null,\n    phone     varchar(11) not null,\n    email     varchar(20) default '',\n    address   varchar(50) default '',\n    primary key (username, phone)\n) default charset = utf8
    \n

    2、java中建立复合主键的实体类

    import lombok.Data;\nimport java.io.Serializable;\n\n// @IdClass 使用的主键类只需要是可序列化的普通类,并提供 equals/hashCode(这里由 @Data 生成),不需要标注 @Entity\n@Data\npublic class UserKey implements Serializable {\n    private String username;\n    private String phone;\n}
    \n

    3、建立表的实体类

    在实体类上面使用 @IdClass 注解指定复合主键。同时,需要在 username 和 phone 字段上面使用 @Id 注解标记为主键

    \n
    import lombok.Data;\nimport javax.persistence.*;\n\n@Data\n@Entity\n@Table(name = "user")\n@IdClass(value = UserKey.class)\npublic class User {\n    @Id\n    @Column(nullable = false)\n    private String username;\n\n    @Id\n    @Column(nullable = false)\n    private String phone;\n\n    @Column\n    private String email;\n\n    @Column\n    private String address;\n}
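    \n
    4、补充:仓库接口中的使用示意

    如果项目里用的是 Spring Data JPA(这是个人补充的假设,原文写到实体类为止),仓库接口的主键泛型直接写成上面的 UserKey,按复合主键查询时传入一个填好 username 和 phone 的 UserKey 对象即可。下面是示意代码,其中 UserRepository 这个接口名是为演示假设的:

    \n
    import org.springframework.data.jpa.repository.JpaRepository;\n\n// 复合主键时,仓库接口的 ID 泛型直接使用主键类 UserKey\npublic interface UserRepository extends JpaRepository<User, UserKey> {\n\n    // 使用示意:按 username + phone 复合主键查询,查不到时返回 null\n    default User findByKey(String username, String phone) {\n        UserKey key = new UserKey();\n        key.setUsername(username);\n        key.setPhone(phone);\n        return findById(key).orElse(null);\n    }\n}
    \n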
    \n","categories":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"},{"name":"jpa","slug":"java/jpa","permalink":"https://hexo.huangge1199.cn/categories/java/jpa/"}],"tags":[{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/tags/java/"},{"name":"jpa","slug":"jpa","permalink":"https://hexo.huangge1199.cn/tags/jpa/"}]},{"title":"python3学习笔记--条件控制用法整理","slug":"pyControl","date":"2021-12-29T08:04:06.000Z","updated":"2022-09-22T07:39:53.182Z","comments":true,"path":"/post/pyControl/","link":"","excerpt":"","content":"

    if

    if_stmt ::=  "if" assignment_expression ":" suite\n             ("elif" assignment_expression ":" suite)*\n             ["else" ":" suite]
    \n

    用法:

    \n
    if EXPRESSION1:\n    SUITE1\nelif EXPRESSION2:\n    SUITE2\nelse:\n    SUITE
    \n

    常用的操作符:

  • ==、!=:等于、不等于
  • >、>=:大于、大于等于
  • <、<=:小于、小于等于
  • and、or、not:逻辑与、或、非
  • in、not in:是否包含在序列中

    with

    with_stmt          ::=  "with" ( "(" with_stmt_contents ","? ")" | with_stmt_contents ) ":" suite\nwith_stmt_contents ::=  with_item ("," with_item)*\nwith_item          ::=  expression ["as" target]
    \n

    用法:

    \n
    with EXPRESSION as TARGET:\n    SUITE\n或者\nwith A() as a, B() as b:\n    SUITE\n或者\nwith A() as a:\n    with B() as b:\n        SUITE\n或者\nwith (\n    A() as a,\n    B() as b,\n):\n    SUITE
    \n

    match(3.10新特性)

    match_stmt   ::=  'match' subject_expr ":" NEWLINE INDENT case_block+ DEDENT\nsubject_expr ::=  star_named_expression "," star_named_expressions?\n                  | named_expression\ncase_block   ::=  'case' patterns [guard] ":" block
    \n

    用法:

    \n
    match variable: #这里的variable是需要判断的内容\n    case ["quit"]: \n        statement_block_1 # 对应案例的执行代码,当variable="quit"时执行statement_block_1\n    case ["go", direction]: \n        statement_block_2\n    case ["drop", *objects]: \n        statement_block_3\n    ... # 其他的case语句\n    case _: #如果上面的case语句没有命中,则执行这个代码块,类似于Switch的default\n        statement_block_default
    \n","categories":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/categories/python/"}],"tags":[{"name":"学习笔记","slug":"学习笔记","permalink":"https://hexo.huangge1199.cn/tags/%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/"},{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/tags/python/"}]},{"title":"python3学习笔记--两种排序方法","slug":"pyListSort","date":"2021-12-29T02:20:17.000Z","updated":"2022-09-22T07:39:53.188Z","comments":true,"path":"/post/pyListSort/","link":"","excerpt":"","content":"

    列表排序方法

    \n

    sort()

    list.sort(key=None, reverse=False)

  • key:指定用来做比较的函数,只接收一个参数,默认为 None
  • reverse:是否降序排序,True 为降序,False 为升序(默认)
  • sort() 在原列表上就地排序,没有返回值(返回 None)

    例子:

    \n
    nums = [2, 3, 5, 1, 6]\nnums.sort()\nprint(nums)  # [1, 2, 3, 5, 6]\nnums.sort(key=None, reverse=True)\nprint(nums)  # [6, 5, 3, 2, 1]\n    \nstudents = [('john', 'C', 15), ('jane', 'A', 12), ('dave', 'B', 10)]\nstudents.sort(key=lambda x: x[2])  # 按照列表中第三个元素排序\nprint(students)  # [('dave', 'B', 10), ('jane', 'A', 12), ('john', 'C', 15)]
    \n

    sorted()

    sorted(iterable, *, key=None, reverse=False)

  • key、reverse:含义与 sort() 相同,注意在 Python 3 中是关键字参数
  • 与 sort() 不同,sorted() 不修改原序列,而是返回一个排好序的新列表

    例子:

    \n
    nums = [2, 3, 5, 1, 6]\nnewNums = sorted(nums)\nprint(nums)  # [2, 3, 5, 1, 6]\nprint(newNums)  # [1, 2, 3, 5, 6]\nstudents = [('john', 'C', 15), ('jane', 'A', 12), ('dave', 'B', 10)]\nnewStudents = sorted(students, key=lambda x: x[1])\nprint(students)  # [('john', 'C', 15), ('jane', 'A', 12), ('dave', 'B', 10)]\nprint(newStudents)  # [('jane', 'A', 12), ('dave', 'B', 10), ('john', 'C', 15)]
    \n\n\n","categories":[{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/categories/python/"}],"tags":[{"name":"学习笔记","slug":"学习笔记","permalink":"https://hexo.huangge1199.cn/tags/%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/"},{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/tags/python/"}]},{"title":"Hello World","slug":"hello-world","date":"2021-12-01T11:04:06.000Z","updated":"2022-09-22T07:39:52.889Z","comments":true,"path":"/post/hello-world/","link":"","excerpt":"","content":"

    Welcome to Hexo! This is your very first post. Check documentation for more info. If you get any problems when using Hexo, you can find the answer in troubleshooting or you can ask me on GitHub.

    \n

    Quick Start

    Create a new post

    $ hexo new "My New Post"
    \n

    More info: Writing

    \n

    Run server

    $ hexo server
    \n

    More info: Server

    \n

    Generate static files

    $ hexo generate
    \n

    More info: Generating

    \n

    Deploy to remote sites

    $ hexo deploy
    \n

    More info: Deployment

    \n","categories":[],"tags":[]}],"categories":[{"name":"前端","slug":"前端","permalink":"https://hexo.huangge1199.cn/categories/%E5%89%8D%E7%AB%AF/"},{"name":"vue","slug":"前端/vue","permalink":"https://hexo.huangge1199.cn/categories/%E5%89%8D%E7%AB%AF/vue/"},{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/categories/java/"},{"name":"盖章","slug":"盖章","permalink":"https://hexo.huangge1199.cn/categories/%E7%9B%96%E7%AB%A0/"},{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/categories/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"git","slug":"git","permalink":"https://hexo.huangge1199.cn/categories/git/"},{"name":"算法","slug":"算法","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/"},{"name":"力扣","slug":"算法/力扣","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/"},{"name":"每日一题","slug":"算法/力扣/每日一题","permalink":"https://hexo.huangge1199.cn/categories/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E6%AF%8F%E6%97%A5%E4%B8%80%E9%A2%98/"},{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/categories/python/"},{"name":"工具","slug":"工具","permalink":"https://hexo.huangge1199.cn/categories/%E5%B7%A5%E5%85%B7/"},{"name":"Linux","slug":"Linux","permalink":"https://hexo.huangge1199.cn/categories/Linux/"},{"name":"vue","slug":"vue","permalink":"https://hexo.huangge1199.cn/categories/vue/"},{"name":"nas","slug":"nas","permalink":"https://hexo.huangge1199.cn/categories/nas/"},{"name":"P5笔记","slug":"java/P5笔记","permalink":"https://hexo.huangge1199.cn/categories/java/P5%E7%AC%94%E8%AE%B0/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/categories/deepin/"},{"name":"游戏","slug":"deepin/游戏","permalink":"https://hexo.huangge1199.cn/categories/deepin/%E6%B8%B8%E6%88%8F/"},{"name":"问题记录","slug":"问题记录","permalink":"https://hexo.huangge1199.cn/categories/%E9%97%AE%E9%A2%98%E8%AE%B0%E5%BD%95/"},{"name":"网站建设","slug":"问题记录/网站建设","permalink":"https://hexo.huangge1199.cn/categories/%E9%97%AE%E9%A2%98%E8%AE%B0%E5%BD%95/%E7%BD%91%E7%AB%99%E5%BB%BA%E8%AE%BE/"},{"name":"web开发","slug":"web开发","permalink":"https://hexo.huangge1199.cn/categories/web%E5%BC%80%E5%8F%91/"},{"name":"服务器","slug":"服务器","permalink":"https://hexo.huangge1199.cn/categories/%E6%9C%8D%E5%8A%A1%E5%99%A8/"},{"name":"nacos","slug":"java/nacos","permalink":"https://hexo.huangge1199.cn/categories/java/nacos/"},{"name":"游戏","slug":"游戏","permalink":"https://hexo.huangge1199.cn/categories/%E6%B8%B8%E6%88%8F/"},{"name":"云原生2023","slug":"云原生2023","permalink":"https://hexo.huangge1199.cn/categories/%E4%BA%91%E5%8E%9F%E7%94%9F2023/"},{"name":"网站建设","slug":"网站建设","permalink":"https://hexo.huangge1199.cn/categories/%E7%BD%91%E7%AB%99%E5%BB%BA%E8%AE%BE/"},{"name":"安装部署","slug":"nas/安装部署","permalink":"https://hexo.huangge1199.cn/categories/nas/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"历史上的今天","slug":"历史上的今天","permalink":"https://hexo.huangge1199.cn/categories/%E5%8E%86%E5%8F%B2%E4%B8%8A%E7%9A%84%E4%BB%8A%E5%A4%A9/"},{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/categories/docker/"},{"name":"deepin","slug":"docker/deepin","permalink":"https://hexo.huangge1199.cn/categories/docker/deepin/"},{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/categories/%E4%BA%91%E5%8E%9F%E7%94%9F/"},{"name":"deepin","slug":"云原生/deepin","permalink":"https://hexo.huangge1199.cn/categories/%E4%BA%91%E5%8E%9F%E7%94%9F/deepin/"},{"name":"周赛","slug":"算法/力扣/周赛","permalink":"https://hexo.huangge1199.cn/catego
ries/%E7%AE%97%E6%B3%95/%E5%8A%9B%E6%89%A3/%E5%91%A8%E8%B5%9B/"},{"name":"PHP","slug":"PHP","permalink":"https://hexo.huangge1199.cn/categories/PHP/"},{"name":"设计模式","slug":"设计模式","permalink":"https://hexo.huangge1199.cn/categories/%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F/"},{"name":"it百科","slug":"it百科","permalink":"https://hexo.huangge1199.cn/categories/it%E7%99%BE%E7%A7%91/"},{"name":"推理","slug":"推理","permalink":"https://hexo.huangge1199.cn/categories/%E6%8E%A8%E7%90%86/"},{"name":"seata","slug":"java/seata","permalink":"https://hexo.huangge1199.cn/categories/java/seata/"},{"name":"开发工具","slug":"开发工具","permalink":"https://hexo.huangge1199.cn/categories/%E5%BC%80%E5%8F%91%E5%B7%A5%E5%85%B7/"},{"name":"nexus","slug":"nexus","permalink":"https://hexo.huangge1199.cn/categories/nexus/"},{"name":"jpa","slug":"java/jpa","permalink":"https://hexo.huangge1199.cn/categories/java/jpa/"}],"tags":[{"name":"前端","slug":"前端","permalink":"https://hexo.huangge1199.cn/tags/%E5%89%8D%E7%AB%AF/"},{"name":"vue","slug":"vue","permalink":"https://hexo.huangge1199.cn/tags/vue/"},{"name":"java","slug":"java","permalink":"https://hexo.huangge1199.cn/tags/java/"},{"name":"盖章","slug":"盖章","permalink":"https://hexo.huangge1199.cn/tags/%E7%9B%96%E7%AB%A0/"},{"name":"安装部署","slug":"安装部署","permalink":"https://hexo.huangge1199.cn/tags/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/"},{"name":"git","slug":"git","permalink":"https://hexo.huangge1199.cn/tags/git/"},{"name":"力扣","slug":"力扣","permalink":"https://hexo.huangge1199.cn/tags/%E5%8A%9B%E6%89%A3/"},{"name":"python","slug":"python","permalink":"https://hexo.huangge1199.cn/tags/python/"},{"name":"工具","slug":"工具","permalink":"https://hexo.huangge1199.cn/tags/%E5%B7%A5%E5%85%B7/"},{"name":"Linux","slug":"Linux","permalink":"https://hexo.huangge1199.cn/tags/Linux/"},{"name":"nas","slug":"nas","permalink":"https://hexo.huangge1199.cn/tags/nas/"},{"name":"P5笔记","slug":"P5笔记","permalink":"https://hexo.huangge1199.cn/tags/P5%E7%AC%94%E8%AE%B0/"},{"name":"游戏","slug":"游戏","permalink":"https://hexo.huangge1199.cn/tags/%E6%B8%B8%E6%88%8F/"},{"name":"deepin","slug":"deepin","permalink":"https://hexo.huangge1199.cn/tags/deepin/"},{"name":"网站建设","slug":"网站建设","permalink":"https://hexo.huangge1199.cn/tags/%E7%BD%91%E7%AB%99%E5%BB%BA%E8%AE%BE/"},{"name":"web开发","slug":"web开发","permalink":"https://hexo.huangge1199.cn/tags/web%E5%BC%80%E5%8F%91/"},{"name":"服务器","slug":"服务器","permalink":"https://hexo.huangge1199.cn/tags/%E6%9C%8D%E5%8A%A1%E5%99%A8/"},{"name":"nacos","slug":"nacos","permalink":"https://hexo.huangge1199.cn/tags/nacos/"},{"name":"R2DBC","slug":"R2DBC","permalink":"https://hexo.huangge1199.cn/tags/R2DBC/"},{"name":"学习笔记","slug":"学习笔记","permalink":"https://hexo.huangge1199.cn/tags/%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/"},{"name":"云原生2023","slug":"云原生2023","permalink":"https://hexo.huangge1199.cn/tags/%E4%BA%91%E5%8E%9F%E7%94%9F2023/"},{"name":"历史上的今天","slug":"历史上的今天","permalink":"https://hexo.huangge1199.cn/tags/%E5%8E%86%E5%8F%B2%E4%B8%8A%E7%9A%84%E4%BB%8A%E5%A4%A9/"},{"name":"docker","slug":"docker","permalink":"https://hexo.huangge1199.cn/tags/docker/"},{"name":"云原生","slug":"云原生","permalink":"https://hexo.huangge1199.cn/tags/%E4%BA%91%E5%8E%9F%E7%94%9F/"},{"name":"PHP","slug":"PHP","permalink":"https://hexo.huangge1199.cn/tags/PHP/"},{"name":"设计模式","slug":"设计模式","permalink":"https://hexo.huangge1199.cn/tags/%E8%AE%BE%E8%AE%A1%E6%A8%A1%E5%BC%8F/"},{"name":"it百科","slug":"it百科","permalink":"https://hexo.huangge1199.cn/tags/it%E7%99%BE%E7%A7%91/"},{"name":"推理界的今天","slug":"推理界的今天","permalink":
"https://hexo.huangge1199.cn/tags/%E6%8E%A8%E7%90%86%E7%95%8C%E7%9A%84%E4%BB%8A%E5%A4%A9/"},{"name":"maven","slug":"maven","permalink":"https://hexo.huangge1199.cn/tags/maven/"},{"name":"seata","slug":"seata","permalink":"https://hexo.huangge1199.cn/tags/seata/"},{"name":"破解","slug":"破解","permalink":"https://hexo.huangge1199.cn/tags/%E7%A0%B4%E8%A7%A3/"},{"name":"nexus","slug":"nexus","permalink":"https://hexo.huangge1199.cn/tags/nexus/"},{"name":"jpa","slug":"jpa","permalink":"https://hexo.huangge1199.cn/tags/jpa/"}]}