1. ElasticSearch Advanced

2. Bulk Operations

A bulk operation bundles a series of document create, update and delete actions into a single request, reducing the number of network round trips.

2.1. Script

```
GET person/_search

# Bulk operations: one metadata line per action, optionally followed by a source line
POST _bulk
{"delete":{"_index":"person","_id":"3"}}
{"create":{"_index":"person","_id":"8"}}
{"name":"hhh","age":88,"address":"qqqq"}
{"update":{"_index":"person","_id":"2"}}
{"doc":{"name":"qwedqd"}}
```

2.2. API

```java
// Bulk operations
@Test
public void testBulk() throws IOException {
    // A BulkRequest gathers all the actions into one request
    BulkRequest bulkRequest = new BulkRequest();
    // Delete document 1
    DeleteRequest deleteRequest = new DeleteRequest("person", "1");
    bulkRequest.add(deleteRequest);
    // Add document 6
    Map<String, Object> map = new HashMap<>();
    map.put("name", "测试");
    IndexRequest indexRequest = new IndexRequest("person").id("6").source(map);
    bulkRequest.add(indexRequest);
    // Update document 3
    Map<String, Object> map2 = new HashMap<>();
    map2.put("name", "测试3号");
    UpdateRequest updateRequest = new UpdateRequest("person", "3").doc(map2);
    bulkRequest.add(updateRequest);
    // Execute all actions in a single request
    BulkResponse response = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
    RestStatus status = response.status();
    System.out.println(status);
}
```

3. Importing Data

Import rows from a database table into ElasticSearch:

1. Create the index and add a mapping
2. Create a POJO class mapped with MyBatis
3. Query the database
4. Import the rows with a bulk request (a sketch of this step follows the list)
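
A minimal sketch of steps 3-4, assuming a hypothetical `personMapper` MyBatis mapper and `Person` POJO (neither is defined in these notes), with fastjson doing the object-to-JSON conversion:

```java
// Sketch only: personMapper, Person and findAll() are assumptions, not from the original notes
@Test
public void testImportData() throws IOException {
    // Step 3: query the database through a hypothetical MyBatis mapper
    List<Person> personList = personMapper.findAll();
    // Step 4: index every row in one bulk request
    BulkRequest bulkRequest = new BulkRequest();
    for (Person person : personList) {
        // fastjson converts the POJO to a JSON document
        String json = JSON.toJSONString(person);
        bulkRequest.add(new IndexRequest("person")
                .id(person.getId().toString())
                .source(json, XContentType.JSON));
    }
    BulkResponse response = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
    System.out.println(response.status());
}
```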

3.1. Excluding a Field from fastjson Conversion

Annotate the field with @JSONField(serialize = false); that field is then skipped during JSON conversion.
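
For example (the class and field names here are illustrative):

```java
public class Person {
    private Integer id;
    private String name;

    // Not included when fastjson serializes this object
    @JSONField(serialize = false)
    private String password;

    // getters and setters omitted
}
```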

4. matchAll Query

A matchAll query returns all documents.

4.1. Script

```
# By default ES returns only 10 hits; `from` sets the starting offset and `size` the page size
GET person/_search
{
  "query": {
    "match_all": {}
  },
  "from": 0,
  "size": 100
}
```


4.2. API

```java
// matchAll query with paging
@Test
public void testMatchAll() throws IOException {
    // Search request against the "person" index
    SearchRequest searchRequest = new SearchRequest("person");
    // Builder for the query section of the request
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    // Query condition: match all documents
    QueryBuilder query = QueryBuilders.matchAllQuery();
    sourceBuilder.query(query);
    // Paging
    sourceBuilder.from(0);   // starting offset
    sourceBuilder.size(100); // hits per page
    // Attach the query builder to the request
    searchRequest.source(sourceBuilder);
    // Execute the search
    SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    // Hits container
    SearchHits searchHits = search.getHits();
    // Total number of hits
    long value = searchHits.getTotalHits().value;
    System.out.println("total hits: " + value);
    // Individual hits
    SearchHit[] hits = searchHits.getHits();
    for (SearchHit hit : hits) {
        String sourceAsString = hit.getSourceAsString(); // document source as a JSON string
        System.out.println(sourceAsString);
    }
}
```

5. Term Query

5.1. Script

```
# Term query: exact match on a single term, typically used for category-like fields
GET person/_search
{
  "query": {
    "term": {
      "name": {
        "value": "hhh"
      }
    }
  }
}
```

5.2. API

```java
// termQuery: exact term query
@Test
public void testTermQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest("person");
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    QueryBuilder query = QueryBuilders.termQuery("name", "hhh");
    sourceBuilder.query(query);
    searchRequest.source(sourceBuilder);
    SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits searchHits = search.getHits();
    // Total number of hits
    long value = searchHits.getTotalHits().value;
    System.out.println("total hits: " + value);
    // Individual hits
    SearchHit[] hits = searchHits.getHits();
    for (SearchHit hit : hits) {
        String sourceAsString = hit.getSourceAsString(); // document source as a JSON string
        System.out.println(sourceAsString);
    }
}
```

6. Match Query

- The query string is analyzed (split into terms)
- Each resulting term is then matched exactly against the indexed terms
- By default the terms are combined with OR (union); AND requires every term to match (intersection)

6.1. Script

```
# Match query
GET person/_search
{
  "query": {
    "match": {
      "name": {
        "query": "hhh",
        "operator": "or"
      }
    }
  }
}
```

6.2. API

```java
// match query
@Test
public void testMatchQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest("person");
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    MatchQueryBuilder query = QueryBuilders.matchQuery("name", "hhh");
    query.operator(Operator.AND); // intersection: every analyzed term must match
    sourceBuilder.query(query);
    searchRequest.source(sourceBuilder);
    SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits searchHits = search.getHits();
    // Total number of hits
    long value = searchHits.getTotalHits().value;
    System.out.println("total hits: " + value);
    // Individual hits
    SearchHit[] hits = searchHits.getHits();
    for (SearchHit hit : hits) {
        String sourceAsString = hit.getSourceAsString(); // document source as a JSON string
        System.out.println(sourceAsString);
    }
}
```

7. Fuzzy Queries

- wildcard query: supports the wildcards ? (any single character) and * (zero or more characters); the pattern is matched against indexed terms and is not itself analyzed
- regexp query: regular-expression match
- prefix query: matches terms starting with the given prefix

7.1. Script

```
# Wildcard query: pattern match against indexed terms
GET person/_search
{
  "query": {
    "wildcard": {
      "name": {
        "value": "h"
      }
    }
  }
}

# Regexp query: regular-expression match
GET person/_search
{
  "query": {
    "regexp": {
      "name": "\\q+(.)*"
    }
  }
}

# Prefix query
GET person/_search
{
  "query": {
    "prefix": {
      "name": {
        "value": "qwe"
      }
    }
  }
}
```

7.2. API

```java
// wildcard query
@Test
public void testWildcardQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest("person");
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    WildcardQueryBuilder query = QueryBuilders.wildcardQuery("name", "h*");
    sourceBuilder.query(query);
    searchRequest.source(sourceBuilder);
    SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits searchHits = search.getHits();
    // Total number of hits
    long value = searchHits.getTotalHits().value;
    System.out.println("total hits: " + value);
    // Individual hits
    SearchHit[] hits = searchHits.getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsString()); // document source as a JSON string
    }
}

// regexp query
@Test
public void testRegexpQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest("person");
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    RegexpQueryBuilder query = QueryBuilders.regexpQuery("name", "\\h*");
    sourceBuilder.query(query);
    searchRequest.source(sourceBuilder);
    SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits searchHits = search.getHits();
    long value = searchHits.getTotalHits().value;
    System.out.println("total hits: " + value);
    SearchHit[] hits = searchHits.getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsString());
    }
}

// prefix query
@Test
public void testPrefixQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest("person");
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    PrefixQueryBuilder query = QueryBuilders.prefixQuery("name", "qwe");
    sourceBuilder.query(query);
    searchRequest.source(sourceBuilder);
    SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits searchHits = search.getHits();
    long value = searchHits.getTotalHits().value;
    System.out.println("total hits: " + value);
    SearchHit[] hits = searchHits.getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsString());
    }
}
```

8. Range Query

A range query returns documents whose value for the specified field falls within the given bounds.

8.1. Script

```
# Range query
GET person/_search
{
  "query": {
    "range": {
      "age": {
        "gte": 10,
        "lte": 30
      }
    }
  },
  "sort": [
    {
      "age": {
        "order": "desc"
      }
    }
  ]
}
```

8.2. API

```java
// range query
@Test
public void testRangeQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest("person");
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    RangeQueryBuilder query = QueryBuilders.rangeQuery("age"); // field to range over
    query.gte("10"); // greater than or equal
    query.lte("30"); // less than or equal
    sourceBuilder.query(query);
    sourceBuilder.sort("age", SortOrder.ASC); // sort
    searchRequest.source(sourceBuilder);
    SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits searchHits = search.getHits();
    // Total number of hits
    long value = searchHits.getTotalHits().value;
    System.out.println("total hits: " + value);
    // Individual hits
    SearchHit[] hits = searchHits.getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsString()); // document source as a JSON string
    }
}
```

9. Query String Query

- The query string is analyzed
- Each resulting term is then matched exactly against the indexed terms
- By default the terms are combined with OR (union)
- Multiple fields can be searched at once

9.1. Script

```
# query_string query
GET person/_search
{
  "query": {
    "query_string": {
      "fields": ["name","address"],
      "query": "华为 OR 手机"
    }
  }
}

# simple_query_string is a simplified query_string: it does not support the
# AND OR NOT boolean keywords, which are treated as ordinary terms.
# default_operator sets the default operator for the query string (OR by default).
GET person/_search
{
  "query": {
    "simple_query_string": {
      "fields": ["name","address"],
      "query": "华为 OR 手机"
    }
  }
}
```

9.2. API

```java
// queryString
@Test
public void testQueryStringQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest("person");
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    QueryStringQueryBuilder query = QueryBuilders.queryStringQuery("华为")
            .field("name").field("address")
            .defaultOperator(Operator.OR);
    sourceBuilder.query(query);
    searchRequest.source(sourceBuilder);
    SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits searchHits = search.getHits();
    // Total number of hits
    long value = searchHits.getTotalHits().value;
    System.out.println("total hits: " + value);
    // Individual hits
    SearchHit[] hits = searchHits.getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsString()); // document source as a JSON string
    }
}
```

10. Bool Query

A boolQuery connects multiple query clauses into one combined query:

- must (and): the clause must match
- must_not (not): the clause must not match
- should (or): the clause may match
- filter: the clause must match; it performs better than must because no relevance score is computed (with scored clauses, the more clauses a document matches, the higher its score)

10.1. Script

```
# Bool query
GET person/_search
{
  "query": {
    "bool": {
      "must": [
        {"term": {
          "name": {
            "value": "张三"
          }
        }}
      ]
    }
  }
}

# filter
GET person/_search
{
  "query": {
    "bool": {
      "filter": [
        {"term": {
          "name": {
            "value": "张三"
          }
        }}
      ]
    }
  }
}

# Combining multiple clauses
GET person/_search
{
  "query": {
    "bool": {
      "must": [
        {"term": {
          "name": {
            "value": "张三"
          }
        }}
      ],
      "filter": [
        {
          "term": {
            "address": "5G"
          }
        }
      ]
    }
  }
}
```

10.2. API

```java
// boolQuery
@Test
public void testBoolQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest("person");
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    // Build the bool query
    BoolQueryBuilder query = QueryBuilders.boolQuery();
    // Build the individual clauses
    TermQueryBuilder termQuery = QueryBuilders.termQuery("name", "张三"); // name must be exactly 张三
    query.must(termQuery);
    MatchQueryBuilder matchQuery = QueryBuilders.matchQuery("address", "5G"); // address must contain 5G
    query.must(matchQuery);
    sourceBuilder.query(query);
    searchRequest.source(sourceBuilder);
    SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits searchHits = search.getHits();
    // Total number of hits
    long value = searchHits.getTotalHits().value;
    System.out.println("total hits: " + value);
    // Individual hits
    SearchHit[] hits = searchHits.getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsString()); // document source as a JSON string
    }
}
```

11. Aggregation Queries

- Metric aggregations: the counterpart of MySQL aggregate functions such as max, min, avg and sum
- Bucket aggregations: the counterpart of MySQL GROUP BY. Do not group on text fields; the request will fail

11.1. Script

```
# Aggregation queries
# Metric aggregation (aggregate function)
GET person/_search
{
  "query": {
    "match": {
      "name": "张三"
    }
  },
  "aggs": {
    "NAME": {
      "max": {
        "field": "age"
      }
    }
  }
}

# Bucket aggregation (grouping) via aggs
GET person/_search
{
  "query": {
    "match": {
      "name": "张三"
    }
  },
  "aggs": {
    "zdymc": {
      "terms": {
        "field": "age",
        "size": 10
      }
    }
  }
}
```


11.2. API

```java
// Aggregation query: bucket aggregation (grouping)
@Test
public void testAggsQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest("person");
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    MatchQueryBuilder query = QueryBuilders.matchQuery("name", "张三");
    sourceBuilder.query(query);
    /**
     * terms: name under which the aggregation appears in the response
     * field: the field to bucket on
     * size:  the number of buckets to return
     */
    TermsAggregationBuilder aggs = AggregationBuilders.terms("自定义名称").field("age").size(10);
    sourceBuilder.aggregation(aggs);
    searchRequest.source(sourceBuilder);
    SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits searchHits = search.getHits();
    // Total number of hits
    long value = searchHits.getTotalHits().value;
    System.out.println("total hits: " + value);
    // Individual hits
    SearchHit[] hits = searchHits.getHits();
    for (SearchHit hit : hits) {
        System.out.println(hit.getSourceAsString()); // document source as a JSON string
    }
    // Aggregation results
    Aggregations aggregations = search.getAggregations();
    Map<String, Aggregation> aggregationMap = aggregations.asMap(); // results as a map
    Terms zdymc = (Terms) aggregationMap.get("自定义名称");
    List<? extends Terms.Bucket> buckets = zdymc.getBuckets();
    List<Object> list = new ArrayList<>();
    for (Terms.Bucket bucket : buckets) {
        Object key = bucket.getKey(); // the bucket's key, i.e. the grouped value
        list.add(key);
    }
    for (Object o : list) {
        System.out.println(o);
    }
}
```

12. Highlight Query

- the field to highlight
- pre_tags: the prefix tag
- post_tags: the suffix tag; if neither is set, the default is the em tag

12.1. Script

```
# Highlight query
GET person/_search
{
  "query": {
    "match": {
      "address": "手机"
    }
  },
  "highlight": {
    "fields": {
      "address": {
        "pre_tags": "<font color='red'>",
        "post_tags": "</font>"
      }
    }
  }
}
```

12.2. API

```java
// highlight query
@Test
public void testHighlightQuery() throws IOException {
    SearchRequest searchRequest = new SearchRequest("person");
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    MatchQueryBuilder query = QueryBuilders.matchQuery("address", "手机");
    sourceBuilder.query(query);
    HighlightBuilder highlightBuilder = new HighlightBuilder(); // highlight settings
    highlightBuilder.field("address");              // field to highlight
    highlightBuilder.preTags("<font color='red'>"); // prefix tag
    highlightBuilder.postTags("</font>");           // suffix tag
    sourceBuilder.highlighter(highlightBuilder);
    searchRequest.source(sourceBuilder);
    SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    SearchHits searchHits = search.getHits();
    // Total number of hits
    long value = searchHits.getTotalHits().value;
    System.out.println("total hits: " + value);
    // Individual hits
    SearchHit[] hits = searchHits.getHits();
    for (SearchHit hit : hits) {
        String sourceAsString = hit.getSourceAsString(); // document source as a JSON string
        // Highlight results, keyed by field name
        Map<String, HighlightField> highlightFields = hit.getHighlightFields();
        HighlightField address = highlightFields.get("address");
        Text[] fragments = address.fragments(); // highlighted fragments for this field
        // Replace the plain field value with the highlighted fragment
        // (fastjson is assumed here, as elsewhere in these notes)
        JSONObject source = JSON.parseObject(sourceAsString);
        source.put("address", fragments[0].string());
        System.out.println(source.toJSONString());
        System.out.println(Arrays.toString(fragments));
    }
}
```

13. Reindexing & Index Aliases

Once an ES index is created, fields can only be added, not changed: changing a field would require rebuilding the inverted index and would disturb internal cache structures.

When a field must change, create a new index and copy the data from the old index into it.

```
# Reindexing
# Create the index (index names must be all lowercase)
PUT stdent_index_v1
{
  "mappings": {
    "properties": {
      "birthday":{
        "type": "date"
      }
    }
  }
}

GET stdent_index_v1

PUT stdent_index_v1/_doc/1
{
  "birthday":"1999-01-01"
}

GET stdent_index_v1/_search

# Now stdent_index_v1 needs to store birthday as a string
# 1. Create the new index v2
PUT stdent_index_v2
{
  "mappings": {
    "properties": {
      "birthday":{
        "type": "text"
      }
    }
  }
}

# 2. Copy the data from the old index into the new one with _reindex
POST _reindex
{
  "source": {
    "index": "stdent_index_v1"
  },
  "dest": {
    "index": "stdent_index_v2"
  }
}

GET stdent_index_v2/_search

PUT stdent_index_v2/_doc/2
{
  "birthday":"199年124日"
}

# Index alias: the old index is gone, but application code still uses the old
# index name, so give the new index an alias
# 1. Delete the old index
DELETE stdent_index_v1
# 2. Alias the new index with the old index's name
POST stdent_index_v2/_alias/stdent_index_v1
```

14. ES Cluster

ES supports distribution natively and configures it automatically:

- Cluster: a group of nodes that share the same cluster name
- Node: a single ES instance in the cluster
- Index: where ES stores data
- Shard: an index can be split into parts, called shards, for storage; in a cluster the shards of one index can be spread across different nodes
- Primary shard: the shard holding the authoritative copy of the data; replica shards are defined relative to it
- Replica shard: each primary shard can have one or more replicas holding the same data

14.1. Setup

1. Prepare three nodes. Here we build a pseudo-cluster on a single machine, telling the nodes apart by port number:

```sh
cp -r elasticsearch-7.15.0 elasticsearch-7.15.0-1
cp -r elasticsearch-7.15.0 elasticsearch-7.15.0-2
cp -r elasticsearch-7.15.0 elasticsearch-7.15.0-3
```
2. Create the log and data directories and grant ownership to the iekr user:

```sh
cd /opt
mkdir logs
mkdir data

# grant ownership
chown -R iekr:iekr ./logs
chown -R iekr:iekr ./data

chown -R iekr:iekr ./elasticsearch-7.15.0-1
chown -R iekr:iekr ./elasticsearch-7.15.0-2
chown -R iekr:iekr ./elasticsearch-7.15.0-3
```

3. Edit the three nodes' configuration files:

```sh
vim /opt/elasticsearch-7.15.0-1/config/elasticsearch.yml
```
```yaml
# Cluster name: must be identical on every node
cluster.name: itcast-es
# Node name: must be unique
node.name: iekr-1
# Eligible to be elected master
node.master: true
# Stores data
node.data: true
# Maximum number of nodes that may share this data path
node.max_local_storage_nodes: 3
# Bind address
network.host: 0.0.0.0
# HTTP port
http.port: 9201
# Transport port for inter-node communication
transport.tcp.port: 9700
# Node discovery (new in ES 7.x)
discovery.seed_hosts: ["localhost:9700","localhost:9800","localhost:9900"]
# Needed to elect a master when bootstrapping a brand-new cluster
cluster.initial_master_nodes: ["iekr-1","iekr-2","iekr-3"]
# Data and log paths
path.data: /opt/data
path.logs: /opt/logs
```
```sh
vim /opt/elasticsearch-7.15.0-2/config/elasticsearch.yml
```

```yaml
# Cluster name: must be identical on every node
cluster.name: itcast-es
# Node name: must be unique
node.name: iekr-2
# Eligible to be elected master
node.master: true
# Stores data
node.data: true
# Maximum number of nodes that may share this data path
node.max_local_storage_nodes: 3
# Bind address
network.host: 0.0.0.0
# HTTP port
http.port: 9202
# Transport port for inter-node communication
transport.tcp.port: 9800
# Node discovery (new in ES 7.x)
discovery.seed_hosts: ["localhost:9700","localhost:9800","localhost:9900"]
# Needed to elect a master when bootstrapping a brand-new cluster
cluster.initial_master_nodes: ["iekr-1","iekr-2","iekr-3"]
# Data and log paths
path.data: /opt/data
path.logs: /opt/logs
```
```sh
vim /opt/elasticsearch-7.15.0-3/config/elasticsearch.yml
```

```yaml
# Cluster name: must be identical on every node
cluster.name: itcast-es
# Node name: must be unique
node.name: iekr-3
# Eligible to be elected master
node.master: true
# Stores data
node.data: true
# Maximum number of nodes that may share this data path
node.max_local_storage_nodes: 3
# Bind address
network.host: 0.0.0.0
# HTTP port
http.port: 9203
# Transport port for inter-node communication
transport.tcp.port: 9900
# Node discovery (new in ES 7.x)
discovery.seed_hosts: ["localhost:9700","localhost:9800","localhost:9900"]
# Needed to elect a master when bootstrapping a brand-new cluster
cluster.initial_master_nodes: ["iekr-1","iekr-2","iekr-3"]
# Data and log paths
path.data: /opt/data
path.logs: /opt/logs
```
4. ES defaults to a 1 GB heap; lower it in the JVM options file (shown for node 1):

```sh
vim /opt/elasticsearch-7.15.0-1/config/jvm.options
```

```
-Xms256m
-Xmx256m
```
5. Start each node (as the iekr user, with the firewall stopped):

```sh
systemctl stop firewalld
su iekr
cd /opt/elasticsearch-7.15.0-1/bin/
./elasticsearch
```
6. Visit http://192.168.130.124:9201/_cat/health?v to check the nodes' status.


14.2. Configuring and Managing the Cluster with Kibana

1. Copy Kibana:

```sh
cd /opt
cp -r kibana-7.15.0-linux-x86_64 kibana-7.15.0-linux-x86_64-cluster
```

2. Edit the Kibana cluster configuration:

```sh
vim /opt/kibana-7.15.0-linux-x86_64-cluster/config/kibana.yml
```

```yaml
# change the following setting
elasticsearch.hosts: ["http://localhost:9201","http://localhost:9202","http://localhost:9203"]
```

3. Start Kibana:

```sh
cd /opt/kibana-7.15.0-linux-x86_64-cluster/bin/
./kibana --allow-root
```

4. Visit http://192.168.130.124:5601/app/monitoring to inspect the cluster's nodes.

14.3. Accessing the Cluster from the Java API

1. application.yml:

```yaml
elasticsearch:
  host: 192.168.130.124
  port: 9201
  host2: 192.168.130.124
  port2: 9202
  host3: 192.168.130.124
  port3: 9203
```

2. Config class, registered as a bean in the IoC container:

```java
package com.itheima.elasticsearchdemo.config;

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConfigurationProperties(prefix = "elasticsearch")
public class ElasticSearchConfig {

    private String host;
    private int port;
    private String host2;
    private int port2;
    private String host3;
    private int port3;

    // @ConfigurationProperties binds through setters, so host2/port2/host3/port3
    // need getters and setters too (they were omitted in the original notes)
    public String getHost() { return host; }
    public void setHost(String host) { this.host = host; }
    public int getPort() { return port; }
    public void setPort(int port) { this.port = port; }
    public String getHost2() { return host2; }
    public void setHost2(String host2) { this.host2 = host2; }
    public int getPort2() { return port2; }
    public void setPort2(int port2) { this.port2 = port2; }
    public String getHost3() { return host3; }
    public void setHost3(String host3) { this.host3 = host3; }
    public int getPort3() { return port3; }
    public void setPort3(int port3) { this.port3 = port3; }

    @Bean
    public RestHighLevelClient client() {
        // One client connected to all three nodes
        return new RestHighLevelClient(RestClient.builder(
                new HttpHost(host, port, "http"),
                new HttpHost(host2, port2, "http"),
                new HttpHost(host3, port3, "http")
        ));
    }
}
```

14.4. Cluster Principles

14.4.1. Shard Configuration

- If no shard settings are given when an index is created, the default is 1 primary shard and 1 replica.

![](https://cdn.jsdelivr.net/gh/Iekrwh/images/md-images/image-20211010162725564.png#alt=image-20211010162725564)

- Shards can be configured through settings when the index is created:

```
PUT stdent_index_v3
{
  "mappings": {
    "properties": {
      "birthday":{
        "type": "date"
      }
    }
  },
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
```
- Shards and auto-balancing: ES interleaves shards across nodes, so access is unaffected if a node fails; the failed node's shards are automatically reallocated to the nodes still online, and are handed back when the node rejoins.

- Each query runs single-threaded within a shard, but multiple shards can be processed in parallel.

- Once fixed, the number of shards cannot be changed, though data can be migrated via reindexing and an index alias.

- Recommended shard configuration:

  1. 10-30 GB per shard
  2. number of shards ≈ 1 to 3 × number of nodes

14.4.2. Routing

- Routing is the process by which ES computes the shard number a document is stored in.
- Routing formula: shard_index = hash(id) % number_of_shards (a toy sketch follows this list).
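
A toy illustration of the formula, using Java's hashCode in place of the Murmur3 hash ES actually applies to the _routing value:

```java
// Toy version of ES routing; NOT the real ES hash (ES uses Murmur3 on the _routing value)
public class RoutingDemo {
    static int shardIndex(String id, int numberOfShards) {
        // floorMod keeps the result non-negative even for negative hash codes
        return Math.floorMod(id.hashCode(), numberOfShards);
    }

    public static void main(String[] args) {
        // With 3 shards, every document id maps deterministically to one shard
        for (String id : new String[]{"1", "2", "3", "4", "5"}) {
            System.out.println("doc " + id + " -> shard " + shardIndex(id, 3));
        }
    }
}
```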


14.5. Split-Brain

- A healthy ES cluster has exactly one master node. The master manages the whole cluster: it creates and deletes indices, tracks which nodes belong to the cluster, and decides which shards are allocated to which nodes.
- All nodes in the cluster elect the same node as master.
- Split-brain occurs when nodes disagree during master election, so the cluster ends up with multiple masters and splits apart, leaving it in an abnormal state.

14.5.1. Causes of Split-Brain

1. Network: network latency, mostly seen in clusters spanning external networks.

2. Node load: a node acting both as master and as data node may stop responding (appear dead) under heavy data traffic:

```yaml
# eligible to be elected master
node.master: true
# stores data
node.data: true
```

3. JVM garbage collection: if the master's JVM heap is too small, large-scale GC can make the ES process unresponsive.

14.5.2. Avoiding Split-Brain

1. Network: raise the discovery.zen.ping.timeout setting (default 3s).
2. Node load: separate the roles; master-eligible nodes should not store data, and data nodes should not be master-eligible (see the sketch after this list).
3. JVM memory: in jvm.options, set both the minimum and maximum heap to half of the server's RAM.
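
For role separation, the two flags already shown above combine as in this sketch (set per node):

```yaml
# Dedicated master-eligible node: may be elected master, stores no data
node.master: true
node.data: false

# A dedicated data node would instead use:
# node.master: false
# node.data: true
```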

15. Cluster Scaling

1. Edit the configuration files on all nodes in the cluster to add the new node (a sketch follows).
2. Start them all.
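
For example, every node's discovery list would grow to include the new node; the localhost:9901 entry here is an assumption following the port pattern used above:

```yaml
# add the new node's transport address on every existing node, then restart all of them
discovery.seed_hosts: ["localhost:9700","localhost:9800","localhost:9900","localhost:9901"]
```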