1. term & terms Queries

1.1 term Query

A term query is an exact-match query. The search keyword is not analyzed before searching; instead, it is matched as-is against the terms stored in the document's inverted index.
Implementation in Kibana

```
# term query
POST /sms-logs-index/sms-logs-type/_search
{
  "from": 0,
  "size": 5,
  "query": {
    "term": {
      "province": {
        "value": "北京"
      }
    }
  }
}
```

Implementation in Java

```java
public class Demo4 {
    RestHighLevelClient client = ESClient.getClient();
    String index = "sms-logs-index";
    String type = "sms-logs-type";

    @Test
    public void termQuery() throws Exception {
        // 1. Create the request object
        SearchRequest request = new SearchRequest(index);
        request.types(type);
        // 2. Specify the query conditions
        SearchSourceBuilder builder = new SearchSourceBuilder();
        builder.from(0);
        builder.size(5);
        builder.query(QueryBuilders.termQuery("province", "北京"));
        request.source(builder);
        // 3. Execute the query
        SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
        // 4. Get the _source of each hit and print it
        for (SearchHit hit : resp.getHits().getHits()) {
            Map<String, Object> result = hit.getSourceAsMap();
            System.out.println(result);
        }
    }
}
```

1.2 terms Query

A terms query uses the same mechanism as a term query: the query keywords are not analyzed, but matched directly against the inverted index to find documents.
terms is used when one field should match any of several values:
term:  WHERE province = '北京';
terms: WHERE province = '北京' OR province = ? OR province = ?

Implementation in Kibana

```
# terms query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "terms": {
      "province": [
        "北京",
        "山西",
        "武汉"
      ]
    }
  }
}
```

Implementation in Java

```java
@Test
public void termsQuery() throws Exception {
    // 1. Create the request object
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Build the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.termsQuery("province", "北京", "山西", "武汉"));
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the _source of each hit
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```

2. match Queries

A match query is a high-level query: it behaves differently depending on the type of the field you query.

  • If the field is a date or a numeric type, the query string is converted to a date or a number and treated as such.
  • If the field is not analyzed (keyword), the match query does not analyze the query keyword either.
  • If the field is analyzed (text), match analyzes the query string and matches the resulting terms against the inverted index.

Under the hood, a match query is essentially several term queries whose results are combined for you.
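As an illustration of that last point (a sketch only, not the exact query Elasticsearch builds internally, and assuming the analyzer splits "中国 健康" into the tokens 中国 and 健康), a match query with the default or operator behaves roughly like one term query per token combined in a bool query:

```
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "bool": {
      "should": [
        { "term": { "smsContent": "中国" } },
        { "term": { "smsContent": "健康" } }
      ]
    }
  }
}
```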

2.1 match_all Query

Queries all documents, without specifying any conditions.
Implementation in Kibana

```
# match_all query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "match_all": {}
  }
}
```

Implementation in Java

```java
@Test
public void matchAllQuery() throws Exception {
    // 1. Create the request
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.matchAllQuery());
    builder.size(20); // ES returns only 10 hits by default
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
    System.out.println(resp.getHits().getHits().length);
}
```

2.2 match Query

Queries against a single specified field.
Implementation in Kibana

```
# match query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "match": {
      "smsContent": "收货安装"
    }
  }
}
```

Implementation in Java

```java
@Test
public void matchQuery() throws Exception {
    // 1. Create the request
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.matchQuery("smsContent", "收货安装"));
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
    System.out.println(resp.getHits().getHits().length);
}
```

2.3 Boolean match Query

Matches terms within a single field, combined with and or or.
Implementation in Kibana

```
# boolean match query (and)
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "match": {
      "smsContent": {
        "query": "中国 健康",
        "operator": "and"
      }
    }
  }
}

# boolean match query (or)
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "match": {
      "smsContent": {
        "query": "中国 健康",
        "operator": "or"
      }
    }
  }
}
```

Implementation in Java

```java
@Test
public void booleanMatchQuery() throws Exception {
    // 1. Create the request
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.matchQuery("smsContent", "中国 健康").operator(Operator.OR));
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```

2.4 multi_match Query

While match searches a single field, multi_match searches multiple fields with one query text.
Implementation in Kibana

```
# multi_match query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "multi_match": {
      "query": "北京",
      "fields": ["province", "smsContent"]
    }
  }
}
```

Implementation in Java

```java
@Test
public void multiMatchQuery() throws Exception {
    // 1. Create the request
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.multiMatchQuery("北京", "province", "smsContent"));
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```

3. Other Queries

3.1 id Query

Queries a document by its id, like WHERE id = ?
Implementation in Kibana

```
# id query
GET /sms-logs-index/sms-logs-type/21
```

Implementation in Java

```java
@Test
public void findById() throws Exception {
    // 1. Create the GetRequest
    GetRequest request = new GetRequest(index, type, "21");
    // 2. Execute the query
    GetResponse resp = client.get(request, RequestOptions.DEFAULT);
    // 3. Print the result
    System.out.println(resp.getSourceAsMap());
}
```

3.2 ids Query

Queries several documents by id in a single request.
Implementation in Kibana

```
# ids query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "ids": {
      "values": ["21", "22", "23"]
    }
  }
}
```

Implementation in Java

```java
@Test
public void findByIds() throws IOException {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.idsQuery().addIds("21", "22", "23"));
    request.source(builder);
    // 3. Execute
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```

3.3 prefix Query

A prefix query matches documents whose field value starts with the given keyword.
Implementation in Kibana

```
# prefix query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "prefix": {
      "corpName": {
        "value": "途虎"
      }
    }
  }
}
```

Implementation in Java

```java
@Test
public void findByPrefix() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.prefixQuery("corpName", "盒马"));
    request.source(builder);
    // 3. Execute
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```

3.4 fuzzy Query

A fuzzy query tolerates typos: you give an approximate keyword, and ES matches documents whose field value is close to it.
Implementation in Kibana

```
# fuzzy query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "fuzzy": {
      "corpName": {
        "value": "盒马先生",
        "prefix_length": 2 # the first 2 characters must match exactly
      }
    }
  }
}
```

Implementation in Java

```java
@Test
public void findByFuzzy() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.fuzzyQuery("corpName", "盒马先生").prefixLength(2));
    request.source(builder);
    // 3. Execute
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```

3.5 wildcard查询

通配查询,和MySQL中的like是一个套路,可以在查询时,在字符串中指定通配符和占位符?
*Kibana中实现

```
# wildcard query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "wildcard": {
      "corpName": {
        "value": "中国*" # * matches any character sequence, ? matches a single character
      }
    }
  }
}
```

Implementation in Java

```java
@Test
public void findByWildCard() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.wildcardQuery("corpName", "中国*"));
    request.source(builder);
    // 3. Execute
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```

3.6 range Query

A range query restricts a field, typically a numeric one, with greater-than / less-than bounds.
Implementation in Kibana

```
# range query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "range": {
      "fee": {
        "gt": 5,
        "lte": 10
      }
    }
  }
}
```

Implementation in Java

```java
@Test
public void findByRange() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.rangeQuery("fee").gt(5).lte(10));
    request.source(builder);
    // 3. Execute
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```

3.7 regexp Query

Matches a field against a regular expression that you write.
Implementation in Kibana

```
# regexp query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "regexp": {
      "mobile": "180[0-9]{8}"
    }
  }
}
```

Implementation in Java

```java
@Test
public void findByRegexp() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.regexpQuery("mobile", "180[0-9]{8}"));
    request.source(builder);
    // 3. Execute
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```

PS: prefix, fuzzy, wildcard, and regexp queries are relatively inefficient; avoid them when query performance matters.

4. Deep Pagination: Scroll

4.1 How Deep Pagination Works

Drawback of from+size pagination:

  • Elasticsearch limits from+size: the sum of from and size must not exceed 10,000.
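That 10,000 cap is the index.max_result_window index setting. It can be raised per index (a sketch; deeper from+size pages cost more memory and CPU, so prefer scroll for truly deep paging):

```
PUT /sms-logs-index/_settings
{
  "index.max_result_window": 20000
}
```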

How from+size and scroll+size work:

  • How from+size retrieves data in Elasticsearch:
    • 1. Analyze the query string (whether it is analyzed depends on the query type and content).
    • 2. Look the terms up in the inverted index to obtain the matching document ids.
    • 3. Fetch the documents from each shard (time-consuming).
    • 4. Sort the documents by score (time-consuming).
    • 5. Discard the first from documents from the result.
    • 6. Return the results.
  • How scroll+size retrieves data in Elasticsearch:
    • 1. Analyze the query string (whether it is analyzed depends on the query type and content).
    • 2. Look the terms up in the inverted index to obtain the matching document ids.
    • 3. Store the document ids in an Elasticsearch search context (in memory).
    • 4. Fetch the specified size of documents from the context; consumed ids are removed from it.
    • 5. For the next page, read the following ids directly from the search context.
    • 6. Repeat steps 4 and 5.

Because results come from a snapshot kept in the search context, scroll queries are not suitable for real-time search.

4.2 Using Deep Pagination

Implementation in Kibana

```
# Run a scroll query: return the first page and keep the document ids
# in an ES search context with a lifetime of 5m
POST /sms-logs-index/sms-logs-type/_search?scroll=5m
{
  "query": {
    "match_all": {}
  },
  "size": 2,
  "sort": [
    {
      "fee": {
        "order": "desc"
      }
    }
  ]
}

# Fetch the next page of the scroll
POST /_search/scroll
{
  # the scroll_id is returned by the scroll query above
  "scroll_id": "DnF1ZXJ5VGhlbkZldGNoAwAAAAAAAAfcFnZvMTh2N1lzVHJtUmxhUkZ2aWY3T1EAAAAAAAAH3RZ2bzE4djdZc1RybVJsYVJGdmlmN09RAAAAAAAAB94Wdm8xOHY3WXNUcm1SbGFSRnZpZjdPUQ==",
  "scroll": "5m"
}

# Delete the scroll's data from the ES search context
DELETE /_search/scroll/DnF1ZXJ5VGhlbkZldGNoAwAAAAAAAAfcFnZvMTh2N1lzVHJtUmxhUkZ2aWY3T1EAAAAAAAAH3RZ2bzE4djdZc1RybVJsYVJGdmlmN09RAAAAAAAAB94Wdm8xOHY3WXNUcm1SbGFSRnZpZjdPUQ==
```

Implementation in Java

```java
@Test
public void scrollQuery() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the scroll context lifetime
    request.scroll(TimeValue.timeValueMinutes(1L));
    // 3. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.size(2);
    builder.sort("fee", SortOrder.DESC);
    builder.query(QueryBuilders.matchAllQuery());
    request.source(builder);
    // 4. Get the scrollId and the _source of the first page
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    String scrollId = resp.getScrollId();
    System.out.println("--------------first page--------------");
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
    while (true) {
        // 5. Loop: create a SearchScrollRequest
        SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
        // 6. Extend the scrollId's lifetime
        scrollRequest.scroll(TimeValue.timeValueMinutes(1L));
        // 7. Execute the query and get the next page
        SearchResponse searchResponse = client.scroll(scrollRequest, RequestOptions.DEFAULT);
        // 8. If there is data, print it
        SearchHit[] hits = searchResponse.getHits().getHits();
        if (hits != null && hits.length > 0) {
            System.out.println("--------------next page--------------");
            for (SearchHit hit : hits) {
                System.out.println(hit.getSourceAsMap());
            }
        } else {
            // 9. No data left: leave the loop
            System.out.println("--------------done--------------");
            break;
        }
    }
    // 10. Create a ClearScrollRequest
    ClearScrollRequest clearScrollRequest = new ClearScrollRequest();
    // 11. Specify the scrollId
    clearScrollRequest.addScrollId(scrollId);
    // 12. Delete the scroll context
    ClearScrollResponse clearScrollResponse = client.clearScroll(clearScrollRequest, RequestOptions.DEFAULT);
    // 13. Print the result
    System.out.println("clear scroll: " + clearScrollResponse.isSucceeded());
}
```

5. delete-by-query

Deletes in bulk all documents matched by a term, match, or similar query.
Implementation in Kibana

```
# delete-by-query
POST /sms-logs-index/sms-logs-type/_delete_by_query
{
  "query": {
    "range": {
      "fee": {
        "lt": 4
      }
    }
  }
}
```

Implementation in Java

```java
@Test
public void deleteByQuery() throws IOException {
    // 1. Create the DeleteByQueryRequest
    DeleteByQueryRequest request = new DeleteByQueryRequest(index);
    request.types(type);
    // 2. Specify the query condition; note the API differs from how SearchRequest takes a query
    request.setQuery(QueryBuilders.rangeQuery("fee").lt(4));
    // 3. Execute the delete
    BulkByScrollResponse resp = client.deleteByQuery(request, RequestOptions.DEFAULT);
    // 4. Print the response
    System.out.println(resp.toString());
}
```

If you need to delete most of the documents in an index, it is usually better to create a brand-new index and copy only the documents you want to keep into it.
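That copy can be done server-side with the _reindex API. A minimal sketch (the destination index name sms-logs-index-new is a made-up example; create it with the desired mapping first):

```
POST /_reindex
{
  "source": {
    "index": "sms-logs-index",
    "query": {
      "range": {
        "fee": {
          "gte": 4
        }
      }
    }
  },
  "dest": {
    "index": "sms-logs-index-new"
  }
}
```

Only documents matching source.query (here, the ones you want to keep) are copied to the destination index.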

6. Compound Queries

6.1 bool Query

A compound query that combines multiple query clauses with boolean logic:

  • must: all of the clauses must match (AND)
  • must_not: none of the clauses may match (NOT)
  • should: the clauses are combined with OR

Implementation in Kibana

```
# province is 武汉 or 北京
# the carrier (operatorId) is not 联通
# smsContent contains both 中国 and 平安
# bool query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "term": {
            "province": {
              "value": "北京"
            }
          }
        },
        {
          "term": {
            "province": {
              "value": "武汉"
            }
          }
        }
      ],
      "must_not": [
        {
          "term": {
            "operatorId": {
              "value": "2"
            }
          }
        }
      ],
      "must": [
        {
          "match": {
            "smsContent": "中国"
          }
        },
        {
          "match": {
            "smsContent": "平安"
          }
        }
      ]
    }
  }
}
```

Implementation in Java

```java
@Test
public void boolQuery() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    // province is 武汉 or 北京
    boolQueryBuilder.should(QueryBuilders.termQuery("province", "武汉"));
    boolQueryBuilder.should(QueryBuilders.termQuery("province", "北京"));
    // the carrier (operatorId) is not 联通
    boolQueryBuilder.mustNot(QueryBuilders.termQuery("operatorId", "2"));
    // smsContent contains both 中国 and 平安
    boolQueryBuilder.must(QueryBuilders.matchQuery("smsContent", "中国"));
    boolQueryBuilder.must(QueryBuilders.matchQuery("smsContent", "平安"));
    builder.query(boolQueryBuilder);
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```

6.2 boosting Query

A boosting query lets you influence the score of matching documents:

  • positive: only documents matching the positive query are put into the result set
  • negative: documents that match positive and also match negative get their score lowered
  • negative_boost: the factor applied to those lowered scores; must be less than 1.0

How the score is computed at query time:

  • The more frequently the search terms appear in a document, the higher the score.
  • The shorter the matching field content, the higher the score.
  • The query string is analyzed too; the more of its tokens are found in the inverted index, the higher the score.
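To see exactly how a score was computed for each hit, the search API accepts an explain flag that adds a per-hit _explanation tree to the response, for example:

```
# Return each hit together with a breakdown of its score calculation
POST /sms-logs-index/sms-logs-type/_search
{
  "explain": true,
  "query": {
    "match": {
      "smsContent": "收货安装"
    }
  }
}
```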

Implementation in Kibana

```
# boosting query: 收货安装
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "boosting": {
      "positive": {
        "match": {
          "smsContent": "收货安装"
        }
      },
      "negative": {
        "match": {
          "smsContent": "王五"
        }
      },
      "negative_boost": 0.5
    }
  }
}
```

Implementation in Java

```java
@Test
public void boostingQuery() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    BoostingQueryBuilder boostingQueryBuilder = QueryBuilders.boostingQuery(
            QueryBuilders.matchQuery("smsContent", "收货安装"),
            QueryBuilders.matchQuery("smsContent", "王五")
    ).negativeBoost(0.5f);
    builder.query(boostingQueryBuilder);
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```

7. filter Query

A regular query computes a relevance score for every matching document, sorts by that score, and does not cache results.
A filter only checks whether documents match the condition, computes no score, and Elasticsearch caches frequently used filters.
Implementation in Kibana

```
# filter query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "corpName": "盒马鲜生"
          }
        },
        {
          "range": {
            "fee": {
              "lte": 5
            }
          }
        }
      ]
    }
  }
}
```

Implementation in Java

```java
@Test
public void filter() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    boolQueryBuilder.filter(QueryBuilders.termQuery("corpName", "盒马鲜生"));
    boolQueryBuilder.filter(QueryBuilders.rangeQuery("fee").lte(5));
    builder.query(boolQueryBuilder);
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```

8. Highlight Query

A highlight query returns the user's search keywords wrapped in special markup, so the user can see why a result was retrieved.
The highlighted data is itself a field of the document; it is returned separately in a highlight section of the response.
Elasticsearch provides a highlight property, at the same level as query:

  • fragment_size: how many characters of the highlighted fragment to return
  • pre_tags: the tag inserted before each highlighted term (e.g. <font color='red'>)
  • post_tags: the tag inserted after each highlighted term (e.g. </font>)
  • fields: which fields to return in highlighted form

Implementation in Kibana

```
# highlight query
POST /sms-logs-index/sms-logs-type/_search
{
  "query": {
    "match": {
      "smsContent": "盒马"
    }
  },
  "highlight": {
    "fields": {
      "smsContent": {}
    },
    "pre_tags": "<font color='red'>",
    "post_tags": "</font>",
    "fragment_size": 10
  }
}
```

Implementation in Java

```java
@Test
public void highLightQuery() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    // 2. Specify the query conditions (with highlighting)
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.matchQuery("smsContent", "盒马"));
    HighlightBuilder highlightBuilder = new HighlightBuilder();
    highlightBuilder.field("smsContent", 10)
            .preTags("<font color='red'>")
            .postTags("</font>");
    builder.highlighter(highlightBuilder);
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the highlighted fragments of each hit
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getHighlightFields().get("smsContent"));
    }
}
```

9. Aggregation Queries

Elasticsearch aggregations are similar to MySQL's aggregate queries, but far more powerful: Elasticsearch offers many different ways to compute statistics.

```
# RESTful syntax of an ES aggregation query
POST /index/type/_search
{
  "aggs": {
    "aggregation name (e.g. agg)": {
      "agg_type": {
        "property": "value"
      }
    }
  }
}
```

9.1 Distinct Count

A distinct count (cardinality) aggregation first deduplicates the values of the specified field across the matching documents, then counts how many distinct values there are.
Implementation in Kibana

```
# distinct count over provinces: 北京 上海 武汉 山西
POST /sms-logs-index/sms-logs-type/_search
{
  "aggs": {
    "agg": {
      "cardinality": {
        "field": "province"
      }
    }
  }
}
```

Implementation in Java

```java
@Test
public void cardinality() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the aggregation to use
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.aggregation(AggregationBuilders.cardinality("agg").field("province"));
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Read the aggregation result
    Cardinality agg = resp.getAggregations().get("agg");
    long value = agg.getValue();
    System.out.println(value);
}
```

9.2 Range Statistics

Counts how many documents fall into each of several ranges of a field, e.g. how many documents have a value in 0~100, 100~200, and 200~300. Range statistics work on plain numeric values, on dates, and on IP addresses:
range, date_range, ip_range
Implementation in Kibana

```
# numeric range statistics
POST /sms-logs-index/sms-logs-type/_search
{
  "aggs": {
    "agg": {
      "range": {
        "field": "fee",
        "ranges": [
          {
            "to": 5
          },
          {
            "from": 5,
            "to": 10
          },
          {
            "from": 10
          }
        ]
      }
    }
  }
}

# date range statistics
POST /sms-logs-index/sms-logs-type/_search
{
  "aggs": {
    "agg": {
      "date_range": {
        "field": "createDate",
        "format": "yyyy",
        "ranges": [
          {
            "to": 2000
          },
          {
            "from": 2000
          }
        ]
      }
    }
  }
}

# ip range statistics
POST /sms-logs-index/sms-logs-type/_search
{
  "aggs": {
    "agg": {
      "ip_range": {
        "field": "ipAddr",
        "ranges": [
          {
            "to": "10.126.2.9"
          },
          {
            "from": "10.126.2.9"
          }
        ]
      }
    }
  }
}
```

Implementation in Java

```java
@Test
public void range() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the aggregation to use
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.aggregation(AggregationBuilders.range("agg").field("fee")
            .addUnboundedTo(5)
            .addRange(5, 10)
            .addUnboundedFrom(10)
    );
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Read the aggregation result
    Range agg = resp.getAggregations().get("agg");
    for (Range.Bucket bucket : agg.getBuckets()) {
        String key = bucket.getKeyAsString();
        Object from = bucket.getFrom();
        Object to = bucket.getTo();
        long docCount = bucket.getDocCount();
        System.out.println(String.format("key: %s, from: %s, to: %s, docCount: %s", key, from, to, docCount));
    }
}
```

9.3 Stats Aggregation

Returns statistics such as the maximum, minimum, average, and sum of squares of a field.
Uses extended_stats.
Implementation in Kibana

```
# stats aggregation
POST /sms-logs-index/sms-logs-type/_search
{
  "aggs": {
    "agg": {
      "extended_stats": {
        "field": "fee"
      }
    }
  }
}
```

Implementation in Java

```java
@Test
public void extendedStats() throws Exception {
    // 1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    request.types(type);
    // 2. Specify the aggregation to use
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.aggregation(AggregationBuilders.extendedStats("agg").field("fee"));
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Read the aggregation result
    ExtendedStats agg = resp.getAggregations().get("agg");
    double max = agg.getMax();
    double min = agg.getMin();
    System.out.println("fee max: " + max + ", min: " + min);
}
```

10. Geo Queries

Elasticsearch provides the geo_point data type, which is used to store longitude/latitude coordinates.
Create an index with a geo_point field and add some test data:

```
# Create an index with name and location fields
PUT /map
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  },
  "mappings": {
    "map": {
      "properties": {
        "name": {
          "type": "text"
        },
        "location": {
          "type": "geo_point"
        }
      }
    }
  }
}

# Add test data
PUT /map/map/1
{
  "name": "天安门",
  "location": {
    "lon": 116.403981,
    "lat": 39.914492
  }
}

PUT /map/map/2
{
  "name": "海淀公园",
  "location": {
    "lon": 116.302509,
    "lat": 39.991152
  }
}

PUT /map/map/3
{
  "name": "北京动物园",
  "location": {
    "lon": 116.343184,
    "lat": 39.947468
  }
}
```

10.1 Elasticsearch Geo Query Types

geo_distance: straight-line (radius) distance query
geo_bounding_box: two points define a rectangle; returns all data inside that rectangle
geo_polygon: several points define a polygon; returns all data inside that polygon

10.2 Geo Queries via the RESTful API

Implementation in Kibana

```
# geo_distance
POST /map/map/_search
{
  "query": {
    "geo_distance": {
      "location": {          # the center point
        "lon": 116.433733,
        "lat": 39.909404
      },
      "distance": 3000,      # the radius, in meters
      "distance_type": "arc" # how the distance is computed
    }
  }
}

# geo_bounding_box
POST /map/map/_search
{
  "query": {
    "geo_bounding_box": {
      "location": {
        "top_left": {        # top-left corner of the rectangle
          "lon": 116.326943,
          "lat": 39.954990
        },
        "bottom_right": {    # bottom-right corner of the rectangle
          "lon": 116.433446,
          "lat": 39.908737
        }
      }
    }
  }
}

# geo_polygon
POST /map/map/_search
{
  "query": {
    "geo_polygon": {
      "location": {
        "points": [          # several points defining the polygon
          {
            "lon": 116.298916,
            "lat": 39.99878
          },
          {
            "lon": 116.29561,
            "lat": 39.972576
          },
          {
            "lon": 116.327661,
            "lat": 39.984739
          }
        ]
      }
    }
  }
}
```

Implementation in Java

```java
@Test
public void geoPolygon() throws IOException {
    // 1. Create the SearchRequest (the geo test data above lives in the map index)
    SearchRequest request = new SearchRequest("map");
    request.types("map");
    // 2. Specify the query; GeoPoint takes (lat, lon)
    SearchSourceBuilder builder = new SearchSourceBuilder();
    List<GeoPoint> points = new ArrayList<>();
    points.add(new GeoPoint(39.99878, 116.298916));
    points.add(new GeoPoint(39.972576, 116.29561));
    points.add(new GeoPoint(39.984739, 116.327661));
    builder.query(QueryBuilders.geoPolygonQuery("location", points));
    request.source(builder);
    // 3. Execute the query
    SearchResponse resp = client.search(request, RequestOptions.DEFAULT);
    // 4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}
```