Download the source code, or a prebuilt binary, from https://github.com/medcl/esm:
Upload the binary to /home and rename it to esm.
Create test_index_003 on ES cluster 1 and insert some data.
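The test data can be loaded through the Elasticsearch _bulk API. Below is a minimal Python sketch that builds an NDJSON _bulk request body for test_index_003; the field names (doc_id, title) are illustrative, not from the original setup. The resulting string would be POSTed to the cluster-1 endpoint at /_bulk with Content-Type: application/x-ndjson.

```python
import json

def bulk_body(index, docs):
    """Build an NDJSON body for the Elasticsearch _bulk API:
    one action line followed by one source line per document."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline

body = bulk_body("test_index_003", [
    {"doc_id": 1, "title": "hello"},
    {"doc_id": 2, "title": "world"},
])
print(body)
```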
The command used:
# Migrate the index index_name
esm -s http://10.62.124.x:9200 -d http://10.67.151.y:9200 -x index_name -w=5 -b=10 -c 10000

# Migrate data and index settings (the primary shard count is not copied; the destination uses the default of 5)
esm.exe -s http://localhost:9200 -d http://192.168.210.168:9200 -x es_medias_test2 -w=5 -b=10 -c 10000 --copy_settings --copy_mappings

./esm -s http://172.16.4.200:9200 -d http://172.16.4.212:9200 -x test_index_004 -w=5 -b=10 -c 10000 --copy_settings --copy_mappings
# Migrate data and index settings (the primary shard count is set explicitly with --shards)
esm.exe -s http://localhost:9200 -d http://192.168.210.168:9200 -x es_medias_test -w=5 -b=10 -c 10000 --shards=1 --copy_settings --copy_mappings

bin/linux64/esm -s http://<source ES host>:9200 -d http://<destination ES host>:9200 -x index_name -w=5 -b=10 -c 10000 --all

-w  number of worker threads
-b  size of one bulk request in MB (default 5 MB)
-c  number of documents per scroll request
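Since the same esm invocation is repeated above with different hosts, indexes, and flags, it can help to assemble the command line programmatically when migrating many indexes. A minimal Python sketch; the wrapper function and its defaults are illustrative, not part of esm itself:

```python
import shlex

def esm_command(source, dest, index, workers=5, bulk_mb=10,
                scroll_size=10000, extra_flags=()):
    """Assemble an esm migration command as a list of argv tokens."""
    cmd = ["esm",
           "-s", source,
           "-d", dest,
           "-x", index,
           f"-w={workers}",
           f"-b={bulk_mb}",
           "-c", str(scroll_size)]
    cmd.extend(extra_flags)
    return cmd

cmd = esm_command("http://localhost:9200", "http://192.168.210.168:9200",
                  "es_medias_test2",
                  extra_flags=("--copy_settings", "--copy_mappings"))
print(shlex.join(cmd))
```

The token list can be passed directly to subprocess.run, which avoids shell-quoting issues with query strings or regex index patterns.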

Usage:
  esm [OPTIONS]

Application Options:
  -s, --source=                source elasticsearch instance, ie: http://localhost:9200
  -q, --query=                 query against source elasticsearch instance, filter data before migrate, ie: name:medcl
  -d, --dest=                  destination elasticsearch instance, ie: http://localhost:9201
  -m, --source_auth=           basic auth of source elasticsearch instance, ie: user:pass
  -n, --dest_auth=             basic auth of target elasticsearch instance, ie: user:pass
  -c, --count=                 number of documents at a time: ie "size" in the scroll request (10000)
      --buffer_count=          number of buffered documents in memory (100000)
  -w, --workers=               concurrency number for bulk workers (1)
  -b, --bulk_size=             bulk size in MB (5)
  -t, --time=                  scroll time (1m)
      --sliced_scroll_size=    size of sliced scroll, to make it work, the size should be > 1 (1)
  -f, --force                  delete destination index before copying
  -a, --all                    copy indexes starting with . and _
      --copy_settings          copy index settings from source
      --copy_mappings          copy index mappings from source
      --shards=                set a number of shards on newly created indexes
  -x, --src_indexes=           indexes name to copy, support regex and comma separated list (_all)
  -y, --dest_index=            indexes name to save, allow only one indexname, original indexname will be used if not specified
  -u, --type_override=         override type name
      --green                  wait for both hosts cluster status to be green before dump. otherwise yellow is okay
  -v, --log=                   setting log level, options: trace, debug, info, warn, error (INFO)
  -o, --output_file=           output documents of source index into local file
  -i, --input_file=            indexing from local dump file
      --input_file_type=       the data type of input file, options: dump, json_line, json_array, log_line (dump)
      --source_proxy=          set proxy to source http connections, ie: http://127.0.0.1:8080
      --dest_proxy=            set proxy to target http connections, ie: http://127.0.0.1:8080
      --refresh                refresh after migration finished
      --fields=                filter source fields, comma separated, ie: col1,col2,col3,...
      --rename=                rename source fields, comma separated, ie: _type:type, name:myname
  -l, --logstash_endpoint=     target logstash tcp endpoint, ie: 127.0.0.1:5055
      --secured_logstash_endpoint  target logstash tcp endpoint was secured by TLS
      --repeat_times=          repeat the data from source N times to dest output, use align with parameter regenerate_id to amplify the data size
  -r, --regenerate_id          regenerate id for documents, this will override the exist document id in data source
      --compress               use gzip to compress traffic
  -p, --sleep=                 sleep N seconds after finished a bulk request (-1)

Help Options:
  -h, --help                   Show this help message
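After a migration finishes, comparing document counts between the source and destination clusters is a quick sanity check. The counts come from GET /<index>/_count on each cluster; the sketch below only shows the response parsing, and the sample response bodies (including the counts) are illustrative:

```python
import json

def doc_count(count_response: str) -> int:
    """Pull the total document count out of an Elasticsearch _count response body."""
    return json.loads(count_response)["count"]

# Illustrative response bodies, as returned by GET /test_index_003/_count
# on the source and destination clusters:
source_resp = '{"count": 1200, "_shards": {"total": 5, "successful": 5, "skipped": 0, "failed": 0}}'
dest_resp   = '{"count": 1200, "_shards": {"total": 5, "successful": 5, "skipped": 0, "failed": 0}}'

assert doc_count(source_resp) == doc_count(dest_resp), "document counts differ after migration"
print("counts match:", doc_count(source_resp))
```

Note that counts only match exactly once indexing has stopped on the source and the destination has been refreshed (esm's --refresh flag, or a manual POST /<index>/_refresh).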