I launched an Elasticsearch instance several months ago, but I overestimated the size of an average shard. My shards are now fairly small (400 MB max), and I found out that the recommended size is up to roughly 50 GB (based on a Stack Overflow search). I also hit the maximum shard count and had to reconfigure it. The proper approach is to change the sharding logic and basically make the shards larger. That is completely doable in my case, but the question is how to deal with the already existing shards. I can certainly replay the data, but it would take a LOT of time. So is there a better way to merge smaller shards into bigger ones?
Hello, you do have a few choices. The Shrink API is one of them, as @moliware mentioned. You can also merge indices and limit the number of shards with the _reindex API; you can have a look here for examples of how to limit the number of indices. – Mr.Coffee, Nov 2, 2017 at 11:48
1 Answer
As far as I know you can't merge shards on the fly. This used to be one of those situations where you had to reindex your data (a sketch of the _reindex route follows at the end of this answer).
Elasticsearch now provides a way of making this situation a bit less painful: the Shrink API lets you create a new index whose number of primary shards is a factor of the number of primary shards of the source index. For example, if your index has 12 shards, the shrunk index can have 6, 4, 3, 2, or 1 primary shards.
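In practice a shrink takes three calls: block writes on the source, wait for a copy of every shard to land on one node, then shrink. Here is a minimal sketch using the Python client (elasticsearch-py, 7.x style); the index names logs-v1 / logs-v1-shrunk and the node name node-1 are assumptions for the example, not anything from the question:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# 1. Make the source read-only and pull a copy of every shard onto one
#    node -- both preconditions the Shrink API checks before it runs.
es.indices.put_settings(
    index="logs-v1",  # hypothetical source index with 12 shards
    body={
        "index.blocks.write": True,
        "index.routing.allocation.require._name": "node-1",  # any data node
    },
)

# 2. Wait until the shard relocation has finished.
es.cluster.health(index="logs-v1", wait_for_no_relocating_shards=True, timeout="5m")

# 3. Shrink into a new index whose shard count (here 1) is a factor of
#    the source's, clearing the copied-over restrictions on the target.
es.indices.shrink(
    index="logs-v1",
    target="logs-v1-shrunk",
    body={
        "settings": {
            "index.number_of_shards": 1,
            "index.blocks.write": None,                       # serialized as null
            "index.routing.allocation.require._name": None,
        }
    },
)
```

Shrinking is fast because, roughly speaking, it links the existing segment files into the new index rather than copying documents, which is also why the factor restriction exists: each target shard is stitched together from a whole number of source shards.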
I hope it helps!
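For completeness, a similarly hedged sketch of the _reindex route mentioned above: it rewrites every document, so it is far slower than shrinking, but it works for any target shard count (not just factors) and can merge several indices into one. All index names here are hypothetical:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Create the target index up front so you control its shard count.
es.indices.create(
    index="logs-2017.10",
    body={"settings": {"index.number_of_shards": 1}},
)

# Copy every document from the matching source indices into the target.
es.reindex(
    body={
        "source": {"index": "logs-2017.10.*"},  # wildcard merges many indices
        "dest": {"index": "logs-2017.10"},
    },
    wait_for_completion=False,  # returns a task id; poll it for big indices
)
```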
1 Comment
Le D. Thang
Thanks. This helps me.