
I launched an Elasticsearch instance several months ago, but I overestimated the size of an average shard. My shards are now fairly small (400 MB at most), and I have since found that the recommended size is roughly 50 GB (based on a Stack Overflow search). I also hit the maximum shard count and had to raise the limit. The proper fix is to change the sharding logic and make the shards larger. That is entirely doable in my case, but the question is what to do with the shards that already exist. I can certainly replay the data, but that would take a LOT of time. Is there a better way to merge small shards into big ones?
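For reference, you can inspect the current shard sizes with the `_cat/shards` API. A minimal sketch using Python's `requests`, assuming the cluster is reachable at `localhost:9200`:

```python
import requests

# List every shard with its on-disk size, largest first.
# The localhost URL is an assumption about your setup.
resp = requests.get(
    "http://localhost:9200/_cat/shards",
    params={"v": "true", "h": "index,shard,prirep,store", "s": "store:desc"},
)
resp.raise_for_status()
print(resp.text)
```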

Hello, you do have a few choices. The Shrink API is one of them, as @moliware mentioned. You can also merge indices and limit the number of shards with the _reindex API. You can have a look here for examples of how to limit the number of indices. Commented Nov 2, 2017 at 11:48
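For illustration, merging several small indices into one index with fewer shards via `_reindex` looks roughly like this. The index names, shard count, and localhost URL are assumptions; adapt them to your cluster:

```python
import requests

ES = "http://localhost:9200"  # assumption: local cluster

# Create the merged target index with a single primary shard
# (index names here are hypothetical).
requests.put(
    f"{ES}/logs-merged",
    json={"settings": {"index": {"number_of_shards": 1}}},
).raise_for_status()

# Copy documents from each small source index into the merged one.
for source in ["logs-2017-10", "logs-2017-11"]:
    resp = requests.post(
        f"{ES}/_reindex",
        params={"wait_for_completion": "true"},
        json={"source": {"index": source}, "dest": {"index": "logs-merged"}},
    )
    resp.raise_for_status()
    print(source, "->", resp.json().get("total"), "docs reindexed")
```

Note that `_reindex` copies documents, so it still costs time proportional to the data volume; it is mainly useful when you also want to change mappings or combine indices, rather than just reduce shard count.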

1 Answer


As far as I know, you can't merge shards on the fly. This used to be one of those situations where you had to reindex your data.

Elasticsearch now provides a way to make this situation less painful. The Shrink API lets you create a new index whose number of primary shards is a factor of the number of primary shards in the source index. For example, if your index has 12 shards, the shrunk index can have 6, 4, 3, 2, or 1 primary shards.
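As a minimal sketch of the shrink flow (the index names, node name, and localhost URL are assumptions), using Python's `requests`:

```python
import requests

ES = "http://localhost:9200"                # assumption: local cluster
SOURCE, TARGET = "logs-big", "logs-shrunk"  # hypothetical index names

# 1. Relocate all shard copies to one node and block writes;
#    the Shrink API requires both before it will run.
requests.put(
    f"{ES}/{SOURCE}/_settings",
    json={
        "settings": {
            "index.routing.allocation.require._name": "node-1",  # assumed node name
            "index.blocks.write": True,
        }
    },
).raise_for_status()

# 2. Shrink into the target index; the target's shard count
#    must be a factor of the source's shard count.
resp = requests.post(
    f"{ES}/{SOURCE}/_shrink/{TARGET}",
    json={"settings": {"index.number_of_shards": 1}},
)
resp.raise_for_status()
print(resp.json())
```

Shrinking works by hard-linking segments from the source shards rather than copying documents, so it is much faster than reindexing.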

I hope it helps!


1 Comment

Thanks, this helps me.
