Track bytes used by in-memory postings #129969


Open
wants to merge 5 commits into main

Conversation

jordan-powers (Contributor)

This patch adds a field totalPostingBytes to the ShardFields record that tracks the memory usage of the largest term, which may be stored in-memory by the postings FieldReader.

Most of this was already done by @dnhatn in #121476, but was never merged.
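For orientation, a minimal hypothetical sketch of where the new value would live (totalPostingBytes is the name from the description above; every other component here is a placeholder, not the real ShardFields definition):

// Hypothetical sketch only -- the actual ShardFields record has different components.
public record ShardFields(
    long indexedFieldCount,  // placeholder for the existing components
    long totalPostingBytes   // new: bytes of the largest term that the postings FieldReader may keep in memory
) {}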

@elasticsearchmachine (Collaborator)

Pinging @elastic/es-storage-engine (Team:StorageEngine)

@martijnvg (Member) left a comment:

Looks good, Jordan. I do wonder a little bit about the potential overhead of TrackingPostingsInMemoryBytesCodec. Maybe check this quickly with esbench?

@@ -2778,7 +2779,7 @@ private IndexWriterConfig getIndexWriterConfig() {
         iwc.setMaxFullFlushMergeWaitMillis(-1);
         iwc.setSimilarity(engineConfig.getSimilarity());
         iwc.setRAMBufferSizeMB(engineConfig.getIndexingBufferSize().getMbFrac());
-        iwc.setCodec(engineConfig.getCodec());
+        iwc.setCodec(new TrackingPostingsInMemoryBytesCodec(engineConfig.getCodec()));
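For readers less familiar with Lucene codecs: the changed line swaps the configured codec for a delegating wrapper. A minimal sketch of that delegation pattern follows (the class name is hypothetical; the real TrackingPostingsInMemoryBytesCodec added by this PR additionally wraps the postings format to measure term bytes):

import org.apache.lucene.codecs.Codec;
import org.apache.lucene.codecs.FilterCodec;
import org.apache.lucene.codecs.PostingsFormat;

// Sketch of the FilterCodec delegation pattern; not the PR's implementation.
public class DelegatingCodecSketch extends FilterCodec {

    public DelegatingCodecSketch(Codec delegate) {
        // Reusing the delegate's name keeps written segments readable as before
        // (an assumption about the approach, not verified against the PR).
        super(delegate.getName(), delegate);
    }

    @Override
    public PostingsFormat postingsFormat() {
        // The tracking codec would return a wrapping PostingsFormat here that
        // observes the terms written during flush and merge.
        return delegate.postingsFormat();
    }
}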
@martijnvg (Member) commented:

I wonder what the overhead is of always wrapping the codec in TrackingPostingsInMemoryBytesCodec. Maybe let's quickly run a benchmark? (elastic/logs?)

Additionally, I wonder whether this should be done for stateless only.

import java.io.IOException;
import java.util.function.IntConsumer;

public class TrackingPostingsInMemoryBytesCodec extends FilterCodec {
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Maybe add class-level javadocs explaining the purpose of this class?
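A class-level javadoc along these lines could cover it (wording is only an illustration, not taken from the PR):

/**
 * A {@link FilterCodec} that wraps the delegate's postings format in order to
 * estimate how many bytes of term data (for example the terms that the block-tree
 * FieldReader keeps on the JVM heap) a segment will retain in memory once opened,
 * so that indexing memory accounting can include them.
 */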

Comment on lines +96 to +101
Terms terms = super.terms(field);
if (terms == null) {
    return terms;
}
int fieldNum = fieldInfos.fieldInfo(field).number;
return new TrackingLengthTerms(terms, len -> maxLengths.put(fieldNum, Math.max(maxLengths.getOrDefault(fieldNum, 0), len)));
@martijnvg (Member) commented:

I wonder whether we can do this instead:

Suggested change
-Terms terms = super.terms(field);
-if (terms == null) {
-    return terms;
-}
-int fieldNum = fieldInfos.fieldInfo(field).number;
-return new TrackingLengthTerms(terms, len -> maxLengths.put(fieldNum, Math.max(maxLengths.getOrDefault(fieldNum, 0), len)));
+Terms terms = super.terms(field);
+// Only org.apache.lucene.codecs.lucene90.blocktree.FieldReader keeps min and max term in jvm heap,
+// so only account for these cases:
+if (terms instanceof FieldReader fieldReader) {
+    int fieldNum = fieldInfos.fieldInfo(field).number;
+    int length = fieldReader.getMin().length;
+    length += fieldReader.getMax().length;
+    maxLengths.put(fieldNum, length);
+}
+return terms;

This way there is way less wrapping. We only care about the min and max term, given that those are what is loaded in jvm heap.

@martijnvg (Member) commented:

Scratch that idea. The Terms implementation provided here is different: this gets invoked during indexing / merging, and during indexing the implementation is FreqProxTermsWriterPerField. Invoking getMax() is potentially expensive, as it causes reading ahead to figure out which term is the max; these terms later get read via the terms enum.

Comment on lines +129 to +137
public BytesRef next() throws IOException {
    final BytesRef term = super.next();
    if (term != null) {
        maxTermLength = Math.max(maxTermLength, term.length);
    } else {
        onFinish.accept(maxTermLength);
    }
    return term;
}
@martijnvg (Member) commented:

Given that we need to estimate the terms that get loaded in jvm heap, would the following be more accurate?

Suggested change
-public BytesRef next() throws IOException {
-    final BytesRef term = super.next();
-    if (term != null) {
-        maxTermLength = Math.max(maxTermLength, term.length);
-    } else {
-        onFinish.accept(maxTermLength);
-    }
-    return term;
-}
+int prevTermLength = 0;
+@Override
+public BytesRef next() throws IOException {
+    final BytesRef term = super.next();
+    if (term == null) {
+        maxTermLength += prevTermLength;
+        onFinish.accept(maxTermLength);
+        return term;
+    }
+    if (maxTermLength == 0) {
+        maxTermLength = term.length;
+    }
+    prevTermLength = term.length;
+    return term;
+}

In the org.apache.lucene.codecs.lucene90.blocktree.FieldReader class, the lexicographically lowest and highest terms are kept around in jvm heap. The current code just keeps track of the longest term and reports that, which doesn't map to the minTerm and maxTerm in FieldReader?
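To make the mismatch concrete, here is a small self-contained illustration (term values invented) comparing what FieldReader retains on heap with what longest-term tracking reports:

import org.apache.lucene.util.BytesRef;

// Terms come out of the terms enum in sorted order, e.g. "aa", "mmmmmmmm", "zz".
// FieldReader retains the min term "aa" (2 bytes) and the max term "zz" (2 bytes),
// while the longest term "mmmmmmmm" (8 bytes) is never kept on heap.
public class MinMaxVersusLongestExample {
    public static void main(String[] args) {
        BytesRef[] sortedTerms = { new BytesRef("aa"), new BytesRef("mmmmmmmm"), new BytesRef("zz") };

        int minPlusMax = sortedTerms[0].length + sortedTerms[sortedTerms.length - 1].length;
        int longest = 0;
        for (BytesRef term : sortedTerms) {
            longest = Math.max(longest, term.length);
        }

        System.out.println("min + max term bytes (what FieldReader keeps): " + minPlusMax);   // 4
        System.out.println("longest term bytes (what the current code tracks): " + longest); // 8
    }
}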
