The Wayback Machine - https://web.archive.org/web/20201228010237/https://github.com/dashpay/dash/pull/3495

Increase max headers #3495

Open · wants to merge 2 commits into base: develop
Conversation

@PastaPastaPasta (Collaborator) commented May 19, 2020

This was requested by @QuantumExplorer on the mobile team; he can probably provide a better justification for why this change is needed/valuable. 8000 was his suggestion, on the grounds that we have 4x as many blocks per unit time. Other values could turn out to be better, though, and this should be looked into.
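The 4x figure follows from target block spacing: Bitcoin aims for one block every 10 minutes, Dash for one every 2.5 minutes. A back-of-envelope check of the proposed limit (variable names here are illustrative, not taken from the patch):

```python
# Illustrative sanity check of the "4x as many blocks" reasoning.
BITCOIN_BLOCK_SPACING_SECS = 600  # Bitcoin targets one block per 10 minutes
DASH_BLOCK_SPACING_SECS = 150     # Dash targets one block per 2.5 minutes

blocks_ratio = BITCOIN_BLOCK_SPACING_SECS // DASH_BLOCK_SPACING_SECS
proposed_max_headers = 2000 * blocks_ratio  # 2000 is the limit inherited from Bitcoin

print(blocks_ratio, proposed_max_headers)  # 4 8000
```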

Signed-off-by: Pasta <pasta@dashboost.org>
@PastaPastaPasta PastaPastaPasta added this to the 17 milestone May 19, 2020
@UdjinM6 commented May 21, 2020

I don't see any upsides, to be honest, but there is clearly at least one downside: a headers message is ~160 KB max currently and would become ~640 KB after this change. That is how much data you have to download before you can start parsing it and determine whether everything is OK. I think it's important for light clients to keep messages small, especially during initial sync, so that they avoid wasting time/money on network traffic and can drop invalid nodes ASAP.
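The ~160 KB / ~640 KB figures follow from the P2P serialization: each entry in a headers message is an 80-byte block header plus a 1-byte transaction count (always zero in this message). A sketch of the arithmetic:

```python
# Rough maximum size of a P2P "headers" message at each batch limit.
HEADER_BYTES = 80     # serialized block header
TXN_COUNT_BYTES = 1   # varint tx count, always 0x00 in a headers message
per_entry = HEADER_BYTES + TXN_COUNT_BYTES

msg_2000 = 2000 * per_entry
msg_8000 = 8000 * per_entry
print(msg_2000, msg_8000)  # 162000 648000  (~160 KB vs ~640 KB)
```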

@QuantumExplorer commented May 21, 2020

@PastaPastaPasta and I tested today with approximately 1 million blocks and found that syncing 2000 headers at a time took an average of 341 seconds over 5 tests ([349.601353, 349.747569, 337.598572, 327.057693, 344.009796]), whereas 8000 headers at a time took an average of 251 seconds ([241.605458, 246.004083, 259.388101, 259.757268, 251.048101]). When we tried 32000, the first test took more than 350 seconds and not all 5 tests completed; I am looking into why.
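The quoted averages are just the means of the five runs; a quick check (the ~26% speedup figure is derived here, not stated in the thread):

```python
from statistics import mean

# Measured full-sync times (seconds) from the tests described above.
sync_secs_2000 = [349.601353, 349.747569, 337.598572, 327.057693, 344.009796]
sync_secs_8000 = [241.605458, 246.004083, 259.388101, 259.757268, 251.048101]

avg_2000 = mean(sync_secs_2000)  # ~341.6 s
avg_8000 = mean(sync_secs_8000)  # ~251.6 s
speedup_pct = 100 * (avg_2000 - avg_8000) / avg_2000
print(int(avg_2000), int(avg_8000), round(speedup_pct, 1))
```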

There is the downside you mentioned, @UdjinM6, but the vast majority of the time the message is fine and not from a Byzantine node. When it is from a Byzantine node, downloading 4x the data probably isn't so bad given the rarity of the event. Add an implementation of compressed headers, which saves close to 50% on data, and you end up with only 2x the data.

The major benefit is less idle time on the network, and this is most likely the reason for the improvement above. To speed up sync, our clients request the next batch of headers immediately after receiving one. Even so, we lose out on the round-trip latency of the messages, and possibly also some overhead in preparing the extra messages.

2000 headers probably made sense back in 2014, but one needs to take into account how much faster clients now process these headers. Round-trip latency isn't a problem if you always have headers queued up waiting to be processed; however, with more powerful phones, processing will often finish much faster than it takes to request the next batch of headers.
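The argument in the last two paragraphs can be made concrete with a toy sync-time model (all numbers below are illustrative, not measurements): with no pipelining, each batch costs one round trip, so larger batches amortize the latency over more headers.

```python
import math

def est_sync_secs(chain_height, batch_size, rtt_secs, per_header_proc_secs):
    """Toy model: total time = one round trip per batch + total processing.

    Assumes no pipelining; a larger batch_size means fewer round trips
    for the same chain height.
    """
    batches = math.ceil(chain_height / batch_size)
    return batches * rtt_secs + chain_height * per_header_proc_secs

# Illustrative numbers only: ~1M headers, 100 ms RTT, 0.1 ms per header.
for batch in (2000, 8000, 32000):
    print(batch, round(est_sync_secs(1_000_000, batch, 0.100, 0.0001), 1))
```

The model also shows the diminishing returns the 32000 test hints at: once latency is mostly amortized, processing time dominates and further batch-size increases buy little.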

I am continuing tests and will update this thread as I get results.

@UdjinM6 commented May 22, 2020

Interesting results @QuantumExplorer 👍 I would definitely like to see more data/benchmarks before changing these (network-wide) magic numbers; I will test various options too.

@codablock (Collaborator) commented Jun 18, 2020

I would prefer to have something like https://github.com/jimpo/bips/blob/headers-sync/headersv2.mediawiki implemented before we consider increasing these limits. This reduces transmitted header size by quite a bit.

@QuantumExplorer commented Jun 22, 2020

@codablock Yes, we should do both. Instead of changing the getheaders max size, I think we should just raise it on the new network call we add for the compressed headers.

@QuantumExplorer commented Jun 22, 2020

Here is the data I gathered on the time taken by headers calls at various batch sizes: https://docs.google.com/spreadsheets/u/1/d/15_F-E_V6hMGGpqVF3v_wx-o1a5ezZKD9Q7lX1Jq9rvk/edit#gid=0
