This repository was archived by the owner on Jun 5, 2025. It is now read-only.

Mitigate pressure on socket send buffer  #1402

@goshawk-3

Description


Describe what you want implemented
Options to consider:

  • Rate limiter - since these messages (the one-to-one Kadcast messaging) are secondary to Consensus messaging, they could be sent at e.g. 20 TPS (a config param). This would let a node queue messages in an internal queue instead of flooding the UDP buffers
  • Increase the udp_sender_buffer size at startup, as Kadcast already does with udp_recv_buffer
  • Increase the recovery rate in RaptorQ (the least preferable option for now)
  • Any combination of the above.
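The rate-limiter option above could be sketched as a small token-bucket-style sender: messages are enqueued instead of being written straight to the socket, and a periodic pump drains them at the configured rate. This is an illustrative Python sketch, not the node's actual (Rust) implementation; `RateLimitedSender`, `send_fn`, and `rate_per_sec` are hypothetical names.

```python
import time
from collections import deque

class RateLimitedSender:
    """Queue outgoing messages and drain them at a fixed rate
    (e.g. 20 msgs/s) instead of flooding the UDP send buffer.
    Illustrative sketch only; not the actual Kadcast code."""

    def __init__(self, send_fn, rate_per_sec=20):
        self.send_fn = send_fn              # e.g. sock.sendto wrapped with the peer address
        self.interval = 1.0 / rate_per_sec  # minimum spacing between sends
        self.queue = deque()                # internal message queue
        self.next_slot = time.monotonic()   # earliest time the next send is allowed

    def enqueue(self, msg):
        self.queue.append(msg)

    def pump(self):
        """Call periodically from the event loop; sends at most one
        message per elapsed interval. Returns the number sent."""
        now = time.monotonic()
        # After a long idle period, don't let the limiter "bank" a burst.
        if now - self.next_slot > self.interval:
            self.next_slot = now
        sent = 0
        while self.queue and now >= self.next_slot:
            self.send_fn(self.queue.popleft())
            self.next_slot += self.interval
            sent += 1
        return sent
```

A usage pattern would be to call `enqueue()` wherever the node currently sends one-to-one Kadcast messages, and drive `pump()` from the existing event loop; Consensus traffic would bypass the limiter entirely.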

Describe "Why" this is needed
In essence, we use RaptorQ to recover from UDP messages dropped in the outside network. We should not also lose UDP messages when writing them to the local udp_sender_buffer.

As per the finding in #1399 (comment), a strategy should be considered to avoid silently losing sent messages. As a first step, the issue from #1399 should be reproduced on Devnet.
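For the buffer-size option, enlarging the kernel send buffer is a standard `SO_SNDBUF` socket option. The sketch below is Python for illustration (the node itself is Rust, where the equivalent would go through the socket-configuration path Kadcast already uses for udp_recv_buffer); the function name and default size are assumptions, not the project's API.

```python
import socket

def make_udp_socket(sndbuf_bytes=4 * 1024 * 1024):
    """Create a UDP socket with an enlarged kernel send buffer,
    mirroring what Kadcast already does for udp_recv_buffer.
    Hypothetical helper for illustration only."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Ask the kernel for a larger send buffer. The kernel may cap the
    # request (on Linux, at net.core.wmem_max), so read the option back
    # to learn the effective size rather than trusting the request.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, sndbuf_bytes)
    effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
    return sock, effective
```

Note that the effective size can differ from the requested one in either direction (Linux, for instance, doubles the requested value for bookkeeping but caps it at `wmem_max`), so logging the read-back value at startup would make silent capping visible.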

Additional context
Issues:
