refactor(transport): merge build_packet_header and add_packet_number #3076
Conversation
Previously we would always first call `build_packet_header` and then, right thereafter, `add_packet_number`. This commit merges the two for the following reasons:

- Arguably the packet number is part of the header.
- No need for two functions, i.e. a less error-prone API.
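The call-site effect of the merge can be sketched as follows. This is a minimal, hypothetical model: `Header`, the function bodies, and the parameters are illustrative stand-ins, not the actual neqo-transport API.

```rust
// Illustrative stand-ins only; the real neqo-transport types and
// signatures differ.
type Header = Vec<u8>;

// Before: two functions that always had to be called back to back.
fn build_packet_header_old() -> Header {
    vec![0x40] // fake short-header first byte
}
fn add_packet_number_old(header: &mut Header, pn: u64) {
    header.extend_from_slice(&pn.to_be_bytes());
}

// After: one function that writes the header, encodes the packet
// number as part of it, and returns the number it used.
fn build_packet_header_new(next_pn: u64) -> (Header, u64) {
    let mut header = vec![0x40];
    header.extend_from_slice(&next_pn.to_be_bytes());
    (header, next_pn)
}

fn main() {
    // Old call pattern: easy to forget or misorder the second call.
    let mut old = build_packet_header_old();
    add_packet_number_old(&mut old, 5);

    // New call pattern: a single call, no ordering to get wrong.
    let (new, pn) = build_packet_header_new(5);

    assert_eq!(old, new);
    println!("pn={pn}, header_len={}", new.len()); // pn=5, header_len=9
}
```

The point of the merged shape is that the packet number can no longer be forgotten or added twice; the type system makes the single call the only way to produce a header.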
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #3076      +/-   ##
==========================================
- Coverage   93.40%   93.36%   -0.04%
==========================================
  Files         124      124
  Lines       36045    36036       -9
==========================================
- Hits        33667    33645      -22
- Misses       1534     1547      +13
  Partials      844      844
```
| Key | Value |
|---|---|
| Branch | add-packet-number |
| Testbed | On-prem |
🚨 2 Alerts
| Benchmark | Measure Units | View | Benchmark Result (Result Δ%) | Upper Boundary (Limit %) |
|---|---|---|---|---|
| 1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client | Latency milliseconds (ms) | 📈 plot 🚷 threshold 🚨 alert (🔔) | 38.69 ms (+31.99%); Baseline: 29.31 ms | 36.25 ms (106.74%) |
| decode 1048576 bytes, mask 7f | Latency milliseconds (ms) | 📈 plot 🚷 threshold 🚨 alert (🔔) | 5.12 ms (+0.97%); Baseline: 5.07 ms | 5.11 ms (100.09%) |
All benchmark results
| Benchmark | Latency | Benchmark Result nanoseconds (ns) (Result Δ%) | Upper Boundary nanoseconds (ns) (Limit %) |
|---|---|---|---|
| 1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client | 📈 view plot 🚷 view threshold | 201,660,000.00 ns (-2.60%); Baseline: 207,038,185.33 ns | 217,539,816.43 ns (92.70%) |
| 1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client | 📈 view plot 🚷 view threshold | 197,190,000.00 ns (-1.92%); Baseline: 201,040,540.54 ns | 212,410,329.65 ns (92.83%) |
| 1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client | 📈 view plot 🚷 view threshold 🚨 view alert (🔔) | 38,690,000.00 ns (+31.99%); Baseline: 29,311,783.78 ns | 36,245,854.91 ns (106.74%) |
| 1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client | 📈 view plot 🚷 view threshold | 288,670,000.00 ns (-1.25%); Baseline: 292,334,980.69 ns | 305,388,018.47 ns (94.53%) |
| 1-streams/each-1000-bytes/simulated-time | 📈 view plot 🚷 view threshold | 119,260,000.00 ns (+0.60%); Baseline: 118,550,888.03 ns | 120,828,379.20 ns (98.70%) |
| 1-streams/each-1000-bytes/wallclock-time | 📈 view plot 🚷 view threshold | 585,190.00 ns (-1.61%); Baseline: 594,781.78 ns | 618,947.17 ns (94.55%) |
| 1000-streams/each-1-bytes/simulated-time | 📈 view plot 🚷 view threshold | 14,991,000,000.00 ns (-0.01%); Baseline: 14,993,081,081.08 ns | 15,011,178,072.01 ns (99.87%) |
| 1000-streams/each-1-bytes/wallclock-time | 📈 view plot 🚷 view threshold | 13,657,000.00 ns (-2.85%); Baseline: 14,057,196.91 ns | 14,910,993.22 ns (91.59%) |
| 1000-streams/each-1000-bytes/simulated-time | 📈 view plot 🚷 view threshold | 19,104,000,000.00 ns (+0.79%); Baseline: 18,954,135,135.14 ns | 19,231,860,477.04 ns (99.34%) |
| 1000-streams/each-1000-bytes/wallclock-time | 📈 view plot 🚷 view threshold | 48,359,000.00 ns (-5.44%); Baseline: 51,138,471.04 ns | 57,816,304.97 ns (83.64%) |
| RxStreamOrderer::inbound_frame() | 📈 view plot 🚷 view threshold | 109,190,000.00 ns (-0.40%); Baseline: 109,631,583.01 ns | 111,700,948.16 ns (97.75%) |
| coalesce_acked_from_zero 1+1 entries | 📈 view plot 🚷 view threshold | 88.82 ns (+0.23%); Baseline: 88.62 ns | 89.27 ns (99.49%) |
| coalesce_acked_from_zero 10+1 entries | 📈 view plot 🚷 view threshold | 105.41 ns (-0.59%); Baseline: 106.04 ns | 107.07 ns (98.45%) |
| coalesce_acked_from_zero 1000+1 entries | 📈 view plot 🚷 view threshold | 89.13 ns (-0.73%); Baseline: 89.79 ns | 94.05 ns (94.77%) |
| coalesce_acked_from_zero 3+1 entries | 📈 view plot 🚷 view threshold | 106.01 ns (-0.50%); Baseline: 106.54 ns | 107.54 ns (98.58%) |
| decode 1048576 bytes, mask 3f | 📈 view plot 🚷 view threshold | 1,766,200.00 ns (+8.56%); Baseline: 1,626,922.39 ns | 1,790,173.87 ns (98.66%) |
| decode 1048576 bytes, mask 7f | 📈 view plot 🚷 view threshold 🚨 view alert (🔔) | 5,115,200.00 ns (+0.97%); Baseline: 5,066,077.22 ns | 5,110,731.00 ns (100.09%) |
| decode 1048576 bytes, mask ff | 📈 view plot 🚷 view threshold | 3,020,400.00 ns (-0.28%); Baseline: 3,028,891.89 ns | 3,046,997.49 ns (99.13%) |
| decode 4096 bytes, mask 3f | 📈 view plot 🚷 view threshold | 5,549.00 ns (-28.45%); Baseline: 7,755.17 ns | 10,312.01 ns (53.81%) |
| decode 4096 bytes, mask 7f | 📈 view plot 🚷 view threshold | 19,405.00 ns (-2.43%); Baseline: 19,888.19 ns | 20,455.08 ns (94.87%) |
| decode 4096 bytes, mask ff | 📈 view plot 🚷 view threshold | 10,632.00 ns (-7.60%); Baseline: 11,506.17 ns | 12,528.73 ns (84.86%) |
| sent::Packets::take_ranges | 📈 view plot 🚷 view threshold | 4,541.40 ns (-4.15%); Baseline: 4,737.80 ns | 4,976.12 ns (91.26%) |
| transfer/pacing-false/same-seed/simulated-time/run | 📈 view plot 🚷 view threshold | 25,710,000,000.00 ns (+1.24%); Baseline: 25,395,175,097.28 ns | 26,042,907,941.12 ns (98.72%) |
| transfer/pacing-false/same-seed/wallclock-time/run | 📈 view plot 🚷 view threshold | 25,968,000.00 ns (+0.15%); Baseline: 25,928,743.19 ns | 26,981,467.19 ns (96.24%) |
| transfer/pacing-false/varying-seeds/simulated-time/run | 📈 view plot 🚷 view threshold | 25,151,000,000.00 ns (-0.08%); Baseline: 25,171,677,042.80 ns | 25,220,019,094.78 ns (99.73%) |
| transfer/pacing-false/varying-seeds/wallclock-time/run | 📈 view plot 🚷 view threshold | 25,622,000.00 ns (-1.50%); Baseline: 26,011,762.65 ns | 27,480,663.74 ns (93.24%) |
| transfer/pacing-true/same-seed/simulated-time/run | 📈 view plot 🚷 view threshold | 25,675,000,000.00 ns (+0.19%); Baseline: 25,625,914,396.89 ns | 25,726,905,001.57 ns (99.80%) |
| transfer/pacing-true/same-seed/wallclock-time/run | 📈 view plot 🚷 view threshold | 27,204,000.00 ns (-0.19%); Baseline: 27,256,898.83 ns | 28,634,362.37 ns (95.00%) |
| transfer/pacing-true/varying-seeds/simulated-time/run | 📈 view plot 🚷 view threshold | 24,998,000,000.00 ns (+0.02%); Baseline: 24,994,058,365.76 ns | 25,043,228,331.25 ns (99.82%) |
| transfer/pacing-true/varying-seeds/wallclock-time/run | 📈 view plot 🚷 view threshold | 25,985,000.00 ns (-2.02%); Baseline: 26,520,540.86 ns | 28,062,506.05 ns (92.60%) |
Pull Request Overview
This PR refactors the packet header building process by merging two previously separate functions (build_packet_header and add_packet_number) into a single function. The packet number is now generated and added as part of the header building process, simplifying the API and reducing the potential for errors.
Key Changes:
- Merged `add_packet_number` into `build_packet_header`, eliminating the need to call two functions sequentially
- Updated the return signature of `build_packet_header` to include the packet number
- Removed the standalone `add_packet_number` function
Comments suppressed due to low confidence (1)
neqo-transport/src/connection/mod.rs:1
- The `builder` variable is declared as mutable at line 3825 but is never mutated after the function call. Remove the `mut` keyword to accurately reflect its usage.
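The suggestion above concerns rustc's `unused_mut` lint. A minimal illustration of the pattern (hypothetical code, not the actual neqo source):

```rust
// A stand-in function that takes ownership of a buffer.
fn header_len(buf: Vec<u8>) -> usize {
    buf.len()
}

fn main() {
    // Writing `let mut buf = ...` here would compile, but since `buf` is
    // never mutated after creation, rustc's default `unused_mut` lint
    // warns: "variable does not need to be mutable". Dropping `mut`
    // documents that the binding is read-only.
    let buf = vec![0u8; 4];
    println!("{}", header_len(buf)); // prints "4"
}
```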
| Key | Value |
|---|---|
| Branch | add-packet-number |
| Testbed | On-prem |
All benchmark results
| Benchmark | Latency | Benchmark Result milliseconds (ms) (Result Δ%) | Upper Boundary milliseconds (ms) (Limit %) |
|---|---|---|---|
| google vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 276.56 ms (-0.40%); Baseline: 277.68 ms | 280.35 ms (98.65%) |
| msquic vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 201.72 ms (+2.57%); Baseline: 196.67 ms | 233.12 ms (86.53%) |
| neqo vs. google (cubic, paced) | 📈 view plot 🚷 view threshold | 760.74 ms (+0.44%); Baseline: 757.42 ms | 764.38 ms (99.52%) |
| neqo vs. msquic (cubic, paced) | 📈 view plot 🚷 view threshold | 158.48 ms (+0.49%); Baseline: 157.70 ms | 160.69 ms (98.62%) |
| neqo vs. neqo (cubic) | 📈 view plot 🚷 view threshold | 94.20 ms (+3.90%); Baseline: 90.67 ms | 95.04 ms (99.12%) |
| neqo vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 94.92 ms (+3.10%); Baseline: 92.06 ms | 96.30 ms (98.56%) |
| neqo vs. neqo (reno) | 📈 view plot 🚷 view threshold | 92.73 ms (+2.25%); Baseline: 90.68 ms | 94.81 ms (97.81%) |
| neqo vs. neqo (reno, paced) | 📈 view plot 🚷 view threshold | 95.48 ms (+3.80%); Baseline: 91.99 ms | 96.22 ms (99.24%) |
| neqo vs. quiche (cubic, paced) | 📈 view plot 🚷 view threshold | 195.51 ms (+0.92%); Baseline: 193.73 ms | 197.12 ms (99.18%) |
| neqo vs. s2n (cubic, paced) | 📈 view plot 🚷 view threshold | 221.89 ms (+0.45%); Baseline: 220.89 ms | 223.66 ms (99.21%) |
| quiche vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 155.01 ms (+1.28%); Baseline: 153.05 ms | 158.56 ms (97.76%) |
| s2n vs. neqo (cubic, paced) | 📈 view plot 🚷 view threshold | 175.03 ms (+0.48%); Baseline: 174.20 ms | 178.29 ms (98.17%) |
Pull Request Overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated no new comments.
Failed Interop Tests

QUIC Interop Runner, client vs. server, differences relative to 51fe8fc.

- neqo-latest as client
- neqo-latest as server

Succeeded Interop Tests

QUIC Interop Runner, client vs. server.

- neqo-latest as client
- neqo-latest as server

Unsupported Interop Tests

QUIC Interop Runner, client vs. server.

- neqo-latest as client
- neqo-latest as server
Client/server transfer results

Performance differences relative to ab40a79. Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.
Benchmark results

Performance differences relative to ab40a79.

1-conn/1-100mb-resp/mtu-1504 (aka. Download)/client: Change within noise threshold. time: [196.82 ms 197.19 ms 197.59 ms]
thrpt: [506.10 MiB/s 507.12 MiB/s 508.07 MiB/s]
change:
time: [−1.1842% −0.9367% −0.6855%] (p = 0.00 < 0.05)
thrpt: [+0.6903% +0.9456% +1.1984%]
1-conn/10_000-parallel-1b-resp/mtu-1504 (aka. RPS)/client: No change in performance detected. time: [286.91 ms 288.67 ms 290.49 ms]
thrpt: [34.425 Kelem/s 34.642 Kelem/s 34.855 Kelem/s]
change:
time: [−0.6636% +0.1570% +0.9789%] (p = 0.70 > 0.05)
thrpt: [−0.9694% −0.1568% +0.6680%]
1-conn/1-1b-resp/mtu-1504 (aka. HPS)/client: No change in performance detected. time: [38.533 ms 38.690 ms 38.862 ms]
thrpt: [25.732 B/s 25.847 B/s 25.952 B/s]
change:
time: [−0.2661% +0.3427% +0.9505%] (p = 0.27 > 0.05)
thrpt: [−0.9416% −0.3415% +0.2668%]
1-conn/1-100mb-req/mtu-1504 (aka. Upload)/client: 💚 Performance has improved. time: [201.26 ms 201.66 ms 202.14 ms]
thrpt: [494.72 MiB/s 495.89 MiB/s 496.87 MiB/s]
change:
time: [−2.9171% −2.6249% −2.3273%] (p = 0.00 < 0.05)
thrpt: [+2.3827% +2.6956% +3.0048%]
decode 4096 bytes, mask ff: No change in performance detected. time: [10.604 µs 10.632 µs 10.668 µs]
change: [−0.3032% +0.0682% +0.4394%] (p = 0.74 > 0.05)
decode 1048576 bytes, mask ff: No change in performance detected. time: [3.0083 ms 3.0204 ms 3.0352 ms]
change: [−0.4825% +0.1457% +0.7135%] (p = 0.65 > 0.05)
decode 4096 bytes, mask 7f: No change in performance detected. time: [19.351 µs 19.405 µs 19.467 µs]
change: [−0.1833% +0.2438% +0.7128%] (p = 0.29 > 0.05)
decode 1048576 bytes, mask 7f: No change in performance detected. time: [5.0890 ms 5.1152 ms 5.1532 ms]
change: [−1.0974% −0.1605% +0.7258%] (p = 0.75 > 0.05)
decode 4096 bytes, mask 3f: No change in performance detected. time: [5.5242 µs 5.5490 µs 5.5805 µs]
change: [−0.2758% +0.3856% +1.0770%] (p = 0.28 > 0.05)
decode 1048576 bytes, mask 3f: No change in performance detected. time: [1.7595 ms 1.7662 ms 1.7743 ms]
change: [−0.5391% +0.0032% +0.6151%] (p = 0.97 > 0.05)
1-streams/each-1000-bytes/wallclock-time: No change in performance detected. time: [583.09 µs 585.19 µs 587.58 µs]
change: [−0.1551% +0.2692% +0.6999%] (p = 0.23 > 0.05)
1000-streams/each-1-bytes/wallclock-time: Change within noise threshold. time: [13.638 ms 13.657 ms 13.676 ms]
change: [−0.7201% −0.4671% −0.2154%] (p = 0.00 < 0.05)
1000-streams/each-1000-bytes/wallclock-time: Change within noise threshold. time: [48.187 ms 48.359 ms 48.530 ms]
change: [−1.1759% −0.6728% −0.1582%] (p = 0.01 < 0.05)
1000-streams/each-1000-bytes/simulated-time: No change in performance detected. time: [18.931 s 19.104 s 19.278 s]
thrpt: [50.658 KiB/s 51.119 KiB/s 51.587 KiB/s]
change:
time: [−1.3431% −0.0140% +1.2940%] (p = 0.98 > 0.05)
thrpt: [−1.2774% +0.0140% +1.3614%]
coalesce_acked_from_zero 1+1 entries: No change in performance detected. time: [88.429 ns 88.819 ns 89.219 ns]
change: [−0.1169% +0.2865% +0.7046%] (p = 0.19 > 0.05)
coalesce_acked_from_zero 3+1 entries: No change in performance detected. time: [105.62 ns 106.01 ns 106.44 ns]
change: [−0.3009% +0.0934% +0.4787%] (p = 0.64 > 0.05)
coalesce_acked_from_zero 10+1 entries: No change in performance detected. time: [105.01 ns 105.41 ns 105.90 ns]
change: [−0.1698% +0.4598% +1.2610%] (p = 0.25 > 0.05)
coalesce_acked_from_zero 1000+1 entries: No change in performance detected. time: [88.992 ns 89.130 ns 89.289 ns]
change: [−0.4984% +0.6304% +1.7050%] (p = 0.29 > 0.05)
RxStreamOrderer::inbound_frame(): Change within noise threshold. time: [108.94 ms 109.19 ms 109.55 ms]
change: [−0.9299% −0.6255% −0.2792%] (p = 0.00 < 0.05)
sent::Packets::take_ranges: No change in performance detected. time: [4.4218 µs 4.5414 µs 4.6660 µs]
change: [−6.2724% −2.8804% +0.4579%] (p = 0.10 > 0.05)
transfer/pacing-false/varying-seeds/wallclock-time/run: Change within noise threshold. time: [25.584 ms 25.622 ms 25.661 ms]
change: [+1.0449% +1.2892% +1.5275%] (p = 0.00 < 0.05)
transfer/pacing-false/varying-seeds/simulated-time/run: No change in performance detected. time: [25.116 s 25.151 s 25.186 s]
thrpt: [162.63 KiB/s 162.86 KiB/s 163.08 KiB/s]
change:
time: [−0.2512% −0.0425% +0.1622%] (p = 0.67 > 0.05)
thrpt: [−0.1619% +0.0425% +0.2518%]
transfer/pacing-true/varying-seeds/wallclock-time/run: No change in performance detected. time: [25.917 ms 25.985 ms 26.054 ms]
change: [−0.1209% +0.2673% +0.6322%] (p = 0.17 > 0.05)
transfer/pacing-true/varying-seeds/simulated-time/run: No change in performance detected. time: [24.964 s 24.998 s 25.033 s]
thrpt: [163.62 KiB/s 163.85 KiB/s 164.08 KiB/s]
change:
time: [−0.1587% +0.0404% +0.2410%] (p = 0.71 > 0.05)
thrpt: [−0.2404% −0.0404% +0.1589%]
transfer/pacing-false/same-seed/wallclock-time/run: Change within noise threshold. time: [25.936 ms 25.968 ms 26.014 ms]
change: [+2.2679% +2.4195% +2.6156%] (p = 0.00 < 0.05)
transfer/pacing-false/same-seed/simulated-time/run: No change in performance detected. time: [25.710 s 25.710 s 25.710 s]
thrpt: [159.31 KiB/s 159.31 KiB/s 159.31 KiB/s]
change:
time: [+0.0000% +0.0000% +0.0000%] (p = NaN > 0.05)
thrpt: [+0.0000% +0.0000% +0.0000%]
transfer/pacing-true/same-seed/wallclock-time/run: Change within noise threshold. time: [27.169 ms 27.204 ms 27.255 ms]
change: [+1.9681% +2.1950% +2.4281%] (p = 0.00 < 0.05)
transfer/pacing-true/same-seed/simulated-time/run: No change in performance detected. time: [25.675 s 25.675 s 25.675 s]
thrpt: [159.53 KiB/s 159.53 KiB/s 159.53 KiB/s]
change:
time: [+0.0000% +0.0000% +0.0000%] (p = NaN > 0.05)
thrpt: [+0.0000% +0.0000% +0.0000%]