
Sliding Sync: Handle timeline limit changes (take 2) #17579

Merged
erikjohnston merged 29 commits into develop from erikj/ss_room_sub2 on Aug 20, 2024

Conversation

erikjohnston
Member

This supersedes #17503; given that the per-connection state is being heavily rewritten, it felt easier to recreate the PR on top of that work.

This correctly handles the case of timeline limits going up and down.

This does not handle changes in required_state, but that can be done as a separate PR.

Based on #17575.
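
For illustration, a minimal sketch of the kind of request change this PR handles: a client raising `timeline_limit` for a room on an existing sliding sync connection. The request shape below (a `room_subscriptions` map with per-room `timeline_limit`/`required_state`) is an assumption based on the MSC3575-style API discussed in the review, not something taken from this diff.

```python
# Hypothetical sliding sync request bodies, shown as Python dicts.

# First request on the connection: only ask for the single latest event per room.
first_request = {
    "room_subscriptions": {
        "!room:example.org": {"timeline_limit": 1, "required_state": []},
    },
}

# Later request on the same connection: the client wants more history, so it
# raises the limit. With this PR the server notices the increase and sends a
# larger chunk of timeline once, flagging it with `unstable_expanded_timeline`.
second_request = {
    "room_subscriptions": {
        "!room:example.org": {"timeline_limit": 20, "required_state": []},
    },
}
```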

@erikjohnston erikjohnston marked this pull request as ready for review August 16, 2024 14:04
@erikjohnston erikjohnston requested a review from a team as a code owner August 16, 2024 14:04
Base automatically changed from erikj/ss_receipts2 to develop August 19, 2024 20:16
tests/rest/client/sliding_sync/test_rooms_timeline.py (resolved, outdated)
tests/rest/client/sliding_sync/test_rooms_timeline.py (resolved, outdated)
synapse/handlers/sliding_sync.py (resolved)
synapse/handlers/sliding_sync.py (resolved, outdated)
synapse/handlers/sliding_sync.py (resolved)
synapse/handlers/sliding_sync.py (resolved, outdated)
# point sending down extra events when the timeline limit is
# increased as the client already has the 10 previous events.
# However, if there is a gap (i.e. limited is True), then we *do* need to
# record the reduced timeline.
Contributor

Overall, I just don't get it. Related to the comment above

I'm confused why we're even updating `new_connection_state` here, given we set it above though

Member Author

Sorry, looks like I moved the code and then just didn't remove it from the old place.

I think the example explains the motivating case we're trying to handle? Another way of putting it is that if the response isn't limited, then we know we have sent down at least the previous `timeline_limit` worth of events, and so we don't need to record the reduced timeline limit until we actually see a limited response.

Contributor

After talking with @erikjohnston and some more thinking, I think I understand the optimization here.

The goal of timeline trickling/unstable_expanded_timeline is that the client can get at-least a chunk of current messages to fill up the screen. So if we were already able to give the client a nice contiguous chunk of current timeline, when they increase the timeline_limit again, we can tell that they already have enough messages and don't need to trigger unstable_expanded_timeline again.

If the response is limited, the client has a gap in the timeline. So if they trigger unstable_expanded_timeline on a subsequent request, we can give them a full chunk of history again. This won't necessarily fill the gap but they will get enough events to fill up the screen and they can paginate from there.

I'm not sure this explanation would have made it more clear from the beginning.
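
To make that rule concrete, here is a rough sketch in Python. This is not the actual Synapse handler code; the function name and arguments are made up for illustration, and it only mirrors the decision described above (only remember a reduced `timeline_limit` once a limited response shows the client actually has a gap).

```python
def should_record_reduced_timeline_limit(
    previous_timeline_limit: int,
    requested_timeline_limit: int,
    response_is_limited: bool,
) -> bool:
    """Illustrative only: decide whether to persist a *reduced* timeline
    limit for a room in the per-connection state."""
    if requested_timeline_limit >= previous_timeline_limit:
        # Limit stayed the same or went up: nothing to remember here. An
        # increase is handled by the separate `unstable_expanded_timeline`
        # path, which decides whether to send a bigger chunk of history.
        return False

    if not response_is_limited:
        # Limit went down but the response wasn't limited, so the client
        # already has at least the previous `timeline_limit` worth of
        # contiguous events. A later increase doesn't need anything re-sent,
        # so don't record the reduction yet.
        return False

    # Limit went down *and* there is a gap (`limited=True`): the client no
    # longer has a contiguous chunk, so record the reduced limit. A later
    # increase can then trigger `unstable_expanded_timeline` again.
    return True
```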

synapse/handlers/sliding_sync.py (resolved, outdated)
@@ -0,0 +1 @@
Handle changes in `timeline_limit` in experimental sliding sync.
Contributor

@MadLittleMods MadLittleMods Aug 19, 2024

This continues to feel horrible, especially given new edge cases like this comment. Highly recommend we just update the client to use an initial sync request with `timeline_limit: 20` and `required_state: []` (which allows us to avoid the extra bytes) to accomplish the exact same thing without introducing any of this bizarre behavior.

Previous conversation for context

Contributor

Talked with @erikjohnston a bit more and was trying to figure out why the initial sync request doesn't solve this completely. I basically asked the opposite question for how/why timeline trickling/unstable_expanded_timeline makes this easier. The following question unlocked a better understanding of one complication when trying to use the initial sync route.

How is the timeline stitching done for the timeline trickling? How is the problem easier when using that?

[...] it's all coming down one connection, so you know you've got a consistent "current" timeline chunk that you'll get updates for (which you can then optionally stitch together with whatever timeline chunks the client currently has)

This is a valid point!

I'm still leaning towards the side of initial sync being possible to use (and better); it just requires some basic timeline stitching logic. ElementX might already have some stitching and event de-duplication logic (to handle what the proxy was doing before) that would also cover this case.

Since ElementX doesn't have offline support and throws away events, I think we could just do this:

For the timeline stitching logic, the client can store the latest event in the timeline before our initial sync, then find that event in the initial sync timeline events and spread backwards from that point. That way, the ongoing sync loop can still append to the end of the timeline and continue seamlessly.

So if we have a timeline [103] already on the client, we store latest_event_id = 103 and do our initial sync, which returns [100, 101, 102, 103, 104], and we splice/spread in only [100, 101, 102, 103] accordingly (dropping any new events after the latest_event_id from the initial sync response). This makes it so that even if the ongoing sync loop sends 104 before or after our initial sync does, it still appends like normal and everything stays in seamless order.

If there are so many new messages sent in the time between us storing the latest_event_id and the initial sync responding that we now have a gap, we can just throw away our initial sync events because we have enough events to fill up the timeline just from our normal ongoing sync loop.


To be clear, the client doesn't need to be fancy about stitching:

If the client had more timeline like [98, 99, 100, 101, 102, 103], we store latest_event_id = 103 and start the initial sync; our ongoing sync loop races us and returns 104, which makes our timeline look like [98, 99, 100, 101, 102, 103, 104]. Then our initial sync responds with [100, 101, 102, 103, 104]; we find the 103 spot in the response to slice at and place it at the 103 spot in the client timeline, leaving us with [100, 101, 102, 103, 104].

Pseudo code (maybe off-by-one errors):

latest_event_id = 103

# ... do the initial sync request ...

initial_sync_timeline = [100, 101, 102, 103, 104]
client_timeline = [98, 99, 100, 101, 102, 103, 104]

# Only stitch if we can find the `latest_event_id` in the response.
# If we can't, so many messages were sent between making the initial sync
# request and getting the response that it is no longer relevant: we already
# have enough events to fill up the timeline from the normal ongoing sync loop.
if latest_event_id in initial_sync_timeline:
    event_index_in_response = initial_sync_timeline.index(latest_event_id)
    event_index_in_client_timeline = client_timeline.index(latest_event_id)
    # Update the timeline: events before `latest_event_id` come from the
    # initial sync response; `latest_event_id` onwards comes from what the
    # client already has, so nothing from the ongoing sync loop is dropped.
    client_timeline = (
        initial_sync_timeline[:event_index_in_response]
        + client_timeline[event_index_in_client_timeline:]
    )

@@ -0,0 +1 @@
Handle changes in `timeline_limit` in experimental sliding sync.
Contributor

We will need this same room sync config tracking for `required_state` (and probably filter/extension) changes, so overall the concept isn't lost.

Comment on lines +1047 to +1050
if room_result.unstable_expanded_timeline:
    serialized_rooms[room_id][
        "unstable_expanded_timeline"
    ] = room_result.unstable_expanded_timeline
Contributor

I think it would be better if clients provided the unstable_expanded_timeline option to opt in to the odd behavior.

But providing unstable_expanded_timeline on the response is better than nothing; clients can at-least detect when they've triggered this behavior. And we also at-least have a chance of migrating away from this behavior.
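
As a rough illustration of that detection, a client-side sketch (the overall response shape and the `handle_expanded_timeline`/`append_incremental_timeline` helpers are assumptions; only the `unstable_expanded_timeline` field itself comes from this PR):

```python
# Sketch only: `sync_response` is a parsed sliding sync response body.
for room_id, room in sync_response.get("rooms", {}).items():
    if room.get("unstable_expanded_timeline"):
        # The server sent a fresh, larger chunk of recent timeline rather
        # than a normal incremental update, so replace the local timeline
        # chunk instead of appending to it.
        handle_expanded_timeline(room_id, room.get("timeline", []))
    else:
        append_incremental_timeline(room_id, room.get("timeline", []))
```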

synapse/handlers/sliding_sync.py (resolved)
@@ -0,0 +1 @@
Handle changes in `timeline_limit` in experimental sliding sync.
Contributor

Possible edge case: ElementX triggers unstable_expanded_timeline by increasing the timeline_limit and gets a chunk of timeline, and we record that higher timeline_limit. I go to bed and then I wake up (or there's just some period of time in which new messages were sent but I didn't have the app open). How does ElementX get timeline for all of the events in the gap? If it tries to trigger unstable_expanded_timeline again, it won't work because the last recorded timeline_limit is already just as high.

Member Author

I think this is fine. There are two cases, depending on whether the timeline limit in the morning is small or high:

For the small case:

  • When the client comes online it'll get a small bit of history with limited=true.
  • The client can then either backpaginate in the room, or do a room sub with a larger timeline limit.

For the high case:

  • When the client comes online it'll get a large chunk of history with limited=true. This is (hopefully) enough to show a screen's worth of data, which is what we want.
  • If the client wants more it can just backpaginate.

What EX at least wants is to quickly be able to get enough chunks of history in rooms (in the background) to be able to show a screen's worth of data. That way the UX is: open the app, see a fast sync (in the background it preloads the top N rooms with more timeline), the user clicks on one of the rooms and sees a page of timeline, and then the app can paginate in more timeline as usual (via /messages).

synapse/handlers/sliding_sync.py (resolved, outdated)

Contributor

@MadLittleMods MadLittleMods left a comment

Assuming the edge case from the last round of review isn't a show stopper, I guess this has gotten to the point that it is good enough to ship.

Overall, this feels like a step backwards (better than originally proposed though 🙇) for good clients, taking everything into account. I fear that we won't touch this again at-least until after the ElementX launch, at which point I'll be gone and this might live on. Hoping we actually get to that better/brighter future that doesn't include this behavior 🤞

erikjohnston and others added 4 commits August 20, 2024 10:05
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
Co-authored-by: Eric Eastwood <eric.eastwood@beta.gouv.fr>
@erikjohnston erikjohnston merged commit 6eb98a4 into develop Aug 20, 2024
39 checks passed
@erikjohnston erikjohnston deleted the erikj/ss_room_sub2 branch August 20, 2024 09:31
netbsd-srcmastr pushed a commit to NetBSD/pkgsrc that referenced this pull request Sep 14, 2024