
ER_READONLY error receives new reasons #2953


Closed · Tracked by #3543 … · Fixed by #3618

TarantoolBot opened this issue Jun 17, 2022 · 0 comments
Assignees: andreyaksenov
Labels: 2.11 (2.11 release and the associated technical debt) · feature (A new functionality) · reference [location] (Tarantool manual, Reference part) · replication [area] (Related to Replication)

TarantoolBot (Collaborator) commented Jun 17, 2022

Related dev. issue(s): tarantool/tarantool#5295

Part of the Errors epic

Product: Tarantool
Since: 2.11
Audience/target:
Root document:

SME: @sergepetrenko

Follow-up of #2444 and #2445


When box.info.ro_reason is "synchro" and some operation throws an
ER_READONLY error, the error may now include the following reason:

Can't modify data on a read-only instance - synchro queue with term 2
belongs to 1 (06c05d18-456e-4db3-ac4c-b8d0f291fd92) and is frozen due to
fencing

This means that the current instance is indeed the synchro queue owner,
but it has noticed that someone else in the cluster might start new
elections or overtake the synchro queue soon.
This may also be detected by box.info.election.term becoming greater than
box.info.synchro.queue.term (this is the case for the second error
message).
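
For illustration, a minimal Lua sketch (not part of the original request) that detects this state from the Tarantool console, using only the box.info fields mentioned above:

    -- Detect the "frozen due to fencing" state described above:
    -- the instance still owns the synchro queue, but a newer
    -- election term has already been observed in the cluster.
    local queue_term = box.info.synchro.queue.term
    local election_term = box.info.election.term
    if box.info.ro and box.info.ro_reason == 'synchro'
            and election_term > queue_term then
        print(string.format(
            'queue term %d < election term %d: frozen due to fencing',
            queue_term, election_term))
    end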
There is also a slightly different error message:

Can't modify data on a read-only instance - synchro queue with term 2
belongs to 1 (06c05d18-456e-4db3-ac4c-b8d0f291fd92) and is frozen until
promotion

This means that the node simply cannot guarantee that it is still the
synchro queue owner (for example, after a restart, when a node still thinks
it is the queue owner, but someone else in the cluster has already
overtaken the queue).
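
As a hedged sketch of what either variant looks like to application code, the snippet below wraps a write in pcall and checks the read-only reason; the space name test is hypothetical:

    -- Attempt a write and report if it is rejected because the
    -- synchro queue is frozen (either variant of the new reason).
    local ok, err = pcall(function()
        return box.space.test:insert({1, 'value'})
    end)
    if not ok and box.info.ro and box.info.ro_reason == 'synchro' then
        -- Frozen due to fencing or frozen until promotion: a new
        -- promotion (box.ctl.promote() on the rightful owner)
        -- resolves both cases.
        print('write rejected: ' .. tostring(err))
    end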
Requested by @sergepetrenko in tarantool/tarantool@6cc1b1f
