Track deleted runtime objects to avoid erroneous recreation #10830

Open
jschmidt-icinga wants to merge 3 commits into master from remember-deleted-api-objects

Conversation

jschmidt-icinga (Contributor) commented Apr 29, 2026

Description

This fixes deleted objects being recreated, and the cluster state becoming inconsistent, after objects are deleted via the API on one endpoint while the second endpoint is offline or unreachable.

This is done by keeping track of deleted objects in a map from the deleted object's name to the object's "version". If the object would be recreated with a version that is older than or equal to the version it was deleted with, the object creation is skipped. This is done in the expectation that, at the same time, the deletion event is being replayed to the other endpoint that is trying to sync the object creation.
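The skip logic described above can be sketched as follows. This is an illustrative sketch only, not the actual Icinga 2 code; the names `DeletedMap` and `ShouldSkipCreate` are made up for the example, with a plain `std::map` standing in for the PR's dictionary keyed by `Type:name`.

```cpp
#include <map>
#include <string>

// "Type:name" -> version the object had when it was deleted
using DeletedMap = std::map<std::string, double>;

// Returns true if an incoming creation/update should be skipped because the
// object was deleted with a version that is newer or equal.
bool ShouldSkipCreate(const DeletedMap& deleted, const std::string& key, double incomingVersion)
{
	auto it = deleted.find(key);
	return it != deleted.end() && incomingVersion <= it->second;
}
```

A recreate carrying a strictly newer version still goes through, which is what allows creating a fresh object with the same name afterwards.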

Testing

For now I have built an integration test in my own dev-docker environment based on the steps described in #10022. The test additionally verifies that creating a new object with the same name afterwards works fine. I've not yet tested the various split-brain scenarios @julianbrost and I discussed offline.

Fixes #10022.

@cla-bot cla-bot Bot added the cla/signed label Apr 29, 2026
@jschmidt-icinga jschmidt-icinga added the bug Something isn't working label Apr 29, 2026
@jschmidt-icinga jschmidt-icinga changed the title (WIP) Track deleted runtime objects to avaóid erroneous recreation (WIP) Track deleted runtime objects to avoid erroneous recreation Apr 30, 2026
@jschmidt-icinga jschmidt-icinga force-pushed the remember-deleted-api-objects branch from 94de327 to 34ce649 Compare May 4, 2026 12:54
Comment thread lib/icinga/icingaapplication.cpp Outdated
Comment thread lib/remote/apilistener-configsync.cpp Outdated
Comment thread lib/remote/apilistener-configsync.cpp
Comment thread lib/remote/apilistener-configsync.cpp Outdated
Comment on lines +104 to +109
if (deletedInfo && deletedInfo->Get("version") >= objVersion) {
Log(LogWarning, "ApiListener")
<< "Ignoring config update for deleted object '" << objName << "' of type '" << objType << "' from '"
<< identity << "' (endpoint: '" << endpoint->GetName() << "', zone: '" << endpointZone->GetName() << "')";
return Empty;
}
Member commented:

If you compare with the version number at the time the object was deleted, this creates a difference in behavior between config updates and deletes. An update will win over any update that was earlier, but a delete will only win over updates that were older than the last version of the object that the node knew about when deleting. Or expressed differently, the delete would take higher precedence if you additionally update the object immediately before deleting it.

I'd say deleting the object should behave a bit like creating a new object version. So deleting would behave as if it created a new object version that just says "this object is deleted" (with the current timestamp set on the node that receives the HTTP request, and that timestamp then also needs to be synced as part of the JSON-RPC message).
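The "delete as a new object version" idea can be sketched like this. All names here (`ObjectState`, `ApplyChange`) are hypothetical; the point is that an update and a delete go through the same newest-version-wins comparison, with the delete's timestamp stamped once on the originating node and replicated verbatim.

```cpp
#include <map>
#include <string>

struct ObjectState {
	double version = 0; // timestamp of the last accepted change, including a delete
	bool deleted = false;
};

using StateMap = std::map<std::string, ObjectState>;

// Apply an incoming change (update or delete): the newest version wins,
// regardless of whether it is an update or a delete ("tombstone").
void ApplyChange(StateMap& states, const std::string& key, double version, bool isDelete)
{
	auto& st = states[key];
	if (version > st.version) {
		st.version = version;
		st.deleted = isDelete;
	}
}
```

With this model a replayed stale update can never resurrect a deleted object, while a genuinely newer recreate still succeeds.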

Comment thread lib/remote/apilistener-configsync.cpp Outdated
@jschmidt-icinga jschmidt-icinga force-pushed the remember-deleted-api-objects branch from 34ce649 to 7f60c60 Compare May 5, 2026 14:40
@jschmidt-icinga jschmidt-icinga changed the title (WIP) Track deleted runtime objects to avoid erroneous recreation Track deleted runtime objects to avoid erroneous recreation May 6, 2026
@jschmidt-icinga jschmidt-icinga marked this pull request as ready for review May 6, 2026 09:57
Comment on lines +102 to +104
Log(LogWarning, "ApiListener")
<< "Ignoring config update for deleted object '" << objName << "' of type '" << objType << "' from '"
<< identity << "' (endpoint: '" << endpoint->GetName() << "', zone: '" << endpointZone->GetName() << "')";
Member commented:

I think we should use warning as a log level here. It's something that can just happen during normal operation and also nothing that would need any user action.

Comment on lines +461 to +462
auto deletionTime = Utility::GetTime();
GetDeletedRuntimeObjects()->Set(object->GetReflectionType()->GetName() + ":" + object->GetName(), deletionTime);
Member commented:

Wouldn't this happen on each node processing the deletion individually, i.e. each one would get a different value here? If that happens, different nodes could handle (accept/discard) the same subsequent change to the same object differently. In particular, if the deletion is replayed, there could be significant discrepancies in the values.

So in my expectation, the node initially receiving the deletion request from the HTTP API should determine the deletion time/version and then replicate it to the others. One option (though I'm not sure it's the best way, it's just my first idea) would be to only Set() here if it's not already set, and to already insert the value from the JSON-RPC message in ConfigDeleteObjectAPIHandler.
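The "only Set() if it's not already set" option boils down to a set-if-absent on the deletion map. A minimal sketch, with a hypothetical free function in place of the actual ApiListener code and a plain `std::map` in place of the dictionary:

```cpp
#include <map>
#include <string>

// The first recorded deletion time wins: a replayed JSON-RPC deletion that
// arrives later does not overwrite the timestamp originally stamped by the
// node that received the HTTP request.
void SetDeletionTimeIfAbsent(std::map<std::string, double>& deleted,
	const std::string& key, double deletionTime)
{
	// try_emplace inserts only when the key is not present yet (C++17).
	deleted.try_emplace(key, deletionTime);
}
```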

jschmidt-icinga (Contributor, Author) commented May 7, 2026:

I think that solution would have its own downsides. For example, if we only add the entry in ApiListener::DeleteConfigObject if it doesn't already exist, then future HTTP API deletes for an identically named object will no longer update the timestamp.
I'll think about what to do with this.

jschmidt-icinga (Contributor, Author) commented:

Maybe we can use the cookie that is already passed through ConfigObjectUtility back into ApiListener to only update the timestamp if the request is not originating from a JSON-RPC message. It isn't pretty, but since everything happens in the same file, if I add a comment that explains why, it should be fine.

Member commented:

While thinking about this again, I noticed that only PruneDeletedRuntimeObjects() removes entries from that map. Also removing the entry when recreating an object might be a good idea in itself, but could also be used to solve this.

jschmidt-icinga (Contributor, Author) commented May 7, 2026:

I had that in the first iteration, but removed it because I was worried about the countless split-brain scenarios potentially leading to inconsistencies. Can we be sure that there will never be any states between peers, or even race conditions between JSON-RPC and HTTP, where this could fail?

Like a scenario where both masters receive multiple creation and deletion API requests in (relatively) quick succession, which they will then forward to each other.

I'll have to think about whether I can come up with a concrete scenario. But I'm pretty sure that if we prioritize non-RPC deletions with higher timestamps, that should always be safe.

jschmidt-icinga (Contributor, Author) commented May 7, 2026:

There is another problem that I think neither of our approaches solves: JSON-RPC unconditionally deletes objects with cascade = true (which is a whole new source of inconsistency on its own, in case the HTTP API originally deleted with cascade = false). If we guard insertion with a check on HTTP vs. JSON-RPC, no timestamps will be added for these child objects, and if we follow your suggestion the timestamps will be updated to the replay time while the parent stays on the deletion timestamp.

Member commented:

Can we be sure that there will never be any states between peers, or even race conditions between JSON-RPC and HTTP, where this could fail?

Not without looking into this closely. But that could indeed be a problem.

There is another problem that I think neither of our approaches solves: JSON-RPC unconditionally deletes objects with cascade = true (which is a whole new source of inconsistency on its own, in case the HTTP API originally deleted with cascade = false).

That doesn't immediately sound like that much of a problem. Isn't it a prerequisite for the config to already be out of sync for this to make a difference? If deletion with cascade=false succeeded on one node, this implies there are no objects that depend on the deleted object. If deleting with cascade=true on the second node now deletes more, it should only delete objects that didn't even exist on the first one.

Comment on lines +288 to +292
m_DeletedRuntimeObjectsTimer = Timer::Create();
m_DeletedRuntimeObjectsTimer->OnTimerExpired.connect([this](const Timer* const&) { PruneDeletedRuntimeObjects(); });
m_DeletedRuntimeObjectsTimer->SetInterval(15 * 60);
m_DeletedRuntimeObjectsTimer->Start();
m_DeletedRuntimeObjectsTimer->Reschedule(0);
Member commented:

The timer should probably also be stopped in Stop(), just like the other timers are.

Comment on lines +540 to +545
auto endpoints = ConfigType::GetObjectsByType<Endpoint>();
auto it = std::max_element(endpoints.begin(), endpoints.end(), [](const auto& a, const auto& b) {
return a->GetLogDuration() < b->GetLogDuration();
});
ASSERT(it != endpoints.end());
auto maxLogDuration = (*it)->GetLogDuration();
Member commented:

To be honest, I would just prefer writing down the for loop here instead of using std::max_element. It's both shorter and also completely avoids possible issues if there were no endpoints (which should never happen if an ApiListener exists, but still).

Suggested change — replace:

auto endpoints = ConfigType::GetObjectsByType<Endpoint>();
auto it = std::max_element(endpoints.begin(), endpoints.end(), [](const auto& a, const auto& b) {
	return a->GetLogDuration() < b->GetLogDuration();
});
ASSERT(it != endpoints.end());
auto maxLogDuration = (*it)->GetLogDuration();

with:

double maxLogDuration = 0;
for (const auto& endpoint : ConfigType::GetObjectsByType<Endpoint>()) {
	maxLogDuration = std::max(maxLogDuration, endpoint->GetLogDuration());
}

Comment on lines +549 to +553
for (auto it = deletedRuntimeObjects->Begin(); it != deletedRuntimeObjects->End(); ++it) {
if (it->second <= maxLogDuration) {
deletedRuntimeObjects->Remove(it);
};
}
Member commented:

Doesn't the Remove(it) invalidate it which is then used later (in ++it)?

One way to do this would be to do it like in the cppreference example, but that would need some extra plumbing in icinga::Dictionary, maybe consider adding a RemoveIf() there?

This is also a good candidate for a TODO(C++20): use std::erase_if().
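The iterator-safe pattern the reviewer refers to (the cppreference erase-while-iterating idiom) looks like this on a plain `std::map`; `std::map::erase` returns the iterator following the removed element, so the loop never dereferences an invalidated iterator. The function name `PruneOlderThan` is made up for the example; with C++20 the whole loop collapses to `std::erase_if`.

```cpp
#include <map>
#include <string>

// Remove all entries whose recorded time is at or before the cutoff,
// advancing via the iterator that erase() returns.
void PruneOlderThan(std::map<std::string, double>& deleted, double cutoff)
{
	for (auto it = deleted.begin(); it != deleted.end(); ) {
		if (it->second <= cutoff)
			it = deleted.erase(it); // erase returns the next valid iterator
		else
			++it;
	}
}
```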

jschmidt-icinga (Contributor, Author) commented:

I had already considered adding a Dictionary::RemoveIf(). But now I'm thinking of changing Dictionary::Remove() to instead return the iterator that std::map::erase() returns, which points to the element after the erased one, and changing the cleanup loop to use it.

Member commented:

Well, with just a single usage it doesn't make much of a difference, but in general, I think something like std::erase_if() where you don't have to think about iterator validity is just nicer to use.

/* update the object */
double objVersion = params->Get("version");

if (listener->GetDeletedRuntimeObjects()->Get(objType + ":" + objName) >= objVersion) {
Member commented:

There are now two instances of the magic objType + ":" + objName concatenation and with my other comments, there will be a third one. I'd consider one of the following:

  1. Add a helper that does the object to lookup key mapping.
  2. Maybe even add some wrappers around icinga::Dictionary that are also more type-safe (instead of everything just being icinga::Value), so that ApiListener maybe has methods like std::optional<double> GetDeletedRuntimeObjectVersion() and a matching setter.
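The type-safe wrapper from option 2 could look roughly like this. The class and method names are illustrative (the source only names `GetDeletedRuntimeObjectVersion`), and a plain `std::map` stands in for `icinga::Dictionary`; the point is that the `type + ":" + name` key exists in exactly one place and callers get `std::optional<double>` instead of an untyped `Value`.

```cpp
#include <map>
#include <optional>
#include <string>

class DeletedRuntimeObjects {
public:
	// The single home of the "Type:name" lookup-key convention.
	static std::string MakeKey(const std::string& type, const std::string& name)
	{
		return type + ":" + name;
	}

	void SetVersion(const std::string& type, const std::string& name, double version)
	{
		m_Versions[MakeKey(type, name)] = version;
	}

	std::optional<double> GetVersion(const std::string& type, const std::string& name) const
	{
		auto it = m_Versions.find(MakeKey(type, name));
		if (it == m_Versions.end())
			return std::nullopt;
		return it->second;
	}

private:
	std::map<std::string, double> m_Versions;
};
```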


Labels

bug Something isn't working cla/signed

Development

Successfully merging this pull request may close these issues.

API object deletion unreliable if HA zone is not connected

2 participants