First, I'd like to show my appreciation for all your hard work on Alert Manager; it's a wonderful addition to Splunk!
I've set up an alert, which creates an incident as expected, but I'm struggling to get the incident to auto-resolve after the TTL has expired.
After a bit of digging, I think this happens because Alert Manager only looks in its own namespace/context when it builds the list of candidate alerts:
https://github.com/alertmanager/alert_manager/blob/develop/src/bin/alert_manager_scheduler.py#L81
If I change this so that it queries /servicesNS/-/-/saved/searches instead, everything seems to work. Would you accept a PR for this? Or am I misunderstanding how this is supposed to work?
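For illustration, here's a minimal sketch of the endpoint change I'm describing. The helper names (`saved_searches_url`, `list_saved_searches`), the base URL, and the session-key handling are my own assumptions, not code from Alert Manager; the point is just that `-` wildcards the owner and app segments so every namespace's saved searches are returned:

```python
import json
import urllib.request

# Assumed splunkd management port; adjust for your deployment.
SPLUNK_BASE = "https://localhost:8089"

def saved_searches_url(base, owner="-", app="-"):
    """Build a saved/searches REST path; '-' wildcards every owner/app."""
    return f"{base}/servicesNS/{owner}/{app}/saved/searches?output_mode=json"

def list_saved_searches(base, session_key):
    """Fetch saved-search names across all namespaces.

    Requires a reachable splunkd and a valid session key; sketch only.
    """
    req = urllib.request.Request(
        saved_searches_url(base),
        headers={"Authorization": f"Splunk {session_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return [entry["name"] for entry in json.load(resp)["entry"]]
```

With the default app-scoped path, alerts saved in other apps never appear as candidates, which would explain why their incidents are never picked up for auto-resolution.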