Statefulset support. #2
base: master
Conversation
jacobstr commented on Aug 30, 2018
- Adds support for statefulsets.
- Updates client-go to release-8.0.
- Refactors the trigger_controller_test a bit to support table-driven tests.
In preparation for the StatefulSet informer support, I want to avoid too much boilerplate (repeating the indexing functions with slightly different types), hence, for example, some function-generating functions. I also believe the current code's behavior of checking volumes is unexpected: secrets may be referenced outside of volumes, for example in environment variables, and AFAICT those would not be picked up by the trigger-controller, which I find surprising. This implementation also relies solely on the cross-resource ObjectMeta API, so we don't have to cast the `obj interface{}` to different resource types.
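To make the function-generating idea concrete, here is a minimal sketch of the kind of annotation-keyed index factory I have in mind, built on meta.Accessor() so one function covers every resource type; the package name and the helper name `indexByAnnotation` are placeholders, not necessarily what this PR ships:

```go
package trigger

import (
	"strings"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/client-go/tools/cache"
)

// indexByAnnotation returns a cache.IndexFunc that indexes any object by the
// comma-separated names stored under the given annotation key. Because it only
// reads ObjectMeta (via meta.Accessor), the same function works for Deployments,
// StatefulSets, or anything else, with no type switch on obj.
// The helper name is illustrative only.
func indexByAnnotation(annotation string) cache.IndexFunc {
	return func(obj interface{}) ([]string, error) {
		accessor, err := meta.Accessor(obj)
		if err != nil {
			return nil, err
		}
		value, ok := accessor.GetAnnotations()[annotation]
		if !ok || value == "" {
			return nil, nil
		}
		return strings.Split(value, ","), nil
	}
}
```

Concrete indexers like indexByConfigMaps could then just be `indexByAnnotation(triggerConfigMapsAnnotation)`.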
Updates the existing TriggerController test to use table-driven cases à la the existing hash tests. Various bits were factored out in the process, but the design is largely based on the original tests.
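Roughly, the table-driven shape looks like the sketch below; the field names and the annotation key are hypothetical, not the exact ones in trigger_controller_test.go:

```go
package trigger

import "testing"

func TestTriggerController(t *testing.T) {
	// Hypothetical table shape; the real cases in this PR may differ.
	cases := []struct {
		name        string
		annotations map[string]string
		configMap   string
		wantTrigger bool
	}{
		{
			name:        "annotated workload triggers on a referenced configMap",
			annotations: map[string]string{"trigger-configmaps": "app-config"}, // placeholder key
			configMap:   "app-config",
			wantTrigger: true,
		},
		{
			name:        "unannotated workload does not trigger",
			configMap:   "app-config",
			wantTrigger: false,
		},
	}

	for _, tc := range cases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			// Build fixture objects from tc, run the controller, and assert
			// that a rollout was (or was not) triggered per tc.wantTrigger.
			_ = tc
		})
	}
}
```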
Tests show failures in Go < 1.10. The updated client relies on …
@@ -87,7 +87,6 @@ func NewTriggerController(

	controller := &TriggerController{
		client:           kubeclientset,
-		deploymentClient: kubeclientset.AppsV1(),
This was unused.
	deploymentInformer.Informer().AddIndexers(cache.Indexers{
-		"configMap": indexDeploymentsByTriggeringConfigMaps,
-		"secret":    indexDeploymentsByTriggeringSecrets,
+		"configMap": indexByConfigMaps,
I generalized these by using meta.Accessor(), since IIRC we only need ObjectMeta properties.
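The payoff is that the same index functions can be registered on both the Deployment and StatefulSet informers. A sketch of what that could look like (the helper name and `indexBySecrets` are assumptions; only indexByConfigMaps appears in this diff):

```go
import "k8s.io/client-go/tools/cache"

// addTriggerIndexers registers the shared, ObjectMeta-based index functions on
// any informer, so Deployments and StatefulSets get identical indexing.
// Function name is illustrative; the PR may wire this up differently.
func addTriggerIndexers(informer cache.SharedIndexInformer) error {
	return informer.AddIndexers(cache.Indexers{
		"configMap": indexByConfigMaps,
		"secret":    indexBySecrets, // assumed counterpart of indexByConfigMaps
	})
}
```

NewTriggerController could then call this for both deploymentInformer.Informer() and the StatefulSet informer.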
	if triggers, ok := annotations[triggerConfigMapsAnnotation]; ok {
		configMaps := sets.NewString(strings.Split(triggers, ",")...)
		for _, v := range d.Spec.Template.Spec.Volumes {
The original behavior, whereby it cross-referenced volumes, was a bit unexpected to me. If I'm adding the triggering annotations, I expect rollouts to occur regardless of how the secret is being used. E.g. I might be:
- Using configMapKeyRefs in environment variables.
- Using volumes (as shown).
Because folks have to opt in to this behavior, I believe it's fairly explicit to state that we'll roll out on any change to a specific secret or configMap. I'd be reluctant to make it any more complex by, say, configuring the conditions that cause us to trigger via additional annotations.
Rather than taking the intersection of the annotation values and the volumes stated in our Spec, I think it would be interesting to automatically trigger on any referenced secrets or configMaps, be they envKeyRefs or volumes. That would make sense as an explicit annotation, e.g. trigger.k8s.io/auto: enabled.
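A rough sketch of what "trigger on anything referenced" could mean for configMaps, walking a pod template's volumes, env, and envFrom; the function name, the exact coverage, and the trigger.k8s.io/auto semantics are all hypothetical at this point:

```go
package trigger

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/sets"
)

// referencedConfigMaps collects every configMap a pod spec references via
// volumes, env valueFrom, or envFrom. A hypothetical auto mode could index
// workloads by this set instead of requiring names in an annotation.
func referencedConfigMaps(spec corev1.PodSpec) sets.String {
	names := sets.NewString()
	for _, v := range spec.Volumes {
		if v.ConfigMap != nil {
			names.Insert(v.ConfigMap.Name)
		}
	}
	for _, c := range spec.Containers {
		for _, e := range c.Env {
			if e.ValueFrom != nil && e.ValueFrom.ConfigMapKeyRef != nil {
				names.Insert(e.ValueFrom.ConfigMapKeyRef.Name)
			}
		}
		for _, ef := range c.EnvFrom {
			if ef.ConfigMapRef != nil {
				names.Insert(ef.ConfigMapRef.Name)
			}
		}
	}
	return names
}
```

An analogous walk over secret references would cover the secret side of the same idea.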