Commit 1442dcc

29.0.0+1.34.4 (#79)

* Molecule: Added a new verify play targeting k8s_controller
* Molecule: use own githubixx Vagrant boxes
* update k8s_ctl_release to 1.34.4
* Molecule: Added listener-port verification play
* Molecule: add API health checks via admin kubeconfig: query /livez and /readyz and assert etcd, poststarthooks, and informer sync checks are healthy
* Molecule verify: Artifact integrity: assert binaries, required cert sets, and kubeconfig files exist with expected ownership/mode
* Molecule verify: RBAC verification: assert ClusterRole system:kube-apiserver-to-kubelet and its binding exist and binding subject matches expected API server CN
* Molecule verify: Kubeconfig functional checks
* Molecule: add idempotence and verify to test_sequence
* Molecule verify: Dependency readiness: assert etcd endpoints are healthy before/after converge and API server remains ready across all controllers
* Molecule: fix linter warnings
* replace injected ansible_* facts usage with ansible_facts[...] (prepares for ansible-core 2.24 where INJECT_FACTS_AS_VARS default changes)
* update README/CHANGELOG
* update year
1 parent ab76e15 commit 1442dcc
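The commit message mentions that the verify play queries the API server's `/livez` and `/readyz` endpoints and asserts that individual checks (etcd, poststarthooks, informer sync) are healthy. As an illustration only — the sample payload below is invented and the helper is not part of the role — the verbose `/readyz` response is plain text that can be parsed like this:

```python
# Illustrative parser for a kube-apiserver "/readyz?verbose" style payload.
# SAMPLE is a made-up response; real output lists one "[+]" or "[-]" line
# per health check.
SAMPLE = """\
[+]ping ok
[+]etcd ok
[+]poststarthook/start-apiextensions-informers ok
[+]informer-sync ok
[-]shutdown failed: reason withheld
readyz check failed
"""

def failed_checks(payload: str) -> list[str]:
    """Return the names of checks marked '[-]' (failed) in a verbose payload."""
    return [
        line[len("[-]"):].split(" ", 1)[0]
        for line in payload.splitlines()
        if line.startswith("[-]")
    ]

print(failed_checks(SAMPLE))  # ['shutdown']
```

A verify play can assert that this list is empty before declaring the control plane ready.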

File tree

7 files changed: +412 −26 lines

CHANGELOG.md

Lines changed: 13 additions & 0 deletions

```diff
@@ -1,5 +1,18 @@
 # Changelog
 
+## 29.0.0+1.34.4
+
+- **UPDATE**
+  - update `k8s_ctl_release` to `1.34.4`
+
+- **OTHER**
+  - replace injected `ansible_*` facts usage with `ansible_facts[...]` (prepares for ansible-core 2.24 where `INJECT_FACTS_AS_VARS` default changes)
+
+- **MOLECULE**
+  - use own [githubixx Vagrant boxes](https://portal.cloud.hashicorp.com/vagrant/discover/githubixx)
+  - fix linter warnings
+  - add more checks in `verify.yml`
+
 ## 28.0.0+1.33.6
 
 - **UPDATE**
```

README.md

Lines changed: 22 additions & 9 deletions

````diff
@@ -4,7 +4,7 @@ This role is used in [Kubernetes the not so hard way with Ansible - Control plan
 
 ## Versions
 
-I tag every release and try to stay with [semantic versioning](http://semver.org). If you want to use the role I recommend to checkout the latest tag. The master branch is basically development while the tags mark stable releases. But in general I try to keep master in good shape too. A tag `28.0.0+1.33.6` means this is release `28.0.0` of this role and it's meant to be used with Kubernetes version `1.33.6` (but should work with any K8s 1.33.x release of course). If the role itself changes `X.Y.Z` before `+` will increase. If the Kubernetes version changes `X.Y.Z` after `+` will increase too. This allows to tag bugfixes and new major versions of the role while it's still developed for a specific Kubernetes release. That's especially useful for Kubernetes major releases with breaking changes.
+I tag every release and try to stay with [semantic versioning](http://semver.org). If you want to use the role I recommend to checkout the latest tag. The master branch is basically development while the tags mark stable releases. But in general I try to keep master in good shape too. A tag `29.0.0+1.34.4` means this is release `29.0.0` of this role and it's meant to be used with Kubernetes version `1.34.4` (but should work with any K8s 1.34.x release of course). If the role itself changes `X.Y.Z` before `+` will increase. If the Kubernetes version changes `X.Y.Z` after `+` will increase too. This allows to tag bugfixes and new major versions of the role while it's still developed for a specific Kubernetes release. That's especially useful for Kubernetes major releases with breaking changes.
 
 ## Requirements
 
@@ -27,6 +27,19 @@ See full [CHANGELOG.md](https://github.com/githubixx/ansible-role-kubernetes-con
 
 **Recent changes:**
 
+## 29.0.0+1.34.4
+
+- **UPDATE**
+  - update `k8s_ctl_release` to `1.34.4`
+
+- **OTHER**
+  - replace injected `ansible_*` facts usage with `ansible_facts[...]` (prepares for ansible-core 2.24 where `INJECT_FACTS_AS_VARS` default changes)
+
+- **MOLECULE**
+  - use own [githubixx Vagrant boxes](https://portal.cloud.hashicorp.com/vagrant/discover/githubixx)
+  - fix linter warnings
+  - add more checks in `verify.yml`
+
 ## 28.0.0+1.33.6
 
 - **UPDATE**
@@ -78,7 +91,7 @@ See full [CHANGELOG.md](https://github.com/githubixx/ansible-role-kubernetes-con
 roles:
   - name: githubixx.kubernetes_controller
     src: https://github.com/githubixx/ansible-role-kubernetes-controller.git
-    version: 28.0.0+1.33.6
+    version: 29.0.0+1.34.4
 ```
 
 ## Role (default) variables
@@ -108,7 +121,7 @@ k8s_ctl_pki_dir: "{{ k8s_ctl_conf_dir }}/pki"
 k8s_ctl_bin_dir: "/usr/local/bin"
 
 # The Kubernetes release.
-k8s_ctl_release: "1.33.6"
+k8s_ctl_release: "1.34.4"
 
 # The interface on which the Kubernetes services should listen on. As all cluster
 # communication should use a VPN interface the interface name is
@@ -198,7 +211,7 @@ k8s_ctl_delegate_to: "127.0.0.1"
 # variable of https://github.com/githubixx/ansible-role-kubernetes-ca
 # role). If it's not specified you'll get certificate errors in the
 # logs of the services mentioned above.
-k8s_ctl_api_endpoint_host: "{% set controller_host = groups['k8s_controller'][0] %}{{ hostvars[controller_host]['ansible_' + hostvars[controller_host]['k8s_interface']].ipv4.address }}"
+k8s_ctl_api_endpoint_host: "{% set controller_host = groups['k8s_controller'][0] %}{{ hostvars[controller_host].ansible_facts.get(hostvars[controller_host]['k8s_interface'] | default('eth0'), {}).get('ipv4', {}).get('address', hostvars[controller_host].ansible_facts.get('default_ipv4', {}).get('address')) }}"
 
 # As above just for the port. It specifies on which port the
 # Kubernetes API servers are listening. Again if there is a loadbalancer
@@ -263,7 +276,7 @@ k8s_admin_conf_group: "root"
 #
 # Besides that basically the same comments as for "k8s_ctl_api_endpoint_host"
 # variable apply.
-k8s_admin_api_endpoint_host: "{% set controller_host = groups['k8s_controller'][0] %}{{ hostvars[controller_host]['ansible_' + hostvars[controller_host]['k8s_interface']].ipv4.address }}"
+k8s_admin_api_endpoint_host: "{% set controller_host = groups['k8s_controller'][0] %}{{ hostvars[controller_host].ansible_facts.get(hostvars[controller_host]['k8s_interface'] | default('eth0'), {}).get('ipv4', {}).get('address', hostvars[controller_host].ansible_facts.get('default_ipv4', {}).get('address')) }}"
 
 # As above just for the port.
 k8s_admin_api_endpoint_port: "6443"
@@ -279,8 +292,8 @@ k8s_apiserver_conf_dir: "{{ k8s_ctl_conf_dir }}/kube-apiserver"
 # "kube-apiserver" daemon settings (can be overridden or additional added by defining
 # "k8s_apiserver_settings_user")
 k8s_apiserver_settings:
-  "advertise-address": "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
-  "bind-address": "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
+  "advertise-address": "{{ hostvars[inventory_hostname].ansible_facts.get(k8s_interface | default('eth0'), {}).get('ipv4', {}).get('address', hostvars[inventory_hostname].ansible_facts.get('default_ipv4', {}).get('address')) }}"
+  "bind-address": "{{ hostvars[inventory_hostname].ansible_facts.get(k8s_interface | default('eth0'), {}).get('ipv4', {}).get('address', hostvars[inventory_hostname].ansible_facts.get('default_ipv4', {}).get('address')) }}"
   "secure-port": "6443"
   "enable-admission-plugins": "{{ k8s_apiserver_admission_plugins | join(',') }}"
   "allow-privileged": "true"
@@ -381,7 +394,7 @@ k8s_controller_manager_conf_dir: "{{ k8s_ctl_conf_dir }}/kube-controller-manager
 # K8s controller manager settings (can be overridden or additional added by defining
 # "k8s_controller_manager_settings_user")
 k8s_controller_manager_settings:
-  "bind-address": "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
+  "bind-address": "{{ hostvars[inventory_hostname].ansible_facts.get(k8s_interface | default('eth0'), {}).get('ipv4', {}).get('address', hostvars[inventory_hostname].ansible_facts.get('default_ipv4', {}).get('address')) }}"
   "secure-port": "10257"
   "cluster-cidr": "10.200.0.0/16"
   "allocate-node-cidrs": "true"
@@ -406,7 +419,7 @@ k8s_scheduler_conf_dir: "{{ k8s_ctl_conf_dir }}/kube-scheduler"
 
 # kube-scheduler settings
 k8s_scheduler_settings:
-  "bind-address": "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
+  "bind-address": "{{ hostvars[inventory_hostname].ansible_facts.get(k8s_interface | default('eth0'), {}).get('ipv4', {}).get('address', hostvars[inventory_hostname].ansible_facts.get('default_ipv4', {}).get('address')) }}"
   "config": "{{ k8s_scheduler_conf_dir }}/kube-scheduler.yaml"
   "authentication-kubeconfig": "{{ k8s_scheduler_conf_dir }}/kubeconfig"
   "authorization-kubeconfig": "{{ k8s_scheduler_conf_dir }}/kubeconfig"
````
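The README's versioning scheme (`<role release>+<Kubernetes release>`, e.g. `29.0.0+1.34.4`) splits mechanically on the `+`. A tiny sketch — `split_role_tag` is a hypothetical helper, not part of this repository:

```python
# Hypothetical helper: split a release tag such as "29.0.0+1.34.4" into the
# role version (before "+") and the Kubernetes version it targets (after "+").
def split_role_tag(tag: str) -> tuple[str, str]:
    role_version, _, k8s_version = tag.partition("+")
    return role_version, k8s_version

print(split_role_tag("29.0.0+1.34.4"))  # ('29.0.0', '1.34.4')
```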

defaults/main.yml

Lines changed: 13 additions & 7 deletions

```diff
@@ -23,7 +23,7 @@ k8s_ctl_pki_dir: "{{ k8s_ctl_conf_dir }}/pki"
 k8s_ctl_bin_dir: "/usr/local/bin"
 
 # The Kubernetes release.
-k8s_ctl_release: "1.33.6"
+k8s_ctl_release: "1.34.4"
 
 # The interface on which the Kubernetes services should listen on. As all cluster
 # communication should use a VPN interface the interface name is
@@ -113,7 +113,10 @@ k8s_ctl_delegate_to: "127.0.0.1"
 # variable of https://github.com/githubixx/ansible-role-kubernetes-ca
 # role). If it's not specified you'll get certificate errors in the
 # logs of the services mentioned above.
-k8s_ctl_api_endpoint_host: "{{ hostvars[groups['k8s_controller'] | first]['ansible_' + hostvars[groups['k8s_controller'] | first]['k8s_interface']].ipv4.address }}"
+k8s_ctl_api_endpoint_host: >-
+  {%- set controller_host = groups['k8s_controller'] | first -%}
+  {%- set controller_interface = hostvars[controller_host]['k8s_interface'] | default('eth0') -%}
+  {{- hostvars[controller_host].ansible_facts.get(controller_interface, {}).get('ipv4', {}).get('address', hostvars[controller_host].ansible_facts.get('default_ipv4', {}).get('address')) -}}
 
 # As above just for the port. It specifies on which port the
 # Kubernetes API servers are listening. Again if there is a loadbalancer
@@ -178,7 +181,10 @@ k8s_admin_conf_group: "root"
 #
 # Besides that basically the same comments as for "k8s_ctl_api_endpoint_host"
 # variable apply.
-k8s_admin_api_endpoint_host: "{{ hostvars[groups['k8s_controller'] | first]['ansible_' + hostvars[groups['k8s_controller'] | first]['k8s_interface']].ipv4.address }}"
+k8s_admin_api_endpoint_host: >-
+  {%- set controller_host = groups['k8s_controller'] | first -%}
+  {%- set controller_interface = hostvars[controller_host]['k8s_interface'] | default('eth0') -%}
+  {{- hostvars[controller_host].ansible_facts.get(controller_interface, {}).get('ipv4', {}).get('address', hostvars[controller_host].ansible_facts.get('default_ipv4', {}).get('address')) -}}
 
 # As above just for the port.
 k8s_admin_api_endpoint_port: "6443"
@@ -194,8 +200,8 @@ k8s_apiserver_conf_dir: "{{ k8s_ctl_conf_dir }}/kube-apiserver"
 # "kube-apiserver" daemon settings (can be overridden or additional added by defining
 # "k8s_apiserver_settings_user")
 k8s_apiserver_settings:
-  "advertise-address": "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
-  "bind-address": "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
+  "advertise-address": "{{ hostvars[inventory_hostname].ansible_facts.get(k8s_interface | default('eth0'), {}).get('ipv4', {}).get('address', hostvars[inventory_hostname].ansible_facts.get('default_ipv4', {}).get('address')) }}"
+  "bind-address": "{{ hostvars[inventory_hostname].ansible_facts.get(k8s_interface | default('eth0'), {}).get('ipv4', {}).get('address', hostvars[inventory_hostname].ansible_facts.get('default_ipv4', {}).get('address')) }}"
   "secure-port": "6443"
   "enable-admission-plugins": "{{ k8s_apiserver_admission_plugins | join(',') }}"
   "allow-privileged": "true"
@@ -296,7 +302,7 @@ k8s_controller_manager_conf_dir: "{{ k8s_ctl_conf_dir }}/kube-controller-manager
 # K8s controller manager settings (can be overridden or additional added by defining
 # "k8s_controller_manager_settings_user")
 k8s_controller_manager_settings:
-  "bind-address": "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
+  "bind-address": "{{ hostvars[inventory_hostname].ansible_facts.get(k8s_interface | default('eth0'), {}).get('ipv4', {}).get('address', hostvars[inventory_hostname].ansible_facts.get('default_ipv4', {}).get('address')) }}"
   "secure-port": "10257"
   "cluster-cidr": "10.200.0.0/16"
   "allocate-node-cidrs": "true"
@@ -321,7 +327,7 @@ k8s_scheduler_conf_dir: "{{ k8s_ctl_conf_dir }}/kube-scheduler"
 
 # kube-scheduler settings
 k8s_scheduler_settings:
-  "bind-address": "{{ hostvars[inventory_hostname]['ansible_' + k8s_interface].ipv4.address }}"
+  "bind-address": "{{ hostvars[inventory_hostname].ansible_facts.get(k8s_interface | default('eth0'), {}).get('ipv4', {}).get('address', hostvars[inventory_hostname].ansible_facts.get('default_ipv4', {}).get('address')) }}"
   "config": "{{ k8s_scheduler_conf_dir }}/kube-scheduler.yaml"
   "authentication-kubeconfig": "{{ k8s_scheduler_conf_dir }}/kubeconfig"
   "authorization-kubeconfig": "{{ k8s_scheduler_conf_dir }}/kubeconfig"
```

molecule/default/molecule.yml

Lines changed: 9 additions & 7 deletions

```diff
@@ -1,5 +1,5 @@
 ---
-# Copyright (C) 2023 Robert Wimmer
+# Copyright (C) 2023-2026 Robert Wimmer
 # SPDX-License-Identifier: GPL-3.0-or-later
 
 dependency:
@@ -13,7 +13,7 @@ driver:
 
 platforms:
   - name: test-assets
-    box: alvistack/ubuntu-24.04
+    box: githubixx/ubuntu-24.04
     memory: 2048
     cpus: 2
     groups:
@@ -26,7 +26,7 @@ platforms:
         type: static
         ip: 172.16.10.5
   - name: test-controller1
-    box: alvistack/ubuntu-22.04
+    box: githubixx/ubuntu-22.04
     memory: 2048
     cpus: 2
     groups:
@@ -42,7 +42,7 @@ platforms:
         type: static
         ip: 172.16.10.10
   - name: test-controller2
-    box: alvistack/ubuntu-22.04
+    box: githubixx/ubuntu-24.04
     memory: 2048
     cpus: 2
     groups:
@@ -58,7 +58,7 @@ platforms:
         type: static
         ip: 172.16.10.20
   - name: test-controller3
-    box: alvistack/ubuntu-24.04
+    box: githubixx/ubuntu-24.04
     memory: 2048
     cpus: 2
     groups:
@@ -74,7 +74,7 @@ platforms:
         type: static
         ip: 172.16.10.30
   - name: test-worker1
-    box: alvistack/ubuntu-22.04
+    box: githubixx/ubuntu-22.04
     memory: 2048
     cpus: 2
     groups:
@@ -88,7 +88,7 @@ platforms:
         type: static
         ip: 172.16.10.100
   - name: test-worker2
-    box: alvistack/ubuntu-24.04
+    box: githubixx/ubuntu-24.04
     memory: 2048
     cpus: 2
     groups:
@@ -115,6 +115,8 @@ scenario:
   test_sequence:
     - prepare
    - converge
+    - idempotence
+    - verify
 
 verifier:
   name: ansible
```
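Adding `idempotence` to the `test_sequence` makes Molecule run converge a second time and fail if anything still reports "changed". A toy sketch of that contract — `fake_converge` is an invented stand-in for an Ansible run, not Molecule's actual implementation:

```python
# Sketch of the idempotence check: a converged system must report zero
# changes on a second run. fake_converge mimics an Ansible play recap's
# "changed" counter for a single desired setting.
state: dict = {}

def fake_converge(state: dict) -> int:
    """Apply the desired config; return the number of changes made."""
    changed = 0
    if state.get("k8s_ctl_release") != "1.34.4":
        state["k8s_ctl_release"] = "1.34.4"
        changed += 1
    return changed

first = fake_converge(state)
second = fake_converge(state)
print(first, second)  # 1 0 -- the second run changes nothing, so the play is idempotent
```

Molecule's idempotence step asserts exactly this: the second run's changed count is zero for every host.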

molecule/default/prepare.yml

Lines changed: 24 additions & 1 deletion

```diff
@@ -1,5 +1,5 @@
 ---
-# Copyright (C) 2023 Robert Wimmer
+# Copyright (C) 2023-2026 Robert Wimmer
 # SPDX-License-Identifier: GPL-3.0-or-later
 
 - name: Update cache
@@ -112,3 +112,26 @@
     - name: Include CNI role
       ansible.builtin.include_role:
        name: githubixx.cni
+
+- name: Verify etcd readiness before control plane converge
+  hosts: k8s_etcd
+  remote_user: vagrant
+  become: true
+  gather_facts: false
+  tasks:
+    - name: Gather service facts
+      ansible.builtin.service_facts:
+
+    - name: Assert etcd service is active
+      ansible.builtin.assert:
+        that:
+          - "'etcd.service' in ansible_facts.services"
+          - "ansible_facts.services['etcd.service'].state == 'running'"
+
+    - name: Check etcd endpoint port is listening
+      ansible.builtin.shell: set -o pipefail && ss -H -ltn "sport = :2379" | grep -q "LISTEN"
+      args:
+        executable: /bin/bash
+      register: k8s_ctl__etcd_listener_before_converge
+      changed_when: false
+      failed_when: k8s_ctl__etcd_listener_before_converge.rc != 0
```
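The `ss ... | grep LISTEN` task above inspects the kernel's listening-socket table on the etcd host. A client-side analogue in Python — illustrative only, probing a throwaway local listener instead of etcd's real client port 2379:

```python
import socket

def port_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP listener accepts connections on host:port.

    Mirrors what the ss/grep task asserts, but from a client's perspective
    (attempt a connect instead of reading the socket table).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe a freshly bound local listener rather than etcd's 2379.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
print(port_listening("127.0.0.1", port))  # True while srv is listening
srv.close()
```

In the actual play the server-side `ss` check is preferable because it works even when the port is firewalled off from the controller running the test.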
