
Ansible's variable scoping and how to define and reference variables are way too hard to understand...

thehackernews.com

Threat hunters have disclosed a new "widespread timing-based vulnerability class" that leverages a double-click sequence to facilitate clickjacking attacks and account takeovers in almost all major websites.

https://thehackernews.com/2025/01/new-doubleclickjacking-exploit-bypasses.html

The technique has been codenamed DoubleClickjacking by security researcher Paulos Yibelo.

https://thehackernews.com/2025/01/new-doubleclickjacking-exploit-bypasses.html

⇧ Oh, my gosh...

www.paulosyibelo.com

⇧ The site above explains it; it seems to abuse the way JavaScript works...

Ansible's variable scoping and how to define and reference variables are way too hard to understand...

A while back,

ts0818.hatenablog.com

⇧ I was using Ansible at the time of the article above, and

stackoverflow.com

⇧ according to the site above,

docs.ansible.com

Scoping variables

You can decide where to set a variable based on the scope you want that value to have. Ansible has three main scopes:

  • Global: this is set by config, environment variables and the command line

  • Play: each play and contained structures, vars entries (vars; vars_files; vars_prompt), role defaults and vars.

  • Host: variables directly associated to a host, like inventory, include_vars, facts or registered task outputs

https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#scoping-variables

⇧ that is what the docs say, but I don't get it at all...
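
If I try to express the three scopes as an actual playbook, my understanding is roughly the following. This is only my own sketch: global_var, play_var and host_var are names I made up, and global_var is assumed to be passed on the command line, e.g. ansible-playbook site.yml -e "global_var=from_cli".

- hosts: all
  vars:
    play_var: "play scope - visible to every task and every host in this play"
  tasks:
    - name: Host scope example (set_fact, register and facts stick to one host)
      set_fact:
        host_var: "host scope - only {{ inventory_hostname }} carries this value"

    - name: All three kinds are referenced the same way
      debug:
        msg: "{{ global_var | default('unset') }} / {{ play_var }} / {{ host_var }}"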

At the end of the day, I have no idea what the correct way is in Ansible to reference a variable defined on host A from host B, and

serverfault.com

according to the site above, there is even talk of using a dummy host, so the information is all over the place...
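
For reference, my reading of that dummy-host approach is roughly the sketch below; variable_holder and shared_value are placeholder names I made up. The trick is that add_host creates an in-memory host whose variables any later play can read through hostvars.

- hosts: master
  tasks:
    - name: Park a value on an in-memory dummy host
      add_host:
        name: variable_holder
        shared_value: "some value computed on the master"

- hosts: workers
  tasks:
    - name: Read the value back from the dummy host
      debug:
        msg: "{{ hostvars['variable_holder']['shared_value'] }}"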

Apparently,

qiita.com

zaki-hmkc.hatenablog.com

stackoverflow.com

⇧ according to the sites above, referencing a variable defined on one host from a different host within an Ansible playbook seems to require combining either

  1. set_fact
  2. delegate_to

or combining

  1. register
  2. hostvars

(a rough sketch of both patterns follows right below).
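
Writing both patterns down as a minimal sketch against my inventory (master1 and the workers group come from the inventory.ini further down; master_result and shared_from_master are placeholder names I made up):

# Pattern 1: set_fact + delegate_to (+ delegate_facts)
- hosts: master
  tasks:
    - name: Produce something on the master
      command: hostname
      register: master_result

    - name: Push the value onto each worker as a fact
      set_fact:
        shared_from_master: "{{ master_result.stdout }}"
      delegate_to: "{{ item }}"
      delegate_facts: true
      loop: "{{ groups['workers'] }}"

- hosts: workers
  tasks:
    - name: The workers can now reference the fact directly
      debug:
        msg: "{{ shared_from_master }}"

# Pattern 2: register + hostvars
- hosts: workers
  tasks:
    - name: Reach into the master's registered variable via hostvars
      debug:
        msg: "{{ hostvars['master1']['master_result'].stdout }}"

In pattern 2 the play that registers master_result on the master still has to run; hostvars just lets a different host read what was registered there.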

The right way to write an Ansible playbook is way too hard to figure out...

To begin with,

docs.ansible.com

docs.ansible.com

⇧ the official documentation has no sample that covers referencing a variable across different hosts...

Anyway,

ts0818.hatenablog.com

⇧ the playbook has changed since the article above, so I'm posting the relevant parts.

■D:\work-soft\vagrant\ansible\inventory.ini

[master]
master1 ansible_host=192.168.50.10 ansible_user=vagrant ansible_ssh_private_key_file=/home/vagrant/.ssh/id_rsa

#[worker1]
#192.168.50.11 ansible_user=vagrant ansible_ssh_private_key_file=/home/vagrant/.ssh/id_rsa

#[worker2]
#192.168.50.12 ansible_user=vagrant ansible_ssh_private_key_file=/home/vagrant/.ssh/id_rsa

#[workers:children]
[workers]
worker1 ansible_host=192.168.50.11 ansible_user=vagrant ansible_ssh_private_key_file=/home/vagrant/.ssh/id_rsa
worker2 ansible_host=192.168.50.12 ansible_user=vagrant ansible_ssh_private_key_file=/home/vagrant/.ssh/id_rsa

[all:children]
master
workers

■D:\work-soft\vagrant\ansible\k8s-setup.yml

---
- hosts: all
  become: yes
  tasks:
    - name: Install required packages
      apt:
        name:
          - apt-transport-https
          - ca-certificates
          - curl
          - gpg
        state: present
        update_cache: yes

    - name: Install sshpass
      apt:
        name: sshpass
        state: present
        update_cache: yes

    - name: Download Google Cloud public signing key
      shell: curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg

    - name: Add Kubernetes apt repository
      shell: echo 'deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list

    - name: Update apt package index
      apt:
        update_cache: yes

    - name: Install containerd on all nodes
      apt:
        name: containerd
        state: present
        update_cache: yes

    - name: Create containerd directory on all nodes
      file:
        path: /etc/containerd
        state: directory

    - name: Create containerd config on all nodes
      shell: |
        containerd config default > /etc/containerd/config.toml

    - name: Change the systemd driver of containerd on all nodes
      replace:
        path: /etc/containerd/config.toml
        regexp: 'SystemdCgroup = false'
        replace: 'SystemdCgroup = true'

    - name: Restart containerd on all nodes
      systemd:
        name: containerd
        state: restarted
        enabled: true

    - name: Install Kubernetes packages on all nodes
      apt:
        name:
          - kubeadm
          - kubelet
          - kubectl
        state: present
        update_cache: yes

    - name: Hold Kubernetes packages on all nodes
      command: apt-mark hold kubeadm kubelet kubectl

    - name: Check if /etc/default/kubelet exists
      stat:
        path: /etc/default/kubelet
      register: kubelet_config

    - name: Fail if /etc/default/kubelet does not exist
      fail:
        msg: "/etc/default/kubelet does not exist"
      when: not kubelet_config.stat.exists

    - name: Configure cgroup driver in kubelet for Debian on all nodes
      replace:
        dest: /etc/default/kubelet
        regexp: 'KUBELET_EXTRA_ARGS='
        replace: 'KUBELET_EXTRA_ARGS=--cgroup-driver=systemd'
      when: kubelet_config.stat.exists

    - name: Start kubelet service on all nodes
      systemd:
        name: kubelet
        enabled: yes
        state: started

    - name: Disable swap on all nodes
      shell: |
        swapoff -a
        sed -i '/swap/d' /etc/fstab

- hosts: master
  become: yes
  tasks:
    - name: Enable IP forwarding on master
      sysctl:
        name: net.ipv4.ip_forward
        value: "1"
        state: present
        reload: yes

    - name: Pull required images for Kubernetes on master
      shell: |
        kubeadm config images pull

    - name: Initialize Kubernetes master node
      shell: >
        kubeadm init
        --pod-network-cidr=10.244.0.16/16
        --apiserver-advertise-address=192.168.50.10
        --apiserver-bind-port=6443
        --control-plane-endpoint=192.168.50.10
        --image-repository=registry.k8s.io
        --kubernetes-version=stable-1
        --cri-socket=unix:///var/run/containerd/containerd.sock
        --v=5
      register: kubeadm_output
      ignore_errors: yes

    - name: Print kubeadm init logs for debugging
      debug:
        var: kubeadm_output.stderr_lines

    - name: Save the kubeadm join command for Debug
      set_fact:
        kubeadm_join_command: "{{ kubeadm_output.stdout | regex_search('kubeadm join .*') | regex_replace('\\\\', '') }} --discovery-token-unsafe-skip-ca-verification"
      when: kubeadm_output.rc == 0

    - name: Debug the join_command on master
      debug:
        msg: "kubeadm_join_command: {{ kubeadm_join_command }}"
      when: kubeadm_join_command is defined

    - name: Save the kubeadm join command for worker
      set_fact:
        kubeadm_join_command_for_worker: "{{ kubeadm_output.stdout | regex_search('kubeadm join .*') | regex_replace('\\\\', '') }} --discovery-token-unsafe-skip-ca-verification"
      delegate_to: "{{ item }}"
      delegate_facts: true
      loop: "{{ groups['workers'] }}"
      when: kubeadm_output.rc == 0

    - name: Set up kubeconfig for kubectl on master
      shell: |
        mkdir -p /home/vagrant/.kube
        cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
        chown vagrant:vagrant /home/vagrant/.kube/config
        chmod 644 /home/vagrant/.kube/config
      when: kubeadm_output.rc == 0

    - name: Check if admin.conf exists on master
      stat:
        path: /etc/kubernetes/admin.conf
      register: admin_conf

    - name: Fail if admin.conf does not exist on master
      fail:
        msg: "admin.conf does not exist"
      when: not admin_conf.stat.exists

    - name: Copy admin.conf to shared folder on master
      shell: |
        cp /etc/kubernetes/admin.conf /home/vagrant/admin.conf
        chmod 644 /home/vagrant/admin.conf
      when: kubeadm_output.rc == 0

    - name: Configure kubectl to access the cluster on master
      shell: |
        export KUBECONFIG=/etc/kubernetes/admin.conf
      when: kubeadm_output.rc == 0

    - name: Wait for Kubernetes API server to be ready on master
      wait_for:
        port: 6443
        state: started
        timeout: 600
      when: kubeadm_output.rc == 0

    - name: Wait for Kubernetes nodes to be ready on master
      shell: |
        export KUBECONFIG=/etc/kubernetes/admin.conf
        until kubectl get nodes; do sleep 10; done
      register: kubectl_get_nodes_output
      retries: 20
      delay: 30
      when: kubeadm_output.rc == 0

    - name: Apply Flannel CNI plugin on master
      shell: |
        export KUBECONFIG=/etc/kubernetes/admin.conf
        kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml --validate=false
      when: kubeadm_output.rc == 0

- hosts: workers
  become: yes
  tasks:
    - name: Test SSH connection to master node on worker
      shell: |
        sshpass -p "vagrant" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=30 vagrant@192.168.50.10 echo "SSH connection established"

    - name: Enable IP forwarding on worker nodes
      sysctl:
        name: net.ipv4.ip_forward
        value: "1"
        state: present
        reload: yes

    - name: Copy admin.conf from master node to worker
      shell: |
        sshpass -p "vagrant" scp -o StrictHostKeyChecking=no -o ConnectTimeout=30 vagrant@192.168.50.10:/home/vagrant/admin.conf /home/vagrant/admin.conf

    - name: Proxy API Server to localhost on worker
      shell: |
        kubectl --kubeconfig /home/vagrant/admin.conf proxy &

    - name: Debug join_command variable on worker
      debug:
        msg: "Join command is {{ kubeadm_join_command_for_worker }}"
      when: inventory_hostname in ['worker1', 'worker2']

    - name: Join Kubernetes cluster on worker
      shell: |
        export node_ip="{{ ansible_host }}" && {{ kubeadm_join_command_for_worker }}
      when: inventory_hostname in ['worker1', 'worker2']

    - name: Apply AWX and Squid deployments on worker1
      shell: |
        kubectl --kubeconfig=/home/vagrant/admin.conf apply -f /home/vagrant/k8s-manifests/awx-deployment.yml
        kubectl --kubeconfig=/home/vagrant/admin.conf apply -f /home/vagrant/k8s-manifests/squid-deployment.yml
      when: inventory_hostname == 'worker1'

    - name: Install Docker dependencies on worker2 (needed for Docker install)
      apt:
        name:
          - gnupg
          - lsb-release
        state: present
        update_cache: yes
      when: inventory_hostname == 'worker2'

    - name: Add Docker's official GPG key on worker2
      shell: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | tee /etc/apt/trusted.gpg.d/docker.asc
      when: inventory_hostname == 'worker2'

    - name: Add Docker APT repository on worker2
      shell: echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list
      when: inventory_hostname == 'worker2'

    - name: Install Docker Engine on worker2
      apt:
        name: docker-ce
        state: present
        update_cache: yes
      when: inventory_hostname == 'worker2'

    - name: Verify Docker installation on worker2
      shell: |
        docker --version
      when: inventory_hostname == 'worker2'

    - name: Install Docker Compose on worker2
      shell: |
        curl -SL https://github.com/docker/compose/releases/download/v2.32.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
        chmod +x /usr/local/bin/docker-compose
      when: inventory_hostname == 'worker2'

    - name: Verify Docker Compose installation on worker2
      shell: |
        docker-compose --version
      when: inventory_hostname == 'worker2'

    - name: Set up Docker Compose on worker2
      shell: |
        cd /home/vagrant/docker-compose
        docker-compose up -d
      when: inventory_hostname == 'worker2'
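
(Looking back at the playbook, the "Save the kubeadm join command for worker" task in the master play is the set_fact + delegate_to + delegate_facts pattern mentioned earlier, which is why the workers play can reference kubeadm_join_command_for_worker directly in its debug and join tasks without going through hostvars.)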

My PC is so underpowered that with debug logging enabled the run apparently exhausts memory and only gets partway through, so I reverted the Vagrantfile to run the Ansible Playbook without the debug log.

■D:\work-soft\vagrant\Vagrantfile

Vagrant.configure("2") do |config|

  # Guest OS for the virtual machines
  config.vm.box = "bento/ubuntu-24.04"
  config.vm.box_version = "202404.26.0"

  # Increase the boot timeout
  config.vm.boot_timeout = 900

  # common ssh-private-key
  config.ssh.insert_key = false
  config.ssh.private_key_path = "C:/Users/toshinobu/.vagrant.d/insecure_private_key"

  # Copy the private key into each guest
  config.vm.provision "file", source: "C:/Users/toshinobu/.vagrant.d/insecure_private_key", destination: "/home/vagrant/.ssh/id_rsa"
  config.vm.provision "shell", privileged: false, inline: <<-SHELL
    chmod 600 /home/vagrant/.ssh/id_rsa
  SHELL

  # Worker node 1 settings
  config.vm.define "worker1" do |worker1|

    # VM provider settings
    worker1.vm.provider "virtualbox" do |vb|
      vb.memory = "1024"
      vb.cpus = 1
    end

    worker1.vm.network "private_network", type: "static", ip: "192.168.50.11"
    worker1.vm.hostname = "worker1"

    # Synced folders (including permission settings)
    worker1.vm.synced_folder "./ansible", "/home/vagrant/ansible", type: "virtualbox", create: true, mount_options: ["dmode=775", "fmode=664"]
    worker1.vm.synced_folder "./k8s-manifests", "/home/vagrant/k8s-manifests", type: "virtualbox", create: true, mount_options: ["dmode=775", "fmode=664"]

#    # Receive the SSH key
#    worker1.vm.provision "shell", privileged: true, inline: <<-SHELL
#      mkdir -p /home/vagrant/.ssh
#      chmod 700 /home/vagrant/.ssh
#      [ ! -f /home/vagrant/.ssh/authorized_keys ] && touch /home/vagrant/.ssh/authorized_keys
#      chmod 600 /home/vagrant/.ssh/authorized_keys
#    SHELL
  end

  # Worker node 2 settings
  config.vm.define "worker2" do |worker2|

    # VM provider settings
    worker2.vm.provider "virtualbox" do |vb|
      vb.memory = "1024"
      vb.cpus = 1
    end

    worker2.vm.network "private_network", type: "static", ip: "192.168.50.12"
    worker2.vm.hostname = "worker2"

    # Synced folders (including permission settings)
    worker2.vm.synced_folder "./ansible", "/home/vagrant/ansible", type: "virtualbox", create: true, mount_options: ["dmode=775", "fmode=664"]
    worker2.vm.synced_folder "./k8s-manifests", "/home/vagrant/k8s-manifests", type: "virtualbox", create: true, mount_options: ["dmode=775", "fmode=664"]
    worker2.vm.provision "file", source: "./docker-compose.yml", destination: "/home/vagrant/docker-compose/docker-compose.yml"

#    # Receive the SSH key
#    worker2.vm.provision "shell", privileged: true, inline: <<-SHELL
#      mkdir -p /home/vagrant/.ssh
#      chmod 700 /home/vagrant/.ssh
#      [ ! -f /home/vagrant/.ssh/authorized_keys ] && touch /home/vagrant/.ssh/authorized_keys
#      chmod 600 /home/vagrant/.ssh/authorized_keys
#    SHELL
  end

  # Master node settings
  config.vm.define "master" do |master|

    # VM provider settings
    master.vm.provider "virtualbox" do |vb|
      vb.memory = "2048"
      vb.cpus = 2
    end

    master.vm.network "private_network", type: "static", ip: "192.168.50.10"
    master.vm.hostname = "master"

    # Synced folders (including permission settings)
    master.vm.synced_folder "./ansible", "/home/vagrant/ansible", type: "virtualbox", create: true, mount_options: ["dmode=775", "fmode=664"]
    master.vm.synced_folder "./k8s-manifests", "/home/vagrant/k8s-manifests", type: "virtualbox", create: true, mount_options: ["dmode=775", "fmode=664"]

    master.vm.provision "shell", privileged: true, inline: <<-SHELL
#      # Generate an SSH key for connecting to the Ansible target machines
#      ssh-keygen -t rsa -b 2048 -f /home/vagrant/.ssh/id_rsa -q -N ""

      # Ansible settings (disable host key checking)
      echo "[defaults]" > /home/vagrant/ansible/ansible.cfg
      echo "host_key_checking = False" >> /home/vagrant/ansible/ansible.cfg
#      echo "stdout_callback = debug" >> /home/vagrant/ansible/ansible.cfg
      sudo chmod -R 755 /home/vagrant/ansible

      sudo apt-get install -y sshpass ansible bash-completion

#      # Register the public key on the Ansible target machines
#      for ip in 192.168.50.11 192.168.50.12; do
#        sshpass -p "vagrant" ssh-copy-id -i /home/vagrant/.ssh/id_rsa.pub -o StrictHostKeyChecking=no vagrant@$ip
#      done

      cd /home/vagrant/ansible
      
      # Run the Ansible playbook
#      ansible-playbook -i inventory.ini k8s-setup.yml -u vagrant -e 'ansible_python_interpreter=/usr/bin/python3' -vvv | tee /home/vagrant/ansible/ansible-playbook.log
      ansible-playbook -i inventory.ini k8s-setup.yml -u vagrant -e 'ansible_python_interpreter=/usr/bin/python3'
    SHELL
  end
end
    

⇧ And with that, it looks like the Kubernetes environment got built.

Honestly, the official documentation, which is supposed to be the primary source, turned out to be of no help at all...

Also,

docs.ansible.com

⇧ I also don't really understand what the variables Ansible provides out of the box are for...

docs.ansible.com

⇧ and the relationship between the inventory.ini definitions and the playbook isn't really clear to me either...
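
My rough take, using the inventory.ini above, is that "hosts:" in a play simply selects an inventory group, and the built-in special variables (inventory_hostname, ansible_host, groups, hostvars and so on) are how the playbook looks up what the inventory defined. A throwaway sketch against that inventory, with the debug task only there for illustration:

- hosts: workers          # picks up worker1 and worker2 from [workers] in inventory.ini
  tasks:
    - name: Show a few of the built-in special variables
      debug:
        msg:
          - "I am {{ inventory_hostname }} at {{ ansible_host }}"
          - "The master group contains {{ groups['master'] }}"
          - "master1's ansible_host is {{ hostvars['master1']['ansible_host'] }}"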

I don't think this is limited to Ansible, but configuration definitions are way too much like black magic...

It feels inevitable that configuration problems cause outages so frequently...

Every time, the lingering sense of murkiness is off the charts...

That's all for this time.