Monday, November 14, 2022

Nginx SSL X509 check private key values mismatch

 

SYMPTOMS

nginx[100845]: nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/letsencrypt/xxx.com/private.key") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch)

ROOT CAUSE

I tried to combine ca_bundle.crt and certificate.crt into a new combined certificate, but I mixed up the order.

  • wrong version
cat ca_bundle.crt certificate.crt > jfrog_certificate.crt
  • verify that the certificate and the private key match.
# openssl x509 -noout -modulus -in jfrog_certificate.crt | openssl md5
(stdin)= 4a3cdea116805b67e64ff1a29f2ae8ed
# openssl rsa -noout -modulus -in private.key  | openssl md5
(stdin)= afdb98a0c92f175c86af2d241adb215f

SOLUTION

Change the order and create the combined certificate.crt again.

  • correct version
cat certificate.crt ca_bundle.crt  > jfrog_certificate.crt
  • verify that the certificate and the private key match.
# openssl x509 -noout -modulus -in jfrog_certificate.crt | openssl md5
(stdin)= afdb98a0c92f175c86af2d241adb215f
# openssl rsa -noout -modulus -in private.key  | openssl md5
(stdin)= afdb98a0c92f175c86af2d241adb215f

P.S.

CA bundle is a file that contains root and intermediate certificates. The end-entity certificate along with a CA bundle constitutes the certificate chain.
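
Note that openssl x509 only reads the first certificate in a file, which is why the modulus check above fails when the CA bundle comes first. A quick way to confirm the leaf certificate sits first in the combined file (same filename as above):

# openssl x509 -noout -subject -issuer -in jfrog_certificate.crt

The subject printed should be your own domain rather than the CA, since only the first certificate in the file is parsed.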


Monday, March 7, 2022

Running Docker on macOS M1 Issue

macOS Version - sw_vers

ProductName:    macOS
ProductVersion:    12.2.1
BuildVersion:    21D62

macOS Detailed Version - system_profiler SPSoftwareDataType

Software:

    System Software Overview:

      System Version: macOS 12.2.1 (21D62)

      Kernel Version: Darwin 21.3.0

      Boot Volume: Macintosh HD

      Boot Mode: Normal

      Computer Name: ooo's MacBook Pro

      User Name: ooo (Nobody)

      Secure Virtual Memory: Enabled

      System Integrity Protection: Enabled

      Time since boot: 1 day 13:57

Issue: qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory

You just need to add --platform=linux/amd64 to the FROM line in your Dockerfile:

FROM --platform=linux/amd64 ubuntu:20.04
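
Alternatively, you can leave the Dockerfile unchanged and pass the platform at build time instead; a sketch assuming Docker 19.03+ with BuildKit enabled, where my-image is just a placeholder tag:

docker build --platform linux/amd64 -t my-image .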

Thanks to my colleague - Alan Chen.

Friday, August 28, 2020

K8S on CentOS7 Part I

System Info

Name           OS Version                             Spec
master-node    CentOS Linux release 7.8.2003 (Core)   4 vCPU 16 GiB
worker-node1   CentOS Linux release 7.8.2003 (Core)   4 vCPU 16 GiB
worker-node2   CentOS Linux release 7.8.2003 (Core)   4 vCPU 16 GiB

Verify the MAC address and product_uuid are unique for every node

  1. use ip link or ifconfig -a
ip link | awk '/link\/ether/ {print $2}'

or

ifconfig -a | awk '/ether/ {print $2}'
  2. use sudo cat /sys/class/dmi/id/product_uuid
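
To gather both values from every node in one pass, a small loop like this sketch works (the hostnames come from the table above, and SSH access with passwordless sudo is assumed):

for host in master-node worker-node1 worker-node2; do
  echo "== $host =="
  ssh "$host" "ip link | awk '/link\/ether/ {print \$2}'; sudo cat /sys/class/dmi/id/product_uuid"
done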

Letting iptables see bridged traffic

  1. Check that the br_netfilter module is loaded.
lsmod | grep br_netfilter
# if not loaded, run this command to load the br_netfilter module
sudo modprobe br_netfilter
  2. As a requirement for your Linux node's iptables to correctly see bridged traffic, ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
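
Confirm the settings took effect:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# both should print "= 1"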

Check required ports

  • Control-plane Node(s)

    Protocol   Direction   Port Range   Purpose                    Used By
    TCP        Inbound     6443         Kubernetes API server      All
    TCP        Inbound     2379-2380    etcd server client API     kube-apiserver, etcd
    TCP        Inbound     10250        Kubelet API                Self, Control plane
    TCP        Inbound     10251        kube-scheduler             Self
    TCP        Inbound     10252        kube-controller-manager    Self
  • Worker Node(s)

    Protocol   Direction   Port Range    Purpose             Used By
    TCP        Inbound     10250         Kubelet API         Self, Control plane
    TCP        Inbound     30000-32767   NodePort Services†  All

So let's set up firewalld to allow the required ports

  • On Master Node(s)
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
  • On Worker Node(s)
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
  • Make the new settings persistent
firewall-cmd --runtime-to-permanent
  • Restart firewalld.service
systemctl restart firewalld.service
  • Check iptables rules
firewall-cmd --list-all

or

iptables -xvn -L
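
You can also query a single port to confirm the rule is active, for example on the master node:

firewall-cmd --query-port=6443/tcp
# prints "yes" when the port is open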

Disable swap

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
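
Verify that swap is really off:

swapon -s
# prints no entries when swap is disabled
free -h | grep -i swap
# the Swap line should show 0B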

Install runtime

Kubernetes uses a container runtime.

On Linux nodes, Kubernetes supports several container runtimes: Docker, containerd, and CRI-O.

If you don't specify a runtime, kubeadm automatically tries to detect an installed container runtime by scanning through a list of well known Unix domain sockets. The following table lists container runtimes and their associated socket paths:

Runtime      Path to Unix domain socket
Docker       /var/run/docker.sock
containerd   /run/containerd/containerd.sock
CRI-O        /var/run/crio/crio.sock
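
A quick sanity check for which runtime socket actually exists on a node:

ls -l /var/run/docker.sock /run/containerd/containerd.sock /var/run/crio/crio.sock 2>/dev/null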

So let's install a container runtime; in this case I'll use Docker.

Install Docker CE

  1. Install required packages

    yum install -y yum-utils device-mapper-persistent-data lvm2
  2. Add the Docker repository

    yum-config-manager --add-repo \
      https://download.docker.com/linux/centos/docker-ce.repo
  3. Install Docker CE

    yum update -y && yum install -y \
      containerd.io-1.2.13 \
      docker-ce-19.03.11 \
      docker-ce-cli-19.03.11
  4. Create /etc/docker

    mkdir /etc/docker
  5. Set up the Docker daemon

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
    mkdir -p /etc/systemd/system/docker.service.d
  6. Restart Docker

    systemctl daemon-reload
    systemctl restart docker
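
After the restart, confirm Docker picked up the systemd cgroup driver from daemon.json:

docker info | grep -i 'cgroup driver'
# should show: Cgroup Driver: systemd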
  • Installing kubeadm, kubelet and kubectl

    We have to install the required packages below.

    1. kubeadm: the command to bootstrap the cluster.
    2. kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
    3. kubectl: the command line util to talk to your cluster.

kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you.

Install Kubernetes repository

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Set SELinux to permissive mode

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
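
Confirm the change:

getenforce
# should print "Permissive"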

Install kubelet kubeadm kubectl

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
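
As mentioned above, the kubelet and kubectl versions should match the control plane, so it is worth checking what was installed:

kubeadm version -o short
kubectl version --client --short
kubelet --version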
  • Configure cgroup driver used by kubelet on control-plane node

When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet and set it in the /var/lib/kubelet/config.yaml file during runtime.

Thursday, August 29, 2019

Packer build image on AWS - Part III

Packer integration with Ansible

Prerequisites

Don't forget to install Ansible
  • Install pip
# apt-get install python-pip
  • Install ansible
# pip install ansible==2.8.3
  • Check ansible version
# ansible --version
ansible 2.8.3

Create Ansible Directory

Packer
├── ISO_Ubuntu_Server_xenial_16.04.6.json
├── ansible
│   ├── all.yml
│   ├── ansible.cfg
│   └── roles
│       └── install-offical-nginx
│           ├── README.md
│           ├── defaults
│           │   └── main.yml
│           ├── files
│           ├── handlers
│           │   └── main.yml
│           ├── meta
│           │   └── main.yml
│           ├── tasks
│           │   └── main.yml
│           ├── templates
│           ├── tests
│           │   ├── inventory
│           │   └── test.yml
│           └── vars
│               └── main.yml
├── http
│   └── preseed.cfg
└── output

13 directories, 12 files
  • Check ansible.cfg file
[defaults]
inventory      = ./inventory
roles_path      = ./roles
host_key_checking = False
retry_files_enabled = False
gathering = smart

fact_caching = jsonfile
fact_caching_connection = ./cache
fact_caching_timeout = 60

#strategy = mitogen_linear
#strategy_plugins = ./mitogen/ansible_mitogen/plugins/strategy/

callback_whitelist = profile_tasks
[callback_profile_tasks]
task_output_limit = 60


[privilege_escalation]
become=True
become_user=root
  • Check all.yml file
---
- hosts: all
  become: True
  roles:
  - install-offical-nginx
  • Check ISO_Ubuntu_Server_xenial_16.04.6.json file
{
  "variables":{
  "vm_description": "Ubuntu Server Image",
  "vm_version": "0.0.1",
  "cpus": "2",
  "memory": "2048",
  "disk_size": "8192",
  "vm_name": "ubuntu",
  "iso_url": "http://ftp.ubuntu-tw.org/mirror/ubuntu-releases/16.04.6/ubuntu-16.04.6-server-amd64.iso",
  "iso_checksum": "16afb1375372c57471ea5e29803a89a5a6bd1f6aabea2e5e34ac1ab7eb9786ac",
  "iso_checksum_type": "sha256",
  "ssh_username": "ubuntu",
  "ssh_password": "ubuntu",
  "s3_bucket_name": "packer-images"
  },
  "provisioners": [
    {
      "type": "ansible",
      "user": "ubuntu",
      "playbook_file": "./ansible/all.yml"
    }
  ],
  "builders": [
    {
      "type": "virtualbox-iso",
      "output_directory": "builds",
      "format": "ova",
      "guest_os_type": "Ubuntu_64",
      "iso_url": "{{user `iso_url`}}",
      "iso_checksum": "{{user `iso_checksum`}}",
      "iso_checksum_type": "{{user `iso_checksum_type`}}",
      "ssh_username": "{{user `ssh_username`}}",
      "ssh_password": "{{user `ssh_password`}}",
      "ssh_port": 22,
      "ssh_wait_timeout": "2000s",
      "disk_size": "{{user `disk_size`}}",
      "keep_registered": "true",
      "shutdown_command": "echo {{user `ssh_password`}} | sudo -S shutdown -P now",
      "vboxmanage": [
        ["modifyvm", "{{.Name}}", "--cpus", "{{user `cpus`}}"],
        ["modifyvm", "{{.Name}}", "--memory", "{{user `memory`}}"]
      ],
      "http_directory": "./http/",
      "boot_wait": "10s",
      "boot_command": [
        "<enter><wait><f6><esc><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
        " /install/vmlinuz<wait>",
        " noapic<wait>",
        " preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg auto<wait>",
        " locale=en_US<wait>",
        " kbd-chooser/method=us<wait>",
        " keyboard-configuration/modelcode=pc105<wait>",
        " keyboard-configuration/layout=US<wait>",
        " keyboard-configuration/variant=US<wait>",
        " netcfg/get_hostname=ubuntu<wait>",
        " fb=false <wait>",
        " debconf/frontend=noninteractive<wait>",
        " console-setup/ask_detect=false<wait>",
        " initrd=/install/initrd.gz -- <wait>",
        "<enter><wait>"
      ]
    }
  ],
  "post-processors": [
    {
      "type": "amazon-import",
      "keep_input_artifact": true,
      "s3_bucket_name": "{{user `s3_bucket_name`}}",
      "ami_name": "ubuntu-16.04.6",
      "license_type": "BYOL",
      "tags": {
        "Description": "Packer Import "
      }
    }
  ]
}

Check that ISO_Ubuntu_Server_xenial_16.04.6.json is valid via Packer

  • Check that the file is valid via the Packer CLI
# packer validate ISO_Ubuntu_Server_xenial_16.04.6.json
Template validated successfully.
You can build right now.
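
With the template validated, the build itself is a single command; this runs the virtualbox-iso builder and the amazon-import post-processor defined above:

# packer build ISO_Ubuntu_Server_xenial_16.04.6.json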

Wednesday, August 28, 2019

Collection Packer Build Error Messages

Q1

Error Messages
Build 'virtualbox-iso' errored: Output directory exists: output-virtualbox-iso
Ans: Use the force flag to delete it prior to building.
Solution
When building images, just add the -force parameter, like the command below
# packer build -force template.json

Q2

Error Messages
==> Builds finished but no artifacts were created.
Build 'virtualbox-iso' errored: Error deleting port forwarding rule: VBoxManage error: VBoxManage: error: The machine 'packer-virtualbox-iso-1567040405' is already locked for a session (or being unlocked)
2019/08/29 09:07:01 [INFO] (telemetry) Finalizing.
VBoxManage: error: Details: code VBOX_E_INVALID_OBJECT_STATE (0x80bb0007), component MachineWrap, interface IMachine, callee nsISupports
VBoxManage: error: Context: "LockMachine(a->session, LockType_Write)" at line 531 of file VBoxManageModifyVM.cpp
Solution
Add the parameter below to your builders config
  "builders": [
    {
      (...)
      "post_shutdown_delay": "180s",
      "shutdown_command": "echo {{user `ssh_password`}} | sudo -S shutdown -P now",
      (...)
    }
  ]