
Friday, August 28, 2020

K8S on CentOS7 Part I

System Info

Name         | OS Version                           | Spec
master-node  | CentOS Linux release 7.8.2003 (Core) | 4 vCPU 16 GiB
worker-node1 | CentOS Linux release 7.8.2003 (Core) | 4 vCPU 16 GiB
worker-node2 | CentOS Linux release 7.8.2003 (Core) | 4 vCPU 16 GiB

Verify the MAC address and product_uuid are unique for every node

  1. Use ip link or ifconfig -a (a combined check across all nodes is sketched after this list):
ip link | awk '/link\/ether/ {print $2}'

or

ifconfig -a | awk '/ether/ {print $2}'
  2. Use sudo cat /sys/class/dmi/id/product_uuid
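A minimal sketch, assuming SSH access and the node names from the table above, to gather both identifiers from every node in one pass so duplicates are easy to spot:

for host in master-node worker-node1 worker-node2; do
  echo "== $host =="
  # MAC addresses of all interfaces on this node
  ssh "$host" "ip link | awk '/link\/ether/ {print \$2}'"
  # product_uuid of this node
  ssh "$host" "sudo cat /sys/class/dmi/id/product_uuid"
done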

Letting iptables see bridged traffic

  1. Check that the br_netfilter module is loaded:
lsmod | grep br_netfilter
# if not loaded, run this command to load the br_netfilter module
sudo modprobe br_netfilter
  2. As a requirement for your Linux node's iptables to correctly see bridged traffic, ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
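You can confirm the settings took effect (both values should print 1):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables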

Check required ports

  • Control-plane Node(s)

    Protocol | Direction | Port Range | Purpose                 | Used By
    TCP      | Inbound   | 6443       | Kubernetes API server   | All
    TCP      | Inbound   | 2379-2380  | etcd server client API  | kube-apiserver, etcd
    TCP      | Inbound   | 10250      | Kubelet API             | Self, Control plane
    TCP      | Inbound   | 10251      | kube-scheduler          | Self
    TCP      | Inbound   | 10252      | kube-controller-manager | Self
  • Worker Node(s)

    Protocol | Direction | Port Range  | Purpose            | Used By
    TCP      | Inbound   | 10250       | Kubelet API        | Self, Control plane
    TCP      | Inbound   | 30000-32767 | NodePort Services† | All

So let's set up firewalld to allow the required ports.

  • On Master Node(s)
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
  • On Worker Node(s)
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
  • Make the new settings persistent
firewall-cmd --runtime-to-permanent
  • Restart firewalld.service
systemctl restart firewalld.service
  • Check iptables rules
firewall-cmd --list-all

or

iptables -xvn -L
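A quick sanity loop, sketched with the master-node port list from the table above, to confirm each required port is now open:

for p in 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp; do
  # --query-port prints yes/no for the runtime configuration
  printf '%s: ' "$p"
  firewall-cmd --query-port="$p"
done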

Disable swap

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
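To verify swap is fully off:

# the Swap line in free should read all zeros
free -h | grep -i swap
# swapon -s prints no entries when swap is disabled
swapon -s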

Install runtime

Kubernetes uses a container runtime.

On Linux nodes, Kubernetes supports several container runtimes: Docker, containerd, and CRI-O.

If you don't specify a runtime, kubeadm automatically tries to detect an installed container runtime by scanning through a list of well known Unix domain sockets. The following table lists container runtimes and their associated socket paths:

Runtime    | Path to Unix domain socket
Docker     | /var/run/docker.sock
containerd | /run/containerd/containerd.sock
CRI-O      | /var/run/crio/crio.sock
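A small sketch mirroring that detection logic: probe each well-known socket path and report which runtimes are present:

for s in /var/run/docker.sock /run/containerd/containerd.sock /var/run/crio/crio.sock; do
  # -S is true only if the path exists and is a socket
  [ -S "$s" ] && echo "found runtime socket: $s"
done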

So let's install a container runtime; in this case I'll use Docker.

Install Docker CE

  1. Install required packages

    yum install -y yum-utils device-mapper-persistent-data lvm2
  2. Add the Docker repository

    yum-config-manager --add-repo \
      https://download.docker.com/linux/centos/docker-ce.repo
  3. Install Docker CE

    yum update -y && yum install -y \
      containerd.io-1.2.13 \
      docker-ce-19.03.11 \
      docker-ce-cli-19.03.11
  4. Create /etc/docker

    mkdir /etc/docker
  5. Set up the Docker daemon

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
    mkdir -p /etc/systemd/system/docker.service.d
  6. Restart Docker

    systemctl daemon-reload
    systemctl restart docker
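After the restart, you can confirm Docker picked up the systemd cgroup driver set in daemon.json:

    docker info 2>/dev/null | grep -i 'cgroup driver'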
  • Installing kubeadm, kubelet and kubectl

    We have to install the required packages below.

    1. kubeadm: the command to bootstrap the cluster.
    2. kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
    3. kubectl: the command line util to talk to your cluster.

kubeadm will not install or manage kubelet or kubectl for you, so you will need to ensure they match the version of the Kubernetes control plane you want kubeadm to install for you.

Install the Kubernetes repository

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
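To confirm the repo file parsed correctly:

yum repolist enabled | grep -i kubernetes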

Set SELinux to permissive mode

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
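You can verify the change; getenforce should now report Permissive:

getenforce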

Install kubelet kubeadm kubectl

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
  • Configure cgroup driver used by kubelet on control-plane node

When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet and set it in the /var/lib/kubelet/config.yaml file during runtime.
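Once kubeadm init has run (covered in Part II), you can check what was detected; a sketch assuming the default config path named above:

# expect: cgroupDriver: systemd (matching the Docker daemon.json earlier)
grep cgroupDriver /var/lib/kubelet/config.yaml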

Saturday, February 18, 2017

Using register to define two variables and consuming both in the same task

OS Environment

CentOS Linux release 7.3.1611 (Core)

Ansible Version

ansible 2.2.0.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides

task.yml

- name: Get Blob Name Path
  shell: az storage blob list -o table -c {{ BLOB_CONTAINER }} --account-name {{ backup_account }} --account-key {{ backup_key }} |  grep -Eo '{{restorebuildnumber.content}}.tar.gz|{{ restorebuildnumber.content }}/*/.*.gz'
  register: blobname

- name: Get Tar File Name
  shell: az storage blob list -o table -c {{ BLOB_CONTAINER }} --account-name {{ backup_account }} --account-key {{ backup_key }} |  grep -Eo '{{restorebuildnumber.content}}.tar.gz|{{ restorebuildnumber.content }}/*/.*.gz' | cut -d'/' -f3
  register: tarfilename

- name: Download {{ restorebuildnumber.content }}.tar.gz
  shell: az storage blob download -c {{ BLOB_CONTAINER }} -n {{ item.0 }} -f /tmp/{{restorebuildnumber.content}}/{{ item.1 }} --account-name {{ backup_account }} --account-key {{ backup_key }}
  with_together:
    -  "{{ blobname.stdout_lines }}"
    -  "{{ tarfilename.stdout_lines }}"
The key point here is that with_together gets the job done: it iterates over both registered lists in parallel, pairing the matching elements (see the sketch below).
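For intuition, with_together pairs the two lists element by element, much like paste(1); a minimal shell sketch with made-up file names:

# each output line holds one pair: blob path and its bare tar file name
paste <(printf '%s\n' 42/data/a.tar.gz 42/data/b.tar.gz) \
      <(printf '%s\n' a.tar.gz b.tar.gz)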

Using a set "difference" to exclude unwanted files

OS Environment

CentOS Linux release 7.3.1611 (Core)

Ansible Version

ansible 2.2.0.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides

Scenario

First fetch all the tar file names and register them as the variable AllTarFileName, then echo the file name that should be excluded and register it as TarName. When extracting the tar files, the difference filter can then exclude the unwanted file (see the sketch after the playbook).

task.yml

- name: Get All Tar Files Name
  shell: az storage blob list -o table -c {{ BLOB_CONTAINER }} --account-name {{ BACKUP_ACCOUNT }} --account-key {{ BACKUP_KEY }} |  grep -Eo '{{RestoreBuildNumber.content}}.tar.gz|{{ RestoreBuildNumber.content }}/*/.*.gz' | cut -d'/' -f3
  register: AllTarFileName

- name: Get {{RestoreBuildNumber.content}}.tar.gz Name
  shell: echo '{{ RestoreBuildNumber.content }}.tar.gz'
  register: TarName

- name: Unarchive {{RestoreBuildNumber.content}}.tar.gz To /tmp/{{RestoreBuildNumber.content}}/data/
  shell: tar -zxvf /tmp/{{RestoreBuildNumber.content}}/{{item}} -C /tmp/{{RestoreBuildNumber.content}}/data/
  with_items:
    - "{{TarName.stdout_lines }}"

- name: Unarchive All Archives Tar Files
  shell: tar -zxvf /tmp/{{RestoreBuildNumber.content}}/{{item}} -C /tmp/{{RestoreBuildNumber.content}}/backups/
  with_items:
    - "{{ AllTarFileName.stdout_lines | difference(TarName.stdout_lines) }}"
Thanks to Jim T. Tang of Ansible Taiwan for the suggestion.

Tuesday, February 7, 2017

uwsgi unix domain socket error


Nginx Error Message

2017/02/08 14:33:15 [crit] 31592#31592: *1 connect() to unix:///tmp/service.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.10, server: beta-hello.com, request: "GET /traffic_funnel HTTP/1.1", upstream: "uwsgi://unix:///tmp/service.sock:", host: "beta-hello.com"

Solution

Originally I put the socket under /tmp, but it turns out each service can see a different /tmp, so the socket only works properly when placed under /var/run or /run.

Original text

You can't place sockets intended for interprocess communication in /tmp.
For security reasons, recent versions of Fedora use namespaced temporary directories, meaning every service sees a completely different /tmp and can only see its own files in that directory.
To resolve the issue, place the socket in a different directory, such as /run (formerly known as /var/run).
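A minimal sketch of the fix, with the directory and app file names assumed: create a runtime directory outside /tmp and point both uwsgi and nginx at the same socket path:

mkdir -p /run/uwsgi
# start uwsgi with its socket under /run instead of /tmp (app.py is a placeholder)
uwsgi --socket /run/uwsgi/service.sock --wsgi-file app.py
# nginx side, in the matching location block:
#   uwsgi_pass unix:/run/uwsgi/service.sock;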

Reference

Thursday, January 12, 2017

CentOS 7.3 install azure-cli 2.0


Prerequisites: install required packages

yum check-update; sudo yum install -y gcc libffi-devel python-devel openssl-devel

azure-cli 2.0 installation process

[root@centos ~]# curl -L https://aka.ms/InstallAzureCli | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   167  100   167    0     0    283      0 --:--:-- --:--:-- --:--:--   283
100   788  100   788    0     0    498      0  0:00:01  0:00:01 --:--:--     0
Downloading Azure CLI install script from https://azurecliprod.blob.core.windows.net/install.py to /tmp/azure_cli_install_tmp_v4nl.
######################################################################## 100.0%
Running install script.
-- Verifying Python version.
-- Python version 2.7.5 okay.
-- Verifying native dependencies.
-- Executing: 'rpm -q gcc libffi-devel python-devel openssl-devel'
-- Native dependencies okay.

===> In what directory would you like to place the install? (leave blank to use '/root/lib/azure-cli'): /var/lib/azure-cli
-- Creating directory '/var/lib/azure-cli'.
-- We will install at '/var/lib/azure-cli'.

===> In what directory would you like to place the 'az' executable? (leave blank to use '/root/bin'): /bin
-- The executable will be in '/usr/bin'.
-- Executing: ['/bin/python', 'virtualenv.py', '--python', '/bin/python', '/var/lib/azure-cli']
Already using interpreter /bin/python
New python executable in /var/lib/azure-cli/bin/python
Installing setuptools, pip, wheel...done.
-- Executing: ['/var/lib/azure-cli/bin/pip', 'install', '--cache-dir', '/tmp/tmpGyrpzA', 'azure-cli', '--upgrade']
Collecting azure-cli
...(omitted)
...(omitted)
...(omitted)
===> Modify profile to update your $PATH and enable shell/tab completion now? (Y/n): Y

===> Enter a path to an rc file to update (leave blank to use '/root/.bashrc'): (press ENTER)
-- Backed up '/root/.bashrc' to '/root/.bashrc.backup'
-- Tab completion set up complete.
-- If tab completion is not activated, verify that '/root/.bashrc' is sourced by your shell.
--
-- ** WARNING: Other 'az' executables are on your $PATH. **
-- Conflicting paths: /bin/az
-- You can run this installation of the CLI with '/usr/bin/az'.
--
-- ** Run `exec -l $SHELL` to restart your shell. **
--
-- Installation successful.
-- Run the CLI with /usr/bin/az --help
After the installation completes, just run exec -l $SHELL.

Inspect cat ~/.bashrc

Two lines are appended at the end of your ~/.bashrc file:
export PATH=$PATH:/usr/bin

source '/var/lib/azure-cli/az.completion'
The source line above depends on the install directory you chose when the script asked In what directory would you like to place the install? (leave blank to use '/root/lib/azure-cli'):. If you changed it, check that the path you entered matches the path in the source line, otherwise completion won't work.
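A quick way to confirm the two paths agree (using the install directory chosen above):

grep az.completion ~/.bashrc
ls -l /var/lib/azure-cli/az.completion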

Usage

Now tab completion works XDD
[root@centos ~]# az
account     ad          component   context     feedback    --help      network     policy      resource    tag         vmss
acr         appservice  configure   --debug     group       login       -o          provider    role        --verbose
acs         cloud       container   feature     -h          logout      --output    --query     storage     vm


Sunday, August 7, 2016

Dynatrace Client with PHP-FPM on CentOS

Environment

CentOS release 6.5 (Final)
PHP 5.5.11

Required packages

Installing the Dynatrace Client requires a Java environment, so install Java first:
yum install -y java-1.7.0-openjdk
Note: on CentOS 7, java-1.8.0-openjdk also works.

Download Dynatrace Package

Download the Dynatrace Client packages; dynatrace-agent-6.2.0.1239-unix.jar and dynatrace-wsagent-6.2.0.1239-linux-x64.tar are all you need.

Setup

First, unpack these two archives under /opt:
$ tar -xvf dynatrace-wsagent-6.2.0.1239-linux-x64.tar -C /opt
$ mkdir -p /opt/dynatrace-6.2
$ java -jar dynatrace-agent-6.2.0.1239-unix.jar
-----------------------------------------------------------------------------
dynaTrace 6.2 Installer
-----------------------------------------------------------------------------
platform: Linux 2.6.32-431.29.2.el6.x86_64, amd64
-----------------------------------------------------------------------------
Installer is running with JVM version 1.7.0_65
-----------------------------------------------------------------------------
Detected OS/Arch: linux
-----------------------------------------------------------------------------
The product will be installed to /home/daniel_lin/dynatrace-6.2. Do you want to install to this directory? (Y/N)
N
Please set the new installation directory/path:
/opt/dynatrace-6.2
The product will be installed to /opt/dynatrace-6.2. Do you want to install to this directory? (Y/N)
Y
Installation directory '/opt/dynatrace-6.2' already exists.
Do you want to continue (Y/N)?
Y
Extracting: dynatrace-6.2/agent/downloads/
Extracting: dynatrace-6.2/agent/conf/
Extracting: dynatrace-6.2/agent/lib/
Extracting: dynatrace-6.2/agent/conf/
Extracting: dynatrace-6.2/agent/lib64/
Extracting: dynatrace-6.2/log/
Extracting: dynatrace-6.2/agent/conf/dthostagent.ini
Extracting: dynatrace-6.2/agent/lib/dthostagent
Extracting: dynatrace-6.2/agent/lib/libdtagent.lel
Extracting: dynatrace-6.2/agent/lib/libdtagent.so
Extracting: dynatrace-6.2/agent/lib/libdtagentcore.so
Extracting: dynatrace-6.2/agent/lib/libdtwsmbagent.so
Extracting: dynatrace-6.2/agent/conf/dthostagent.ini
Extracting: dynatrace-6.2/agent/lib64/dthostagent
Extracting: dynatrace-6.2/agent/lib64/dtzagent
Extracting: dynatrace-6.2/agent/lib64/libdtagent.lel
Extracting: dynatrace-6.2/agent/lib64/libdtagent.so
Extracting: dynatrace-6.2/agent/lib64/libdtagentcore.so
Extracting: dynatrace-6.2/agent/lib64/libdtwsmbagent.so
Extracting: dynatrace-6.2/init.d/
Extracting: dynatrace-6.2/init.d/dynaTraceHostagent
Extracting: dynatrace-6.2/init.d/dynaTraceWebServerAgent
Extracting: dynatrace-6.2/init.d/dynaTracezRemoteAgent
Making file '/opt/dynatrace-6.2/init.d/dynaTracezRemoteAgent' executable...
Making file '/opt/dynatrace-6.2/init.d/dynaTraceHostagent' executable...
Making file '/opt/dynatrace-6.2/init.d/dynaTraceWebServerAgent' executable...
Making file '/opt/dynatrace-6.2/agent/lib/dthostagent' executable...
Making file '/opt/dynatrace-6.2/agent/lib64/dthostagent' executable...
Making file '/opt/dynatrace-6.2/agent/lib64/dtzagent' executable...
Set read and write permissions on file '/opt/dynatrace-6.2/agent/lib' ...
Set read and write permissions on file '/opt/dynatrace-6.2/agent/lib64' ...
Set read and write permissions on file '/opt/dynatrace-6.2/agent/downloads' ...
Set read and write permissions on file '/opt/dynatrace-6.2/log' ...
Set read and write permissions on file '/opt/dynatrace-6.2/agent/conf' ...
Set read and write permissions on file '/opt/dynatrace-6.2/agent/conf/dthostagent.ini' ...
Installation finished successfully in 0s
Next, go to /opt and run dynatrace-wsagent-6.2.0.1239-linux-x64.sh, which was extracted from dynatrace-wsagent-6.2.0.1239-linux-x64.tar; running it mainly generates the configuration files.
Directory structure before running:
dynatrace-6.2
├── agent
│   ├── conf
│   │   └── dthostagent.ini
│   ├── downloads
│   ├── lib
│   │   ├── dthostagent
│   │   ├── libdtagentcore.so
│   │   ├── libdtagent.lel
│   │   ├── libdtagent.so
│   │   └── libdtwsmbagent.so
│   └── lib64
│       ├── dthostagent
│       ├── dtzagent
│       ├── libdtagentcore.so
│       ├── libdtagent.lel
│       ├── libdtagent.so
│       └── libdtwsmbagent.so
├── init.d
│   ├── dynaTraceHostagent
│   ├── dynaTraceWebServerAgent
│   └── dynaTracezRemoteAgent
└── log
$ cd /opt
$ sudo ./dynatrace-wsagent-6.2.0.1239-linux-x64.sh
dynatrace-6.2
├── agent
│   ├── conf
│   │   ├── dthostagent.ini
│   │   ├── dtnginx_offsets.json
│   │   ├── dtwsagent.ini
│   │   ├── dtwsagent.ini.template
│   │   └── dynaTraceWebServerSharedMemory
│   ├── downloads
│   ├── lib
│   │   ├── dthostagent
│   │   ├── libdtagentcore.so
│   │   ├── libdtagent.lel
│   │   ├── libdtagent.so
│   │   └── libdtwsmbagent.so
│   └── lib64
│       ├── dthostagent
│       ├── dtwsagent
│       ├── dtzagent
│       ├── libdtagentcore.so
│       ├── libdtagent.lel
│       ├── libdtagent.so
│       ├── libdtapacheagent20bo.so
│       ├── libdtapacheagent20lo.so
│       ├── libdtapacheagent22bo.so
│       ├── libdtapacheagent22lo.so
│       ├── libdtapacheagent24bo.so
│       ├── libdtapacheagent24lo.so
│       ├── libdtnginxagent.so
│       ├── libdtphpagent52.so
│       ├── libdtphpagent52_ts.so
│       ├── libdtphpagent53.so
│       ├── libdtphpagent53_ts.so
│       ├── libdtphpagent54.so
│       ├── libdtphpagent54_ts.so
│       ├── libdtphpagent55.so
│       ├── libdtphpagent55_ts.so
│       ├── libdtphpagent56.so
│       ├── libdtphpagent56_ts.so
│       ├── libdtwsagent.so
│       └── libdtwsmbagent.so
├── init.d
│   ├── dynaTraceHostagent
│   ├── dynaTraceWebServerAgent
│   └── dynaTracezRemoteAgent
└── log
Directory structure after running. Besides the configuration files, the required libraries have clearly been put in the corresponding paths. Since I'm testing the PHP part, the next step is to hook the dynatrace library into PHP: first confirm that dynatrace-6.2/agent/lib64/libdtagent.so exists, then register it under /etc/php.d as dynatrace.ini:
echo "extension=/opt/dynatrace-6.2/agent/lib64/libdtagent.so" > /etc/php.d/dynatrace.ini
Next, modify the configuration file so the dynatrace agent can send its data to the dynatrace collector:
sed 's/Name dtwsagent/Name New-name_TST/' -i /opt/dynatrace-6.2/agent/conf/dtwsagent.ini
sed 's/Server localhost/Server Dynatrace-collector-IP:9998/' -i /opt/dynatrace-6.2/agent/conf/dtwsagent.ini
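You can confirm both sed edits landed:

# expect the new agent name and the collector address
grep -E 'Name |Server ' /opt/dynatrace-6.2/agent/conf/dtwsagent.ini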
Confirm that the DT_HOME variable inside /opt/dynatrace-6.2/init.d/dynaTraceWebServerAgent points to your installation path (/opt/dynatrace-6.2); if not, change it.

Before restarting, confirm PHP has read access, so fix the ownership of /opt/dynatrace-6.2:
chown -R www-data:www-data /opt/dynatrace-6.2
Finally, start dynatrace first, then restart PHP:
/opt/dynatrace-6.2/init.d/dynaTraceWebServerAgent start
/etc/init.d/php-fpm restart
Stopping php-fpm:                                          [  OK  ]
Starting php-fpm: 2016-08-07 16:25:57 [d8f2e88d] info    [native] Loading collector peer list from /opt/dynatrace-6.2/agent/conf/collectorlist.New-name_TST
2016-08-07 16:25:57 [d8f2e88d] info    [native] 0 entries loaded
                                                           [  OK  ]
The last step is to use the Dynatrace Client to confirm the agent connected successfully.
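Before opening the Client, a couple of local sanity checks; the PHP module name here is an assumption, so adjust if yours differs:

# agent logs land under the install's log directory
ls /opt/dynatrace-6.2/log
# check whether PHP loaded the agent extension (module name assumed)
php -m | grep -i dtagent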