Tuesday, November 8, 2016

Python library BeautifulSoup4

OS: openSUSE Leap 42.1 (x86_64)
# sudo pip install beautifulsoup4

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://pythonscraping.com/pages/page1.html")
bsObj = BeautifulSoup(html.read())
print(bsObj.h1)
Running the program above produces a warning like the following:
/usr/lib/python3.4/site-packages/bs4/__init__.py:181: UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently

The code that caused this warning is on line 5 of the file myScrap.py. To get rid of this warning, change code that looks like this:

 BeautifulSoup([your markup])

to this:

 BeautifulSoup([your markup], "html.parser")

  markup_type=markup_type))
Following the warning's suggestion, the code becomes:
from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://pythonscraping.com/pages/page1.html")
bsObj = BeautifulSoup(html, 'html.parser')
print(bsObj.h1)
This converts the HTML content into a BeautifulSoup object and then prints the h1 tag from the page.
bsObj.h1 can also be written as
bsObj.html.body.h1
bsObj.html.h1
bsObj.body.h1
All of these return the same result.
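As a self-contained sketch of the same navigation (using an inline HTML string in place of the fetched page, so it runs offline):

```python
from bs4 import BeautifulSoup

# Inline HTML standing in for the page fetched with urlopen().
html_doc = "<html><body><h1>An Interesting Title</h1></body></html>"

# Naming "html.parser" explicitly avoids the UserWarning and keeps the
# behaviour identical across systems.
soup = BeautifulSoup(html_doc, "html.parser")

# These all reach the same <h1> tag.
print(soup.h1)            # <h1>An Interesting Title</h1>
print(soup.body.h1)       # <h1>An Interesting Title</h1>
print(soup.html.body.h1)  # <h1>An Interesting Title</h1>
```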

Sunday, August 21, 2016

MySQL MMM Architecture Setup

Environment

Hostname        IP             server-id   OS
DB1             192.168.1.10   1           Ubuntu 14.04.3 LTS
DB2             192.168.1.11   2           Ubuntu 14.04.3 LTS
DB3 (monitor)   192.168.1.12   3           Ubuntu 14.04.3 LTS
Prepare three machines for database high availability and read/write splitting; the plan is to build a MySQL MMM architecture.

Locale Setting

Edit ~/.bashrc and append the following line:
export LC_ALL="en_US.UTF-8"
Then run:
locale-gen zh_TW.UTF-8
Finally run:
dpkg-reconfigure locales

Install Packages & Setting

Install MySQL on DB1, DB2, and DB3 with the following commands:
sudo apt-get update -y
sudo apt-get install mysql-server-5.6
Next, edit /etc/mysql/my.cnf on DB1, DB2, and DB3: comment out bind-address = 127.0.0.1, and uncomment server-id and log_bin:
server-id = 1   # the server-id must differ across db1, db2, and db3
log_bin = /var/log/mysql/mysql-bin.log
To enable the query log, uncomment general_log_file and general_log (and set general_log to 1):
general_log_file        = /var/log/mysql/mysql.log
general_log             = 1
To enable the slow query log, add these two lines:
slow_query_log  = /var/log/mysql/mysql-slow.log
long_query_time = 5
The log_slow_queries directive in the original config is wrong; checking the error log shows the message below. Changing it to slow_query_log and restarting the service fixes it.
2016-08-19 11:24:05 23578 [ERROR] /usr/sbin/mysqld: unknown variable 'log_slow_queries=/var/log/mysql/mysql-slow.log'
In my setup, only DB1 has the query log and slow log enabled; the other two keep the defaults. With the configuration done, the next step is to set up database privileges. First, log in to DB1 and DB2 individually and check the current master status:
mysql> show master status;
+------------------+----------+
| File             | Position | 
+------------------+----------+
| mysql-bin.000001 |  120     |
+------------------+----------+
Use the File and Position values above to configure replication between DB1 and DB2, as follows.
DB1 (replicates with DB2 in both directions):
Create an account named replication for 192.168.1.11, used for replication:
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.1.11' IDENTIFIED BY 'rep';
Create an account named mmm_monitor so the monitor can watch DB1:
GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.1.12' IDENTIFIED BY 'monitor'; 
Create an account named mmm_agent (% is the SQL wildcard):
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.1.%'   IDENTIFIED BY 'agent';   
Point the master at 192.168.1.11:
CHANGE MASTER TO master_host='192.168.1.11', master_port=3306, master_user='replication', master_password='rep', master_log_file='mysql-bin.000001', master_log_pos=120;
Reload the privileges:
FLUSH PRIVILEGES;
DB2 (replicates with DB1 in both directions):
Create an account named replication for 192.168.1.10, used for replication:
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.1.10' IDENTIFIED BY 'rep';
Create an account named mmm_monitor so the monitor can watch DB2:
GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.1.12' IDENTIFIED BY 'monitor'; 
Create an account named mmm_agent (% is the SQL wildcard):
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.1.%'   IDENTIFIED BY 'agent';   
Point the master at 192.168.1.10:
CHANGE MASTER TO master_host='192.168.1.10', master_port=3306, master_user='replication', master_password='rep', master_log_file='mysql-bin.000001', master_log_pos=120;
Reload the privileges:
FLUSH PRIVILEGES;
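The File/Position pair from SHOW MASTER STATUS has to be copied into the CHANGE MASTER TO statement by hand; a small hypothetical helper (the function name and defaults are mine, the SQL syntax is the one used above) can build the statement instead:

```python
# Hypothetical helper: given the File/Position values read from
# SHOW MASTER STATUS on the peer, build the matching CHANGE MASTER TO
# statement so the two values are not copied by hand.
def change_master_sql(host, log_file, log_pos,
                      user="replication", password="rep", port=3306):
    return (
        "CHANGE MASTER TO "
        f"master_host='{host}', master_port={port}, "
        f"master_user='{user}', master_password='{password}', "
        f"master_log_file='{log_file}', master_log_pos={log_pos};"
    )

# On DB2, pointing at DB1's binlog coordinates:
print(change_master_sql("192.168.1.10", "mysql-bin.000001", 120))
```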
Next, configure DB3 as a slave of DB2. Note that besides the grants below, DB3 also needs a CHANGE MASTER TO statement pointing at 192.168.1.11, analogous to the ones above.
DB3 (slave of DB2):
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.1.11' IDENTIFIED BY 'rep';
GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.1.12' IDENTIFIED BY 'monitor'; 
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.1.%'   IDENTIFIED BY 'agent';
After all three hosts are done, log in to mysql on DB1, DB2, and DB3 and run:
mysql> START SLAVE;
Verify that replication is healthy; if both Slave_IO_Running and Slave_SQL_Running show Yes, it is working:
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.11
                  Master_User: replication
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 120
               Relay_Log_File: mysqld-relay-bin.000017
                Relay_Log_Pos: 236
        Relay_Master_Log_File: mysql-bin.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
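To check this without eyeballing the output, the \G listing can be parsed mechanically. This is a sketch (the slave_ok helper is mine; the field names are MySQL's):

```python
# Parse the vertical "\G" output of SHOW SLAVE STATUS into a dict of
# "Field: Value" pairs and confirm both replication threads report Yes.
def slave_ok(status_text):
    fields = {}
    for line in status_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return (fields.get("Slave_IO_Running") == "Yes"
            and fields.get("Slave_SQL_Running") == "Yes")

sample = """\
               Slave_IO_State: Waiting for master to send event
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
"""
print(slave_ok(sample))  # True
```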

Install MMM Packages

DB1 and DB2 both need these two packages:
sudo apt-get install mysql-mmm-common mysql-mmm-agent
DB3 additionally needs the monitoring package:
aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl libclass-singleton-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl
sudo apt-get install mysql-mmm-common mysql-mmm-agent mysql-mmm-monitor
Next, edit /etc/default/mysql-mmm-agent (on DB1, DB2, and DB3):
# Change to one to enable MMM agent
ENABLED=1
On DB3, also edit /etc/default/mysql-mmm-monitor:
# Change to one to enable MMM monitor
ENABLED=1
Then edit /etc/mysql-mmm/mmm_agent.conf, adjusting the host name per machine (on DB1, DB2, and DB3):
include mmm_common.conf
this db1
Next, edit /etc/mysql-mmm/mmm_common.conf (identical on DB1, DB2, and DB3):
active_master_role      writer

<host default>
        cluster_interface               eth0

        pid_path                        /var/run/mmm_agentd.pid
        bin_path                        /usr/lib/mysql-mmm/

        replication_user                replication
        replication_password            rep

        agent_user                      mmm_agent
        agent_password                  agent
</host>

<host db1>
        ip                                      192.168.1.10
        mode                                    master
        peer                                    db2
</host>

<host db2>
        ip                                      192.168.1.11
        mode                                    master
        peer                                    db1
</host>

<host db3>
        ip                                      192.168.1.12
        mode                                    slave
</host>

<role writer>
        hosts                                   db1
        ips                                             192.168.1.10
        mode                                    exclusive
</role>

<role reader>
        hosts                                   db2, db3
        ips                                             192.168.1.11, 192.168.1.12
        mode                                    balanced
</role>
Once that is done, adjust the file permissions:
chmod -R 600 /etc/mysql-mmm/*
Now the services can be started:
sudo /etc/init.d/mysql-mmm-agent start    # run on DB1, DB2, and DB3
Then, starting the monitor on DB3 fails with the following error:
Use of uninitialized value $old_state in string ne at /usr/share/perl5/MMM/Monitor/Agent.pm line 42.
The workaround is to edit /usr/share/perl5/MMM/Monitor/Agent.pm and insert the following line at line 41:
if (!defined($old_state)) { $old_state = 'certainly not new_state'; }
Then start it again:
/etc/init.d/mysql-mmm-monitor start
You can now check the current MMM status from the monitor node with:
mmm_control show
  db1(192.168.1.10) master/ONLINE. Roles: writer(192.168.1.10)
  db2(192.168.1.11) master/ONLINE. Roles: reader(192.168.1.11)
  db3(192.168.1.12) slave/ONLINE. Roles: reader(192.168.1.12)
If a host is stuck in an unknown state, mmm_control set_online db1 brings it back to online.
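A sketch of how the mmm_control show output above could be parsed for a watchdog script (the regex and function are mine; the line format is taken from the output shown):

```python
import re

# Parse `mmm_control show` output into {host: {ip, state, roles}} so a
# cron job could alert on anything that is not ONLINE.
LINE = re.compile(r"^\s*(\S+)\((\S+)\)\s+(\S+?)\.\s+Roles:\s*(.*)$")

def parse_mmm_show(text):
    hosts = {}
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            name, ip, state, roles = m.groups()
            hosts[name] = {"ip": ip, "state": state, "roles": roles}
    return hosts

out = """\
  db1(192.168.1.10) master/ONLINE. Roles: writer(192.168.1.10)
  db2(192.168.1.11) master/ONLINE. Roles: reader(192.168.1.11)
  db3(192.168.1.12) slave/ONLINE. Roles: reader(192.168.1.12)
"""
status = parse_mmm_show(out)
# Hosts whose state does not end in ONLINE would need attention.
print([h for h, v in status.items() if not v["state"].endswith("ONLINE")])  # []
```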

Sunday, August 7, 2016

Dynatrace Client with PHP-FPM on CentOS

Environment

CentOS release 6.5 (Final)
PHP 5.5.11

Required Packages

The Dynatrace Client requires a Java environment, so install it first:
yum install -y java-1.7.0-openjdk
Note: on CentOS 7, java-1.8.0-openjdk also works.

Download Dynatrace Package

Download the Dynatrace Client packages; dynatrace-agent-6.2.0.1239-unix.jar and dynatrace-wsagent-6.2.0.1239-linux-x64.tar are all you need.

Setting

First, extract the two archives under /opt:
$ tar -xvf dynatrace-wsagent-6.2.0.1239-linux-x64.tar -C /opt
$ mkdir -p /opt/dynatrace-6.2
$ java -jar dynatrace-agent-6.2.0.1239-unix.jar
-----------------------------------------------------------------------------
dynaTrace 6.2 Installer
-----------------------------------------------------------------------------
platform: Linux 2.6.32-431.29.2.el6.x86_64, amd64
-----------------------------------------------------------------------------
Installer is running with JVM version 1.7.0_65
-----------------------------------------------------------------------------
Detected OS/Arch: linux
-----------------------------------------------------------------------------
The product will be installed to /home/daniel_lin/dynatrace-6.2. Do you want to install to this directory? (Y/N)
N
Please set the new installation directory/path:
/opt/dynatrace-6.2
The product will be installed to /opt/dynatrace-6.2. Do you want to install to this directory? (Y/N)
Y
Installation directory '/opt/dynatrace-6.2' already exists.
Do you want to continue (Y/N)?
Y
Extracting: dynatrace-6.2/agent/downloads/
Extracting: dynatrace-6.2/agent/conf/
Extracting: dynatrace-6.2/agent/lib/
Extracting: dynatrace-6.2/agent/conf/
Extracting: dynatrace-6.2/agent/lib64/
Extracting: dynatrace-6.2/log/
Extracting: dynatrace-6.2/agent/conf/dthostagent.ini
Extracting: dynatrace-6.2/agent/lib/dthostagent
Extracting: dynatrace-6.2/agent/lib/libdtagent.lel
Extracting: dynatrace-6.2/agent/lib/libdtagent.so
Extracting: dynatrace-6.2/agent/lib/libdtagentcore.so
Extracting: dynatrace-6.2/agent/lib/libdtwsmbagent.so
Extracting: dynatrace-6.2/agent/conf/dthostagent.ini
Extracting: dynatrace-6.2/agent/lib64/dthostagent
Extracting: dynatrace-6.2/agent/lib64/dtzagent
Extracting: dynatrace-6.2/agent/lib64/libdtagent.lel
Extracting: dynatrace-6.2/agent/lib64/libdtagent.so
Extracting: dynatrace-6.2/agent/lib64/libdtagentcore.so
Extracting: dynatrace-6.2/agent/lib64/libdtwsmbagent.so
Extracting: dynatrace-6.2/init.d/
Extracting: dynatrace-6.2/init.d/dynaTraceHostagent
Extracting: dynatrace-6.2/init.d/dynaTraceWebServerAgent
Extracting: dynatrace-6.2/init.d/dynaTracezRemoteAgent
Making file '/opt/dynatrace-6.2/init.d/dynaTracezRemoteAgent' executable...
Making file '/opt/dynatrace-6.2/init.d/dynaTraceHostagent' executable...
Making file '/opt/dynatrace-6.2/init.d/dynaTraceWebServerAgent' executable...
Making file '/opt/dynatrace-6.2/agent/lib/dthostagent' executable...
Making file '/opt/dynatrace-6.2/agent/lib64/dthostagent' executable...
Making file '/opt/dynatrace-6.2/agent/lib64/dtzagent' executable...
Set read and write permissions on file '/opt/dynatrace-6.2/agent/lib' ...
Set read and write permissions on file '/opt/dynatrace-6.2/agent/lib64' ...
Set read and write permissions on file '/opt/dynatrace-6.2/agent/downloads' ...
Set read and write permissions on file '/opt/dynatrace-6.2/log' ...
Set read and write permissions on file '/opt/dynatrace-6.2/agent/conf' ...
Set read and write permissions on file '/opt/dynatrace-6.2/agent/conf/dthostagent.ini' ...
Installation finished successfully in 0s
Next, go to /opt and run dynatrace-wsagent-6.2.0.1239-linux-x64.sh (extracted from dynatrace-wsagent-6.2.0.1239-linux-x64.tar); its main purpose is to generate the configuration files.
Directory structure before running it:
dynatrace-6.2
├── agent
│   ├── conf
│   │   └── dthostagent.ini
│   ├── downloads
│   ├── lib
│   │   ├── dthostagent
│   │   ├── libdtagentcore.so
│   │   ├── libdtagent.lel
│   │   ├── libdtagent.so
│   │   └── libdtwsmbagent.so
│   └── lib64
│       ├── dthostagent
│       ├── dtzagent
│       ├── libdtagentcore.so
│       ├── libdtagent.lel
│       ├── libdtagent.so
│       └── libdtwsmbagent.so
├── init.d
│   ├── dynaTraceHostagent
│   ├── dynaTraceWebServerAgent
│   └── dynaTracezRemoteAgent
└── log
$ cd /opt
$ sudo ./dynatrace-wsagent-6.2.0.1239-linux-x64.sh
dynatrace-6.2
├── agent
│   ├── conf
│   │   ├── dthostagent.ini
│   │   ├── dtnginx_offsets.json
│   │   ├── dtwsagent.ini
│   │   ├── dtwsagent.ini.template
│   │   └── dynaTraceWebServerSharedMemory
│   ├── downloads
│   ├── lib
│   │   ├── dthostagent
│   │   ├── libdtagentcore.so
│   │   ├── libdtagent.lel
│   │   ├── libdtagent.so
│   │   └── libdtwsmbagent.so
│   └── lib64
│       ├── dthostagent
│       ├── dtwsagent
│       ├── dtzagent
│       ├── libdtagentcore.so
│       ├── libdtagent.lel
│       ├── libdtagent.so
│       ├── libdtapacheagent20bo.so
│       ├── libdtapacheagent20lo.so
│       ├── libdtapacheagent22bo.so
│       ├── libdtapacheagent22lo.so
│       ├── libdtapacheagent24bo.so
│       ├── libdtapacheagent24lo.so
│       ├── libdtnginxagent.so
│       ├── libdtphpagent52.so
│       ├── libdtphpagent52_ts.so
│       ├── libdtphpagent53.so
│       ├── libdtphpagent53_ts.so
│       ├── libdtphpagent54.so
│       ├── libdtphpagent54_ts.so
│       ├── libdtphpagent55.so
│       ├── libdtphpagent55_ts.so
│       ├── libdtphpagent56.so
│       ├── libdtphpagent56_ts.so
│       ├── libdtwsagent.so
│       └── libdtwsmbagent.so
├── init.d
│   ├── dynaTraceHostagent
│   ├── dynaTraceWebServerAgent
│   └── dynaTracezRemoteAgent
└── log
The directory structure after execution shows that, besides the config files, the required libraries were also placed under their corresponding paths. Since I want to instrument PHP, the next step is to hook the dynatrace library into PHP: first confirm that dynatrace-6.2/agent/lib64/libdtagent.so exists, then create /etc/php.d/dynatrace.ini pointing at it:
echo "extension=/opt/dynatrace-6.2/agent/lib64/libdtagent.so" > /etc/php.d/dynatrace.ini
Then modify the config so the dynatrace agent can send data to the dynatrace collector:
sed 's/Name dtwsagent/Name New-name_TST/' -i /opt/dynatrace-6.2/agent/conf/dtwsagent.ini
sed 's/Server localhost/Server Dynatrace-collector-IP:9998/' -i /opt/dynatrace-6.2/agent/conf/dtwsagent.ini
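If sed is not convenient, the same two substitutions can be sketched in Python (patch_wsagent_ini is a hypothetical helper; the Name/Server keys and the New-name_TST / Dynatrace-collector-IP:9998 placeholders are the ones from the sed commands above):

```python
# Apply the same two edits as the sed commands: set the agent name and
# point the agent at the collector, given the ini file's text.
def patch_wsagent_ini(text, agent_name, collector):
    text = text.replace("Name dtwsagent", f"Name {agent_name}")
    text = text.replace("Server localhost", f"Server {collector}")
    return text

ini = "Name dtwsagent\nServer localhost\n"
print(patch_wsagent_ini(ini, "New-name_TST", "Dynatrace-collector-IP:9998"))
# Name New-name_TST
# Server Dynatrace-collector-IP:9998
```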
Verify that the DT_HOME variable in /opt/dynatrace-6.2/init.d/dynaTraceWebServerAgent points to your installation path (/opt/dynatrace-6.2); if not, update it.

Before restarting, make sure PHP can read the files, so adjust the ownership of /opt/dynatrace-6.2:
chown -R www-data:www-data /opt/dynatrace-6.2
Finally, start dynatrace first, then restart PHP:
/opt/dynatrace-6.2/init.d/dynaTraceWebServerAgent start
/etc/init.d/php-fpm restart
Stopping php-fpm:                                          [  OK  ]
Starting php-fpm: 2016-08-07 16:25:57 [d8f2e88d] info    [native] Loading collector peer list from /opt/dynatrace-6.2/agent/conf/collectorlist.New-name_TST
2016-08-07 16:25:57 [d8f2e88d] info    [native] 0 entries loaded
                                                           [  OK  ]
The last step is to use the Dynatrace Client to verify that the agent connected successfully.

Tuesday, June 14, 2016

Auto deployment with gitlab

Purpose

The goal is to trigger a repo update on a remote server whenever a git push happens.

Environment

Local machine: OS X 10.11.5
Remote machine: CentOS Linux release 7.2.1511 (Core)

Required tools:

Git-Auto-Deploy

Setting

Git-Auto-Deploy is not available from any yum repo, so, following the author's recommendation, install it by cloning the repository.
Remote machine:
git clone https://github.com/olipo186/Git-Auto-Deploy.git
cd Git-Auto-Deploy
Install the dependencies with pip:
sudo pip install -r requirements.txt
Copy the sample config and edit it; make sure the pidfilepath location is writable:
cp config.json.sample config.json
The available options are listed below; configure them as needed.
Command line option   Environment variable   Config attribute   Description
--daemon-mode (-d)    GAD_DAEMON_MODE                           Run in background (daemon mode)
--quiet (-q)          GAD_QUIET                                 Suppress console output
--config (-c)         GAD_CONFIG                                Custom configuration file
--pid-file            GAD_PID_FILE           pidfilepath        Specify a custom pid file
--log-file            GAD_LOG_FILE           logfilepath        Specify a log file
--host                GAD_HOST               host               Address to bind to
--port                GAD_PORT               port               Port to bind to

Sample config

The config file is written in JSON. (The # comments below are annotations for this post only; strict JSON does not allow comments, so drop them from the real file.)
{
  "pidfilepath": "~/.gitautodeploy.pid",  # defaults to the home directory; customizable
  "logfilepath": "~/gitautodeploy.log",   # defaults to the home directory; customizable, must be writable
  "host": "0.0.0.0",                      # IP to bind to (the listen IP)
  "port": 8001,                           # port to listen on
  # Messages emitted for every repo; each entry is a command or an executable script
  # [0] = The pre-deploy script.
  # [1] = The post-deploy script.
  "global_deploy": [
    "echo Deploy started!",
    "echo Deploy completed!"
  ],
  # Defaults to NOTSET (all detail); INFO is recommended
  "log-level": "INFO",
  # Repositories to configure
  "repositories": [
    {
      # URL of the git repository
      "url": "http://xxx.xxx.xxx/9527.git",
      # Branch to check out, usually master
      "branch": "master",
      # Remote name; check with git remote
      "remote": "origin",
      # Deploy path; cloned automatically on first run. If this directory is
      # deleted, updates stop working and only the deploy script runs;
      # re-running the program fixes it.
      "path": "/path/to/repo",
      # Command to run; if path is set, only a pull is performed
      "deploy": "echo deploying"
    }
  ]
}
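Since the annotated sample above is not valid strict JSON, a small sketch can strip the # comments before loading it for a sanity check (load_annotated_config is my own helper; it naively assumes no "#" appears inside string values):

```python
import json
import re

# Strip full-line and trailing "#" comments, then parse as JSON.
def load_annotated_config(text):
    cleaned = "\n".join(re.sub(r"\s*#.*$", "", line)
                        for line in text.splitlines())
    return json.loads(cleaned)

sample = """
{
  "host": "0.0.0.0",   # listen address
  "port": 8001,
  "repositories": [
    {"url": "http://xxx.xxx.xxx/9527.git", "branch": "master"}
  ]
}
"""
cfg = load_annotated_config(sample)
print(cfg["port"], len(cfg["repositories"]))  # 8001 1
```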
There are two more options I have not used yet: filters and secret-token.

Starting Git-Auto-Deploy

There are three ways to start Git-Auto-Deploy:
  • command-line
python gitautodeploy --config config.json
  • crontab
@reboot /usr/bin/python /path/to/Git-Auto-Deploy/gitautodeploy --daemon-mode --quiet --config /path/to/git-auto-deploy.conf.json
  • daemon
cp platforms/linux/initfiles/systemd/git-auto-deploy.service /etc/systemd/system
Edit git-auto-deploy.service; here is my template:
[Unit]
Description=GitAutoDeploy

[Service]
User=root
Group=root
WorkingDirectory=/opt/Git-Auto-Deploy/
ExecStart=/usr/bin/python /opt/Git-Auto-Deploy/gitautodeploy --daemon-mode --config config.json

[Install]
WantedBy=multi-user.target
Then copy Git-Auto-Deploy to /opt and fix the ownership:
cp -Rp ~/Git-Auto-Deploy  /opt
chown -R root:root /opt/Git-Auto-Deploy
Reload the systemd daemon and start the service:
systemctl daemon-reload
systemctl start git-auto-deploy
systemctl enable git-auto-deploy

Gitlab webhook

  1. Go to your repository -> Settings -> Web hooks
  2. In “URL”, enter your hostname and port (your-host:8001)
  3. Hit “Add Web Hook”