Friday, November 22, 2019

Installing and Using MegaCLI on CentOS 8

The notes below are based on information found via Google. Because the tool links against libncurses.so.5, at first it looked usable only on CentOS 7; the fix appears further down.

Step 1: Verify Your Hardware RAID Controller

Run the following command to get information about the RAID controller:
# lspci | grep -i raid
1a:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS-3 3108 [Invader] (rev 02)

Step 2: Download MegaCLI 

Download page:
Support Documents and Downloads

Click [Expand All], then find and download the latest version of MegaCLI.
After unzipping, go to the Linux directory and run the install, then create an alias (and a symlink) for convenience:
# yum localinstall MegaCli-8.07.14-1.noarch.rpm
# alias megacli='/opt/MegaRAID/MegaCli/MegaCli64'
# ln -sf /opt/MegaRAID/MegaCli/MegaCli64 /usr/bin/megacli
# megacli

/opt/MegaRAID/MegaCli/MegaCli64: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory

# ls /usr/lib64/libncur*
/usr/lib64/libncurses.so.6    /usr/lib64/libncursesw.so.6
/usr/lib64/libncurses.so.6.1  /usr/lib64/libncursesw.so.6.1

As a reader pointed out, installing ncurses-compat-libs with the following command resolves the error above:
# dnf install ncurses-compat-libs

Running it via Docker

So, install Docker.
Reference: How to install Docker CE on RHEL 8 / CentOS 8


Due to a containerd.io version issue, only an older version can be installed:
$ sudo dnf install docker-ce-3:18.09.1-3.el7
However, when updating, the docker-ce packages must be excluded, and podman* conflicts and must be excluded as well:
$ sudo yum update --exclude=docker* --exclude=podman*

Then install docker-compose:
$ curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o docker-compose
Add execute permission to it, then move it into /usr/local/bin.

Then megacli can be used:
$ docker run --rm -ti --privileged kamermans/docker-megacli
      MegaCLI SAS RAID Management Tool  Ver 8.07.14 Dec 16, 2013
      Storage Command Line Tool  Ver 1.03.11 Jan 30, 2013
[root@6873acd250e5 megacli]# megacli -PDList -aALL -Nolog|grep '^Firm'
Firmware state: JBOD
Firmware state: Online, Spun Up
Firmware state: Online, Spun Up
[root@6873acd250e5 megacli]#
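The "Firmware state" lines are handy for scripted health checks; a small sketch (check_states is a hypothetical helper; the sample input is the output captured above):

```shell
# Count "Firmware state" lines that are neither Online nor JBOD.
# With real hardware: megacli -PDList -aALL -Nolog | check_states
check_states() {
  awk '/^Firmware state/ && !/Online/ && !/JBOD/ { n++ } END { print n+0 }'
}

# Against the output captured above:
printf '%s\n' \
  'Firmware state: JBOD' \
  'Firmware state: Online, Spun Up' \
  'Firmware state: Online, Spun Up' | check_states   # prints 0
```

A nonzero count means at least one drive is in some other state (Failed, Unconfigured(bad), rebuilding, etc.) and worth a closer look.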
That's it.

For further usage notes, see LSI MegaRAID SAS.


Tuesday, November 12, 2019

Docker Fails to Start

I hadn't used Docker on my own machine for quite a while. Today, wanting to work on a system, I found Docker would no longer start.
From systemctl status docker.service alone,
I really couldn't tell where the problem was.

I found the page below; the command it gives shows the detailed log:
https://forum.manjaro.org/t/docker-service-cant-start-solved/93410/3

sudo journalctl --no-hostname --no-pager -b -u docker.service

There were a great many messages. I worked through them slowly until the line below jumped out:
11月 13 15:24:36 dockerd[21959]: Error starting daemon: Devices cgroup isn't mounted

Googling that message turned up this page; it seems to be related to systemd.
https://github.com/docker/cli/issues/2104

Found the "bug"...
I forgot to mention in my previous comment that I use(d) systemd version 243.
With systemd 242 works everything flawlessly... :)
From the systemd changelog:
        * systemd now defaults to the "unified" cgroup hierarchy setup during
          build-time, i.e. -Ddefault-hierarchy=unified is now the build-time
          default. Previously, -Ddefault-hierarchy=hybrid was the default. This
          change reflects the fact that cgroupsv2 support has matured
          substantially in both systemd and in the kernel, and is clearly the
          way forward. Downstream production distributions might want to
          continue to use -Ddefault-hierarchy=hybrid (or even =legacy) for
          their builds as unfortunately the popular container managers have not
          caught up with the kernel API changes.
Sooo... Houston, we have a problem:
  1. systemd will (or already did) jump on the cgroupsv2 bandwagon...
  2. cgroupfs-mount tools does not work with newer systemd setups.
"Same" issue in kubernetes

Checking the Gentoo Docker wiki turned up a note about systemd: add the USE flag cgroup-hybrid. After adding it, re-emerging systemd, and rebooting, everything worked.

https://wiki.gentoo.org/wiki/Docker#systemd

Docker service fails because cgroup device not mounted (systemd)

By default systemd uses hybrid cgroup hierarchy combining cgroup and cgroup2 devices. Docker still needs cgroup(v1) devices. Activate USE flag cgroup-hybrid for systemd.
Activate USE flag for systemd
FILE /etc/portage/package.use/systemd
sys-apps/systemd cgroup-hybrid
Install systemd with the new USE flags
root #emerge --ask --oneshot sys-apps/systemd
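To see which hierarchy a machine is actually running, a quick heuristic check (the mapping from filesystem type to mode is an assumption based on the systemd changelog quoted above):

```shell
# Heuristic: cgroup2fs mounted at /sys/fs/cgroup means "unified";
# tmpfs means hybrid or legacy (v1 controllers mounted in subdirectories).
classify_cgroup() {
  case "$1" in
    cgroup2fs) echo "unified (cgroupsv2 only)" ;;
    tmpfs)     echo "hybrid or legacy (cgroupsv1 available)" ;;
    *)         echo "unknown" ;;
  esac
}

classify_cgroup "$(stat -fc %T /sys/fs/cgroup 2>/dev/null)"
```

On a box where this prints "unified (cgroupsv2 only)", an old dockerd will fail exactly as above until the hierarchy is switched back.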



Thursday, November 7, 2019

PostgreSQL Maintenance


SELECT pg_size_pretty( pg_database_size('dspace_getcdb_tst') );
pg_size_pretty: "2112 MB"


SELECT pg_size_pretty( pg_total_relation_size('bitstream') );
pg_size_pretty: "57 MB"
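pg_size_pretty turns a raw byte count into a human-readable figure; the same idea can be sketched in shell (pretty_size is a hypothetical helper that approximates, not reproduces, PostgreSQL's exact rounding):

```shell
# Approximate pg_size_pretty: divide by 1024 until under the next unit.
pretty_size() {
  awk -v b="$1" 'BEGIN {
    split("bytes kB MB GB TB", u, " ")
    i = 1
    while (b >= 1024 && i < 5) { b /= 1024; i++ }
    printf "%d %s\n", b, u[i]
  }'
}

pretty_size 2214592512   # prints "2112 MB", matching the figure above
```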



vacuum verbose analyze metadatavalue


Before VACUUM:
metadatavalue
Table Size 320 MB
Toast Table Size 9480 kB
Indexes Size 376 MB

Run the VACUUM command:
vacuum full verbose metadatavalue
INFO:  vacuuming "public.metadatavalue"
INFO:  "metadatavalue": found 1256 removable, 11800 nonremovable row versions in 40949 pages
DETAIL:  0 dead row versions cannot be removed yet.
CPU 0.35s/0.15u sec elapsed 0.50 sec.
Query returned successfully with no result in 693 ms.

After VACUUM:
metadatavalue
Table Size 2768 kB
Toast Table Size 7840 kB
Indexes Size 1120 kB


Friday, October 25, 2019

Hartford 雲豹 200: Replacing the Air Filter

After the engine was upgraded to the 4V255, the original air filter stayed in; the shop thought the foam still looked fairly clean, so it wasn't replaced.

After riding 2,000-odd kilometers, I wanted to try replacing the filter foam. Online, a piece costs only NT$50, so it can be swapped without a second thought. I opened the box up myself and it looked easy, so I bought one at the shop; pricier than online, at NT$90 a piece. The shop said it's the same part as Kymco's 金勇 and other bikes. Comparing sizes, it was quite a bit larger, but very soft, so it stuffs in easily. In my hurry I didn't even screw it down very straight, which didn't matter.

With the new foam, the throttle feels noticeably smoother. Afterwards I rode to Hualien and Taitung for 5 days and 4 nights, over 800 km; average fuel economy rose from 26 km/L to 30 km/L. Waiting at traffic lights, the engine feels slightly hotter, perhaps because the mixture is a bit leaner and combustion a bit more complete.

The right side cover removed: there are 5 screws, and the lower 2 also hold the metal plate for the cover's lower mounting point.

Inside the air filter box: the foam is held by 2 metal frames locked together with a single screw. In front of the foam sits a metal mesh that keeps a torn foam from being sucked into the engine.


The new foam is larger than the original. Because I borrowed tools at the parts shop and worked in a hurry, it went in crooked. It sticks out quite a bit top and bottom, but once stuffed in it wedges in place and doesn't move. The foam comes soaked in oil; my hands were covered in it, and I got oil all over my phone taking pictures.

After the change, each liter of fuel goes 3-4 km further, from 26 km/L up to 30 km/L.

Since the air filter had such a noticeable effect on fuel economy and power, I got curious whether a high-flow filter would do even better. Thanks to lobbying by other owners, Simota makes a drop-in high-flow filter for the 雲豹, so I bought one online, along with a cleaning kit to have on hand.

The packaging is labeled for the Hartford 雲豹 125/120/200.

The back of the package explains how to clean it. Going by the experience above, it has to be kept clean to work well.

The side of the package.

Front and angled views of the filter.


Installed in the air filter box. No mounting direction is indicated, so I faced the protruding side toward the incoming air. Being designed specifically for the 雲豹 series, it fits exactly.

I kept the metal mesh installed.

As for results, only after 3 or 4 more fill-ups will the fuel economy be reasonably certain.

Update, 2020-05-18

Acceleration "feels" slightly quicker; fuel economy shows no clear difference, perhaps even slightly fewer kilometers per liter. Conclusion: money spent for little benefit. Or maybe I just stopped noticing after a while. Now even the thought of having to clean it is a small annoyance; wouldn't buying a new piece of foam be simpler?

Wednesday, October 2, 2019

Exploring HLS Servers

Recently I came across a setup using Nginx + vod_module to build an HLS server. Its advertised feature:
Feature: On-the-fly repackaging of MP4 files to DASH, HDS, HLS, MSS

In other words, for an existing mp4 file there is no need to cut it into ts segments and generate an m3u8 playlist with ffmpeg yourself; with this module, everything is generated automatically.
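For reference, a minimal nginx location for this module might look like the following (a sketch based on the nginx-vod-module documentation; the alias path is an assumption):

```nginx
location /hls/ {
    # Repackage local MP4 files into HLS playlists and segments on the fly
    vod hls;
    vod_mode local;
    alias /var/media/;   # directory containing the .mp4 files (assumed)
}
```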

My work maintaining an OpenCourseWare site requires a media server, but the difference between an HLS server and serving mp4 files directly was never quite clear to me. Setting up an HLS server was a chance to investigate the difference.

Reading the mp4 file directly: the access_log shows that it mainly relies on HTTP range requests.
---------
101.12.44.84 - - [03/Oct/2019:03:26:52] "GET /vod/099S103/099S103_AA04V01.mp4 HTTP/1.1" 206 413699931 "http://ocw.ntu.edu.tw/ntu-ocw/preview?fn=099S103_AA04V01.mp4" "Mozilla/5.0 (iPhone; CPU iPhone OS 12_4 like Mac OS X)"
------------

mpv http://10.161.81.158:3030/hls/099S103/099S103_AA04V01.mp4/index.m3u8
-----------
web_1  | 10.161.86.117 - [03/Oct/2019:03:47:27] "HEAD /hls/099S103/099S103_AA04V01.mp4/index.m3u8 HTTP/1.1" 200 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
web_1  | 10.161.86.117 - [03/Oct/2019:03:47:27] "GET /hls/099S103/099S103_AA04V01.mp4/index.m3u8 HTTP/1.1" 200 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
web_1  | 10.161.86.117 - [03/Oct/2019:03:47:27] "GET /hls/099S103/099S103_AA04V01.mp4/index.m3u8 HTTP/1.1" 206 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
web_1  | 10.161.86.117 - [03/Oct/2019:03:47:27] "GET /hls/099S103/099S103_AA04V01.mp4/index.m3u8 HTTP/1.1" 206 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
web_1  | 10.161.86.117 - [03/Oct/2019:03:47:28] "GET /hls/099S103/099S103_AA04V01.mp4/segment-1-v1-a1.ts HTTP/1.1" 200 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
web_1  | 10.161.86.117 - [03/Oct/2019:03:47:28] "GET /hls/099S103/099S103_AA04V01.mp4/segment-2-v1-a1.ts HTTP/1.1" 200 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
web_1  | 10.161.86.117 - [03/Oct/2019:03:47:28] "GET /hls/099S103/099S103_AA04V01.mp4/segment-3-v1-a1.ts HTTP/1.1" 200 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
### time jumps ahead
web_1  | 2019/10/03 03:47:29 [info] 25#25: *30 client 10.161.86.117 closed keepalive connection (104: Connection reset by peer)
web_1  | 2019/10/03 03:47:29 [info] 27#27: *29 client 10.161.86.117 closed keepalive connection (104: Connection reset by peer)
web_1  | 10.161.86.117 - [03/Oct/2019:03:47:29] "GET /hls/099S103/099S103_AA04V01.mp4/segment-688-v1-a1.ts HTTP/1.1" 200 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
web_1  | 2019/10/03 03:47:29 [info] 27#27: *31 client 10.161.86.117 closed keepalive connection (104: Connection reset by peer)
web_1  | 2019/10/03 03:47:29 [info] 24#24: *32 client prematurely closed connection (104: Connection reset by peer) while processing frames, client: 10.161.86.117, server: localhost, request: "GET /hls/099S103/099S103_AA04V01.mp4/segment-689-v1-a1.ts HTTP/1.1", host: "10.161.81.158:3030"
web_1  | 10.161.86.117 - [03/Oct/2019:03:47:29] "GET /hls/099S103/099S103_AA04V01.mp4/segment-689-v1-a1.ts HTTP/1.1" 200 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
web_1  | 10.161.86.117 - [03/Oct/2019:03:47:29] "GET /hls/099S103/099S103_AA04V01.mp4/segment-688-v1-a1.ts HTTP/1.1" 200 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
web_1  | 10.161.86.117 - [03/Oct/2019:03:47:29] "GET /hls/099S103/099S103_AA04V01.mp4/segment-689-v1-a1.ts HTTP/1.1" 200 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
web_1  | 10.161.86.117 - [03/Oct/2019:03:47:29] "GET /hls/099S103/099S103_AA04V01.mp4/segment-690-v1-a1.ts HTTP/1.1" 200 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
### playback ends
web_1  | 2019/10/03 03:47:31 [info] 27#27: *33 client 10.161.86.117 closed keepalive connection (104: Connection reset by peer)
web_1  | 2019/10/03 03:47:31 [info] 28#28: *34 client 10.161.86.117 closed keepalive connection (104: Connection reset by peer)

------------

The comparison above shows the difference. Serving the mp4 directly uses a single connection, fetching data at the required offsets via range requests. That connection stays open, and some applications do not read at playback speed but pull the whole file at once, hogging bandwidth. With HLS, the player first buffers about a minute of data, then fetches the next segment only after consuming one, e.g. one request every 10 seconds, so network traffic is lower.

A 10-second ts segment of a 360p video is probably under 1 MB, with one file fetched about every 10 seconds.
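At that size, the average rate a viewer consumes is easy to estimate (figures assumed from the 1 MB / 10 s estimate above):

```shell
# Average bitrate for 1 MB of data every 10 seconds:
# 1 MB = 8 * 1024 kbit, spread over 10 s
echo "$(( 8 * 1024 / 10 )) kbit/s"   # prints "819 kbit/s"
```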

Monday, September 30, 2019

Installing HP Server Management Tools on CentOS 7

References:
Linux Debian - HP Smart Array Raid Controller
Guide to installing HP System Management Tools CentOS 7

Check the server's information:
# dmidecode | grep -A3 '^System Information'
System Information
        Manufacturer: HP
        Product Name: ProLiant DL380p Gen8
        Version: Not Specified 

1. Adding HP YUM Repositories.
HP provides a script, 'add_repo.sh', but it requires redhat-lsb. To avoid installing extra packages, the repos can be added manually.

Simply add a file /etc/yum.repos.d/hp.repo and populate as follows...
[HP-spp]
name=HP Service Pack for ProLiant
baseurl=http://downloads.linux.hpe.com/SDR/repo/spp/RHEL/7.2/x86_64/current/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/GPG-KEY-spp

[HP-mcp]
name=HP Management Component Pack for ProLiant
baseurl=http://downloads.linux.hpe.com/SDR/repo/mcp/centos/7.3/x86_64/current/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/GPG-KEY-mcp
Download copies of the gpg keys from http://downloads.linux.hpe.com/SDR/repo/spp/GPG-KEY-spp and http://downloads.linux.hpe.com/SDR/repo/mcp/GPG-KEY-mcp
Place them in /etc/pki/rpm-gpg/
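The manual repo setup above can be scripted; a sketch (write_hp_repo is a hypothetical helper, URLs taken from the repo file above; run against /etc/yum.repos.d as root on the real machine):

```shell
# Write the two HP repo definitions into the given directory.
write_hp_repo() {
  cat > "$1/hp.repo" <<'EOF'
[HP-spp]
name=HP Service Pack for ProLiant
baseurl=http://downloads.linux.hpe.com/SDR/repo/spp/RHEL/7.2/x86_64/current/
enabled=1
gpgcheck=0

[HP-mcp]
name=HP Management Component Pack for ProLiant
baseurl=http://downloads.linux.hpe.com/SDR/repo/mcp/centos/7.3/x86_64/current/
enabled=1
gpgcheck=0
EOF
}

# write_hp_repo /etc/yum.repos.d
```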

2. Installing Tools
Now you have the repositories configured simply install the packages using yum.
sudo yum install hp-health hpssacli hp-snmp-agents hpssa hpssacli hp-smh-templates hpsmh hponcfg
This will give you the core set of tools.

Then run hpssacli, which gives the following:

HP Smart Storage Administrator CLI 3.10.3.0
    Detecting Controllers…Done.
    Type “help” for a list of supported commands.
    Type “exit” to close the console.
    =>
There are a few commands you can use on this CLI:
Show all config :
=> ctrl all show config
Smart Array P410 in Slot 1                (sn: PTCCRID92560K55)
   Port Name: 1I
   Port Name: 2I
   DL18xG6BP        at Port 1I, Box 1, OK
   Array A (SATA, Unused Space: 0  MB)
      logicaldrive 1 (5.5 TB, RAID 1+0, OK)
      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA HDD, 3 TB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA HDD, 3 TB, OK)
      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA HDD, 3 TB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA HDD, 3 TB, OK)
   Enclosure SEP (Vendor ID HP, Model DL18xG6BP) 248  (WWID: 5001438018357BC3, Port: 1I, Box: 1)
   Expander 250  (WWID: 5001438018357BB0, Port: 1I, Box: 1)
   SEP (Vendor ID PMCSIERA, Model  SRC 8x6G) 249  (WWID: 500143801890BFDF)
=>
Show Status:
=> ctrl all show status
Smart Array P410 in Slot 4
Controller Status: OK
Cache Status: OK
Show all logical drives:
=> ctrl slot=1 ld all show
Smart Array P410 in Slot 1
 Array A
logicaldrive 1 (5.5 TB, RAID 1+0, OK)
We have many other commands to explore.
So, if you want you can explore more possibilities, typing:
ssacli -help
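Status output in that form is easy to monitor from a script; a sketch that flags any status line not reporting OK (bad_status is a hypothetical helper; feed it real data with `hpssacli ctrl all show status`):

```shell
# Print any "... Status:" line whose value is not OK; exit nonzero if found.
bad_status() {
  awk -F': ' '/Status:/ && $2 != "OK" { print; n++ } END { exit (n > 0) }'
}

printf '%s\n' \
  'Controller Status: OK' \
  'Cache Status: OK' | bad_status && echo 'all OK'   # prints "all OK"
```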

That's roughly it.




Tuesday, September 24, 2019

Vivaldi on Gentoo Linux Can't Play Media Files

I have used the Vivaldi browser on Gentoo Linux for a long time, and it has never been able to play media files. I tried to fix it, seemingly succeeded once, but it broke again for reasons unknown, and I let it be, switching to another browser such as Chrome to watch videos.

When installing media-video/ffmpeg, you can set the USE flag chromium, described as follows:
chromium : Builds libffmpeg.so to enable media playback in Chromium-based browsers like Opera and Vivaldi.

It was set, but even after reinstalling media-video/ffmpeg and www-client/vivaldi, playback still failed.

Later I noticed an overlay package, www-plugins/vivaldi-ffmpeg-codecs (Additional proprietary codecs for vivaldi). Its approach is to download Ubuntu's chromium-codecs-ffmpeg-extra_75.0.3770.90-0ubuntu0.18.04.1_amd64.deb, unpack it, and copy /usr/lib/chromium-browser/libffmpeg.so into the /opt/vivaldi directory.

Checking my installation: Vivaldi has a libffmpeg.so under /opt/vivaldi/lib, and ffmpeg provides one at /usr/lib64/chromium/libffmpeg.so. Copying the ffmpeg-built libffmpeg.so over Vivaldi's original file made playback work.
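The copy itself is one line, but keeping a backup makes it reversible; a sketch (swap_libffmpeg is a hypothetical helper; the commented invocation uses the paths found above):

```shell
# Replace Vivaldi's bundled libffmpeg.so with the one built by ffmpeg,
# keeping a .orig backup so the change can be undone.
swap_libffmpeg() {
  src=$1   # e.g. /usr/lib64/chromium/libffmpeg.so
  dst=$2   # e.g. /opt/vivaldi/lib/libffmpeg.so
  cp -p "$dst" "$dst.orig" && cp -p "$src" "$dst"
}

# swap_libffmpeg /usr/lib64/chromium/libffmpeg.so /opt/vivaldi/lib/libffmpeg.so
```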

So the answer was that simple, after being broken for so long.

Monday, September 9, 2019

Setting the PhpStorm Runtime (VM)

It started because PhpStorm wouldn't let me choose a font, which led me to learn how to configure the VM it runs on.
 
phpstorm error

I installed the phpstorm-2019.2.2_rc1 overlay from the net.
After launch the fonts looked odd, but the font could not be changed.


Running phpstorm in a terminal produced the errors below, which later turned out to be unrelated to the problem:
$ phpstorm
OpenGL pipeline enabled for default config on screen 0
Error parsing gtk-icon-sizes string: ''
2019-09-15 13:18:40,857 [   5463]   WARN - s.impl.EditorColorsManagerImpl - Cannot find scheme: VibrantInk from plugin: com.intellij.database 
2019-09-15 13:18:40,857 [   5463]   WARN - s.impl.EditorColorsManagerImpl - Cannot find scheme: WarmNeon from plugin: com.intellij.database 
2019-09-15 13:18:40,857 [   5463]   WARN - s.impl.EditorColorsManagerImpl - Cannot find scheme: High сontrast from plugin: com.intellij.database 
2019-09-15 13:25:23,487 [ 408093]   WARN - com.intellij.util.xmlb.Binding - no accessors for class org.jetbrains.idea.perforce.perforce.ConnectionId 

Advice found online suggested replacing the Java Runtime. The old version's About dialog lists a JRE and a JVM separately; the JVM is the bundled OpenJDK.



The new version's About shows only a single VM, and it uses the system JDK.


=======================
The approach below no longer applies in 2019.3; use the Choose Runtime plugin instead.
Reference: Selecting the JDK version the IDE will run under
============================
On the JetBrains Runtime [download page], find the build you need, e.g. jbr-11_0_4-linux-x64-b480.2.tar.gz, which corresponds to jbrsdk11-linux-x64/480.2.
Extract it into a directory of your choice, e.g. /opt/jetbrains-jbr-11.0.4.480.2.

Then use [Help] -> [Find Action], find the [Switch Boot JDK] action, and select the directory holding the jbr.




Open About to confirm.




After the change:
$ phpstorm
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
OpenGL pipeline enabled for default config on screen 0
2019-09-15 13:33:48,143 [   6832]   WARN - s.impl.EditorColorsManagerImpl - Cannot find scheme: VibrantInk from plugin: com.intellij.database 
2019-09-15 13:33:48,143 [   6832]   WARN - s.impl.EditorColorsManagerImpl - Cannot find scheme: WarmNeon from plugin: com.intellij.database 
2019-09-15 13:33:48,143 [   6832]   WARN - s.impl.EditorColorsManagerImpl - Cannot find scheme: High сontrast from plugin: com.intellij.database

Fonts can now be selected.

Postscript

Actually, with the original setup, the font list would appear anyway after waiting a bit longer. After switching the jbr, the font menu likewise takes a moment to show up.

Update for 2020.3

After updating to PhpStorm 2020.3, it would not start at all. I eventually solved it; the rough steps were:

$ export PHPSTORM_JDK=/opt/openjdk-jre-bin-11.0.9_p11

$ /opt/phpstorm/bin/phpstorm.sh
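The export-then-run pair above can be wrapped in a small launcher function (launch_phpstorm and the PHPSTORM_BIN override are illustrative, so the sketch can be exercised without a real PhpStorm install; the JDK path is the one that worked above):

```shell
# Pin the JDK PhpStorm runs under, then start it.
launch_phpstorm() {
  export PHPSTORM_JDK=${1:-/opt/openjdk-jre-bin-11.0.9_p11}
  "${PHPSTORM_BIN:-/opt/phpstorm/bin/phpstorm.sh}"
}

# launch_phpstorm                       # use the default JDK path
# launch_phpstorm /opt/some-other-jdk   # or point at another JDK
```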


Sunday, September 1, 2019

豹200 + 皓月255: Round-the-Island and East Coast Ride Log

凱宇 said the newly fitted 皓月 engine carries a six-month or 5,000 km warranty, so I wanted to put 5,000 km on it within six months to make sure the 皓月 engine can take the abuse. Besides, having spent that much money, I also wanted to know whether it is really as good as hoped and worth the cost.

After getting the bike back in July 2019, I rode a Taipei-Luodong round trip, then switched to a one-size-smaller 120/80-16 rear tire; the sprockets use the factory 13/30 setting, unchanged to this day.


Sunday, August 25, 2019

Mechanical Keyboard: Swapping Switches, a Nerve-Wracking Tale

To swap the switches on a mechanical keyboard, they have to be desoldered.



Before removing the old solder, add a little fresh solder first; it desolders more easily.
The trick is to heat a bit longer so the solder can be sucked up cleanly: after it melts, count 1, 2 ... 5, lift the iron, and suck.

After the swap, testing showed every key responding normally, except [Enter]. I undid all the modified wiring and it was still dead. I was anxious: after finally finishing the switch swap, a nearly new keyboard would be a total loss if it couldn't be used.

In the end I had to desolder that switch again, and saw that its solder had never been fully removed. Heating while prying hard pulled the through-hole plating right out and broke the trace.

Luckily, to support different layouts, the vendor put 2 sets of [Enter] solder holes on the PCB; a jumper wire over to the other pair fixed it.




Thursday, August 22, 2019

Using Large Disks on an Old Machine

On an HP server bought in 2008, the BIOS apparently only handles MBR, not GPT. A GPT label can be created, but running mkfs.ext4 then produces I/O errors. At first I suspected the Seagate 8 TB disk was bad, but a WD 4 TB NAS disk died the same way.

Later I suspected GPT itself and switched to MBR, but then partitions over 2 TB can't be created. Fine, use LVM then; yet only 2 partitions could be made, with the third endlessly complaining about overlap. Using only half of an 8 TB disk is unacceptable. Then I learned that LVM can use a whole disk with no partition table at all, which solved everything. dmesg showed no more errors either; I owed the disk an apology for blaming it.
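The 2 TB ceiling comes from MBR's 32-bit sector addressing; a quick check of the arithmetic:

```shell
# MBR stores partition start and length as 32-bit sector counts;
# with 512-byte sectors (2^32 sectors) the largest addressable size is:
echo "$(( 4294967296 * 512 )) bytes"   # prints "2199023255552 bytes" = 2 TiB
```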

But after one more night, it was dead again. From the note written on the disk itself, you can tell I had completely given up on it, in considerable anger.


Creating the LVM on the whole disk
Create the PV and VG:

[root@localhost ~]# pvcreate /dev/sdb
WARNING: dos signature detected on /dev/sdb at offset 510. Wipe it? [y/n]: y
  Wiping dos signature on /dev/sdb.
  Physical volume "/dev/sdb" successfully created.

[root@localhost ~]# vgcreate vg_dspace /dev/sdb
  Volume group "vg_dspace" successfully created
Inspect the VG that was created:

# vgdisplay vg_dspace
  --- Volume group ---
  VG Name               vg_dspace
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <7.28 TiB
  PE Size               4.00 MiB
  Total PE              1907721
  Alloc PE / Size       1887437 / 7.20 TiB
  Free  PE / Size       20284 / 79.23 GiB
  VG UUID               MWcmt5-QcYs-TmOK-yEWD-O5O4-uH4L-vyYuhL
   



[root@localhost ~]# lvcreate -L 7.2TB vg_dspace  -n vc_dspace
  Rounding up size to full physical extent 7.20 TiB
  Logical volume "vc_dspace" created.

[root@localhost ~]# mkfs.ext4 /dev/vg_dspace/vc_dspace 
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
241594368 inodes, 1932735488 blocks
96636774 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4081057792
58983 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
        102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
   



[root@localhost ~]# lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                     8:0    0  1.8T  0 disk 
├─sda1                  8:1    0    1G  0 part /boot
└─sda2                  8:2    0  1.8T  0 part 
  ├─centos-root       253:0    0   50G  0 lvm  /
  ├─centos-swap       253:1    0  6.9G  0 lvm  [SWAP]
  └─centos-home       253:2    0  1.8T  0 lvm  /home
sdb                     8:16   0  7.3T  0 disk 
└─vg_dspace-vc_dspace 253:3    0  7.2T  0 lvm  
sr0                    11:0    1 1024M  0 rom  

   


[root@localhost ~]# lvremove /dev/vg_dspace/vc_dspace
Do you really want to remove active logical volume vg_dspace/vc_dspace? [y/n]: y
  Logical volume "vc_dspace" successfully removed


Partitioning the 4 TB disk under MBR with cfdisk, only two partitions could be created:
                           cfdisk (util-linux 2.23.2)

                              Disk Drive: /dev/sdb
                      Size: 4000787030016 bytes, 4000.7 GB
            Heads: 25   Sectors per Track: 3   Cylinders: 104187162

    Name        Flags    Part Type  FS Type          [Label]        Size (MB)
 ------------------------------------------------------------------------------
    sdb1                    Primary   Linux                          2000300.01 
    sdb2                    Primary   Linux                          2000487.03*



partprobe


[root@localhost ~]# vgdisplay vg_dspace
  --- Volume group ---
  VG Name               vg_dspace
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               <3.64 TiB
  PE Size               4.00 MiB
  Total PE              953861
  Alloc PE / Size       0 / 0   
  Free  PE / Size       953861 / <3.64 TiB
  VG UUID               aLTwuw-I8Cc-JAhn-2wtB-9UQr-42Td-XuZHLk



[root@localhost ~]# pvcreate /dev/sdb1 /dev/sdb2
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/sdb2" successfully created.
[root@localhost ~]# vgcreate vg_dspace /dev/sdb1 /dev/sdb2
  Volume group "vg_dspace" successfully created



[root@localhost ~]# lvcreate -L 3.64TB  -n vc_dspace  vg_dspace
  Rounding up size to full physical extent 3.64 TiB
  Volume group "vg_dspace" has insufficient free space (953861 extents): 954205 required.

[root@localhost ~]# lvcreate -L 3.635TB  -n vc_dspace  vg_dspace
  Rounding up size to full physical extent <3.64 TiB
  Logical volume "vc_dspace" created.
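The first failure above is just a rounding artifact: 3.64 TB rounds up past the 953861 free extents. Working in extents avoids it (-l and the 100%FREE syntax are standard lvcreate options):

```shell
# Exact free space: 953861 free extents × 4 MiB per extent
echo "$(( 953861 * 4 )) MiB"   # prints "3815444 MiB", just under 3.64 TiB

# Allocating by extent count sidesteps the unit conversion entirely:
# lvcreate -l 100%FREE -n vc_dspace vg_dspace
```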


[root@localhost ~]# mkfs.ext4 /dev/vg_dspace/vc_dspace 
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
243941376 inodes, 975763456 blocks
48788172 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=3124756480
29778 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
 102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:            
Warning, had trouble writing out superblocks.





[ 1761.247023] sd 4:0:0:0: [sdb] tag#0 CDB: Write(16) 8a 00 00 00 00 00 e8 f1 a9 f0 00 00 02 98 00 00
[ 1761.247030] blk_update_request: I/O error, dev sdb, sector 3908151792
[ 1761.256705] sd 4:0:0:0: [sdb] Read Capacity(16) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1761.256717] sd 4:0:0:0: [sdb] Sense not available.
[ 1761.257615] sd 4:0:0:0: [sdb] Read Capacity(10) failed: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1761.257621] sd 4:0:0:0: [sdb] Sense not available.
[ 1761.257790] sdb: detected capacity change from 4000787030016 to 0
[ 1761.545493] VFS: Dirty inode writeback failed for block device dm-3 (err=-5).
[ 1827.839344] scsi_io_completion: 58 callbacks suppressed
[ 1827.839358] sd 4:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1827.839365] sd 4:0:0:0: [sdb] tag#0 CDB: ATA command pass through(16) 85 06 20 00 00 00 00 00 00 00 00 00 00 00 e5 00
[ 1846.090413] sd 4:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 1846.090430] sd 4:0:0:0: [sdb] tag#0 CDB: ATA command pass through(16) 85 06 2c 00 00 00 00 00 00 00 00 00 00 00 e5 00

In the end, I installed CentOS 7 on a 3 TB disk first, then extended storage with LVM. On this old machine, at most a 3 TB disk is usable.

# vgextend centos /dev/sdb1
umount /home
userdel agee

xfs is not supported:
mkfs.ext4 /dev/centos/home
# lvreduce --resize --size 100G /dev/centos/home
# lvcreate -L 3.64TB  -n dspace  centos

Edit /etc/fstab to use ext4.

useradd -g users -G lp,wheel,audio,cdrom -m john
passwd john
