It slipped several times, so this really wasn't easy:

ACME v2 and Wildcard Certificate Support is Live

This should give a huge push to HTTPS adoption for the many small sites out there.

Let's look at the intermediate CA (in the staging environment the issuer used to be "Issuer: CN=Fake LE Root X1"):

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            0a:01:41:42:00:00:01:53:85:73:6a:0b:85:ec:a7:08
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: O=Digital Signature Trust Co., CN=DST Root CA X3
        Validity
            Not Before: Mar 17 16:40:46 2016 GMT
            Not After : Mar 17 16:40:46 2021 GMT
        Subject: C=US, O=Let's Encrypt, CN=Let's Encrypt Authority X3
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:9c:d3:0c:f0:5a:e5:2e:47:b7:72:5d:37:83:b3:
                    68:63:30:ea:d7:35:26:19:25:e1:bd:be:35:f1:70:
                    92:2f:b7:b8:4b:41:05:ab:a9:9e:35:08:58:ec:b1:
                    2a:c4:68:87:0b:a3:e3:75:e4:e6:f3:a7:62:71:ba:
                    79:81:60:1f:d7:91:9a:9f:f3:d0:78:67:71:c8:69:
                    0e:95:91:cf:fe:e6:99:e9:60:3c:48:cc:7e:ca:4d:
                    77:12:24:9d:47:1b:5a:eb:b9:ec:1e:37:00:1c:9c:
                    ac:7b:a7:05:ea:ce:4a:eb:bd:41:e5:36:98:b9:cb:
                    fd:6d:3c:96:68:df:23:2a:42:90:0c:86:74:67:c8:
                    7f:a5:9a:b8:52:61:14:13:3f:65:e9:82:87:cb:db:
                    fa:0e:56:f6:86:89:f3:85:3f:97:86:af:b0:dc:1a:
                    ef:6b:0d:95:16:7d:c4:2b:a0:65:b2:99:04:36:75:
                    80:6b:ac:4a:f3:1b:90:49:78:2f:a2:96:4f:2a:20:
                    25:29:04:c6:74:c0:d0:31:cd:8f:31:38:95:16:ba:
                    a8:33:b8:43:f1:b1:1f:c3:30:7f:a2:79:31:13:3d:
                    2d:36:f8:e3:fc:f2:33:6a:b9:39:31:c5:af:c4:8d:
                    0d:1d:64:16:33:aa:fa:84:29:b6:d4:0b:c0:d8:7d:
                    c3:93
                Exponent: 65537 (0x10001)
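
A dump like the one above comes from feeding the PEM to openssl x509. A minimal sketch; it generates a throwaway self-signed certificate first so the commands are runnable as-is (with a real chain, point -in at the saved intermediate instead):

```shell
# Throwaway self-signed certificate, used only as a stand-in for a real PEM:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj "/O=Example/CN=demo" -keyout demo-key.pem -out demo-cert.pem 2>/dev/null

# Quick summary: issuer, subject, and validity window
openssl x509 -in demo-cert.pem -noout -issuer -subject -dates

# Full text dump, the same format as the Let's Encrypt X3 dump above
openssl x509 -in demo-cert.pem -noout -text | head -20
```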

I updated my own certificate:
openssl s_client -connect kexiao8.com:443

depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = kexiao8.com
verify return:1
---
Certificate chain
 0 s:/CN=kexiao8.com
   i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
   i:/O=Digital Signature Trust Co./CN=DST Root CA X3
---

A while back I ran into a case where MySQL's index_merge optimization followed its own "guess" and made performance worse; forcing an index with FORCE INDEX(...) switched the optimization off and fixed it.
Recently I found another "smart" optimization, this time around a composite index, which makes the very same SQL perform completely differently on MySQL versions before and after 5.5.

The table schema:

CREATE TABLE `feed` (
  `feedid` bigint(11) NOT NULL AUTO_INCREMENT,
  `userid` int(11) NOT NULL,
  `typeid` int(11) NOT NULL,
  `dataid` int(11) NOT NULL,
  `invalid` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`feedid`),
  UNIQUE KEY `dataid_typeid` (`dataid`,`typeid`),
  KEY `userid_t_i` (`userid`,`typeid`,`invalid`)
) ENGINE=TokuDB AUTO_INCREMENT=1516204596 DEFAULT CHARSET=utf8 `compression`='tokudb_zlib'

The SQL statement:

select feedid,userid,dataid,typeid from feed where userid = '25057158'   and typeid in (0,2,6)  and (invalid = 0 or invalid = 11) order by feedid desc limit 0,30 ;

In theory this matches the composite index perfectly and should run fast, but in practice the newer MySQL scans the whole table. EXPLAIN shows:

mysql> explain select feedid,userid,dataid,typeid from feed where userid = '25057158'   and typeid in (0,2,6)  and (invalid = 0 or invalid = 11) order by feedid desc limit 0,30 ;
+------+-------------+-------+-------+---------------+---------+---------+------+------+-------------+
| id   | select_type | table | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
+------+-------------+-------+-------+---------------+---------+---------+------+------+-------------+
|    1 | SIMPLE      | feed  | index | userid_t_i    | PRIMARY | 8       | NULL |   30 | Using where |
+------+-------------+-------+-------+---------------+---------+---------+------+------+-------------+
1 row in set (0.01 sec)

It actually used PRIMARY for the query (real execution confirms this), completely ignoring userid_t_i in possible_keys.
My guess: MySQL sees several IN/OR predicates in the SQL, concludes that this index would be extremely inefficient, and "smartly" swaps in the primary key as the index it considers faster.

So with a forced index, or without the IN/OR predicates, it skips this optimization. EXPLAIN confirms both cases:

mysql> explain select feedid,userid,dataid,typeid from feed force index (userid_t_i) where userid = '178804206'   and typeid in (0,2,6)  and (invalid = 0 or invalid = 11) order by feedid desc limit 0,30;
+------+-------------+-------+-------+---------------+------------+---------+------+------+-----------------------------+
| id   | select_type | table | type  | possible_keys | key        | key_len | ref  | rows | Extra                       |
+------+-------------+-------+-------+---------------+------------+---------+------+------+-----------------------------+
|    1 | SIMPLE      | feed  | range | userid_t_i    | userid_t_i | 12      | NULL | 2076 | Using where; Using filesort |
+------+-------------+-------+-------+---------------+------------+---------+------+------+-----------------------------+

mysql> explain select feedid,userid,dataid,typeid from feed where userid = '25057158'   and typeid = 0  and invalid = 0  order by feedid desc limit 0,30 ;
+------+-------------+-------+------+---------------+------------+---------+-------------------+------+-------------+
| id   | select_type | table | type | possible_keys | key        | key_len | ref               | rows | Extra       |
+------+-------------+-------+------+---------------+------------+---------+-------------------+------+-------------+
|    1 | SIMPLE      | feed  | ref  | userid_t_i    | userid_t_i | 12      | const,const,const |  585 | Using where |
+------+-------------+-------+------+---------------+------------+---------+-------------------+------+-------------+

Moreover, this optimization is evidently triggered by the ORDER BY at the end of the SQL; without the sort, there is no full table scan:

mysql> explain select feedid,userid,dataid,typeid from feed where userid = '25057158'   and typeid in (0,2,6)  and (invalid = 0 or invalid = 11)  limit 0,30 ;
+------+-------------+-------+-------+---------------+------------+---------+------+------+-------------+
| id   | select_type | table | type  | possible_keys | key        | key_len | ref  | rows | Extra       |
+------+-------------+-------+-------+---------------+------------+---------+------+------+-------------+
|    1 | SIMPLE      | feed  | range | userid_t_i    | userid_t_i | 12      | NULL |  591 | Using where |
+------+-------------+-------+-------+---------------+------------+---------+------+------+-------------+
1 row in set (0.00 sec)

And if the SQL without ORDER BY feedid DESC is actually executed once, so its data is now cached, running EXPLAIN again surprisingly no longer suggests the PRIMARY key. Is this some kind of adaptive learning? Real execution now also uses only userid_t_i as the index.

mysql> explain select feedid,userid,dataid,typeid from feed where userid = '25057158'   and typeid in (0,2,6)  and (invalid = 0 or invalid = 11)  limit 0,30 ;
+------+-------------+-------+-------+---------------+------------+---------+------+------+-------------+
| id   | select_type | table | type  | possible_keys | key        | key_len | ref  | rows | Extra       |
+------+-------------+-------+-------+---------------+------------+---------+------+------+-------------+
|    1 | SIMPLE      | feed  | range | userid_t_i    | userid_t_i | 12      | NULL |    7 | Using where |
+------+-------------+-------+-------+---------------+------------+---------+------+------+-------------+

In short, my guess is that different MySQL versions tune optimizer_switch and related parameters differently; the MySQL docs may well hold the exact cause and the fix. For example:
https://dev.mysql.com/doc/refman/5.7/en/index-merge-optimization.html
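
One way to chase that guess, sketched below: dump @@optimizer_switch on both server versions, diff them, then re-run the EXPLAIN with a suspect flag disabled for the session. The flag used here, prefer_ordering_index, only exists in newer releases (MySQL 8.0.21 added it precisely for this ORDER BY ... LIMIT index choice), so treat it as an illustration and check the flag list your own server reports. The database name feed_db is a placeholder.

```shell
# Sketch only: assumes a reachable MySQL server and client credentials.
SQL="
SELECT @@optimizer_switch\G
SET SESSION optimizer_switch='prefer_ordering_index=off';
EXPLAIN SELECT feedid,userid,dataid,typeid FROM feed
 WHERE userid='25057158' AND typeid IN (0,2,6)
   AND (invalid=0 OR invalid=11)
 ORDER BY feedid DESC LIMIT 0,30;
"
printf '%s\n' "$SQL"                    # review the statements first...
# printf '%s\n' "$SQL" | mysql feed_db  # ...then run them against your server
```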

Arch Linux's latest kernel upgrade includes the patch for Meltdown, which is a good chance to measure how much performance this security fix costs.
I used the simplest possible micro-benchmark, meltdown_test.c, to probe the extreme case:

#include <syscall.h>
#include <unistd.h>
#include <stdio.h>
int main(void) {

    for (int i=0; i< (1<<27) ;i++){
        syscall(SYS_time);
    }

    return 0;
}

Both runs use the same kernel version, 4.14.12-1.
With the patch disabled:

$ time ./meltdown_test 

real    0m5.761s
user    0m2.421s
sys    0m3.340s

With the patch enabled:

$ time ./meltdown_test 

real    0m23.715s
user    0m11.745s
sys    0m11.834s

The Linux kernel fix is implemented via PTI (Page Table Isolation). Going by this test alone, the performance impact is substantial. The patch mainly adds cost to user/kernel mode transitions, so services that context-switch or make system calls frequently are hit hardest: mysql, nginx, redis and other everyday services, as well as a server's file I/O throughput.

For a personal machine, the pragmatic choice is simply to turn the patch off.
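
Whether PTI is active on a given boot can be checked directly. A sketch; note the sysfs path appeared around kernel 4.15 and was backported to many stable series, so it may be absent on some 4.14 kernels:

```shell
# Report PTI status if the kernel exposes it via sysfs
f=/sys/devices/system/cpu/vulnerabilities/meltdown
if [ -r "$f" ]; then
    cat "$f"    # typically "Mitigation: PTI", "Vulnerable", or "Not affected"
else
    echo "no sysfs entry on this kernel; try: dmesg | grep -i isolation"
fi
# To turn the patch off, boot with the kernel parameter 'pti=off' (or 'nopti'),
# e.g. appended to GRUB_CMDLINE_LINUX in /etc/default/grub.
```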

While updating the system today I noticed a library called libnghttp2 being upgraded, and had a look out of curiosity:

pacman -Qi libnghttp2
Name            : libnghttp2
Version         : 1.29.0-1
Description     : Framing layer of HTTP/2 is implemented as a reusable C library
Architecture    : x86_64
URL             : https://nghttp2.org/
Licenses        : MIT
Groups          : None
Provides        : None
Depends On      : glibc
Optional Deps   : None
Required By     : curl
Optional For    : None
Conflicts With  : nghttp2<1.20.0-2
Replaces        : None
Installed Size  : 337.00 KiB
Packager        : Jan de Groot <jgc@archlinux.org>
Build Date      : Tue 26 Dec 2017 06:39:21
Install Date    : Fri 05 Jan 2018 21:23:48
Install Reason  : Installed as a dependency for another package
Install Script  : No
Validated By    : Signature

So it is pulled in by curl, which reminded me that curl already supports HTTP/2. A quick test confirms it:

curl -v --http2 https://kexiao8.com/
...
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x562250ec3160)
> GET / HTTP/2
> Host: kexiao8.com
> User-Agent: curl/7.57.0
> Accept: */*

A look at the official docs: "Starting in 7.43.0, libcurl fully supports HTTP/2 multiplexing, which is the term for doing multiple independent transfers over the same physical TCP connection. curl offers the --http2 command line option to enable use of HTTP/2. Since 7.47.0, the curl tool enables HTTP/2 by default for HTTPS connections."

One of my former production servers had two NICs, eth0 and eth1, facing the external and internal networks respectively. Once, while monitoring the box with dstat -N eth0,eth1 and tcpdump, I noticed that traffic for eth0's IP was actually flowing in and out through eth1, even though the two subnets and their routes had nothing to do with each other. It was a shock at the time. Analyzing the cause: to save money on switches, both NICs were plugged into the same switch, with no VLANs. Since the IP layer should have been fine, the effect had to originate at the link layer, presumably ARP/MAC resolution, but I didn't dig deeper back then.
Much later a new problem surfaced: while configuring iptables, a DROP rule had no effect, which brought me back to this machine. The trimmed iptables config:

# Generated by iptables-save v1.4.14 on Wed Sep 13 18:05:35 2017
*raw
:PREROUTING ACCEPT [69675:4914196]
:OUTPUT ACCEPT [54936:3566904]
-A PREROUTING -p tcp -m tcp --dport 13306 -j TRACE
COMMIT
# Completed on Wed Sep 13 18:05:35 2017
# Generated by iptables-save v1.4.14 on Wed Sep 13 18:05:35 2017
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [93762:6266296]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth1 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 13306 -j DROP
-A INPUT -p icmp -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT

As the rules show, all traffic on lo and the internal eth1 is accepted, while port 13306 on the external eth0 is dropped; yet in practice, port 13306 was freely reachable from the outside.

Turn on iptables tracing:

sudo modprobe ipt_LOG
iptables  -t raw -A PREROUTING  -p tcp --dport 13306 -j TRACE
sudo tail /var/log/kern.log

The resulting trace log:

Sep 13 17:49:21 host239 kernel: [141463786.397726] TRACE: raw:PREROUTING:policy:2 IN=eth1 OUT= MAC=90:b1:1c:36:34:2c:70:54:f5:9d:bc:7b:08:00 SRC=59.151.**.** DST=113.31.*.* LEN=52 TOS=0x00 PREC=0x00 TTL=48 ID=45702 DF PROTO=TCP SPT=49983 DPT=13306 SEQ=296903697 ACK=934588750 WINDOW=115 RES=0x00 ACK FIN URGP=0 OPT (0101080A4BD6D6963FB18493) 
Sep 13 17:49:21 host239 kernel: [141463786.397760] TRACE: filter:INPUT:rule:1 IN=eth1 OUT= MAC=90:b1:1c:36:34:2c:70:54:f5:9d:bc:7b:08:00 SRC=59.151.*.* DST=113.31.*.* LEN=52 TOS=0x00 PREC=0x00 TTL=48 ID=45702 DF PROTO=TCP SPT=49983 DPT=13306 SEQ=296903697 ACK=934588750 WINDOW=115 RES=0x00 ACK FIN URGP=0 OPT (0101080A4BD6D6963FB18493)

The striking thing in the log: the external traffic hitting port 13306 is addressed to the external IP (which belongs to eth0), yet the log records IN=eth1, and the MAC address 90:b1:1c:36:34:2c belongs to eth1. This is practically a rerun of the old mixed-traffic mystery. I dug in; with such thin keywords, Google was unhelpful for a long time, until I remembered that the arp_proxy settings from my VPN days seemed related. Continuing to search around "arp" surfaced the keyword arp_announce:

arp_announce - INTEGER
    Define different restriction levels for announcing the local
    source IP address from IP packets in ARP requests sent on
    interface:
    0 - (default) Use any local address, configured on any interface
    1 - Try to avoid local addresses that are not in the target's
    subnet for this interface. This mode is useful when target
    hosts reachable via this interface require the source IP
    address in ARP requests to be part of their logical network
    configured on the receiving interface. When we generate the
    request we will check all our subnets that include the
    target IP and will preserve the source address if it is from
    such subnet. If there is no such subnet we select source
    address according to the rules for level 2.
    2 - Always use the best local address for this target.
    In this mode we ignore the source address in the IP packet
    and try to select local address that we prefer for talks with
    the target host. Such local address is selected by looking
    for primary IP addresses on all our subnets on the outgoing
    interface that include the target IP address. If no suitable
    local address is found we select the first local address
    we have on the outgoing interface or on all other interfaces,
    with the hope we will receive reply for our request and
    even sometimes no matter the source IP address we announce.

So one fix is:

echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

Alternatively, replace the iptables rule -A INPUT -i eth1 -j ACCEPT with -A INPUT -s 192.168.0.0/16 -j ACCEPT, matching on the source subnet rather than the ingress interface.
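
Before and after applying the change, the per-interface state is easy to inspect. A small read-only sketch, safe to run anywhere:

```shell
# Show the current arp_announce level for every interface
# (0 is the default: announce any local address on any interface)
for f in /proc/sys/net/ipv4/conf/*/arp_announce; do
    [ -r "$f" ] || continue
    printf '%s = %s\n' "$f" "$(cat "$f")"
done
# Note: an echo into /proc lasts only until reboot. For persistence, the usual
# route is a sysctl.d drop-in (path assumed, distros vary):
#   echo 'net.ipv4.conf.all.arp_announce = 2' > /etc/sysctl.d/90-arp.conf
```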

Looking back, this "bug" is quite insidious: the iptables config looks flawless, but who would have guessed the problem lives down at the link layer.