Actually use multimarkdown :v

Matthew Connelly 2015-03-14 22:55:29 +00:00
parent aba39e5cab
commit 52f8e7a6a8
3 changed files with 65 additions and 51 deletions


@@ -9,8 +9,8 @@ Yes I know I still haven't done part two of that mail server post. I'll get it d
While chatting on IRC, someone mentioned that they were having a problem with a process going mental and creating a bunch of file descriptors on Linux, eventually hitting Linux's max open file descriptor limit. They couldn't figure out which process it was, and they couldn't find a program that would list a count of how many FDs each process has open. A few minutes later I'd thrown together this bash one-liner for them. I'm posting it here just in case someone else might find it useful.
```
echo "$(for pid in $(ls -a /proc|egrep '^([0-9])*$'|sort -n 2>/dev/null); do if [ -e /proc/$pid/fd ]; then FHC=$(ls -l /proc/$pid/fd|wc -l); if [ $FHC -gt 0 ]; then PNAME="$(cat /proc/$pid/comm)"; echo "$FHC files opened by $pid ($PNAME)"; fi; fi; done)"|sort -r -n|head -n4
```
To explain: it loops through every entry in /proc that is a process ID, then checks that the process has a file descriptor folder. For each one it counts the FDs the process currently holds, grabs the process name from /proc/$pid/comm, and prints the count alongside the PID and name. The output is then reverse-sorted numerically and cut down to only the four processes with the most FDs open.
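For anyone who'd rather keep this as a script than retype the one-liner, here's the same logic broken out into a more readable form. It's just a quick sketch based on the one-liner above, not a polished tool (and like the original, you'll want to run it as root to see every process's FDs):
```
#!/usr/bin/env bash
# Show the four processes holding the most open file descriptors.
# Same approach as the one-liner: walk /proc and count entries in each fd/ dir.
for pid in $(ls /proc | grep -E '^[0-9]+$' | sort -n); do
    fd_dir="/proc/$pid/fd"
    # Processes can vanish mid-loop, so check the fd directory is still there.
    [ -d "$fd_dir" ] || continue
    count=$(ls "$fd_dir" 2>/dev/null | wc -l)
    if [ "$count" -gt 0 ]; then
        name=$(cat "/proc/$pid/comm" 2>/dev/null)
        echo "$count files opened by $pid ($name)"
    fi
done | sort -rn | head -n4
```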


@@ -12,27 +12,33 @@ But it's not just the low resource usage that I love about lighttpd. It's the do
We recently began rolling out proper SSL to all of our client-accessible services at work. We're primarily an Apache shop, but one server runs lighttpd. Forcing all connections to run over SSL was as simple as:
server.modules += ("mod_redirect")
$HTTP["scheme"] == "http" {
$HTTP["host"] =~ "(.*)" {
url.redirect = ("^/(.*)" => "https://%1/$1")
}
}
```
server.modules += ("mod_redirect")
$HTTP["scheme"] == "http" {
$HTTP["host"] =~ "(.*)" {
url.redirect = ("^/(.*)" => "https://%1/$1")
}
}
```
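If you want to check the redirect is doing what you expect, a quick `curl -I` against the plain-HTTP side should show a 3xx response with a `Location:` header pointing at the https:// URL. (example.com below is just a placeholder for one of your vhosts.)
```
curl -sI http://example.com/some/page | head -n5
# Expect a 3xx status line plus a Location: header such as
#   Location: https://example.com/some/page
```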
And then there's the fact that managing vhosts is just brilliant. Adding a new vhost is as simple as `mkdir /var/www/new-vhost` and adding the following to my lighttpd config file:
$HTTP["host"] =~ "^new\.vhost\.net$" {
server.document-root = var.basedir + "/new-vhost"
accesslog.filename = var.logdir + "/access-new-vhost.log"
}
```
$HTTP["host"] =~ "^new\.vhost\.net$" {
server.document-root = var.basedir + "/new-vhost"
accesslog.filename = var.logdir + "/access-new-vhost.log"
}
```
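One small gotcha if your base config doesn't already cover it: the `accesslog.filename` directive is provided by mod_accesslog, so that module needs to be loaded somewhere in the config, e.g.:
```
# Only needed if mod_accesslog isn't already loaded elsewhere in the config.
server.modules += ("mod_accesslog")
```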
There's a lot more to love about lighttpd, though. I'll be updating this post with more tips and config snippets as I go.
In the same spirit as the earlier snippet for forcing SSL across all vhosts, it might even be possible to serve every vhost from a single catch-all block (I haven't personally tested this, use at your own risk):
$HTTP["host"] =~ "(.*)" {
server.document-root = var.basedir + "/$1"
accesslog.filename = var.logdir + "/access-$1.log"
}
```
$HTTP["host"] =~ "(.*)" {
server.document-root = var.basedir + "/$1"
accesslog.filename = var.logdir + "/access-$1.log"
}
```
This would, in theory, accept any vhost as long as it has a corresponding folder in /var/www.
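A quick way to try that out without touching DNS is to fake the Host header with curl. The hostname and IP below are placeholders for a folder under /var/www and the server's address:
```
# Request the "some-vhost" site (i.e. /var/www/some-vhost) from the server
# at 192.0.2.10 without needing a DNS entry.
curl -s -H "Host: some-vhost" http://192.0.2.10/ | head
```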


@@ -19,24 +19,28 @@ Here's some benchmarks:
Network speed:
```nohilight
~ wget cachefly.cachefly.net/100mb.test -O /dev/null
--2012-04-28 02:34:32-- http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net... 140.99.94.175
Connecting to cachefly.cachefly.net|140.99.94.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'
100%[======>] 104,857,600 11.8M/s in 8.7s
2012-04-28 02:34:41 (11.5 MB/s) - `/dev/null' saved [104857600/104857600]
```
Disk I/O:
```nohilight
~ dd if=/dev/zero of=/tmp/disktest bs=64k count=16k
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 4.52632 s, 237 MB/s
```
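One caveat when comparing dd figures like these: writing from /dev/zero without forcing a flush mostly measures the page cache rather than the disk. If you run your own tests, a variant along these lines usually gives a lower but more honest number (this isn't how the figures above were produced, just a suggestion):
```nohilight
# conv=fdatasync makes dd flush the data to disk before reporting, so the
# MB/s figure reflects real write throughput rather than cached writes.
dd if=/dev/zero of=/tmp/disktest bs=64k count=16k conv=fdatasync
rm /tmp/disktest
```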
### **[Bhost][7]**
@@ -46,24 +50,28 @@ Their support is pretty great, considering they're a budget host, and the server
Here are some benchmarks, since I still have a server with them:
Network speed:
```nohilight
# wget cachefly.cachefly.net/100mb.test -O /dev/null
--2012-04-28 02:30:40-- http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net... 205.234.175.175
Connecting to cachefly.cachefly.net|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'
100%[======>] 104,857,600 10.2M/s in 9.8s
2012-04-28 02:30:49 (10.2 MB/s) - `/dev/null' saved [104857600/104857600]
```
Disk I/O:
```nohilight
# dd if=/dev/zero of=/tmp/disktest bs=64k count=16k
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 8.31944 s, 129 MB/s
```
### **[ThrustVPS][8]**
@@ -75,12 +83,12 @@ My main issues with ThrustVPS started in September, when I got an email stating
In summary, while the servers and prices themselves are good, uptime is a bit unpredictable, support is unpleasant, and they seem to have a habit of restricting your server without explanation or any actual evidence to back up their claims. I wouldn't recommend them to anyone.
[1]: http://thrustvps.com
[2]: http://linode.com
[3]: http://bhost.net
[4]: http://simplexwebs.com
[5]: http://vps6.net
[6]: http://www.simplexwebs.com
[7]: http://www.bhost.net
[8]: http://www.thrustvps.com
[9]: http://lowendbox.com