Search This Blog

Monday, September 24, 2007

Small script to check MySQL replication

A small script to check MySQL replication on FreeBSD. It takes the path to mysql.sock as a parameter:

cat /usr/local/opt/check_relication.sh
#!/bin/sh
threshold=1200

### Do not edit below
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin

if [ -z "$1" ] ; then
    echo "Usage: $0 /path/to/mysql.sock"
    exit 255
fi

# tr strips the leading space that cut leaves after the colon
seconds=`mysql -S "$1" -N -e "show slave status\G" | grep Seconds_Behind_Master | cut -d: -f 2 | tr -d ' '`

if [ -z "$seconds" ] ; then
    exit 255
fi

if [ "$seconds" = "NULL" ]; then
    echo "Replication kaput!!!"
    exit 255
fi

if [ "$seconds" -gt $threshold ]; then
    echo "Slave is behind the master by more than $threshold seconds!"
fi
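The grep/cut extraction can be checked in isolation against a fabricated sample line (the sample text below is made up for illustration, not real server output); tr removes the leading space that cut leaves after the colon:

```shell
#!/bin/sh
# Extract the Seconds_Behind_Master value the same way the script does,
# from a sample line. The input string is fabricated for the demo.
behind() {
    printf '%s\n' "$1" | grep Seconds_Behind_Master | cut -d: -f 2 | tr -d ' '
}
behind "        Seconds_Behind_Master: 42"
```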


Just put to crontab something like this:
*/30 * * * * /usr/local/opt/check_relication.sh /tmp/mysql.sock

Friday, August 3, 2007

Move from apache 2.2 to Nginx as a PHP or Ruby web server

Recently we started moving a moderately loaded news web site from Apache 2.2 to Nginx due to its incredible performance and security.

So, here is a short set of instructions to start Nginx with PHP and Ruby on Rails under daemontools monitoring on FreeBSD:

PHP:
cat /var/services/backend-phpfcgi/run
#!/bin/sh
exec 2>&1
exec setuidgid backend spawn-fcgi -n -a 127.0.0.1 -p 9000 -f /usr/local/bin/php-cgi

I'm using spawn-fcgi from the lighttpd package; however, php-cgi can be run directly.
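If spawn-fcgi is not at hand, a run file along these lines should also work (a sketch, assuming PHP was built with FastCGI support; the child counts are illustrative, not tuned values):

```shell
#!/bin/sh
# Hypothetical alternative run file: let php-cgi manage its own workers.
exec 2>&1
# PHP_FCGI_CHILDREN sets the number of worker processes php-cgi forks;
# PHP_FCGI_MAX_REQUESTS recycles workers to limit memory leaks.
export PHP_FCGI_CHILDREN=8
export PHP_FCGI_MAX_REQUESTS=1000
exec setuidgid backend /usr/local/bin/php-cgi -b 127.0.0.1:9000
```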

Ruby on Rails:
cat /var/services/backend-rubyfcgi/run
#!/bin/sh
umask 22
export RAILS_ENV=production
exec setuidgid rubysite spawn-fcgi -n -a 127.0.0.1 -p 9001 -f /home/rubysite/www/public/dispatch.fcgi
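Once the run files are in place, the services are handed to daemontools by symlinking them into the directory svscan watches (/service here is an assumption; the path depends on how svscan was started on your system):

```shell
# Make the run files executable and activate the services
chmod 755 /var/services/backend-phpfcgi/run /var/services/backend-rubyfcgi/run
ln -s /var/services/backend-phpfcgi /service/
ln -s /var/services/backend-rubyfcgi /service/

# Check status / restart with the daemontools svc tools
svstat /service/backend-phpfcgi
svc -t /service/backend-rubyfcgi   # send TERM; supervise restarts it
```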

Nginx configuration files:

cat /usr/local/etc/nginx/nginx.conf
user www;
worker_processes 30;
timer_resolution 1000ms;

error_log /var/log/nginx-error.log error;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    use kqueue;
}

http {
    include mime.types;
    default_type application/octet-stream;

    access_log off;
    gzip on;
    gzip_comp_level 6;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    include vhosts/conf-*;

    server {
        listen ip.add.re.ss:80 default accept_filter=httpready;

        error_page 404 /404.html;

        location / {
            root /home/default/www;
            index index.html;
        }
    }
}

cat /usr/local/etc/nginx/vhosts/conf-site.com
server {
    listen ip.add.re.ss;
    server_name site.com www.site.com;

    location / {
        root /home/site/www;
        index index.php;
    }

    location ~* ^.+\.(php)$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /home/site/www/$fastcgi_script_name;

        include fastcgi_params;
    }
}
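Before (re)loading, the configuration can be syntax-checked, and a running master process told to re-read it, with the standard nginx invocations:

```shell
# Validate the configuration files without restarting anything
nginx -t

# Ask the running master process to reload the configuration gracefully
kill -HUP `cat /var/run/nginx.pid`
```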

cat /usr/local/etc/nginx/fastcgi_params

fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;

fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;

fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx;

fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;


Nginx runs with www privileges and the back-end runs as the backend user, so security is much improved.

P.S.
An interesting link shows Nginx as the third most popular web server.

Sunday, June 17, 2007

new natd patch for FreeBSD using kqueue

UPDATE: this patch is not going to work properly, so forget it!!!

Yesterday I rewrote the natd patch to use kqueue. Since I do not have a FreeBSD box with NAT around, the patch is not tested yet; I will probably have a chance to test it tomorrow. Long story short: testers are welcome. No warranty!!!

How to apply this patch:
(become root)
(at this point you should already have the system sources)
cd /usr/src/sbin/natd && patch -p2 < /path/to/natd.patch && make && make install clean

Saturday, May 26, 2007

FreeBSD network settings for highly utilized squid proxy server by dial-up users

Port config knobs:
cat /var/db/ports/squid26/options
# This file is auto-generated by 'make config'.
# No user-servicable parts inside!
# Options for squid-2.6.9
_OPTIONS_READ=squid-2.6.9
WITHOUT_SQUID_LDAP_AUTH=true
WITHOUT_SQUID_SASL_AUTH=true
WITHOUT_SQUID_DELAY_POOLS=true
WITH_SQUID_SNMP=true
WITHOUT_SQUID_CARP=true
WITHOUT_SQUID_SSL=true
WITHOUT_SQUID_PINGER=true
WITHOUT_SQUID_DNS_HELPER=true
WITH_SQUID_HTCP=true
WITHOUT_SQUID_VIA_DB=true
WITH_SQUID_CACHE_DIGESTS=true
WITH_SQUID_WCCP=true
WITHOUT_SQUID_WCCPV2=true
WITH_SQUID_STRICT_HTTP=true
WITHOUT_SQUID_IDENT=true
WITHOUT_SQUID_REFERER_LOG=true
WITHOUT_SQUID_USERAGENT_LOG=true
WITHOUT_SQUID_ARP_ACL=true
WITH_SQUID_PF=true
WITHOUT_SQUID_IPFILTER=true
WITH_SQUID_FOLLOW_XFF=true
WITHOUT_SQUID_ICAP=true
WITH_SQUID_AUFS=true
WITH_SQUID_COSS=true
WITH_SQUID_KQUEUE=true
WITH_SQUID_LARGEFILE=true
WITH_SQUID_STACKTRACES=true

Boot network settings:

cat /boot/loader.conf
accf_http_load="YES"
accf_data_load="YES"

#kern.hz=1000
kern.maxproc=6164
kern.maxdsiz="1536M"
kern.dfldsiz="1536M"
kern.maxssiz="512M"

kern.ipc.msgseg=768
kern.ipc.msgssz=128
kern.ipc.msgtql=3072
kern.ipc.msgmnb=12288
kern.ipc.msgmni=60

kern.ipc.shmall=6144
kern.ipc.shmseg=24
kern.ipc.shmmni=48
kern.ipc.shmmax=51457280

kern.ipc.shm_use_phys=1
kern.ipc.nmbclusters=131072
kern.ipc.maxsockbuf=524288

net.inet.tcp.tcbhashsize=16384

Run-time settings:

cat /etc/sysctl.conf
security.bsd.see_other_uids=0

kern.maxfiles=40960
kern.maxfilesperproc=22190
kern.timecounter.hardware=TSC
kern.ipc.somaxconn=4096

To repartition a disk drive / duplicate a file system in FreeBSD:
1. create a new slice using bsdlabel
2. newfs the newly created slice
3. mount it to /mnt (or wherever)
4. cd /mnt (or wherever)
5. try to minimize the number of open files on the source filesystem by killing all unnecessary daemons
6. dump -0ab 128 -C 32 -f - /dev/slicename | restore -rb 128 -f -
7. ensure the backup is adequate
8. modify /etc/fstab and point the new filesystem to the desired mount point
9. check thoroughly that everything is OK
10. umount the old mount point && mount the new one, or reboot if it is a live fs like e.g. /usr
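The steps above can be sketched as one command sequence (a sketch with placeholder device names; double-check the slice names before running anything destructive):

```shell
# ad0s1e is a placeholder for the newly labeled slice -- adjust to your disk
newfs /dev/ad0s1e
mount /dev/ad0s1e /mnt
cd /mnt

# Copy the live filesystem (here assumed to sit on ad0s1f) onto the new one
dump -0ab 128 -C 32 -f - /dev/ad0s1f | restore -rb 128 -f -

# After verifying the copy, update /etc/fstab, then remount or reboot
```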

Monday, April 9, 2007

Organize files backup and upload them to ftp server

This script performs a daily file backup. The goal is to be able to restore any file within seven days. To use disk space economically, a full backup is performed only once a week, on Sunday morning. On all other days only a differential backup is performed (backing up only the file changes since the last FULL backup). This technique significantly speeds up restores. The scheme is simple: one full backup followed by six differential ones.

The /usr/local/etc/backup.conf file stores the directories to include in / exclude from the backup. The syntax is simple: a leading "+" means back up, and a leading "-" means exclude from the backup.

cat /usr/local/etc/backup.conf
+/etc
+/usr/local/etc
+/usr/local/opt
+/home
-/home/www/logs
+/var/mail
+/var/qmail/control
+/var/qmail/alias
+/var/cron/tabs
-/var/mail/spam

So, here we go:
cat /usr/local/opt/files_backup.sh
#!/bin/sh
to_backup=/usr/local/etc/backup.conf
backupdir=/path/to/store/backups
ftp_to=ftp://host.name.to.put.backups.to/`hostname -s`/files
# Days to make diffs
diff_days=6
# Days to keep backup
restore_days=7

### Do not edit below
umask 037
PATH=/bin:/usr/bin

date=`date "+%Y-%m-%d"`

# Find last full backup in ${diff_days} days if it exists
last_full=`find ${backupdir} -name "*-full.tbz2" -type f -mtime -${diff_days} | sort -r | head -1`

# Enumerate directories to include in backup
for dir in `cat ${to_backup}` ; do
    case `expr -- "${dir}" : '\(^.\)'` in
        +) include="${include} `expr -- "${dir}" : '+\(.*\)'`" ;;
        -) exclude="${exclude} --exclude `expr -- "${dir}" : '-\(.*\)'`" ;;
    esac
done

if [ "${last_full}" ]; then
    # Force a new full backup if the config file changed since the last one
    if [ "`find ${to_backup} -newer ${last_full}`" ]; then
        filesuff="full.tbz2"
    else
        filesuff="diff.tbz2"
        newer="-W newer-mtime-than=${last_full}"
    fi
else
    filesuff="full.tbz2"
fi
filename=${date}-${filesuff}

# Backup files and put to ftp server
tar jPpcf ${backupdir}/${filename} ${newer} ${exclude} ${include} && \
ftp -Vu ${ftp_to}/${filename} ${backupdir}/${filename}

# Find Last Full Backup to Keep ( LFB2K ) in ${restore_days} days
full2keep=`find ${backupdir} -name '*-full.tbz2' -type f -mtime +${restore_days} | sort -r | head -1`

if [ "${full2keep}" ]; then
    full2keep_name=`expr "//${full2keep}" : '.*/\(.*\)'`

    # Delete files older than LFB2K
    find ${backupdir} -type f ! -newer ${full2keep} ! -name ${full2keep_name} -name '*.tbz2' -delete

    # Delete unnecessary diff files belonging to LFB2K
    find ${backupdir} -type f -newerBm ${full2keep} -Btime +${restore_days} -name '*.tbz2' -delete
fi
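The expr-based prefix parsing in the loop above can also be expressed with plain case patterns; a minimal sketch of the same idea:

```shell
#!/bin/sh
# Classify one backup.conf line by its leading + or - character,
# using shell pattern matching instead of expr.
parse_line() {
    case "$1" in
        +*) echo "include ${1#+}" ;;
        -*) echo "exclude ${1#-}" ;;
    esac
}
parse_line "+/usr/local/etc"
parse_line "-/home/www/logs"
```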

This script can be downloaded from here.

There were some fixes and modifications.

Monday, March 19, 2007

Organize MySQL backups and put to ftp server

So we have very important MySQL databases hosted on FreeBSD that we do not want to lose.
The first step is to create backup procedures that write backups to the local disk.
But this will not help us much if the server crashes.
As the second step, we will put the local backups on a remote ftp server configured to allow only uploads with unique filenames. Download, and even delete or overwrite, are prohibited.
The FTP server configuration is the subject of another post.

I prefer to put my scripts in /usr/local/opt/. Let's name the script "mysql_backup.sh"

# cat > /usr/local/opt/mysql_backup.sh
#!/bin/sh
username=USERNAME
password=PASSWORD
backupdir=/path/to/backup
ftp_to=ftp://host.name.to.put.backups.to/`hostname -s`/db
days=7

### Do not edit below
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin
umask 077
now=`date "+%Y-%m-%dT%H:%M:%S"`

databases=`mysql -u ${username} -p${password} -N -e "show databases"`

for db in ${databases} ; do
    mysqldump -u ${username} -p${password} --opt -F -l ${db} > ${backupdir}/${db}-${now} && \
    bzip2 -9 ${backupdir}/${db}-${now} && \
    ftp -Vu ${ftp_to}/${db}-${now}.bz2 ${backupdir}/${db}-${now}.bz2
done

find ${backupdir} -name '*-*.bz2' -a -type f -mtime +${days} -delete

^D

The script enumerates all databases and then dumps each into the backup folder, using DBNAME-YEAR-MONTH-DAYTHOUR:MINUTE:SECOND as the naming scheme. Afterwards, the script compresses the dump with bzip2 and uploads the archive to the ftp server. The last part of the script removes files in the backup folder older than, in my case, 7 days.
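Restoring from one of these archives is the reverse operation (a sketch; the database name, timestamp, and paths are made up for illustration):

```shell
# Recreate a database from a compressed dump -- all names are hypothetical.
# The target database must already exist on the server.
bunzip2 -c /path/to/backup/mydb-2007-03-19T05:30:00.bz2 | mysql -u USERNAME -p mydb
```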

So, everything is ready; let's modify /etc/crontab to run the script every night at 05:30:

# echo "30 5 * * * root /usr/local/opt/mysql_backup.sh" >> /etc/crontab

That's it.

P.S.
It would be wise to run the script manually before putting it in crontab.

Script to flush logs from the replica host to the master

#!/bin/sh
# Format host:username:password
hosts="host1:username1:password1 host2:username2:password2"

###
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin

for system in $hosts ; do
    host=`echo $system | cut -d : -f 1`
    user=`echo $system | cut -d : -f 2`
    password=`echo $system | cut -d : -f 3`

    # Field 6 of "show slave status" is Master_Log_File
    log=`mysql -S /tmp/$host-mysql.sock -B --skip-column-names mysql -e "show slave status" | cut -f 6`

    if [ "${log}" ]; then
        mysql -h $host -u $user -p$password -e "PURGE MASTER LOGS TO '$log';"
    fi

    pid="/home/$host/`hostname`.pid"
    kill -HUP `cat $pid`
done
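The host:username:password splitting in the loop can be checked on its own (the sample credentials are obviously fabricated):

```shell
#!/bin/sh
# Split one colon-separated entry the same way the loop above does
field() {
    echo "$1" | cut -d : -f "$2"
}
system="host1:username1:password1"
echo "`field $system 1` `field $system 2` `field $system 3`"
```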