Thursday, November 26, 2009

Get NetFlow entries from a flowd via logsock to put to PostgreSQL in Perl

Here is a small piece of Perl code that receives NetFlow entries from flowd via its logging socket (logsock) and puts them into a PostgreSQL database.
Beware: no sanity checking at all!!!

=== cut here ===

#!/usr/bin/perl
use IO::Socket;
use Socket;
use Flowd;
use DBI;

# Database settings
my $DBI_DRIVER = "Pg"; # or one of "Pg" "mysql" "mysqlPP"
my $DB = "netflow";
my $HOST = "localhost";
my $TABLE = "flows"; # must match the table created by the schema below
my $USER = "netflow";
my $PASS = "password";

$sock_addr="/var/run/flowd/flowd.sock";
unlink($sock_addr);

$sock = IO::Socket::UNIX->new( Local => $sock_addr, Type => SOCK_DGRAM)
or die "Can't bind to Unix Socket: $!\n";
$sock->setsockopt(SOL_SOCKET, SO_RCVBUF, 65440);

my $db = DBI->connect("dbi:$DBI_DRIVER:host=$HOST;database=$DB", $USER, $PASS)
or die "DBI->connect error: " . $DBI::errstr;

print "Started.\n";
# recv() returns the sender address (undef on error) and fills $input with one flow record
while (defined $sock->recv($input, 1024)) {
    $flowfields = Flowd::deserialise($input);

    # recv_usec holds microseconds, so pad the fraction to six digits
    $recv_time = sprintf "%s.%06d", $flowfields->{recv_sec}, $flowfields->{recv_usec};
    # flow start/finish are in router-uptime milliseconds; convert to wall-clock time
    $flow_start = $recv_time + ($flowfields->{flow_start} - $flowfields->{sys_uptime_ms})/1000;
    $flow_finish = $recv_time + ($flowfields->{flow_finish} - $flowfields->{sys_uptime_ms})/1000;

    $sql = sprintf("INSERT INTO flows (recv_time, agent_addr, protocol_id, src_addr, src_port, dst_addr, dst_port, packets, octets, flow_start, flow_finish) VALUES (to_timestamp('%s'), '%s', '%u', '%s', '%u', '%s', '%u', '%s', '%s', to_timestamp('%s'), to_timestamp('%s'))",
        $recv_time,
        $flowfields->{agent_addr},
        $flowfields->{protocol},
        $flowfields->{src_addr},
        $flowfields->{src_port},
        $flowfields->{dst_addr},
        $flowfields->{dst_port},
        $flowfields->{flow_packets},
        $flowfields->{flow_octets},
        $flow_start,
        $flow_finish
    );
    $db->do($sql) or die "db->do failed: " . $DBI::errstr;
}
1;

=== cut here ===

The SQL schema:
CREATE TABLE flows (
    id serial NOT NULL,
    recv_time timestamp with time zone DEFAULT now() NOT NULL,
    agent_addr inet NOT NULL,
    protocol_id integer NOT NULL,
    src_addr inet NOT NULL,
    src_port integer NOT NULL,
    dst_addr inet NOT NULL,
    dst_port integer NOT NULL,
    packets bigint DEFAULT 0 NOT NULL,
    octets bigint DEFAULT 0 NOT NULL,
    flow_start timestamp with time zone NOT NULL,
    flow_finish timestamp with time zone NOT NULL
);
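
To wire things together, the database has to exist with the schema above and flowd has to write to the same Unix socket the script binds. A minimal setup sketch, assuming the schema was saved as flows.sql, the script as flowd2pgsql.pl, and that your flowd.conf takes a logsock directive pointing at this path (the file names and paths here are assumptions; check flowd.conf(5)). Start the script first so the socket exists, then restart flowd:

# create the role and the database, then load the schema (run as a PostgreSQL superuser)
createuser netflow
createdb -O netflow netflow
psql -U netflow -d netflow -f flows.sql

# point flowd at the socket the script binds, then restart flowd
echo 'logsock /var/run/flowd/flowd.sock' >> /usr/local/etc/flowd.conf

# start the collector
perl flowd2pgsql.pl &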

Saturday, March 8, 2008

Patching Squid to support gzip/deflate encoding

Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more. It reduces bandwidth and improves response times by caching and reusing frequently-requested web pages. Squid has extensive access controls and makes a great server accelerator.
Unfortunately, Squid is still an HTTP/1.0 proxy because it lacks certain features, and its main disadvantage, from my point of view, is the lack of gzip/deflate content encoding.
There is a patch created by Swell Technology and committed to HEAD, but it was untested and a little bit outdated.
So I've spent around six late-night/early-morning hours merging the patch into Squid 3.0-STABLE1. Right now the patched Squid is running on an old AMD Athlon64 3000+ machine serving around 500 simultaneously connected clients over dial-up lines. The main concern was CPU utilization, but the FreeBSD team did excellent performance work in FreeBSD 7.
So here is the snapshot from top:
56897 squid 1 4 0 1085M 1044M kqread 9:28 5.18% squid

and uptime is:
1:51PM up 7 days, 10:45, 2 users, load averages: 0.01, 0.06, 0.15


The FreeBSD port patch is here.
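If you want to rebuild the port with it, one way is to apply the patch on top of the port's directory and reinstall; a rough sketch, assuming the patch targets www/squid30 and using a hypothetical patch file name:

cd /usr/ports/www/squid30
patch < /path/to/squid30-gzip-port.patch
make deinstall
make install clean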

Enjoy.
UPDATE:

Link to FreeBSD PR

UPDATE 1:
The FreeBSD maintainer refused to integrate the patch into the ports tree:

From: Thomas-Martin Seck
Date: Sat, 8 Mar 2008 15:59:46 +0100
I am sorry, but I am not going to integrate third-party patches into
the Squid ports any more (this includes patchsets that are available from
devel.squid-cache.org). Please work with the Squid developers on
integrating this feature into mainline Squid.

Short rationale: Third party patches are a headache to maintain,
especially when they are no longer maintained (cf ICAP for Squid-2) and
they can be a major source of trouble when they contain bugs that are
then wrongly attributed to bugs in Squid itself. I would therefore like
to keep the port as close to the mainline source as possible to make it
easy for users to get support for it from the Squid developers in case
of problems.

Monday, September 24, 2007

Small script to check MySQL replication

A small script to check MySQL replication on FreeBSD. It takes the path to mysql.sock as a parameter:

cat /usr/local/opt/check_replication.sh
#!/bin/sh
# complain when the slave lags behind the master by more than this many seconds
threshold=1200

### Do not edit below
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin

if [ -z "$1" ] ; then
    echo "Usage: $0 /path/to/mysql.sock"
    exit 255
fi

seconds=$(mysql -S "$1" -N -e "show slave status\G" | awk '/Seconds_Behind_Master/ {print $2}')

if [ -z "$seconds" ] ; then
    exit 255
fi

if [ "$seconds" = "NULL" ]; then
    echo "Replication kaput!!!"
    exit 255
fi

if [ "$seconds" -gt "$threshold" ]; then
    echo "Slave behind the master by more than $threshold seconds!"
fi


Just put something like this into crontab:
*/30 * * * * /usr/local/opt/check_replication.sh /tmp/mysql.sock
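
Make sure the script is executable and give it a dry run before relying on cron:

chmod +x /usr/local/opt/check_replication.sh
/usr/local/opt/check_replication.sh /tmp/mysql.sock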

Friday, August 3, 2007

Move from apache 2.2 to Nginx as a PHP or Ruby web server

Recently we started to move a moderately loaded news web site from Apache 2.2 to Nginx due to its incredible performance and security.

So, here is a short set of instructions for running Nginx with PHP and RubyOnRails under daemontools monitoring on FreeBSD:

PHP:
cat /var/services/backend-phpfcgi/run
#!/bin/sh
exec 2>&1
exec setuidgid backend spawn-fcgi -n -a 127.0.0.1 -p 9000 -f /usr/local/bin/php-cgi

I'm using spawn-fcgi from the lighttpd package; however, php-cgi can also be run directly.
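
If you skip spawn-fcgi, php-cgi can bind the FastCGI socket itself; a sketch of an alternative run script (the child and request counts are example values, not taken from the original setup):

#!/bin/sh
exec 2>&1
# php-cgi forks this many FastCGI children and recycles them after this many requests
export PHP_FCGI_CHILDREN=8
export PHP_FCGI_MAX_REQUESTS=1000
exec setuidgid backend /usr/local/bin/php-cgi -b 127.0.0.1:9000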

RubyOnRails:
cat /var/services/backend-rubyfcgi/run
#!/bin/sh
umask 22
export RAILS_ENV=production
exec setuidgid rubysite spawn-fcgi -n -a 127.0.0.1 -p 9001 -f /home/rubysite/www/public/dispatch.fcgi
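
To put both run scripts under daemontools supervision, make them executable and make sure svscan watches their directories; a sketch, assuming /var/services is (or is linked into) the svscan scan directory:

chmod +x /var/services/backend-phpfcgi/run /var/services/backend-rubyfcgi/run
svstat /var/services/backend-phpfcgi /var/services/backend-rubyfcgi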

Nginx configuration files:

cat /usr/local/etc/nginx/nginx.conf
user www;
worker_processes 30;
timer_resolution 1000ms;

error_log /var/log/nginx-error.log error;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    use kqueue;
}


http {
    include mime.types;
    default_type application/octet-stream;

    access_log off;
    gzip on;
    gzip_comp_level 6;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    include vhosts/conf-*;

    server {
        listen ip.add.re.ss:80 default accept_filter=httpready;

        error_page 404 /404.html;

        location / {
            root /home/default/www;
            index index.html;
        }
    }
}

cat /usr/local/etc/nginx/vhosts/conf-site.com
server {
    listen ip.add.re.ss;
    server_name site.com www.site.com;

    location / {
        root /home/site/www;
        index index.php;
    }

    location ~* ^.+\.(php)$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /home/site/www/$fastcgi_script_name;

        include fastcgi_params;
    }
}

cat /usr/local/etc/nginx/fastcgi_params

fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;

fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;

fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx;

fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;


Nginx runs with www privileges and each back-end runs as its own dedicated user, so security is much improved.

P.S.
An interesting link shows nginx as the third most popular web server.

Sunday, June 17, 2007

new natd patch for FreeBSD using kqueue

UPDATE: this patch is not going to work properly, so forget it!!!

Yesterday I rewrote the natd patch to use kqueue. Since I don't have a FreeBSD box doing NAT around, the patch is not tested yet; I'll probably get a chance to test it tomorrow. Long story short: testers are welcome. No warranty!!!

How to apply this patch:
(become root)
(at this point you should already have the system sources)
cd /usr/src/sbin/natd && patch -p2 < /path/to/natd.patch && make && make install clean

Saturday, May 26, 2007

FreeBSD network settings for a Squid proxy server heavily utilized by dial-up users

Port config knobs:
cat /var/db/ports/squid26/options
# This file is auto-generated by 'make config'.
# No user-servicable parts inside!
# Options for squid-2.6.9
_OPTIONS_READ=squid-2.6.9
WITHOUT_SQUID_LDAP_AUTH=true
WITHOUT_SQUID_SASL_AUTH=true
WITHOUT_SQUID_DELAY_POOLS=true
WITH_SQUID_SNMP=true
WITHOUT_SQUID_CARP=true
WITHOUT_SQUID_SSL=true
WITHOUT_SQUID_PINGER=true
WITHOUT_SQUID_DNS_HELPER=true
WITH_SQUID_HTCP=true
WITHOUT_SQUID_VIA_DB=true
WITH_SQUID_CACHE_DIGESTS=true
WITH_SQUID_WCCP=true
WITHOUT_SQUID_WCCPV2=true
WITH_SQUID_STRICT_HTTP=true
WITHOUT_SQUID_IDENT=true
WITHOUT_SQUID_REFERER_LOG=true
WITHOUT_SQUID_USERAGENT_LOG=true
WITHOUT_SQUID_ARP_ACL=true
WITH_SQUID_PF=true
WITHOUT_SQUID_IPFILTER=true
WITH_SQUID_FOLLOW_XFF=true
WITHOUT_SQUID_ICAP=true
WITH_SQUID_AUFS=true
WITH_SQUID_COSS=true
WITH_SQUID_KQUEUE=true
WITH_SQUID_LARGEFILE=true
WITH_SQUID_STACKTRACES=true
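
To reproduce these knobs on another box, either run make config in the port directory and tick the same options, or drop this file into /var/db/ports/squid26 before building; a sketch, assuming the Squid 2.6 port lives under /usr/ports/www/squid (adjust the path to wherever your Squid 2.6 port actually is):

cd /usr/ports/www/squid
make config
make install clean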

Boot network settings:

cat /boot/loader.conf
accf_http_load="YES"
accf_data_load="YES"

#kern.hz=1000
kern.maxproc=6164
kern.maxdsiz="1536M"
kern.dfldsiz="1536M"
kern.maxssiz="512M"

kern.ipc.msgseg=768
kern.ipc.msgssz=128
kern.ipc.msgtql=3072
kern.ipc.msgmnb=12288
kern.ipc.msgmni=60

kern.ipc.shmall=6144
kern.ipc.shmseg=24
kern.ipc.shmmni=48
kern.ipc.shmmax=51457280

kern.ipc.shm_use_phys=1
kern.ipc.nmbclusters=131072
kern.ipc.maxsockbuf=524288

net.inet.tcp.tcbhashsize=16384
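
These settings are applied at boot time, but the two accept filter modules can also be loaded on a running system without a reboot:

kldload accf_http
kldload accf_data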

Run-time settings:

cat /etc/sysctl.conf
security.bsd.see_other_uids=0

kern.maxfiles=40960
kern.maxfilesperproc=22190
kern.timecounter.hardware=TSC
kern.ipc.somaxconn=4096

To repartition a disk drive / duplicate a file system in FreeBSD (a consolidated sketch of the copy follows the list):
1. create a new slice using bsdlabel
2. newfs the newly created slice
3. mount it to /mnt (or wherever)
4. cd /mnt (or wherever)
5. try to minimize the number of open files on the source filesystem by killing all unnecessary daemons
6. dump -0ab 128 -C 32 -f - /dev/slicename | restore -rb 128 -f -
7. ensure the backup is adequate
8. modify /etc/fstab and point the new filesystem to the desired mount point
9. check thoroughly that everything is OK
10. umount the old mount point && mount the new one, or reboot if it is a live filesystem such as /usr
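
A consolidated sketch of steps 2-6, with /dev/ad0s1f as a hypothetical source partition and /dev/ad0s1g as the hypothetical new one:

newfs /dev/ad0s1g
mount /dev/ad0s1g /mnt
cd /mnt
dump -0ab 128 -C 32 -f - /dev/ad0s1f | restore -rb 128 -f -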