
Vsftpd with SSL

A few days back, a customer asked for more security on their data transfers via FTP. I had heard of encryption and SSL, and I knew how to build a CA and create certificates, but I had never had to integrate them with vsftpd. As it turns out, vsftpd with SSL is pretty straightforward and easy to configure: just create a self-signed certificate as I do below, if you don't want to buy a trusted certificate from a registered CA. The procedure to configure vsftpd with SSL support is as follows:

vsftpd is the default FTP server supplied with CentOS and should be installed by default. If it isn't, you can install it with yum:

[root@Gladiator]#yum install vsftpd

Generate a Certificate:
Use OpenSSL to generate a certificate for vsftpd. The certificate is stored on your server in a location of your choice; here I put it in the /etc/vsftpd directory (you may need to create that directory first with mkdir /etc/vsftpd). You also specify a lifetime for the certificate; here it is set to one year ("-days 365").
Note that the backslashes only signify line breaks. You should be able to copy, paste, and run the command as-is, or remove the backslashes and join the lines.

[root@Gladiator]#openssl req -x509 -nodes -days 365 -newkey rsa:1024 \
 -keyout /etc/vsftpd/vsftpd.pem \
 -out /etc/vsftpd/vsftpd.pem

You will be prompted with a series of questions; answer them as they appear. When done, the certificate will be installed in the /etc/vsftpd directory.
Configure vsftpd:
To configure vsftpd you edit the file /etc/vsftpd/vsftpd.conf and add the following lines:
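As a sketch, a typical minimal set of SSL directives for vsftpd.conf looks like this (these are commonly used values, not a prescription; see the note on force_local_logins_ssl below):

```
# Enable SSL using the certificate generated above
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=NO
force_local_logins_ssl=NO
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
rsa_cert_file=/etc/vsftpd/vsftpd.pem
```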


Restart vsftpd for these settings to take effect:

[root@Gladiator]#/etc/rc.d/init.d/vsftpd restart

NOTE: If you set "force_local_logins_ssl=YES", your clients will be required to use an FTP client that supports AUTH TLS/SSL in order to connect. If you leave it at "NO", people can connect either securely or insecurely.

Posted in Linux.

Installing and configuring mod_jk

Installing mod_jk is not that hard, but making it work with Apache and Tomcat is a bit tricky. Here I explain how to install and configure Apache to serve Java pages or webapps with the help of the mod_jk module.

Let me briefly describe my scenario; yours may be different, but you can use this as a reference. I have two applications, app1 and app2, and I want one URL to serve pages from the app1 Tomcat webapp and another to serve pages from the app2 Tomcat webapp. You also need to take care with the Tomcat ports: if you want to run two Tomcat instances, you have to use two different ports, as I do here. app1 is on port 8080 and app2 is on port 8081.

You can install Apache and Tomcat via yum if you are using a Red Hat/CentOS distro; on a Debian-based system, use the apt-get/aptitude utility instead.
I am working on CentOS 5.4 here.

#yum install httpd 
#/etc/init.d/httpd restart
#chkconfig httpd on

Now it's time to install mod_jk. I am using the rpm package here, but you can also compile it from source.
You can download it from the CentOS testing repo.


#rpm -ivh mod_jk-ap20-1.2.26-1jpp.i386.rpm
or
#rpm -ivh mod_jk-ap20-1.2.28-2.el5.centos.i386.rpm

Now it's time to install Tomcat. You can install it via yum or compile it from source; I am using the source tarball here.

Get the tar.gz for Tomcat 5.5 from the Apache Tomcat download site. (I am using Tomcat 5.5; you can use the latest release as well.)

Unpack apache-tomcat-5.5.23.tar.gz under /usr/local and rename apache-tomcat-5.5.23 to tomcat8080. Unpack the tar.gz one more time and rename the copy to tomcat8081.

cd /usr/local/tomcat8081/conf
Edit server.xml and change the following ports:
8005 (shutdown port) -> 8006
8080 (non-SSL HTTP/1.1 connector) -> 8081
8009 (AJP 1.3 connector) -> 8010

There are other ports in server.xml, but I found that just changing the 3 ports above does the trick.
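For reference, here is a sketch of what the changed elements look like in tomcat8081's server.xml (attributes other than the ports follow the stock Tomcat 5.5 file; your copy may differ):

```xml
<!-- /usr/local/tomcat8081/conf/server.xml (abridged) -->
<Server port="8006" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <!-- Non-SSL HTTP/1.1 connector -->
    <Connector port="8081" maxHttpHeaderSize="8192" redirectPort="8443" />
    <!-- AJP 1.3 connector, used by mod_jk -->
    <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />
  </Service>
</Server>
```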

I won't go into the details of getting the two Tomcat instances to run: you need to create a tomcat user, make sure you have a Java JDK or JRE installed, and so on.
One more thing worth mentioning: you have to set the JAVA_HOME variable so that Java applications can find the JRE/JDK location. If you want to set it system-wide, put the variable in /etc/profile instead of using 'export' on the shell.
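For example, a system-wide setting in /etc/profile might look like this (the JDK path below is an assumed example; use your actual install location):

```shell
# Append to /etc/profile (path is an assumed example)
export JAVA_HOME=/usr/java/jdk1.5.0
export PATH=$JAVA_HOME/bin:$PATH
```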

The startup/shutdown scripts for Tomcat are under /usr/local/tomcat808X/bin/.

I will assume that at this point you are able to start up the 2 Tomcat instances. The first one will listen on port 8080 and will have an AJP 1.3 connector (used by mod_jk) listening on port 8009. The second one will listen on port 8081 and will have the AJP 1.3 connector listening on port 8010.

I am assuming that you already know how to deploy Tomcat apps, so I am skipping that section. Please write to me if you want a chapter on this as well.

Create Apache virtual hosts for the two sites and tie them to the two Tomcat instances via mod_jk.

Here is the general mod_jk section in httpd.conf — note that it needs to be OUTSIDE of the virtual host sections:

# Mod_jk settings
# Load mod_jk module
LoadModule    jk_module  modules/
# Where to find
JkWorkersFile conf/
# Where to put jk logs
JkLogFile     logs/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel    emerg
# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
# JkOptions indicate to send SSL KEY SIZE,
JkOptions     +ForwardKeySize +ForwardURICompat -ForwardDirectories
# JkRequestLogFormat set the request format
JkRequestLogFormat     "%w %V %T"

Note that the section above has an entry called JkWorkersFile, referring to the workers file, which I put in /etc/httpd/conf. This file contains information about so-called workers, which correspond to the Tomcat instances we're running on that server. Here are the contents of my workers file:

# This file provides minimal jk configuration properties needed to
# connect to Tomcat.
# The workers that jk should create and work with

worker.list=app1, app2



The file declares 2 workers that I named app1 and app2. The first worker corresponds to the AJP 1.3 connector running on port 8009 (which is part of the Tomcat instance running on port 8080), and the second worker corresponds to the AJP 1.3 connector running on port 8010 (which is part of the Tomcat instance running on port 8081).
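Worker definitions matching that description would look something like this (the file name workers.properties and the localhost host entries are assumptions; the ports match the two AJP connectors described above):

```properties
# /etc/httpd/conf/workers.properties (name assumed; must match JkWorkersFile)
worker.list=app1, app2

worker.app1.type=ajp13
worker.app1.host=localhost
worker.app1.port=8009

worker.app2.type=ajp13
worker.app2.host=localhost
worker.app2.port=8010
```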

The way Apache ties into Tomcat is that each of the two VirtualHost sections declares a specific worker. Here is the VirtualHost section I have in httpd.conf for the first application:

DocumentRoot "/usr/local/tomcat8080/webapps/ROOT"

  # Options Indexes FollowSymLinks MultiViews
  Options None
  AllowOverride None
  Order allow,deny
  allow from all

ErrorLog logs/app1-error.log
CustomLog logs/app1-access.log combined
# Send ROOT app. to worker named app1
JkMount  /* app1
RewriteEngine On
RewriteRule ^/(images/.+);jsessionid=\w+$ /$1
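Pulling the directives above together, the full VirtualHost stanza looks roughly like this (the ServerName is an assumed placeholder):

```apacheconf
<VirtualHost *:80>
  # Assumed hostname for the first application
  ServerName app1.example.com
  DocumentRoot "/usr/local/tomcat8080/webapps/ROOT"

  <Directory "/usr/local/tomcat8080/webapps/ROOT">
    Options None
    AllowOverride None
    Order allow,deny
    Allow from all
  </Directory>

  ErrorLog logs/app1-error.log
  CustomLog logs/app1-access.log combined

  # Send ROOT app. to worker named app1, but serve /images directly
  JkMount /* app1
  JkUnMount /images/* app1

  RewriteEngine On
  RewriteRule ^/(images/.+);jsessionid=\w+$ /$1
</VirtualHost>
```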

The two important lines as far as the Apache/mod_jk/Tomcat configuration is concerned are:

JkMount /* app1
JkUnMount /images/* app1

The line “JkMount /* app1” tells Apache to send everything to the worker app1, which then ties into the Tomcat instance on port 8080.

The line “JkUnMount /images/* app1” tells Apache to handle everything under /images itself — which was one of our goals.

At this point, you need to restart Apache, for example via 'sudo service httpd restart'. If everything went well, you should be able to visit the two sites and see your two Web applications running merrily.

You may have noticed a RewriteRule in each of the two VirtualHost sections in httpd.conf. What happens with many Java-based Web applications is that when a user first visits a page, the application does not yet know whether the user has cookies enabled, so it falls back on a session ID mechanism fondly known as jsessionid. If the user does have cookies enabled, the application will not use jsessionid the second time a page is loaded. If cookies are not enabled, the application (Tomcat in our example) will continue generating URLs ending in ;jsessionid=0E45D13A0815A172BD1DC1D985793D02

In our example, we told Apache to process all URLs that start with ‘images’. But those URLs have already been polluted by Tomcat with jsessionid the very first time they were hit. As a result, Apache was trying to process them, and was failing miserably, so images didn’t get displayed the first time a user hit a page. If the user refreshed the page, images would get displayed properly (if the user had cookies enabled).

The solution I found for this issue was to use a RewriteRule that would get rid of the jsessionid in every URL that starts with ‘images’. This seemed to do the trick.

That's about it. I hope this helps somebody.

Posted in Linux.

Recovering deleted data from ext3 filesystem on linux


Suppose a Linux machine has /home on an ext3 filesystem.
You had a welcome.jpg file in /home/test and deleted it with "rm -f /home/test/welcome.jpg".
Now we will recover that welcome.jpg.
Required tools: debugfs, foremost & blkls

Step 1. –> Check which Filesystem /home is.

 Gladiator:~ # df -h
    Filesystem    Size   Used   Avail   Use%   Mounted on
    /dev/sda2     7.8G   5.3G   2.2G    71%    /
    udev          122M   168K   121M     1%    /dev
    /dev/sda3      12G   158M    11G     2%    /home

So the filesystem device is /dev/sda3.

Step 2. –> Debugfs to get necessary information
The debugfs program is an interactive file system debugger that is installed by default with most common Linux distributions. This program is used to manually examine and change the state of a filesystem. In our situation, we’re going to use this program to determine the inode which stored information about the deleted file and to what block group the deleted file belonged.

  Gladiator:~ # debugfs /dev/sda3
    debugfs 1.41.1 (01-Sep-2008)
    debugfs:  cd test
    debugfs:  ls -d
    32769  (12) .    2  (4084) ..   <32770> (4072) welcome.jpg    ---> Here we got the inode number: <32770>

The next command we want to run is imap, giving it the inode number above so we can determine to which block group the file belonged. We see by the output that it belonged to block group 4.

debugfs:  imap <32770>
    Inode 32770 is part of block group 4    -----------> Here we got block group no. ---> BG
    located at block 131074, offset 0x0100

Running the stats command will generate a lot of output. The only piece of data we are interested in, however, is the number of blocks per group. In this case, and in most cases, it's 32768. Now we have enough data to determine the specific set of blocks in which the file's data resided. We're done with debugfs, so we type q to quit.

debugfs: stats
    << lots of output >>
    Blocks per group:         32768   ---> BPG
    debugfs: q    -------> To quit debugfs

Step 3. –> Recovering data in dat format.

The next thing we need to do is pull all unallocated blocks from block group 4 so we can examine their content. The blkls program, from The Sleuth Kit (TSK), allows us to do just that. We simply need to know the device file, a range of blocks, and have enough space in an appropriate place to output this data. The start of the range is the block group number times the blocks per group; the end is the block group number plus one, times the blocks per group, minus one. In this case, the formula looks like this:

(BG * BPG) through ((BG + 1) * BPG -1)

In above example, it will look like:
BPG –> 32768
BG –> 4
(4 * 32768) through ((4+1) * 32768 -1)
131072 through 163839
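As a quick sanity check, the same arithmetic can be done in the shell:

```shell
# Block range for block group BG with BPG blocks per group:
# (BG * BPG) through ((BG + 1) * BPG - 1)
BG=4
BPG=32768
START=$((BG * BPG))
END=$(( (BG + 1) * BPG - 1 ))
echo "$START-$END"    # prints 131072-163839
```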

    So now we need to run the following command:
 Gladiator:~ # blkls /dev/sda3 131072-163839 > /root/block.dat

Step 4. –> Recovering file from dat file using “Foremost” tool

Create output directory first.
    linux-remo:~ # mkdir /root/output   
    linux-remo:~ # foremost -dv -t jpg -o /root/output/ -i /root/block.dat 

Foremost version 1.5.6 by Jesse Kornblum, Kris Kendall, and Nick Mikus
Audit File

Foremost started at Sat Sep 26 12:11:59 2009
Invocation: foremost -dv -t jpg -o /root/output/ -i /root/block.dat
Output directory: /root/output
Configuration file: /usr/local/etc/foremost.conf
Processing: /root/block.dat
File: /root/block.dat
Start: Sat Sep 26 12:11:59 2009
Length: 125 MB (132108288 bytes)

Num Name (bs=512) Size File Offset Comment

0: 00012272.jpg 65 KB 6283264 (IND BLK bs:=4096)
Finish: Sat Sep 26 12:12:03 2009

jpg:= 1

Foremost finished at Sat Sep 26 12:12:03 2009

And here we get the jpg file in the /root/output directory. The filename will differ from the original, but the content will be the same.

Comparing size only works, of course, if you “know your data”. Integrity checking programs such as Tripwire play a big role in a recovery operation as you can identify the recovered data without ever inspecting the content, as well as verify its integrity. This becomes quite useful if the information you’re attempting to recover is confidential and you are not authorized to view the data.

File formats supported by Foremost are jpg, gif, png, bmp, avi, exe, mpg, wav, riff, wmv, mov, pdf, ole, doc, zip, rar, htm, and cpp. If you need to recover data beyond these built-in data types, you will need to define custom types in Foremost's configuration file, foremost.conf.

NOTE: All credit goes to Neelesh Gurjar, who originally posted this article.

Posted in Linux.

Configure the Network Interface in Unix

You may need to configure a Solaris network interface on the fly, without a reboot. Fortunately, the process is relatively simple. Once the interface is configured and activated, the Solaris system can communicate on the network.

Please follow the steps below to configure your network interface via DHCP or with a static address.


ifconfig e1000g0 dhcp start
ifconfig e1000g0 dhcp status

If you want to release the existing IP

ifconfig e1000g0 dhcp release

To check the IP you can use the "ifconfig -a" command, and for nameserver settings use the "cat /etc/resolv.conf" command.


STEP 1: Type "ifconfig -a". The output lists two types of network interfaces. One of them is lo0, the loop-back interface, which is not used to connect to the network. The rest of the ifconfig listing displays all available network interfaces. Some possible names include ce0, hme0, be0, le0, e1000g0 (on Intel-based machines) and ge0. Use the information from ifconfig to find the name of the Solaris network interface you want to configure.

STEP 2: Type “ifconfig e1000g0 plumb” where e1000g0 is the Solaris network interface that you want to configure. This command initializes the Solaris network interface.

STEP 3: Type "ifconfig e1000g0 <ip-address> netmask <netmask>" to configure the e1000g0 network interface, supplying the IP address of the Solaris system and the netmask after the interface name (for example, an address such as 192.0.2.10 with netmask 255.255.255.0).

STEP 4: Type “ifconfig e1000g0 up” to activate the Solaris network interface and put the Solaris system on the network.

Persistent IPv4 Configuration

In order to have the system configure our NIC at boot, the first step is to get an IP address and subnet mask.
Add a line to the /etc/hosts file for our new card, mapping the interface's IP address to the name host1:

<ip-address>		host1

Now, we create a file in /etc named hostname.<interface>. For our first NIC, the file is /etc/hostname.e1000g0.
In this file, we put the name associated with the IP address, as found in the /etc/hosts file (it should be the first name listed there). In our scenario, /etc/hostname.e1000g0 should contain:

host1
Then we edit the /etc/netmasks file for our new network:
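The entry is one line of the form "network-number netmask"; with an assumed 192.0.2.0/24 network it would be:

```
# /etc/netmasks (example values, assumed for illustration)
192.0.2.0    255.255.255.0
```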

Reboot the system, and your network card has been configured for the new network with the proper subnet mask. You can check it by running an ifconfig -a again:

lo0: flags=1000849 mtu 8232 index 1
        inet netmask ff000000 
e1000g0: flags=1000843 mtu 1500 index 2
        inet netmask ffffff00 broadcast

Posted in Linux.

Bugzilla Installation on Ubuntu System

Installation checklist

  1. Perl 5.8.1 or above – this usually comes with Ubuntu 8.04 or above by default.
  2. MySQL Server
  3. Apache Web Server 1.3.x or 2.x

Install the required packages

  1. For MySQL, do apt-get install mysql-server mysql-client

  2. For Apache, do apt-get install apache2 apache2-common

  3. For Perl Modules, there’s a long list. Do the following
apt-get install libchart-perl perlmagick libgd-gd2-perl
libgd-graph-perl libgd-text-perl libnet-ldap-perl
libtemplate-perl-doc libtemplate-plugin-gd-perl
libappconfig-perl libconvert-binhex-perl libfile-temp-perl
libio-stringy-perl libmailtools-perl libmime-perl libmime-tools-perl
libtemplate-perl libtimedate-perl libemail-send-perl
libemail-mime-perl libemail-mime-modifier-perl libdbd-pg-perl
libauthen-sasl-perl libsoap-lite-perl libhtml-scrubber-perl
libemail-mime-contenttype-perl libemail-mime-encodings-perl

4. Extract the Bugzilla tarball into the /var/www directory and rename the directory to bugzilla.
5. Change the ownership of the directory and files to www-data:
chown www-data /var/www/bugzilla/ -R
6. Go into /var/www/bugzilla and run the following:

./ --check-modules 

It will list all the modules that are installed and flag any that are missing.

On Ubuntu 8.04, an old Perl CGI module is present (version 3.15),
whereas Bugzilla 3.2.2 requires 3.21 or above. Do the following to update it:
  • Download the file

  • The following commands to be used
cd /tmp
tar -zxf
perl Makefile.PL       # to configure the file
make                   # to compile the file
make test              # to test if the compilation has been okay
make install           # to install the module CGI version 3.42

There are also optional modules you can install; use your own discretion.

Posted in Linux.

How to Install a Puppet Master and Client Server

Puppet is an open-source next-generation server automation tool. It is composed of a declarative language for expressing system configuration, a client and server for distributing it, and a library for realizing the configuration.

Setup the EPEL repos for Centos – choose the correct package depending on your installation.

rpm -Uvh

rpm -Uvh

Install puppet-server

yum install puppet-server

The 1.8.5 branch of Ruby shipped with RHEL 5 can exhibit memory leaks, so consider installing Ruby 1.8.6 or later (I did not on this server, as it was a test box rather than production).

Install the help docs

yum install ruby-rdoc

Create a manifest file at /etc/puppet/manifests/default.pp

vi /etc/puppet/manifests/default.pp

Put the following in it:

# Create "/tmp/testfile" if it doesn't exist.

class test_class {

    file { "/tmp/testfile":

       ensure => present,

       mode   => 644,

       owner  => root,

       group  => root

    }

}

# tell puppet on which client to run the class

node pclient {

    include test_class

}

Start the puppet server

service puppetmaster start

Enable start on boot

chkconfig puppetmaster on

Now to install the Puppet Client on another server

Setup the EPEL repos for Centos – choose the correct package depending on your installation.

rpm -Uvh


rpm -Uvh

Install puppet

yum install puppet

Setup puppet client to generate its own certificate request to the server

/etc/init.d/puppet once -v
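By default the client looks for a master named "puppet"; if your master has a different hostname, point the client at it in /etc/puppet/puppet.conf (the hostname below is an assumed example):

```ini
[main]
    server = puppetmaster.example.com
```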

Sign the certificate request on the puppet master server. Use "puppetca --list" to see if any requests are waiting to be signed.

puppetca --sign puppet01

puppet01 must be the fully qualified domain name (FQDN) of your client server.

Run this on the client server again to retrieve the certificate

/etc/init.d/puppet once -v

Make puppet start with the system:

chkconfig puppet on

Make sure it is working on the client server:

puppet --test

You should see output showing that the file /tmp/testfile gets created.

Posted in Linux.

Email to RSS Feeder

Some days back my boss gave me an assignment to move noisy emails into an RSS feed or newsgroup. I had never worked with RSS or newsgroup distribution lists, but I knew the RSS concept.

Generally people use RSS to publish updates from a website, or, one step further, integrate RSS with email, meaning that when an update arrives on a subscribed feed you get a corresponding email. Requirements like those are easy to work out by searching on Google. But my requirement was the opposite: I wanted to divert all the noisy emails, like hourly reports and lots of other unwanted messages (some arriving every 15 minutes), which were flooding everybody's mailbox.

I searched a lot on Google and finally found a cool solution (a PHP script) that worked for me like a charm.

Below is the script:

// imap2rss.php .
// A simple PHP script to convert the data in an IMAP mailbox available
// over the internet to an RSS file readable by news aggregators.

// Version of imap2rss.php
$vers = "1.0beta3";
// Permalink for this feed - just points to the current page.
$feedLink = "http://".$_SERVER['HTTP_HOST'] . $_SERVER['PHP_SELF'];
// Configuration file directory - used if you want to place your configuration file somewhere outside your web server path.  Make sure the user
// running the web server has read permission to this file
$configurationDirectory = "/etc";

// Part types taken from documentation
$parttypes = array ("text", "multipart", "message", "application", "audio", "image", "video", "other");

// Check if an external configuration has been chosen and load it if possible
if(isset($_GET['conf'])) {
// First, strip out dangerous characters
$confName = $_GET['conf'];
$slashpos = strrpos($confName, '/');
if($slashpos === false)
$slashpos = strrpos($confName, '\\');
if($slashpos !== false)
$confName = substr($confName, $slashpos+1);
// Open the file inside the configuration directory
$conf = $configurationDirectory."/".$confName.".conf.php";
// If the file exists, load the parameters.
if(file_exists($conf)) {
include($conf);
$feedLink .= "?conf=".$_GET['conf'];
} else {
// Configuration file does not exist - error and exit
die("Selected configuration not available");
}
}

// Load variables only if an external configuration was not selected.
if(!isset($srvStr)) {
// Server string for IMAP connection.
// Hint: Changing "localhost" for your hostname should do it.
$srvStr = "{localhost:143/notls}INBOX";
// IMAP account username
$accountUser = "username";
// IMAP account password
$accountPass = "password";
// Maximum number of messages to include in the feed
// (from newest, 0 for no limit)
$maxMsgNum = 0;
// Title for this feed
$feedTitle = "imap2rss.php Feed";
// Feed description
$feedDesc = "Sample description";
// Feed language
$feedLang = "en-gb";
// Feed editor's name
$feedEditor = "Feed Editor";
// Feed editor's email
$feedEditorMail = "";
// General Options
// Munge sender emails - 1 for yes, 0 for no
$mungeSenderEmail = 0;
// Make http addresses links in plain-text emails
$makeHttpLinks = 1;
// Make email addresses mailto: links in plain-text emails
$makeMailtoLinks = 1;
}

// reEncodeString()
// Goes through a string an reencodes all the html entities that it
// finds into a format that won't make XML parsers choke.
function reEncodeString($string) {
$temp = $string;
$ents = get_html_translation_table(HTML_ENTITIES);
$special = get_html_translation_table();
$table = array_diff($ents, $special);
foreach($table as $item) {
$temp = str_replace($item, "&amp;".substr($item, 1), $temp);
}
return $temp;
}

// mungeEmailAddress()
// Munges an email address to prevent being harvested by spambots.
// This is really simple, you can replace this with whatever technique you prefer.
// Remember that for the feed to validate as RSS, this needs to be a valid address,
// @ sign and all, so the munging is limited.
function mungeEmailAddress($address) {
return str_replace("@", ".NOSP@MMER.", $address);
}

// renderPlainText()
// Processes plain text so that it looks decent when rendered as HTML.
// All it does is substitute newline characters for <br> tags.
// It also substitutes URLs for links and email addresses for mailto:
// links.
function renderPlainText($text) {
global $makeHttpLinks, $makeMailtoLinks;
// Throw in <br> tags
$retval = str_replace("\n", "<br/>\n", $text);
if($makeHttpLinks) {
// Replace urls with links
$retval = preg_replace('/\s(\w+:\/\/)(\S+)/',
' <a href="\\1\\2" target="_blank">\\1\\2</a>', $retval);
}
if($makeMailtoLinks) {
// Replace email addresses with mailto: links
$retval = preg_replace('/\s(\w+@)(\S+)/',
' <a href="mailto:\\1\\2">\\1\\2</a>', $retval);
}
return $retval;
}

// returnAttachment()
// This function returns a given attachment from an item.
function returnAttachment($itemId, $attachId) {
global $vers, $parttypes, $srvStr, $accountUser, $accountPass;
$inbox = imap_open($srvStr, $accountUser, $accountPass);
$msgStructure = imap_fetchstructure($inbox, $itemId);
$part = $msgStructure->parts[$attachId-1];
$ctype = $parttypes[$part->type]."/".$part->subtype;
$filename = "filename";
foreach($part->parameters as $param) {
$filename = $param->value;
}
header("content-type: ".$ctype);
header("content-disposition: attachment; filename=".$filename);
// Returned data depends on whether the attachment is binary or text
if($part->type>0) {
// Binary attachment - convert from base64 to binary
echo base64_decode(imap_fetchbody($inbox, $itemId, $attachId));
} else {
// Text attachment - just display it as-is
echo imap_fetchbody($inbox, $itemId, $attachId);
}
}

// showArticle()
// This function displays a post in an html page. This
// functionality exists to complement permalink/guid behaviour
// in RSS and also to enable compatibility with readers like
// Thunderbird, that always load the permalink instead of
// displaying the summary.
function showArticle($articleId) {
global $vers, $feedLink, $srvStr, $accountUser, $accountPass, $mungeSenderEmail;
$inbox = imap_open($srvStr, $accountUser, $accountPass);
header('Content-type: text/html');
// Retrieve post information from the message header
$headers = imap_headerinfo($inbox, $articleId);
$subject = htmlentities($headers->subject);
$author = htmlentities($headers->fromaddress);
// If author email munging is enabled...
if($mungeSenderEmail) {
$author = mungeEmailAddress($author);
}
// Format the date according to the standard
$entryDate = date("D, d M Y H:i:s O", $headers->udate);
// Get the message body.
// Negotiate the presence of attachments.
$msgStructure = imap_fetchstructure($inbox, $articleId);
if(count($msgStructure->parts)>1) {
$body = imap_fetchbody($inbox, $articleId, "1");
$body = renderPlainText($body);
$body .= "<h3>Attachments:</h3>\n";
$partCount = 0;
foreach($msgStructure->parts as $part) {
$partCount++;
if (isset($part->disposition)) {
foreach($part->parameters as $param) {
// Generate the link for retrieving attachments
$body .="<a href=\"".$feedLink;
if(isset($_GET['conf'])) {
$body .= "?conf=".$_GET['conf']."&amp;";
} else {
$body .= "?";
}
$body .="itemId=".$articleId."&attachId=".$partCount."\">";
$body .=$param->value."</a><br/>\n";
}
}
}
} else {
$body = imap_body($inbox, $articleId);
// If the body is plain-text, run the HTML rendering function
$body = renderPlainText($body);
}
// The HTML used for displaying post content.
?>
<html>
<head><title><?php echo $subject;?></title></head>
<body>
<div style="background:#eeeeee; border:solid 1px">
<strong><?php echo $subject; ?></strong><br />
<i><?php echo "by: ".$author.", @ ".$entryDate;?></i>
<br />
<?php echo $body; ?>
</div>
</body>
</html>
<?php
}
// generateFeed()
// Opens an IMAP connection to the specified server and converts the
// contents of the inbox to an RSS feed.
function generateFeed() {
global     $vers, $srvStr, $accountUser, $accountPass, $feedTitle, $maxMsgNum,
$feedLink, $feedDesc, $feedLang, $feedEditor, $feedEditorMail, $mungeSenderEmail;

$inbox = imap_open($srvStr, $accountUser, $accountPass);

$pubDate = date("D, d M Y H:i:s O", time());

header('Content-type: text/xml');

// RSS header
echo "<?xml version=\"1.0\"?>\n";
echo "<rss version=\"2.0\" xmlns:dc=\"\">\n";
echo "   <channel>\n";
echo "       <title>$feedTitle</title>\n";
echo "       <link>$feedLink</link>\n";
echo "       <description>$feedDesc</description>\n";
echo "       <language>$feedLang</language>\n";
echo "       <generator>IMAP2RSS v.$vers</generator>\n";
echo "       <managingEditor>$feedEditor ($feedEditorMail)</managingEditor>\n";
echo "       <webMaster>$feedEditor ($feedEditorMail)</webMaster>\n";
echo "       <pubDate>$pubDate</pubDate>\n";

// Calculate the number of items to include in the feed.
$msgCount = imap_num_msg($inbox);
if($maxMsgNum && $msgCount>$maxMsgNum)
$lowerLimit = $msgCount - $maxMsgNum;
else
$lowerLimit = 0;
// Generate item entries
for($i=$msgCount; $i>$lowerLimit; $i--) {
$headers = imap_headerinfo($inbox, $i);
$subject = reEncodeString(htmlentities($headers->subject));
// Use htmlentities() because sometimes the address appears
// inside angle brackets.
$author = reEncodeString(htmlentities($headers->fromaddress));
// If author email munging is enabled...
if($mungeSenderEmail) {
$author = mungeEmailAddress($author);
}
// Format the date according to the standard
$entryDate = date("D, d M Y H:i:s O", $headers->udate);
// Set the item link depending on whether there is a custom
// configuration in use or not.
$itemUrl = $feedLink.((isset($_GET['conf']))?"&amp;":"?")."itemId=$i";

// Negotiate the presence of attachments.
$msgStructure = imap_fetchstructure($inbox, $i);
if(count($msgStructure->parts)>1) {
$body = imap_fetchbody($inbox, $i, "1");
// If the body is plain-text, run the HTML rendering function
$body = renderPlainText($body);
$body .= "<h3>Attachments:</h3>\n";
$partCount = 0;
foreach($msgStructure->parts as $part) {
$partCount++;
if (isset($part->disposition)) {
foreach($part->parameters as $param) {
// Generate the link for retrieving attachments
$body .="<a href=\"".$feedLink;
if(isset($_GET['conf'])) {
$body .= "&amp;";
} else {
$body .= "?";
}
$body .="itemId=".$i."&attachId=".$partCount."\">";
$body .=$param->value."</a><br/>\n";
}
}
}
} else {
$body = imap_body($inbox, $i);
// If the body is plain-text, run the HTML rendering function
$body = renderPlainText($body);
}
// Clean up output to avoid problems with the XML produced
$body = reEncodeString(htmlentities($body));
echo "       <item>\n";
echo "        <title>$subject</title>\n";
echo "               <link>$itemUrl</link>\n";
echo "               <pubDate>$entryDate</pubDate>\n";
echo "               <description>$body</description>\n";
echo "               <dc:creator>$author</dc:creator>\n";
echo "               <guid>$itemUrl</guid>\n";
echo "       </item>\n";
}
echo "     </channel>\n";
echo "</rss>";
}


// display page body
// If an itemId has been set, display that item in an HTML page.
// If an itemId and an attachId have been set, return that attachment
// If not, show the entire feed.
if(isset($_GET['itemId'])) {
if(isset($_GET['attachId'])) {
returnAttachment($_GET['itemId'], $_GET['attachId']);
} else {
showArticle($_GET['itemId']);
}
} else {
generateFeed();
}

You can download the script directly from here

Just make the IMAP mail server settings and that's it. Place the script in any web server document root, and you can then access the feed using any RSS reader.
To test whether your script is working, access imap2rss.php from its URL.
For example, I placed the script in my document root; opening the script's URL shows the mails for the user configured in the script, in XML format. The script handles one mailbox by default, but you can configure it for multiple mailboxes as well.

To use one installation of imap2rss.php to access several mailboxes, you need to create a configuration file named <name>.conf.php (for example, mycfg.conf.php) in the configuration directory set at the top of the script ($configurationDirectory, /etc by default). The file should look something like this:
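A sketch of such a file, with hypothetical server and account values (it simply presets the variables that the script would otherwise default):

```php
<?php
// mycfg.conf.php - loaded from $configurationDirectory (/etc by default)
// All values below are placeholders; substitute your own.
$srvStr      = "{mail.example.com:143/notls}INBOX";
$accountUser = "someuser";
$accountPass = "secret";
$feedTitle   = "Reports feed";
$feedDesc    = "Hourly reports diverted from email";
$maxMsgNum   = 50;
?>
```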

Once you have created such a file, call imap2rss.php with an additional parameter, conf, set to whatever you called the configuration. For example, if your configuration file is called mycfg.conf.php, the URL would look like http://yourserver/imap2rss.php?conf=mycfg.
If you have problems connecting to your IMAP server, read the page referred to near the beginning of the script file; correct configuration for your IMAP server can be tricky.

I have tested it with Thunderbird and Firefox (Firefox needs an add-on for this). Any RSS reader should work with it.

All credit goes to the person who resolved my problem.

Posted in Linux.

Splunk Server Setup and Configuration

Installation Of Splunk Server

Configure the Splunk server:
1. Download the latest Splunk tarball from the Splunk download page.
2. Copy the downloaded file to /opt.
3. Untar the downloaded Splunk file:

# cd /opt
# tar -xzvf splunk-4.0.8-73243-Linux-i686.tgz
# cd splunk/bin/
# ./splunk start

Accept the agreement and the default settings.

4. Open the splunk webUI (http://localhost:8000)
5. Use the default username and password to log in, i.e. admin/changeme.

#### Setup Splunk as a Receiver ####
1. Log in to the WebUI using the above-mentioned credentials.
2. Go to Manager » Forwarding and receiving » Receive data.
3. Click on the New button and add the default port, i.e. 9997.
4. Click on the Save button to save the settings.
Now the Splunk server has been set up as a receiver on port 9997.
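The same receiver setting can also be made in a config file rather than the WebUI; a sketch, assuming a default install under /opt/splunk, placed in /opt/splunk/etc/system/local/inputs.conf:

```
# inputs.conf -- listen for forwarded data on TCP port 9997
[splunktcp://9997]
disabled = 0
```

Restart Splunk after editing the file for the change to take effect.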

Note: If you are running a firewall, please allow the above port.

#### Setup Splunk as a Forwarder ####
IP Address of forwarder machine:
IP Address of receiver server:

You have the following preconfigured forwarder choices:
* Splunk forwarder
* Splunk light forwarder
1. ssh to the forwarder machine (the machine to be monitored), e.g. ssh ramesh@
2. Use the installation steps above to install Splunk on the client machine.

# cd /opt/splunk/bin
# ./splunk enable app SplunkLightForwarder -auth admin:changeme
# ./splunk add forward-server receiver_server_ip:port -auth admin:changeme
e.g. # ./splunk add forward-server -auth admin:changeme
# ./splunk restart
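The CLI commands above end up writing an outputs.conf on the forwarder; an equivalent sketch (the receiver address is a placeholder), typically under /opt/splunk/etc/system/local/outputs.conf:

```
# outputs.conf -- forward all events to the receiver on port 9997
[tcpout]
defaultGroup = my_receivers

[tcpout:my_receivers]
server = receiver_server_ip:9997
```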

#### Setup Splunk Alerts ####
NOTE: We assume that the Splunk server has been installed on a Linux box.

1. Log in to the Splunk server WebUI.
2. Go to App >> Search
3. Click on /var/log/secure under the Source section.
This will show the entire contents of the secure file.
4. Click on the string(s) that you want to search or set up an alert on, e.g. "Accepted Password".

The search box will then contain: source="/var/log/secure" "Accepted Password"

5. Then go to Action >> Save Search.
It will pop up a window.
6. Name – SSH Access Authenticated
Search – filled in by default with the search we ran earlier.
Description – can be anything you like.
Check "Schedule this search".
Schedule Type – Basic
Run Every – Minute
Alert Condition
Perform actions (optional) – if number of events – is greater than – 0
Alert Action
Check "Send Email".
Email Addresses:,

Click on the Save button to save your alert.

To verify Your alert setup go to
Manager » Searches and reports >> SSH Access Authenticated
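Behind the WebUI, a saved alert like this becomes a stanza in savedsearches.conf; a rough sketch of the equivalent (the email address is a placeholder):

```
# savedsearches.conf -- the "SSH Access Authenticated" alert as config
[SSH Access Authenticated]
search = source="/var/log/secure" "Accepted Password"
enableSched = 1
cron_schedule = * * * * *
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = you@example.com
```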


Posted in Linux.

subprocess pre-removal script returned error exit status 2? error

Recently, I encountered a package management related error in Ubuntu Jaunty Jackalope 32bit.

The package was sharutils and the error was “E: sharutils: subprocess pre-removal script returned error exit status 2” and the details showed:
“dpkg (subprocess): unable to execute pre-removal script: Exec format error
dpkg: error processing sharutils (–remove):
subprocess pre-removal script returned error exit status 2
dpkg (subprocess): unable to execute post-installation script: Exec format error
dpkg: error while cleaning up:
subprocess post-installation script returned error exit status 2
Errors were encountered while processing:
E: Sub-process /usr/bin/dpkg returned an error code (1)”

I managed to fix it. Read on for how I did it.

Precaution: This process can break your system if not followed exactly as described (you may run into problems even if you do follow it). Please proceed at your own risk.

First, please try the following in terminal:

sudo aptitude update
sudo aptitude -f install 

If that does not work, then you may want to try (substituting the name of your broken package):

sudo dpkg --force-all --remove sharutils

If both of them still produce similar errors, then continue. If the above commands fix your problem, you should not continue.

1. Close Synaptic or any other package manager. Wait for, or cancel, any running updates or install/uninstall operations.
2. Make a backup of the current /var/lib/dpkg/status file. Just copy and paste it to your home directory or Desktop.
3. Press Alt+F2, type in gksu gedit /var/lib/dpkg/status and run it. Gedit will be launched with the file open.
4. Now, search for the exact name of the package with problems and find its entry.
5. In my case, I found the entry for the package sharutils:

Package: sharutils
Status: deinstall ok half-configured
Priority: standard
Section: utils
Installed-Size: 968
Maintainer: Ubuntu Core Developers <>
Architecture: i386
Version: 1:4.6.3-1build1
Depends: libc6 (>= 2.6-1)
Suggests: mailx
Conflicts: shar, uuencode
Description: shar, unshar, uuencode, uudecode
`shar' makes so-called shell archives out of many files, preparing
them for transmission by electronic mail services.  `unshar' helps
unpacking shell archives after reception.  Other related utility
programs help with other tasks.
`uuencode' prepares a file for transmission over an electronic
channel which ignores or otherwise mangles the eight bit (high
order bit) of bytes.  `uudecode' does the converse transformation.
Original-Maintainer: Santiago Vila <>

6. Select and delete that information and only that information, i.e. remove everything from "Package: culprit-package-name" through the end of its "Description:" block. Remember, the description may span multiple lines, and you need to remove all of them up to the blank line that separates it from the next package. Don't forget to leave one blank line between the package entry above and the one below. Be careful to delete only the culprit package's information, as shown in the box above.

7. Launch Synaptic (or any package manager) and search for the package. You will see the package as not installed. Mark it for installation and install it. If you see the same error again, restart the process from Step 1, but this time stop at Step 6.
8. Now you should be able to remove the package if you no longer want it. If you can't, restart the process from Step 1, but this time stop at Step 6.
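For the command-line inclined, the stanza-removal in steps 2–6 can be sketched with awk. The sketch below works on a small sample file so it is safe to run anywhere; on a real system you would point STATUS at /var/lib/dpkg/status (as root) and review the .new file before moving it into place:

```shell
# Work on a sample status file: two stanzas, one broken (sharutils)
STATUS=./status.sample
printf '%s\n' 'Package: sharutils' 'Status: deinstall ok half-configured' '' \
              'Package: bash' 'Status: install ok installed' > "$STATUS"
cp "$STATUS" "$STATUS.bak"              # step 2: always keep a backup
# Drop every line of the stanza whose Package: field matches pkg
awk -v pkg="sharutils" '
  /^Package: / { drop = ($2 == pkg) }   # entering a new stanza
  !drop                                 # print lines of kept stanzas
' "$STATUS" > "$STATUS.new"
```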

This resolved my problem; I hope it will resolve yours too.

NOTE: All credit goes to

Posted in Linux, Open Source.

Reliance DataCard on Ubuntu

First of all, create the file /etc/wvdial.conf:

vi /etc/wvdial.conf

[Modem0]
Modem = /dev/ttyUSB0
Baud = 115200
SetVolume = 0
Dial Command = ATDT
Init1 = ATZ
FlowControl = Hardware (CRTSCTS)

[Dialer cdma]
Username =
Password =
Phone = #777
Stupid Mode = 1
Inherits = Modem0

I had connected the datacard before booting my system.
(I have enabled root login.)
I executed the following command:

#lsusb

It listed many things, along with my new Reliance USB datacard:
Bus 006 Device 019: ID 12d1:1411 Huawei Technologies Co., Ltd.

#wvdial cdma
--> WvDial: Internet dialer version 1.60
--> Cannot get information for serial port.
--> Initializing modem.
--> Sending: ATZ
--> Modem initialized.
--> Sending: ATDT#777
--> Waiting for carrier.
CONNECT 153600
--> Carrier detected. Starting PPP immediately.
--> Starting pppd at Sat Dec 13 09:04:51 2008
--> Pid of pppd: 6410
--> Using interface ppp0
--> pppd: `?[13] `?[13] ??[13]
--> pppd: `?[13] `?[13] ??[13]
--> pppd: `?[13] `?[13] ??[13]
--> pppd: `?[13] `?[13] ??[13]
--> pppd: `?[13] `?[13] ??[13]
--> pppd: `?[13] `?[13] ??[13]
--> local IP address
--> pppd: `?[13] `?[13] ??[13]
--> remote IP address
--> pppd: `?[13] `?[13] ??[13]
--> primary DNS address
--> pppd: `?[13] `?[13] ??[13]
--> secondary DNS address
--> pppd: `?[13] `?[13] ??[13]

I then minimized the terminal and started Firefox ("Work Offline" has to be disabled). It worked.
(For disconnecting I use ctrl+c in the terminal. can anyone suggest the correct method?)
If the datacard is inserted after Ubuntu has booted, the #lsusb command doesn't show the datacard.
I executed

#modprobe usbserial

After some time I ran

#lsusb

again and it worked. (I am not sure whether Ubuntu automatically detected it or the 'modprobe' command did it.)
Then as usual

#wvdial cdma

You will get output similar to that shown above; then minimize the terminal and browse the internet.

Posted in Linux.
