Monday, August 25, 2014

Incremental Backups in MySQL

Both the xtrabackup and innobackupex tools support incremental backups, which means they can copy only the data that has changed since the last full backup. You can perform many incremental backups between each full backup, so you can set up a backup process such as a full backup once a week and an incremental backup every day, or a full backup every day and incremental backups every hour. A cron sketch of the weekly/daily scheme follows.
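As a minimal sketch of such a schedule (the paths and timings here are hypothetical placeholders, not part of any standard setup), the crontab entries could look like this:
# Hypothetical crontab: full backup every Sunday at 01:00,
# incrementals against that full backup Monday through Saturday.
# Note that % must be escaped as \% inside a crontab.
0 1 * * 0 xtrabackup --backup --target-dir=/data/backups/base --datadir=/var/lib/mysql/
0 1 * * 1-6 xtrabackup --backup --target-dir=/data/backups/inc-$(date +\%u) --incremental-basedir=/data/backups/base --datadir=/var/lib/mysql/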
Incremental backups work because each InnoDB page (usually 16KB in size) contains a log sequence number, or LSN. The LSN is the system version number for the entire database. Each page's LSN shows how recently it was changed. An incremental backup copies each page whose LSN is newer than the previous incremental or full backup's LSN. There are two algorithms in use to find the set of such pages to be copied. The first one, available with all server types and versions, is to check the page LSN directly by reading all the data pages. The second one, available with Percona Server, is to enable the changed page tracking feature on the server, which notes pages as they are changed. This information is then written out in a compact, separate, so-called bitmap file. The xtrabackup binary uses that file to read only the data pages it needs for the incremental backup, potentially saving many read requests. The latter algorithm is enabled by default if the xtrabackup binary finds the bitmap file. It is possible to specify --incremental-force-scan to read all the pages even when the bitmap data is available.
Incremental backups do not actually compare the data files to the previous backup’s data files. In fact, you can use --incremental-lsn to perform an incremental backup without even having the previous backup, if you know its LSN. Incremental backups simply read the pages and compare their LSN to the last backup’s LSN. You still need a full backup to recover the incremental changes, however; without a full backup to act as a base, the incremental backups are useless.
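For example, if you know the to_lsn value of the last backup (1291135 in the example later in this article), you can take an incremental backup without having the previous backup directory on hand; the target directory name here is just a hypothetical placeholder:
xtrabackup --backup --target-dir=/data/backups/inc_by_lsn \
--incremental-lsn=1291135 --datadir=/var/lib/mysql/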

Creating an Incremental Backup

To make an incremental backup, begin with a full backup as usual. The xtrabackup binary writes a file called xtrabackup_checkpoints into the backup’s target directory. This file contains a line showing the to_lsn, which is the database’s LSN at the end of the backup. Create the full backup with a command such as the following:
xtrabackup --backup --target-dir=/data/backups/base --datadir=/var/lib/mysql/
If you want a usable full backup, use innobackupex, since xtrabackup itself won't copy table definitions, triggers, or anything else that isn't stored in .ibd files.
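As a sketch (the credentials are placeholders), the equivalent full backup taken with innobackupex looks like the following; by default it creates a timestamped subdirectory under the given backup directory:
innobackupex --user=root --password=YourPwd /data/backups/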
If you look at the xtrabackup_checkpoints file, you should see some contents similar to the following:
backup_type = full-backuped
from_lsn = 0
to_lsn = 1291135
Now that you have a full backup, you can make an incremental backup based on it. Use a command such as the following:
xtrabackup --backup --target-dir=/data/backups/inc1 \
--incremental-basedir=/data/backups/base --datadir=/var/lib/mysql/
The /data/backups/inc1/ directory should now contain delta files, such as ibdata1.delta and test/table1.ibd.delta. These represent the changes since the LSN 1291135. If you examine the xtrabackup_checkpoints file in this directory, you should see something similar to the following:
backup_type = incremental
from_lsn = 1291135
to_lsn = 1291340
The meaning should be self-evident. It’s now possible to use this directory as the base for yet another incremental backup:
xtrabackup --backup --target-dir=/data/backups/inc2 \
--incremental-basedir=/data/backups/inc1 --datadir=/var/lib/mysql/

Preparing the Incremental Backups

The --prepare step for incremental backups is not the same as for normal backups. In normal backups, two types of operations are performed to make the database consistent: committed transactions are replayed from the log file against the data files, and uncommitted transactions are rolled back. You must skip the rollback of uncommitted transactions when preparing a backup, because transactions that were uncommitted at the time of your backup may be in progress, and it’s likely that they will be committed in the next incremental backup. You should use the --apply-log-only option to prevent the rollback phase.
If you do not use the --apply-log-only option to prevent the rollback phase, then your incremental backups will be useless. After transactions have been rolled back, further incremental backups cannot be applied.
Beginning with the full backup you created, you can prepare it, and then apply the incremental differences to it. Recall that you have the following backups:
/data/backups/base
/data/backups/inc1
/data/backups/inc2
To prepare the base backup, you need to run --prepare as usual, but prevent the rollback phase:
xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base
The output should end with some text such as the following:
101107 20:49:43  InnoDB: Shutdown completed; log sequence number 1291135
The log sequence number should match the to_lsn of the base backup, which you saw previously.
This backup is actually safe to restore as-is now, even though the rollback phase has been skipped. If you restore it and start MySQL, InnoDB will detect that the rollback phase was not performed, and will do it in the background, as it usually does during crash recovery at startup. It will notify you that the database was not shut down normally.
To apply the first incremental backup to the full backup, you should use the following command:
xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base \
--incremental-dir=/data/backups/inc1
This applies the delta files to the files in /data/backups/base, which rolls them forward in time to the time of the incremental backup. It then applies the redo log as usual to the result. The final data is in /data/backups/base, not in the incremental directory. You should see some output such as the following:
incremental backup from 1291135 is enabled.
xtrabackup: cd to /data/backups/base/
xtrabackup: This target seems to be already prepared.
xtrabackup: xtrabackup_logfile detected: size=2097152, start_lsn=(1291340)
Applying /data/backups/inc1/ibdata1.delta ...
Applying /data/backups/inc1/test/table1.ibd.delta ...
.... snip
101107 20:56:30  InnoDB: Shutdown completed; log sequence number 1291340
Again, the LSN should match what you saw from your earlier inspection of the first incremental backup. If you restore the files from /data/backups/base, you should see the state of the database as of the first incremental backup.
Preparing the second incremental backup is a similar process: apply the deltas to the (modified) base backup, and you will roll its data forward in time to the point of the second incremental backup:
xtrabackup --prepare --target-dir=/data/backups/base \
--incremental-dir=/data/backups/inc2
Note
--apply-log-only should be used when merging all incrementals except the last one. That's why the previous line doesn't contain the --apply-log-only option. Even if --apply-log-only were used on the last step, the backup would still be consistent, but in that case the server would perform the rollback phase at startup.
If you wish to avoid the notice that InnoDB was not shut down normally, then once you have applied the desired deltas to the base backup, you can run --prepare again without disabling the rollback phase, as shown below.
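This final prepare pass is simply the earlier command without --apply-log-only:
xtrabackup --prepare --target-dir=/data/backups/base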

Wednesday, August 13, 2014

How to Create an Oracle OVM VM from a Linux ISO DVD Image

If you are a Linux sysadmin working in an environment that uses a lot of Oracle products, you might find yourself exploring the Oracle virtualization product called OVM.
In OVM, you'll typically use a pre-existing OVM template for a particular Linux distro (or Windows) to create a virtual machine.
But you can also create an OVM VM from an ISO image. If you frequently install a specific OS version in your environment, you can upload that particular distro's ISO image to an Oracle VM repository, and use it to create new virtual machines as many times as you like.

Import ISO to an OVM Repository

Before you create a VM, you should first import the ISO into OVM.
You have to select a specific repository where you want to upload the ISO image. Please note that you can use the ISO only on those server pools (or servers) where this particular repository is mounted.
Log in to OVM Manager, go to "Repositories", select a repository and expand it, then select the "ISOs" folder, and click on the "Import ISO" icon on the toolbar.
Oracle VM Import ISO
Next, select the server name from the drop-down list. This will display all the OVM servers where this particular repository is available. In this example, we've selected the "ovm1" server. Then, specify the location of the ISO image.
Oracle VM Import ISO Location
In this case, I've uploaded the Oracle Enterprise Linux 5.3 DVD ISO image to a local server on the internal network that runs the Apache webserver, and specified that URL. Click on "Import", which will import the ISO image into the selected OVM repository.
Note: If you are running an older version of Apache on the server where the ISO image is located, you might get the following error messages in the Apache error_log and in OVM Manager when you try to upload the ISO image. In that case, move the ISO image to a different server that runs a recent version of Apache, and specify that URL.
On Apache:
[error] [client 192.168.1.1] (75)Value too large for defined data type: 
access to /iso/Enterprise-R5-U3-Server-i386-dvd.iso failed

On OVM:
Async operation failed on server: ovm1. Enterprise-R5-U3-Server-i386-dvd.iso
PID: 13946, Server error message: [Errno 14] 
curl#22 - "The requested URL returned error: 403

Create OVM Virtual Machine from ISO

In OVM Manager, go to the "Servers and VMs" tab -> expand a particular server pool -> select your OVM server -> and click on the "Create VM" icon in the toolbar, which will display the following screen. Fill out the appropriate information here.
Please note that the repository we selected here should be the repository where we uploaded the ISO image.
Oracle OVM Create VM Name

Set-up Networks

The "Unassigned NICs" drop-down will display the next available virtual NIC. Click on the "Add VNIC" button, which will add that MAC address to this new VM on the specified network.
If you don't see any MACs available in the "Unassigned NICs" drop-down list, you can always create new NICs by clicking on the "Create More NICs" button.
Oracle VM Create VM Networks

Arrange Disks (Select ISO to be installed)

In this screen, we’ll specify the ISO that is needed to install Linux distro on this new VM. We’ll also specify the virtual disks that we need on this particular VM.
Select "CD/DVD" from the Disk Type drop-down list, which will add "EMPTY_CDROM" in the "Content" column. Then click on the "Search" icon under the "Action" column.
Oracle OVM Create VM Arrange Disks ISO
This will display all the ISOs that are available for the server. You should've uploaded the ISO that you need, as we explained in the first step of this tutorial.
Oracle OVM Create VM Select an ISO Image
Once you select an ISO image, select “Virtual Disk” from the “Disk Type” drop-down list, and click on “Add +” icon to add a virtual disk to this new VM as shown below.
Oracle VM Create VM Virtual Disk
In this example, I’m assigning a virtual disk of 10GB to the new VM. In the “Boot” and “Tag” screen, leave all the values at default, and don’t change anything. Click on “Finish” to create the new VM.
Oracle OVM Create Virtual Disk

New VM from ISO Created

This will create the new VM from our ISO image on the selected OVM server, as shown below. Once a VM is created, you can start/stop/restart it, change its config, and do all the typical VM operations that you can do in any virtualization environment.
Oracle OVM New VM Created

How to Migrate Microsoft SQL Server to MySQL Database

If you mostly use open source software in your enterprise, and have a few MS SQL Server databases around, you might want to consider migrating those to MySQL.
The following are a few reasons why you might want to consider migrating Microsoft SQL Server to MySQL:

  • To avoid the huge license and support fees of MS SQL Server. Even if you decide to use the MySQL Enterprise Edition, it is less expensive.
  • Unlike SQL Server, MySQL supports a wide range of operating systems, including several Linux distros, Solaris, and Mac OS X.
  • To implement a highly scalable database infrastructure
  • To take advantage of several advanced features of MySQL database that have been tested intensively over the years by a huge open source community
We can migrate an MS SQL database to MySQL using the migration module of the MySQL Workbench utility.
The easiest way to install MySQL Workbench is to install the Oracle MySQL Installer for Windows, which installs several MySQL tools including Workbench.
Download and install this MySQL Installer, which includes Workbench and the other connectors and drivers required for the migration.
The following is an overview of the steps involved in migrating an MS SQL database to MySQL using the Workbench migration wizard.

1. Take care of Prerequisites

Before starting the MySQL database migration wizard in Workbench, we need to ensure that an ODBC driver is present for connecting to the source Microsoft SQL Server database, as one is not bundled with Workbench.
Verify that the max_allowed_packet option on the MySQL server is large enough for the largest field to be migrated; you can check and raise it as sketched below.
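For instance (the 64MB value is an arbitrary illustration, not a recommendation), from the mysql client:
mysql> SHOW VARIABLES LIKE 'max_allowed_packet';
mysql> SET GLOBAL max_allowed_packet = 67108864;  -- 64MB, hypothetical value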
Ensure that we can connect to both the destination MySQL database and the source MS SQL Server database, with the privileges that are required for migrating the data across.
In the MySQL Workbench, the migration wizard will display the following “Migration task list” that you’ll need to go through to finish the migration.
MySQL Workbench Migration Overview

2. Select Source and Target Database

First, define the source Microsoft SQL Server database connection parameters. Select "Microsoft SQL Server" from the database system drop-down list. In the parameters tab, select the DSN, and specify the username for the source database.
MySQL Workbench Migration Select Source
Next, define the destination MySQL database connection parameters. Select "Local Instance MySQL" or "Remote Instance MySQL" depending on your situation. In the parameters tab, specify the hostname or the IP address where the MySQL database is running, the MySQL port, and the username. If you don't specify the password, it will prompt you for it.
MySQL Workbench Migration Select Target DB
Once you specify the source and destination, all available schemas and databases will be listed. You can select the specific schemas that you would like to migrate (or select all), and you can also specify a custom schema mapping to the destination MySQL database.
MySQL Workbench Migration Select Schema

3. Migrate the Objects

In this step the Microsoft SQL Server schema objects, table objects, data types, default values, indexes, and primary keys are converted. Please note that view objects, function objects, and stored procedures are just copied over and commented out, as we will need to convert those manually.

4. Data Migration

In this step, the data is automatically copied from the source to the destination database for the migrated tables.
MySQL Workbench Migration Data Transfer
Please note that using the migration wizard we can only convert tables and copy data; it cannot convert the triggers, views, and stored procedures. We'll have to do those manually, which we might cover in a future article on how to migrate MS SQL stored procedures to MySQL stored procedures.

How to Setup MongoDB Replication Using Replica Set and Arbiters

If you are running MongoDB in a production environment, it is essential that you set up real-time replication of your primary instance.
Using a replica set, you can also scale horizontally and distribute the read load across multiple MongoDB nodes.
This tutorial explains in detail how to set up MongoDB replication.

You can set up MongoDB replication in several configurations, but we'll discuss the minimum configuration setup here.
MongoDB recommends that you have a minimum of three nodes in a replica set. Out of those three nodes, two nodes store data and one node can be just an arbiter node.
An arbiter node doesn't hold any data, but it participates in the voting process when the primary goes down.
The following is the minimal setup that is explained in this tutorial.
MongoDB Replica Set

1. Install MongoDB

First, install mongodb on all three nodes.
yum install mongo-10gen mongo-10gen-server

2. Modify /etc/hosts file

On mongodb1 (and mongodb2) server, modify the /etc/hosts file and add the following.
192.168.100.1 mongodb1
192.168.100.2 mongodb2
192.168.100.3 arbiter1
Make sure all these nodes are able to talk to each other using their hostnames. You can also use FQDNs here. For example, instead of mongodb1, you can use mongodb1.thegeekstuff.com

3. Enable Auth on all MongoDB Nodes

While this is not mandatory for replica set functionality, don't run your production MongoDB instance without authentication enabled.
By default, auth is not enabled on MongoDB. Add the following line to your mongod.conf file.
# vi /etc/mongod.conf
auth = true
Restart the mongodb instance after the above change.
service mongod restart
Also make sure you create an admin username and password in the admin database, which you'll use to run the replica set commands.
> use admin
switched to db admin
> db.addUser("admin", "SecretPwd");
Note: Do the above on all the mongodb nodes.

4. On mongodb1: Restore Existing DB

If you already have a single-instance MongoDB server running, and would like to migrate it to this new replica set configuration using the three nodes, take a backup of the data using mongodump, and restore it on the mongodb1 instance using the mongorestore command as shown below.
mongorestore --dbpath /var/lib/mongo --db ${db_destination} --drop dump/${db_source}
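For reference, the backup on the original single-instance server could be taken with a command like the following (${db_source} is the same placeholder used in the restore command above):
mongodump --db ${db_source} --out dump/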
After the restore, if the file permissions under the /var/lib/mongo directory are different, change them accordingly as shown below.
cd /var/lib/mongo
chown mongod:mongod *
service mongod start

5. On mongodb1: Add replSet to mongod.conf

Add the following line to the mongod.conf file. You can give any value here. I've given "prodRepl", but it can be anything, as long as it is the same on all the nodes.
# vi /etc/mongod.conf
replSet = prodRepl
At this stage, you’ll notice that there is no replica set configuration. It will display “null” as shown below.
> use admin;
> db.auth("admin","SecretPwd");
> rs.conf();
null

6. Setup KeyFile for Replication Auth on all MongoDB Nodes

On all the MongoDB nodes, create a keyfile with some random password. The main thing is that this password must be the same across all the MongoDB nodes.
# mkdir /root/data/
# vi /root/data/keyfile
SecretPwdReplicaSetMongoDB
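Alternatively, instead of typing a password by hand, you can generate a random key, for example:
# openssl rand -base64 741 > /root/data/keyfile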
Add the following line to mongod.conf file on all the nodes
# vi /etc/mongod.conf
keyFile = /root/data/keyfile
Set appropriate permissions on the keyfile and restart the MongoDB instance as shown below. Note that the keyfile must not be readable by group or others, so its mode is set to 600.
chown mongod:mongod /root/data/keyfile
chmod 600 /root/data/keyfile
service mongod restart

7. On mongodb1: Initiate the Replica Set

Now it is time to initiate the replica set using the rs.initiate() command, as shown below.
> use admin
> db.auth("admin","SecretPwd");
> rs.initiate();
{
  "info2" : "no configuration explicitly specified -- making one",
  "me" : "mongodb1:27017",
  "info" : "Config now saved locally.  Should come online in about a minute.",
  "ok" : 1
}
Right after the initiate, you’ll notice that the configuration is not null anymore. Also, you’ll notice that the mongodb prompt changed from “>” to “replicasetName:PRIMARY>” as shown below.
> rs.config();
{
  "_id" : "prodRepl",
  "version" : 1,
  "members" : [
     {
       "_id" : 0,
       "host" : "mongodb1:27017"
     }
  ]
}

8. On mongodb1: View Log and Replica Set Status

At this stage, on mongodb1, mongod.log will display something similar to the following:
# tail /var/log/mongo/mongod.log
Sat Feb 22 18:11:30.995 [conn2] ******
Sat Feb 22 18:11:30.995 [conn2] replSet info saving a newer config version to local.system.replset
Sat Feb 22 18:11:30.996 [conn2] replSet saveConfigLocally done
Sat Feb 22 18:11:30.996 [conn2] replSet replSetInitiate config now saved locally.  Should come online in about a minute.
Sat Feb 22 18:11:34.568 [rsStart] replSet I am mongodb1:27017
Sat Feb 22 18:11:34.569 [rsStart] replSet STARTUP2
Sat Feb 22 18:11:35.570 [rsSync] replSet SECONDARY
Sat Feb 22 18:11:35.570 [rsMgr] replSet info electSelf 0
Sat Feb 22 18:11:36.570 [rsMgr] replSet PRIMARY
...
Also, the status will display the following, indicating that there is only one node added to the replica set so far.
prodRepl:PRIMARY> rs.status();
{
"set" : "prodRepl",
"date" : ISODate("2014-02-22T06:28:49Z"),
"myState" : 1,
"members" : [
  {
    "_id" : 0,
    "name" : "mongodb1:27017",
    "health" : 1,
    "state" : 1,
    "stateStr" : "PRIMARY",
    "uptime" : 645,
    "optime" : Timestamp(1438853170, 1),
    "optimeDate" : ISODate("2014-02-22T06:19:30Z"),
    "self" : true
  }
],
"ok" : 1
}

9. On mongodb1: Add the 2nd node

On the mongodb1 server, add the 2nd node using the rs.add command, as shown below.
prodRepl:PRIMARY> rs.add("mongodb2");
{ "ok" : 1 }
After you've added the node, you'll notice that the rs.config command shows both the nodes, as shown below.
prodRepl:PRIMARY> rs.config();
{
  "_id" : "prodRepl",
  "version" : 2,
  "members" : [
     {
       "_id" : 0,
       "host" : "mongodb1:27017"
     },
     {
       "_id" : 1,
       "host" : "mongodb2:27017"
     }
  ]
}

10. Sync Started Between Nodes

As shown in the rs.status() output below, you'll notice that the mongodb2 node is in a startup state, and it is performing the initial cloning of the database from mongodb1 to mongodb2.
Depending on the size of the database on mongodb1, this might take some time to complete.
prodRepl:PRIMARY> rs.status();
{
  "set" : "prodRepl",
  "date" : ISODate("2014-02-22T21:27:53Z"),
  "myState" : 1,
  "members" : [
   {
     "_id" : 0,
     "name" : "mongodb1:27017",
     "health" : 1,
     "state" : 1,
     "stateStr" : "PRIMARY",
     "uptime" : 225,
     "optime" : Timestamp(1343239634, 1),
     "optimeDate" : ISODate("2014-02-22T21:27:14Z"),
     "self" : true
   },
   {
     "_id" : 1,
     "name" : "mongodb2:27017",
     "health" : 1,
     "state" : 5,
     "stateStr" : "STARTUP2",
     "uptime" : 39,
     "optime" : Timestamp(0, 0),
     "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
     "lastHeartbeat" : ISODate("2014-02-22T21:27:52Z"),
     "lastHeartbeatRecv" : ISODate("2014-02-22T21:27:52Z"),
     "pingMs" : 2,
     "lastHeartbeatMessage" : "initial sync cloning db: mongoprod"
   }
  ],
  "ok" : 1
}
Once the sync is completed, the state of the mongodb2 node will change to "SECONDARY", and you'll see the last heartbeat message indicating that the data is all synced and ready to go.
prodRepl:PRIMARY> rs.status();
{
  "set" : "prodRepl",
  "date" : ISODate("2014-02-22T22:03:21Z"),
  "myState" : 1,
  "members" : [
    {
      "_id" : 0,
      "name" : "mongodb1:27017",
      "health" : 1,
      "state" : 1,
      "stateStr" : "PRIMARY",
      "uptime" : 2353,
      "optime" : Timestamp(1394309634, 1),
      "optimeDate" : ISODate("2014-02-22T21:27:14Z"),
      "self" : true
    },
    {
      "_id" : 1,
      "name" : "mongodb2:27017",
      "health" : 1,
      "state" : 2,
      "stateStr" : "SECONDARY",
      "uptime" : 2167,
      "optime" : Timestamp(1394309634, 1),
      "optimeDate" : ISODate("2014-02-22T21:27:14Z"),
      "lastHeartbeat" : ISODate("2014-02-22T22:03:21Z"),
      "lastHeartbeatRecv" : ISODate("2014-02-22T22:03:20Z"),
      "pingMs" : 0,
      "syncingTo" : "mongodb1:27017"
    }
],
"ok" : 1
}
Note: If you log in to the mongodb2 node and execute the above command, you'll see the exact same output, as both the nodes are now part of the same replica set.

11. On arbiter1: Start the mongod instance

The arbiter node doesn't need to be as powerful as the mongodb1 and mongodb2 nodes. You can pick some existing server that is running some other production application and run the arbiter on it, as it doesn't consume a lot of resources.
Create an empty directory for the mongod instance dbpath. As we explained earlier, this directory will not contain any data from the mongodb1 or mongodb2 server. It is needed just to start the mongod instance on the arbiter node.
The key thing is to specify the replica set name using --replSet, as shown below. If you would like to change the default MongoDB port, you can use the --port option. Run this in the background.
mkdir /var/lib/mongo/data

nohup mongod --port 30000 --dbpath /var/lib/mongo/data --replSet prodRepl &

12. On mongodb1: Add arbiter1 node

Now, on the mongodb1 primary node, add the arbiter node using the rs.addArb command, as shown below.
prodRepl:PRIMARY> rs.addArb("arbiter1:30000");
{ "ok" : 1 }
Now if you view the configuration using the rs.config command, you'll see all three nodes.
prodRepl:PRIMARY> rs.config();
{
  "_id" : "prodRepl",
  "version" : 3,
  "members" : [
    {
      "_id" : 0,
      "host" : "mongodb1:27017"
    },
    {
      "_id" : 1,
      "host" : "mongodb2:27017"
    },
    {
      "_id" : 2,
      "host" : "arbiter1:30000",
      "arbiterOnly" : true
    }
  ]
}

13. Verify Final Replication Status

Execute the rs.status() command, which will show the current status of all three nodes. In this case, everything looks good and functional.
As indicated by the "stateStr", we now have a Primary, a Secondary, and an Arbiter. This is the minimum configuration required to get a working MongoDB replica set.
Please note that you can execute the rs.config() and rs.status() commands on any one of the three nodes, and they will display the same results.
prodRepl:PRIMARY> rs.status();
{
  "set" : "prodRepl",
  "date" : ISODate("2014-02-22T22:59:15Z"),
  "myState" : 1,
  "members" : [
     {
       "_id" : 0,
       "name" : "mongodb1:27017",
       "health" : 1,
       "state" : 1,
       "stateStr" : "PRIMARY",
       "uptime" : 5707,
       "optime" : Timestamp(1343042482, 1),
       "optimeDate" : ISODate("2014-02-22T22:14:42Z"),
       "self" : true
     },
     {
       "_id" : 1,
       "name" : "mongodb2:27017",
       "health" : 1,
       "state" : 2,
       "stateStr" : "SECONDARY",
       "uptime" : 5521,
       "optime" : Timestamp(1343042482, 1),
       "optimeDate" : ISODate("2014-02-22T22:14:42Z"),
       "lastHeartbeat" : ISODate("2014-02-22T22:59:14Z"),
       "lastHeartbeatRecv" : ISODate("2014-02-22T22:59:13Z"),
       "pingMs" : 0,
       "syncingTo" : "mongodb1:27017"
     },
     {
       "_id" : 2,
       "name" : "arbiter1:30000",
       "health" : 1,
       "state" : 7,
       "stateStr" : "ARBITER",
       "uptime" : 39,
       "lastHeartbeat" : ISODate("2014-02-22T22:59:14Z"),
       "lastHeartbeatRecv" : ISODate("2014-02-22T22:59:15Z"),
       "pingMs" : 0
     }
  ],
  "ok" : 1
}

How to Convert MS SQL Server Stored Procedure Queries to MySQL

When you migrate from MS SQL to MySQL, apart from migrating the data, you should also migrate the application code that resides in the database.
Earlier we discussed how to migrate an MS SQL database to MySQL using the MySQL Workbench tool.
As part of the migration, Workbench will only convert tables and copy the data; it will not convert triggers, views, and stored procedures. You have to manually convert these over to the MySQL database.

To perform this manual conversion, you need to understand the key differences between MS SQL and MySQL queries.
During my conversion from Microsoft SQL Server to MySQL, I encountered the following MS SQL statements and queries, which were not compatible with MySQL, and I had to convert them as shown below.

1. Stored Procedure Creation Syntax

The basic stored procedure creation syntax itself is different.
MS SQL Stored Procedure creation syntax:
CREATE PROCEDURE [dbo].[storedProcedureName]
@someString VarChar(150)
As
BEGIN
  --  SQL queries go here
END
MySQL Stored Procedure creation syntax:
CREATE PROCEDURE storedProcedureName( IN someString VarChar(150) )
BEGIN
  -- SQL queries go here
END

2. Temporary Table Creation

In my MS SQL code, I've created a few temporary tables that are required by the application. The syntax for temporary table creation differs, as shown below.
MS SQL temporary table creation syntax:
CREATE TABLE #tableName( 
emp_id VARCHAR(10)COLLATE Database_Default PRIMARY KEY, 
emp_Name VARCHAR(50) COLLATE Database_Default, 
emp_Code VARCHAR(30) COLLATE Database_Default, 
emp_Department VARCHAR(30) COLLATE Database_Default
)
MySQL temporary table creation syntax:
CREATE TEMPORARY TABLE tableName(
emp_id VARCHAR(10),
emp_Name VARCHAR(50),
emp_Code VARCHAR(30),
emp_Department VARCHAR(30)
);

3. IF Condition

I've used a lot of IF conditions in my stored procedures and triggers, and they didn't work after the conversion to MySQL, as the syntax is different, as shown below.
MS SQL IF condition syntax:
if(@intSomeVal='')
BEGIN
 SET @intSomeVal=10
END
MySQL IF condition syntax:
IF @intSomeVal='' THEN
  SET @intSomeVal=10;
END IF;

4. IF EXISTS Condition

Another common use of the IF condition is to check whether a query returned any rows or not, and if it returned some rows, do something. For this, I used IF EXISTS in MS SQL, which should be converted to the MySQL IF syntax as explained below.
MS SQL IF EXISTS example:
IF EXISTS(SELECT 1 FROM #tableName WITH(NOLOCK) WHERE ColName='empType' ) 
BEGIN
  --  SQL queries go here
END
MySQL equivalent of the above using IF condition:
IF(SELECT count(*) FROM tableName WHERE ColName='empType') > 0  THEN
  --  SQL queries go here
END IF;

5. Date Functions

Using date functions inside stored procedures is pretty common. The following table shows the MySQL equivalents of the MS SQL date related functions.
MS SQL Server                          MySQL Server
GETDATE()                              NOW(), SYSDATE(), CURRENT_TIMESTAMP()
GETDATE() + 1                          NOW() + INTERVAL 1 DAY, CURRENT_TIMESTAMP + INTERVAL 1 DAY
DATEADD(dd, -1, GETDATE())             ADDDATE(NOW(), INTERVAL -1 DAY)
CONVERT(VARCHAR(19), GETDATE())        DATE_FORMAT(NOW(), '%b %d %Y %h:%i %p')
CONVERT(VARCHAR(10), GETDATE(), 110)   DATE_FORMAT(NOW(), '%m-%d-%Y')
CONVERT(VARCHAR(24), GETDATE(), 113)   DATE_FORMAT(NOW(), '%d %b %Y %T:%f')
CONVERT(VARCHAR(11), GETDATE(), 6)     DATE_FORMAT(NOW(), '%d %b %y')
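To spot-check one of these mappings from the mysql client, for example:
mysql> SELECT DATE_FORMAT(NOW(), '%m-%d-%Y');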

6. Declare Variables

In an MS SQL stored procedure, you can declare variables anywhere between BEGIN and END.
However, in MySQL you have to declare them immediately after the stored procedure's BEGIN statement; declaring a variable anywhere else in the body is not allowed.
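A minimal sketch of the MySQL form, reusing the creation syntax shown earlier (the variable names are illustrative):
CREATE PROCEDURE storedProcedureName( IN someString VarChar(150) )
BEGIN
  -- All DECLARE statements must come first, right after BEGIN
  DECLARE intSomeVal INT DEFAULT 0;
  DECLARE strSomeVal VARCHAR(50);
  -- SQL queries go here
END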

7. Select First N Rows

In MS SQL, you use SELECT TOP when you want to select only the first few records. For example, to select the first 10 records, you do the following:
SELECT TOP 10 * FROM TABLE;
In MySQL, you’ll have to use LIMIT instead of TOP as shown below.
SELECT * FROM TABLE LIMIT 10;

8. Convert Integer to Char

In MS SQL, you'll use the CONVERT function to convert an integer to char, as shown below.
CONVERT(VARCHAR(50),  someIntVal)
In MySQL, you'll use the CAST function to convert an integer to char, as shown below.
CAST( someIntVal as CHAR)

9. Concatenation Operator

If you are manipulating a lot of data inside your stored procedure, you might be performing some string concatenation.
In MS SQL, the concatenation operator is the + symbol. An example of this usage is shown below.
SET @someString = '%|' + @someStringVal + '|%'
In MySQL, the + symbol is an arithmetic operator and does not concatenate strings. If you enable the ANSI SQL mode (specifically PIPES_AS_CONCAT), you can use the || operator for concatenation.
But in the default mode, you need to use the CONCAT("str1", "str2", "str3", ..., "strN") function.
SET someString = CONCAT('%|', someStringVal, '|%');

7 Essential MongoDB Books for Administrators and Developers

MongoDB is one of the most popular NoSQL databases. Unlike a traditional SQL database, you don't need to define a schema up front. The schema is embedded in the data documents themselves, making it easy for you to change the schema at any time without worrying about changing any of the previously loaded documents.
If you are thinking about getting started with MongoDB, pick one of the books from the following list and get started.
I recommend the first three books from the following list, which complement each other and are essential for your library if you would like to become proficient in MongoDB.

  1. MongoDB: The Definitive Guide by Kristina Chodorow – This book is already in its 2nd edition, and it covers the basics of MongoDB. If you are new to MongoDB, this book will walk you through and make you comfortable with the database. This book is close to 400 pages, with 23 chapters that are organized into six parts: 1) Introduction 2) Designing your Application 3) Replication 4) Sharding 5) Application Administration 6) Server Administration
  2. MongoDB Applied Design Patterns by Rick Copeland – If you already have some experience with MongoDB and would like to know how you can efficiently design your application, you should read this book. This book has several practical use cases that are focused on specific industries like e-commerce, content management systems, etc. Just going through all the use cases will help you to design an application for your specific requirements.
  3. Scaling MongoDB by Kristina Chodorow – Once you have the basic MongoDB setup working effectively, you need to know about sharding, clustering, and administration. This is a small book with less than 70 pages that focuses mainly on sharding and cluster administration. Again, this is not for newbies. You should already have experience with MongoDB before you get this book.
  4. MongoDB in Action by Kyle Banker – While this book is a little bit outdated, you'll still get tons of value from it if you are new to MongoDB. If you like the "in Action" series of books, and if you are new to MongoDB, you should consider this book.
  5. The Definitive Guide to MongoDB by David Hows, Eelco Plugge, Peter Membrey and Tim Hawkins – This is one of the most recent MongoDB books, and it covers all the new features of MongoDB. This book is helpful both for newbies and for those who are already experienced but would like to explore the new MongoDB features. There is a separate section dedicated to developing with MongoDB, which covers how to use PHP and Python with MongoDB.
  6. Pro Hibernate and MongoDB by Anghel Leonard – This is a unique combination of how to use Hibernate and MongoDB to develop your application. This book is helpful if you are already using Hibernate and would like to understand how to take advantage of a NoSQL big data solution in your application by integrating MongoDB and Hibernate.
  7. MongoDB and PHP by Steve Francia – This is a very short book with just 60 pages. If you are a PHP programmer who is used to developing applications with MySQL or Oracle, and would like to explore and understand how to program for MongoDB, this might give you a good start. But don't expect this to go very deep into everything that you need to know about PHP-MongoDB programming. This will give you a jumpstart, and you need to take it up from there. If you program in Python, MongoDB and Python will give you a jumpstart. Also, in general, programmers will find the Tips and Tricks for MongoDB Developers book helpful.
Note: Again, if you are not sure which book to buy from this list, I recommend the first three books from the above list, which complement each other and are essential if you want to implement MongoDB in a production environment.

How to Recover InnoDB MySQL Table Data from ibdata and .frm Files

This tutorial explains how to restore MySQL tables when all or some of the tables are lost, or when MySQL fails to load table data.
One of the reasons for this to happen is that the table data has become corrupted.
In this particular scenario, when you connect to the MySQL database server, you cannot see one or more tables, as they are missing.

Under this scenario, the MySQL log file contained the following messages:
InnoDB: Error: log file ./ib_logfile0 is of different size 0 50331648 bytes
InnoDB: than specified in the .cnf file 0 5242880 bytes!
[ERROR] Plugin 'InnoDB' init function returned error.
[ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
[ERROR] Unknown/unsupported storage engine: InnoDB
[ERROR] Aborting
The method explained below will work only for the InnoDB storage engine.
Note: Before you do anything, take a backup of all the MySQL files and database in the current condition, and keep it somewhere safe.
To restore the table data, you have to make sure that the data directory and its contents are intact. In my case they were fine.
drwx------ 2 mysql mysql     4096 Oct 11  2012 performance_schema
drwx------ 2 mysql mysql     4096 Dec 10  2012 ndbinfo
drwx--x--x 2 mysql mysql     4096 Dec 10  2012 mysql
-rw-rw---- 1 mysql mysql       56 Dec 19  2012 auto.cnf
drwx------ 2 mysql mysql     4096 Jul 30  2013 bugs
-rw-r----- 1 mysql mysql 50331648 Mar 18 10:35 ib_logfile0
-rw-r----- 1 mysql mysql 50331648 Apr 22  2013 ib_logfile1
-rw-r----- 1 mysql mysql 35651584 Mar 18 10:35 ibdata1
..
  • ibdata1 – This file is the InnoDB system tablespace, which contains multiple InnoDB tables and their associated indexes.
  • *.frm – These files hold metadata information for all MySQL tables. They are located inside the folder of the corresponding MySQL database (for example, inside the "bugs" directory).
  • ib_logfile* – All data changes are written to these log files. This is similar to the archive log concept found in other RDBMS databases.

Copy the Files

To restore the data from the above files, first stop the MySQL server.
# service mysqld stop
Copy the ibdata files, the log files, and the database schema folder to some other directory. We will use these to restore our MySQL database. In this case, we'll copy them to the /tmp directory. The name of the database schema in this example is bugs.
cp -r ibdata* ib_logfile* /tmp

cp -r schema_name/  /tmp/schema_name/
Start the MySQL server:
# service mysqld start
On a related note, for a typical MySQL database backup and restore, you should use the mysqldump command.

Restore the Data

Next, restore the table data as explained below.
In the my.cnf configuration file, set the value of the following parameter to the current size of the ib_logfile0 file. In the following example, I've set it to 48M, as that is the size I see for the ib_logfile0 file when I run "ls -lh ib_logfile0".
innodb_log_file_size=48M
Please note that both the ib_logfile0 and ib_logfile1 files will be the same size.
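You can confirm the current size with a quick listing (48M corresponds to the 50331648 bytes shown in the directory listing earlier):
# ls -lh /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1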
Copy the saved ibdata files back to their respective positions inside the MySQL data directory.
cp -r /tmp/ibdata* /var/lib/mysql/
Create an empty folder inside the data directory with the same name as the database schema that you are trying to restore, then copy the saved log files and the .frm files back as shown below:
cp -r /tmp/ib_logfile* /var/lib/mysql/
cp -r /tmp/schema_name/*.frm /var/lib/mysql/schema_name/
Finally, restart the MySQL server.
service mysqld restart
Now you have MySQL server running with the restored tables. Don’t forget to grant appropriate privileges for the clients to connect to the MySQL database.

10 Essential Git Log Command Examples on Linux to View Commits

The Git Log tool allows you to view information about previous commits that have occurred in a project.
The simplest version of the log command shows the commits that lead up to the state of the currently checked out branch.
These commits are shown in reverse chronological order (the most recent commits first).

1. Display All Commits

You can force the log tool to display all commits (regardless of the branch checked out) by using the --all option.
$ git log --all
commit c36d2103222cfd9ad62f755fee16b3f256f1cb21
Author: Bob Smith <BSmith@example.com>
Date:   Tue Mar 25 22:09:26 2014 -0300

    Ut sit.

commit 97eda7d2dab729eda23eefdc14336a5644e3c748
Author: John Doe <JDoe@example.com>
Date:   Mon Mar 24 10:14:08 2014 -0300

    Mollis interdum ullamcorper sociosqu, habitasse arcu magna risus congue dictum arcu, odio.
.
.
.

2. View n Most Recent Commits

The real power of the Git log tool, however, is in its diversity. There are many options that not only allow you to filter commits to almost any granularity you desire, but also let you tailor the format of the output to your personal needs.
The most trivial way to filter commits is to limit the output to the 'n' most recently committed ones. This can be accomplished using the -<n> option. Replace <n> with the number of commits you would like to see. For example, to see the 3 most recent commits:
$ git log -3
commit c36d2103222cfd9ad62f755fee16b3f256f1cb21
Author: Bob Smith <BSmith@example.com>
Date:   Tue Mar 25 22:09:26 2014 -0300

    Ut sit.

commit 97eda7d2dab729eda23eefdc14336a5644e3c748
Author: John Doe <JDoe@example.com>
Date:   Mon Mar 24 10:14:08 2014 -0300

    Mollis interdum ullamcorper sociosqu, habitasse arcu magna risus congue dictum arcu, odio.

commit 3ca28cfa2b8ea0d765e808cc565e056a94aceaf5
Author: Bobby Jones <BJones@example.com>
Date:   Mon Mar 24 01:52:04 2014 -0300

    Fermentum magnis facilisis torquent platea sapien hac, aliquet torquent ad netus risus.

3. Filter Commits By Author or Committer

Another common way to filter commits is by the person who wrote or committed the changes. This can be done using the --author and --committer options. The syntax is:
git log --author <name>
git log --committer <name>
The --author option will limit results to commits in which the changes were written by <name>. The --committer option, on the other hand, will limit results to commits that were committed by that individual. Many times, the author and committer will be the same person (you would generally expect this to be the case), but in the case where a developer submits a patch of their work for approval, the developer may not actually commit the code.
$ git log --author=Bob
commit c36d2103222cfd9ad62f755fee16b3f256f1cb21
Author: Bob Smith <BSmith@example.com>
Date:   Tue Mar 25 22:09:26 2014 -0300

    Ut sit.

commit 3ca28cfa2b8ea0d765e808cc565e056a94aceaf5
Author: Bobby Jones <BJones@example.com>
Date:   Mon Mar 24 01:52:04 2014 -0300

    Fermentum magnis facilisis torquent platea sapien hac, aliquet torquent ad netus risus.

commit cfc101ad280f5b005c8d49c91e849c6c40a1d275
Author: Bob Smith <BSmith@example.com>
Date:   Thu Mar 20 10:31:22 2014 -0300

    Natoque, turpis per vestibulum neque nibh ullamcorper.
.
.
.
Note: Notice how this option matches any commit in which the <name> we specify is a substring of the commit's author (we've found both Bob Smith's and Bobby Jones' commits).

4. Filter Commits by X Days Ago

Many times you'll want to limit the commits to those within a given date range. This can be accomplished using the --before and --after options:
git log --before <date>
git log --after <date>
The date can be specified as a string in the format "yyyy-mm-dd". Git also accepts relative date expressions such as 2.days.ago, so you can do the following to see commits that occurred in the last 2 days:
$ git log --after 2.days.ago
commit c36d2103222cfd9ad62f755fee16b3f256f1cb21
Author: Bob Smith <BSmith@example.com>
Date:   Tue Mar 25 22:09:26 2014 -0300

    Ut sit.

commit 97eda7d2dab729eda23eefdc14336a5644e3c748
Author: John Doe <JDoe@example.com>
Date:   Mon Mar 24 10:14:08 2014 -0300

    Mollis interdum ullamcorper sociosqu, habitasse arcu magna risus congue dictum arcu, odio.

commit 3ca28cfa2b8ea0d765e808cc565e056a94aceaf5
Author: Bobby Jones <BJones@example.com>
Date:   Mon Mar 24 01:52:04 2014 -0300

    Fermentum magnis facilisis torquent platea sapien hac, aliquet torquent ad netus risus.

5. Filter Commits by Date Range

To specify a date range, use both options:
git log --after <date> --before <date>
To see the commits that occurred on Feb 2nd, 2014:
$ git log --after "2014-02-01" --before "2014-02-02"
commit 69e1684ae9605544707fc36a7bf37da93dc7b015
Author: Bob Smith <BSmith@example.com>
Date:   Sun Feb 2 01:26:00 2014 -0400

    Praesent tempus varius vel feugiat mi tempor felis parturient.

6. View the Full Diff of Changes for Each Commit

The following options modify the format of the output. To view the entire diff of changes for each commit found, use the -p option (think 'p' for patch):
$ git log -p 
commit c36d2103222cfd9ad62f755fee16b3f256f1cb21
Author: Bob Smith <BSmith@example.com>
Date:   Tue Mar 25 22:09:26 2014 -0300

    Ut sit.

    diff --git a/foo.txt b/foo.txt
    index 5554f5b..2773ba4 100644
    --- a/foo.txt
    +++ b/foo.txt
    @@ -436,3 +436,4 @@ Fermentum mollis.
     Lacus fermentum nonummy purus amet aliquam taciti fusce facilisis magna.
      Viverra facilisi curae augue.
       Purus ve nunc mi consectetuer cras.
       +Ad, maecenas egestas viverra blandit odio.

commit 97eda7d2dab729eda23eefdc14336a5644e3c748
Author: John Doe <JDoe@example.com>
Date:   Mon Mar 24 10:14:08 2014 -0300

   Mollis interdum ullamcorper sociosqu, habitasse arcu magna risus congue dictum arcu, odio.

   diff --git a/foo.txt b/foo.txt
   index 9cdef98..5554f5b 100644
   --- a/foo.txt
   +++ b/foo.txt
   @@ -435,3 +435,4 @@ Lacinia et enim suspendisse conubia lacus.
    Fermentum mollis.
     Lacus fermentum nonummy purus amet aliquam taciti fusce facilisis magna.
      Viverra facilisi curae augue.
      +Purus ve nunc mi consectetuer cras.
.
.
.

7. View Summary of Changes for Each Commit

To view a summary of the changes made in each commit (number of lines added, removed, etc.), use the --stat option:
$ git log --stat 
commit c36d2103222cfd9ad62f755fee16b3f256f1cb21
Author: Bob Smith <BSmith@example.com>
Date:   Tue Mar 25 22:09:26 2014 -0300

    Ut sit.

     foo.txt | 1 +
      1 file changed, 1 insertion(+)

commit 97eda7d2dab729eda23eefdc14336a5644e3c748
Author: John Doe <JDoe@example.com>
Date:   Mon Mar 24 10:14:08 2014 -0300

    Mollis interdum ullamcorper sociosqu, habitasse arcu magna risus congue dictum arcu, odio.

     foo.txt | 1 +
      1 file changed, 1 insertion(+)

8. View Just One Line Per Commit

To get just the bare minimum information in a single line per commit, use the --oneline option. Each commit will be shown as simply the abbreviated commit hash followed by the commit message, on a single line:
$ git log --oneline
c36d210 Ut sit.
97eda7d Mollis interdum ullamcorper sociosqu, habitasse arcu magna risus congue dictum arcu, odio.
3ca28cf Fermentum magnis facilisis torquent platea sapien hac, aliquet torquent ad netus risus.
3a96c1e Proin aenean vestibulum sociosqu vitae platea, odio, nisi habitasse at, in lorem odio varius.
1f0548c Nulla odio feugiat, id, volutpat litora, adipiscing.
cfc101a Natoque, turpis per vestibulum neque nibh ullamcorper.
.
.
.

9. View Commit History in ASCII Graph

The Git log tool can also display the commit history as an ASCII art graph with the --graph option. This option works well when combined with the --oneline option mentioned above; see the combined example after the output below.
$ git log --graph
* commit c36d2103222cfd9ad62f755fee16b3f256f1cb21
| Author: Bob Smith <BSmith@example.com>
| Date:   Tue Mar 25 22:09:26 2014 -0300
|
|     Ut sit.
|
* commit 97eda7d2dab729eda23eefdc14336a5644e3c748
| Author: John Doe <JDoe@example.com>
| Date:   Mon Mar 24 10:14:08 2014 -0300
|
|     Mollis interdum ullamcorper sociosqu, habitasse arcu magna risus congue dictum arcu, odio.
|
* commit 3ca28cfa2b8ea0d765e808cc565e056a94aceaf5
| Author: Bobby Jones <BJones@example.com>
| Date:   Mon Mar 24 01:52:04 2014 -0300
|
|     Fermentum magnis facilisis torquent platea sapien hac, aliquet torquent ad netus risus.

.
.
.
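Combining --graph with --oneline condenses the same history into one line per commit; using the sample commits above, the output looks something like this:
$ git log --graph --oneline
* c36d210 Ut sit.
* 97eda7d Mollis interdum ullamcorper sociosqu, habitasse arcu magna risus congue dictum arcu, odio.
* 3ca28cf Fermentum magnis facilisis torquent platea sapien hac, aliquet torquent ad netus risus.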

10. Format the Git Log Output

To take complete control over the format of the output, use the --pretty option. This can be extremely useful if you are using the output of the log tool for further reporting. The syntax of this option is:
git log --pretty=format:"<options>"
The <options> are specified in a similar way as formatted strings are in many languages. For example:
$ git log --pretty=format:"Commit Hash: %H, Author: %aN, Date: %aD"
Commit Hash: c36d2103222cfd9ad62f755fee16b3f256f1cb21, Author: Bob Smith, Date: Tue, 25 Mar 2014 22:09:26 -0300
Commit Hash: 97eda7d2dab729eda23eefdc14336a5644e3c748, Author: John Doe, Date: Mon, 24 Mar 2014 10:14:08 -0300
Commit Hash: 3ca28cfa2b8ea0d765e808cc565e056a94aceaf5, Author: Bobby Jones, Date: Mon, 24 Mar 2014 01:52:04 -0300
Commit Hash: 3a96c1ed29e85f1a119ad39033511413aad616d1, Author: John Doe, Date: Sun, 23 Mar 2014 06:05:49 -0300
Commit Hash: 1f0548cc700988903380b8ca40fd1fecfa50347a, Author: John Doe, Date: Fri, 21 Mar 2014 17:53:49 -0300
.
.
.
For a full list of the available formatting options, see the man page for the git log tool or the online documentation:
git help log
The final thing to note is that you can combine these options in almost any way you see fit, as the example below shows. This allows you to customize not only the queries you perform, but also how the results are displayed.
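For instance (a hypothetical combined query), to show a one-line summary of Bob's commits from the last two days across all branches:
$ git log --all --author=Bob --after 2.days.ago --oneline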

How to Backup Samba Domain Controller Configuration in Linux

In Windows we have ntbackup, which can take a "system state backup" for backing up the domain controller.
This tutorial explains how we can back up the Samba configuration after you've set up Samba as an Active Directory domain controller in Linux.
First, we need to understand which files and folders we are going to back up, and which tools we need to schedule the Samba backup.
For Samba, we need to back up two databases called LDB and TDB. We are also looking at backing up the configuration files and sysvol.

What is LDB?
LDB is nothing but an LDAP-like database. It provides a fast database along with an LDAP-like API. In simple terms, LDB works as an intermediary between TDB and a real LDAP database. Refer to the LDB website for more information.
What is TDB?
TDB stands for Trivial DataBase. It's a key/value pair database; each value has a key with some data associated with it. It provides operations like tdb_open, tdb_close, tdb_delete, tdb_exists, tdb_fetch and tdb_store. Refer to the TDB website for more information.
What is Sysvol?
Sysvol stands for System Volume, which is nothing but a shared directory that stores the public files needed for common access and replication throughout a domain.
So we are looking at backing up the databases, configuration files, and sysvol folder.
If you are new to Samba, you should first understand how to set up a Samba domain controller.
The Samba server comes with a basic backup script. Using this script, you need to modify the source and target, and schedule it with crontab.
The Samba backup utilities are part of the tdb-tools package. Install it as shown below:
# yum install tdb-tools
Instead of writing your own backup shell script, you can use the default script that comes as part of the Samba source code.
Copy the samba_backup script from the source4/scripting/bin/ directory to the /usr/sbin directory.
If you've extracted the Samba source under /usr/src, do the following:
cd /usr/src
cd source4/scripting/bin/
cp samba_backup /usr/sbin
Also, make sure the samba_backup script is owned by root, and that root has execute permission on it.
In the samba_backup script, you can change the values of the following three parameters based on your specific configuration:
  1. FROM=/usr/local/samba
  2. WHERE=/backup
  3. DAYS=30
Add the samba_backup script to the crontab to take regular backups, as sketched below.
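A sketch of a daily schedule (the 2 a.m. timing is just an illustration):
# crontab -e
0 2 * * * /usr/sbin/samba_backup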
When the samba_backup script runs, it will create three files under the /backup directory as shown below.
$ ls -l
-rw-r--r-- 1 root root 366 May 14 12:53 etc.2014-05-14.tar.bz2 
-rw-r--r-- 1 root root 12M May 14 12:53 samba4_private.2014-05-14.tar.bz2 
-rw-r--r-- 1 root root 475 May 14 12:53 sysvol.2014-05-14.tar.bz2

How to use kdump for Linux Kernel Crash Analysis

Kdump is a utility used to capture the system core dump in the event of a system crash.
The captured core dump can be used later to analyze the exact cause of the system failure and implement the necessary fix to prevent crashes in the future.
Kdump reserves a small portion of memory for a secondary kernel called the crash kernel.
This secondary or crash kernel is used to capture the core dump image whenever the system crashes.

1. Install Kdump Tools

First, install kdump, which is part of the kexec-tools package.
# yum install kexec-tools

2. Set crashkernel in grub.conf

Once the package is installed, edit the /boot/grub/grub.conf file and set the amount of memory to be reserved for the kdump crash kernel.
You can set the crashkernel value in /boot/grub/grub.conf to either auto or a user-specified value. It is recommended to use a minimum of 128M for a machine with 2G of memory or more.
In the following example, look for the line that starts with "kernel", where crashkernel is set to "crashkernel=auto".
# vi /boot/grub/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux (2.6.32-419.el6.x86_64)
  root (hd0,0)
  kernel /vmlinuz-2.6.32-419.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup/lv_root  KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
  initrd /initramfs-2.6.32-419.el6.x86_64.img

3. Configure Dump Location

Once the kernel crashes, the core dump can be captured to a local filesystem or a remote filesystem (NFS) based on the settings defined in /etc/kdump.conf (on the SLES operating system the path is /etc/sysconfig/kdump).
This file is automatically created when the kexec-tools package is installed.
All the entries in this file are commented out by default. Uncomment the ones that match your requirements.
# vi /etc/kdump.conf
#raw /dev/sda5
#ext4 /dev/sda3
#ext4 LABEL=/boot
#ext4 UUID=03138356-5e61-4ab3-b58e-27507ac41937
#net my.server.com:/export/tmp
#net user@my.server.com
path /var/crash
core_collector makedumpfile -c --message-level 1 -d 31
#core_collector scp
#core_collector cp --sparse=always
#extra_bins /bin/cp
#link_delay 60
#kdump_post /var/crash/scripts/kdump-post.sh
#extra_bins /usr/bin/lftp
#disk_timeout 30
#extra_modules gfs2
#options modulename options
#default shell
#debug_mem_level 0
#force_rebuild 1
#sshkey /root/.ssh/kdump_id_rsa
In the above file:
  • To write the dump to a raw device, you can uncomment "raw /dev/sda5" and change it to point to the correct dump location.
  • If you want to change the path of the dump location, uncomment and change “path /var/crash” to point to the new location.
  • For NFS, you can uncomment “#net my.server.com:/export/tmp” and point to the current NFS server location.

4. Configure Core Collector

The next step is to configure the core collector in the kdump configuration file. It is important to compress the captured data and filter out all the unnecessary information from the captured core file.
To enable the core collector, uncomment the following line that starts with core_collector.
core_collector makedumpfile -c --message-level 1 -d 31
  • makedumpfile specified in the core_collector actually makes a small DUMPFILE by compressing the data.
  • makedumpfile provides two DUMPFILE formats (the ELF format and the kdump-compressed format).
  • By default, makedumpfile makes a DUMPFILE in the kdump-compressed format.
  • The kdump-compressed format can be read only with the crash utility, and it can be smaller than the ELF format because of the compression support.
  • The ELF format is readable with GDB and the crash utility.
  • -c compresses the dump data, page by page.
  • -d specifies the dump level, a bitmask of page types to exclude from the dump (zero pages, cache pages, user data, and free pages); -d 31 excludes all of them.
If you uncomment the line "#default shell", then a shell is invoked if kdump fails to collect the core. The administrator can then manually take the core dump using the makedumpfile command.
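A sketch of such a manual capture from the kdump shell (the output path is a placeholder):
makedumpfile -c -d 31 /proc/vmcore /var/crash/vmcore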

5. Restart kdump Services

Once kdump is configured, restart the kdump service:
# service kdump restart
Stopping kdump:   [  OK  ]
Starting kdump:   [  OK  ]

# service kdump status
Kdump is operational
If you have any issues starting the service, then the kdump module or the crashkernel parameter has not been set up properly. Verify /proc/cmdline and make sure it includes the crashkernel value.

6. Manually Trigger the Core Dump

You can manually trigger the core dump using the following commands:
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
The server will reboot itself and the crash dump will be generated.

7. View the Core Files

Once the server is rebooted, you will see that a core file has been generated under /var/crash, based on the path defined in /etc/kdump.conf.
You will see the vmcore and vmcore-dmesg.txt files:
# ls -lR /var/crash
drwxr-xr-x. 2 root root 4096 Mar 26 11:06 127.0.0.1-2014-03-26-11:06:43

/var/crash/127.0.0.1-2014-03-26-11:06:43:
-rw-------. 1 root root 33595159 Mar 26 11:06 vmcore
-rw-r--r--. 1 root root    79498 Mar 26 11:06 vmcore-dmesg.txt

8. Kdump Analysis Using crash

The crash utility is used to analyze the core file captured by kdump.
It can also be used to analyze the core files created by other dump utilities like netdump, diskdump, and xendump.
You need to ensure that the “kernel-debuginfo” package is present and that it is at the same level as the kernel.
Launch the crash tool as shown below. After you run this command, you will get a crash prompt, where you can execute crash commands:
# crash /var/crash/127.0.0.1-2014-03-26-12\:24\:39/vmcore /usr/lib/debug/lib/modules/`uname -r`/vmlinux

crash>

9. View the Process when System Crashed

Execute the ps command at the crash prompt, which will display all the processes that were running when the system crashed.
crash> ps
   PID    PPID  CPU       TASK        ST  %MEM     VSZ    RSS  COMM
      0      0   0  ffffffff81a8d020  RU   0.0       0      0  [swapper]
      1      0   0  ffff88013e7db500  IN   0.0   19356   1544  init
      2      0   0  ffff88013e7daaa0  IN   0.0       0      0  [kthreadd]
      3      2   0  ffff88013e7da040  IN   0.0       0      0  [migration/0]
      4      2   0  ffff88013e7e9540  IN   0.0       0      0  [ksoftirqd/0]
      7      2   0  ffff88013dc19500  IN   0.0       0      0  [events/0]

10. View Swap space when System Crashed

Execute the swap command at the crash prompt, which will display the swap space usage at the time the system crashed.
crash> swap
FILENAME           TYPE         SIZE      USED   PCT  PRIORITY
/dm-1            PARTITION    2064376k       0k   0%     -1

11. View IPCS when System Crashed

Execute the ipcs command at the crash prompt, which will display the shared memory usage at the time the system crashed.
crash> ipcs
SHMID_KERNEL     KEY      SHMID      UID   PERMS BYTES      NATTCH STATUS
(none allocated)

SEM_ARRAY        KEY      SEMID      UID   PERMS NSEMS
ffff8801394c0990 00000000 0          0     600   1
ffff880138f09bd0 00000000 65537      0     600   1

MSG_QUEUE        KEY      MSQID      UID   PERMS USED-BYTES   MESSAGES
(none allocated)

12. View IRQ when System Crashed

Execute the irq command at the crash prompt, which will display the IRQ stats at the time the system crashed.
crash> irq -s
           CPU0
  0:        149  IO-APIC-edge     timer
  1:        453  IO-APIC-edge     i8042
  7:          0  IO-APIC-edge     parport0
  8:          0  IO-APIC-edge     rtc0
  9:          0  IO-APIC-fasteoi  acpi
 12:        111  IO-APIC-edge     i8042
 14:        108  IO-APIC-edge     ata_piix
 .
 .
A few other useful commands at the crash prompt (see the foreach example below):
  • vtop – Translates a user or kernel virtual address to its physical address.
  • foreach – Displays data for multiple tasks in the system.
  • waitq – Displays all the tasks queued on a wait queue.
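For example, assuming tasks named bash exist in the dump, foreach can run the files command against each of them:
crash> foreach bash files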

13. View the Virtual Memory when System Crashed

Execute the vm command at the crash prompt, which will display the virtual memory usage at the time the system crashed.
crash> vm
PID: 5210   TASK: ffff8801396f6aa0  CPU: 0   COMMAND: "bash"
       MM                 PGD          RSS    TOTAL_VM
ffff88013975d880  ffff88013a0c5000  1808k   108340k
      VMA           START       END     FLAGS FILE
ffff88013a0c4ed0     400000     4d4000 8001875 /bin/bash
ffff88013cd63210 3804800000 3804820000 8000875 /lib64/ld-2.12.so
ffff880138cf8ed0 3804c00000 3804c02000 8000075 /lib64/libdl-2.12.so

14. View the Open Files when System Crashed

Execute the files command at the crash prompt, which will display the open files at the time the system crashed.
crash> files
PID: 5210   TASK: ffff8801396f6aa0  CPU: 0   COMMAND: "bash"
ROOT: /    CWD: /root
 FD       FILE            DENTRY           INODE       TYPE PATH
  0 ffff88013cf76d40 ffff88013a836480 ffff880139b70d48 CHR  /tty1
  1 ffff88013c4a5d80 ffff88013c90a440 ffff880135992308 REG  /proc/sysrq-trigger
255 ffff88013cf76d40 ffff88013a836480 ffff880139b70d48 CHR  /tty1
..

15. View System Information when System Crashed

Execute the sys command at the crash prompt, which will display general system information captured at the time of the crash.
crash> sys
      KERNEL: /usr/lib/debug/lib/modules/2.6.32-431.5.1.el6.x86_64/vmlinux
    DUMPFILE: /var/crash/127.0.0.1-2014-03-26-12:24:39/vmcore  [PARTIAL DUMP]
        CPUS: 1
        DATE: Wed Mar 26 12:24:36 2014
      UPTIME: 00:01:32
LOAD AVERAGE: 0.17, 0.09, 0.03
       TASKS: 159
    NODENAME: elserver1.abc.com
     RELEASE: 2.6.32-431.5.1.el6.x86_64
     VERSION: #1 SMP Fri Jan 10 14:46:43 EST 2014
     MACHINE: x86_64  (2132 Mhz)
      MEMORY: 4 GB
       PANIC: "Oops: 0002 [#1] SMP " (check log for details)

How to Install Kerberos 5 KDC Server on Linux for Authentication

Kerberos is a network authentication protocol.
Kerberos provides strong cryptographic authentication between devices, which lets clients and servers communicate in a more secure manner. It is designed to address network security problems.
While firewalls act as a solution against intrusion from external networks, Kerberos is usually used to address intrusion and other security problems within the network.

The current version of Kerberos is version 5, which is called KRB5.
To implement Kerberos, we need to have a centralized authentication service running on a server.
This service is called the Key Distribution Center (KDC).
A server registered with the KDC is trusted by all other computers in the Kerberos realm.

Sample krb5.conf File

Here’s an example krb5.conf file that contains all the REALM and domain to REALM mapping information,
# cat /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = EXAMPLE.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM

Install Kerberos KDC server

For security reasons, it is recommended to run the Kerberos (KDC) server on a separate server.
Download and install the krb5 server package.
# rpm -ivh krb5-server-1.10.3-10.el6_4.6.x86_64.rpm
Preparing...       ########################################### [100%]
   1:krb5-server   ########################################### [100%]
Verify that the following rpm are installed before configuring KDC:
# rpm -qa | grep -i krb5
pam_krb5-2.3.11-9.el6.x86_64
krb5-server-1.10.3-10.el6_4.6.x86_64
krb5-workstation-1.10.3-10.el6_4.6.x86_64
krb5-libs-1.10.3-10.el6_4.6.x86_64

Modify /etc/krb5.conf File

Modify /etc/krb5.conf to reflect the appropriate REALM and DOMAIN_REALM mappings, as shown below.
# cat /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = MYREALM.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 MYREALM.COM = {
  kdc = elserver1.example.com
  admin_server = elserver1.example.com
 }

[domain_realm]
 .myrealm.com = MYREALM.COM
 myrealm.com = MYREALM.COM

Modify kdc.conf File

The kdc.conf file should also be modified, as shown below.
# cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 MYREALM.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

Create KDC database

Next, create the KDC database using the kdb5_util command as shown below. At this stage, enter an appropriate password for the KDC database master key.
# /usr/sbin/kdb5_util create -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'MYREALM.COM',
master key name 'K/M@MYREALM.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:

Assign Administrator Privilege

The users can be granted administrator privileges to the database using the file /var/kerberos/krb5kdc/kadm5.acl.
# cat /var/kerberos/krb5kdc/kadm5.acl
*/admin@MYREALM.COM     *
In the above example, any principal in the MYREALM.COM realm with an admin instance has all administrator privileges. (A more restrictive sketch follows.)
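If you want finer-grained control, the kadm5.acl format also accepts individual permission letters instead of *. As a sketch (the eluser/admin principal matches the one created below; the permission split is just an example), you could give one admin full rights while every other principal may only inquire (i) and list (l):
eluser/admin@MYREALM.COM     *
*@MYREALM.COM                il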

Create a Principal

Create the principal using the following command. In this example, I created the principal with the user name “eluser”.
# kadmin.local -q "addprinc eluser/admin"
Authenticating as principal root/admin@MYREALM.COM with password.
WARNING: no policy specified for eluser/admin@MYREALM.COM; defaulting to no policy
Enter password for principal "eluser/admin@MYREALM.COM":
Re-enter password for principal "eluser/admin@MYREALM.COM":
Principal "eluser/admin@MYREALM.COM" created.

Start the Kerberos Service

Start the KDC and kadmin daemons as shown below.
# service krb5kdc start
Starting Kerberos 5 KDC:               [  OK  ]

# service kadmin start
Starting Kerberos 5 Admin Server:      [  OK  ]
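Once both daemons are up, you can sanity-check the setup by requesting a ticket for the admin principal created earlier and then listing the ticket cache (the password prompt is omitted here):
# kinit eluser/admin
# klist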

Configure Kerberos Client

Now let’s see how to configure the krb5 client to authenticate against the Kerberos KDC database we created above.
Step 1: Install the krb5-libs and krb5-workstation packages on the client machine.
Step 2: Copy the /etc/krb5.conf from the KDC server to the client machine.
Step 3: Now we need to create the principal for the client in the KDC/Kerberos database.
You can use the below command to create the principal for the client machine on the KDC master server. In the below example, I am creating a host principal for the client elserver3.example.com on the master KDC server elserver1.example.com:
# kadmin.local -q "addprinc host/elserver3.example.com"
Authenticating as principal root/admin@MYREALM.COM with password.
WARNING: no policy specified for host/elserver3.example.com@MYREALM.COM; defaulting to no policy
Enter password for principal "host/elserver3.example.com@MYREALM.COM":
Re-enter password for principal "host/elserver3.example.com@MYREALM.COM":
Principal "host/elserver3.example.com@MYREALM.COM" created.
Step 4: Extract the krb5.keytab for the client from the KDC master server using the below command:
# kadmin.local -q "ktadd -k /etc/krb5.keytab host/elserver3.example.com"
Authenticating as principal root/admin@MYREALM.COM with password.
Entry for principal host/elserver3.example.com with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/etc/krb5.keytab.
This completes the configuration. You are all done at this stage.
From now on, every time you establish an SSH or RSH connection, the host verifies its identity against the KDC database using the keytab file, and it establishes a secure connection over Kerberos.
ktadd is used to generate a new keytab or add a principal to an existing keytab from the kadmin command.
ktremove is used to remove a principal from an existing keytab. The command to remove all key entries for the principal that we created above is:
kadmin.local -q "ktremove -k /etc/krb5.keytab host/elserver3.example.com all"
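To double-check what the keytab contains before or after such changes, you can list its entries:
# klist -k /etc/krb5.keytab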

Delete a KDC database

If for some reason you have to delete a KDC database, use the following command:
# kdb5_util -r MYREALM.COM destroy
kdb5_util: Deleting KDC database stored in /usr/local/var/krb5kdc/principal, are you sure
(type yes to confirm)? <== yes
OK, deleting database '/usr/local/var/krb5kdc/principal'...
Adding the -f option to the above command forces the deletion of the KDC database without prompting for confirmation.

Backup and Restore KDC Database

To back up a KDC database to a file, use kdb5_util dump.
# kdb5_util dump kdcfile

# ls -l kdcfile
-rw-------. 1 root root 5382 Apr 10 07:25 kdcfile
To restore the KDC database from the dump file created in the above step, do the following:
# kdb5_util load kdcfile

How to Setup HTML Server Side Includes SSI on Apache and Nginx

What is SSI?
SSI stands for Server Side Includes. As the name suggests, they are simple server-side scripts that are typically used as directives inside HTML comments.
Where to use SSI? The two most common reasons to use SSI are to serve dynamic content on your web page and to reuse a code snippet, as shown below.

1. Serve Dynamically Generated Content

For example, to display the current time on your HTML page, you can use server side includes. You don’t need any other special server-side scripting language for it.
The following HTML code snippet shows this example. The <!--#echo --> line is the SSI directive.
<html>
  <head> 
   <title>thegeekstuff.com</title>
  </head>
  <body>
   <p> 
     Today is <!--#echo var="DATE_LOCAL" --> 
   </p>
  </body>
</html>
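A couple of other standard mod_include directives are handy in the same spot; for example, you can set the time format and print the page’s last modification time (the format string here is just an illustration):
<!--#config timefmt="%Y-%m-%d %H:%M" -->
Last updated on <!--#echo var="LAST_MODIFIED" -->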

2. Reuse the Same HTML Script

You can also use SSI to reuse a html snippet on multiple pages. This is very helpful to reuse header and footer information of a site on different pages.
The following is a sample header.html file that can be reused.
<header>
  <h1> The Geek Stuff</h1>
  <img src="./images/logo.png" alt="Logo" />
</header>
The following is a sample footer.html file that can be reused.
<footer>
  <span> The Geek Stuff, Los Angeles, USA </span>
  <ul> 
   <li> <a href="aboutus.html">About Us</a> </li>
   <li> <a href="contactus.html">Contact Us</a> </li>
   <li> <a href="ourworks.html">Portfolio</a> </li>
  </ul>
</footer>
Now, when it is time to reuse the above two files (header and footer), we simply include them on any other HTML page using the #include SSI directive as shown below.
This is the index.html, which includes both header and footer using server side includes.
<html>
  <head> 
   <title>The Geek Stuff</title>
  </head>
  <body>
   <!--#include virtual="header.html" --> 
     <div> 
      <!-- Page content goes here   -->
     </div>
   <!--#include virtual="footer.html" --> 
  </body>
</html>
Similar to including an HTML page using SSI, you can also include the output of a CGI script in the HTML using the following line:
<!--#include virtual="/cgi-bin/sometask.pl" -->

3. Setup SSI in .htaccess File

We can instruct the web server to interpret Server Side Includes either by using .htaccess or by modifying the web server config file directly.
Create a .htaccess file in your web root and add the following lines:
AddType text/html .html
AddHandler server-parsed .html
Options Indexes FollowSymLinks Includes
The above lines instruct the web server to parse files with the .html extension for any server side includes present in them.
We can also instruct the server to parse files with custom extensions. For example, we can use the following lines for parsing the “.shtml” file extension.
AddType text/html .shtml
AddHandler server-parsed .shtml
Similarly for parsing the cgi script we can add following lines:
AddType application/x-httpd-cgi .cgi

4. Modify Apache httpd.conf File

On the Apache web server, the following directive lines should be present in the httpd.conf file for SSI:
Options +Includes
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
The first line tells Apache to allow files to be parsed for SSI. The other lines tell Apache which file extension to parse.
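As a sketch of where these directives typically live (the directory path is an assumption for a common layout), you would place them inside the Directory block for your document root:
<Directory "/var/www/html">
    Options +Includes
    AddType text/html .shtml
    AddOutputFilter INCLUDES .shtml
</Directory>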

5. SSI On Nginx Web Server

On the Nginx web server, SSI should be set to “on” in the nginx.conf settings.
An example of the relevant lines in nginx.conf is as follows:
location / {
 ssi on;
 ssi_last_modified on;
 ssi_min_file_chunk 1k;
 ssi_silent_errors off;
 ssi_types text/html;
 ssi_value_length 256;

 root /var/www;
 index index.html;
}
In the above Nginx SSI configuration:
  1. The first line enables SSI parsing.
  2. The second line tells the server to keep the “Last-Modified” header field in responses.
  3. The third line sets the minimum chunk size of a file, starting from which it is worth sending it via sendfile().
  4. The fourth line suppresses error messages in case of SSI parsing errors.
  5. The fifth line specifies the MIME types in which SSI commands are processed (text/html is the default).
  6. The sixth line sets the maximum length of parameter values in SSI commands.
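After editing nginx.conf, you can validate the configuration and reload the server with the standard nginx commands:
# nginx -t
# nginx -s reload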

How to Switch Between GUI and Core Mode in Windows Server 2012

Most of us might not be aware that there is a Windows core version, which has only a command line without any GUI.
This core version provides a relatively high level of security, as all GUI interfaces are removed, which also increases the performance of the system.
This might be helpful if you are running a Windows server.

Up until Windows Server 2008, once you installed the core version or the full GUI version, you could not switch to the other.
However, Windows Server 2012 provides the following three modes, and you can easily switch between GUI and core mode.
  1. Normal GUI mode – The standard OS with full GUI features.
  2. Graphical mode – Management tools and infrastructure. This mode provides only a few essential GUI tools, for example management tools like Server Manager, the Disk Management console, etc.
  3. Core mode – No GUI in this mode. You’ll get only a command prompt with PowerShell.
This tutorial explains how you can switch from Core version to GUI mode, and from GUI mode to Core version.

I. Switch from Core Version to GUI Mode

1. Launch Powershell

From the command prompt, launch the powershell as shown below.
C:\> powershell

PS C:\>

2. Import Server Modules

By default, to increase server performance, not all modules and commands are loaded on the server. We have to import the Server Manager module using the following command:
PS C:\> import-module serverManager
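Before switching, you can check the current install state of the two GUI features with a standard Server Manager cmdlet:
PS C:\> Get-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell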

3. Change User Interface Mode

To change the user interface from command mode to GUI mode:
For full GUI mode:
Install-windowsfeature  Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart
For graphical management tools and infrastructure:
Install-Windowsfeature  Server-Gui-Mgmt-Infra -Restart
To change the user interface from GUI mode to command mode:
For full command (core) mode:
Uninstall-Windowsfeature Server-Gui-Mgmt-Infra,  Server-Gui-Shell  -Restart
For graphical management tools and infrastructure:
Uninstall-Windowsfeature Server-Gui-Shell -Restart
For our example, let us do the following:
PS C:\> install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart
After entering the above command, it will extract all the binary files and start the installation. After the installation completes, the server will reboot automatically. When the system starts, Windows will be running in the new mode.

II. Switch from GUI to Core Command Mode

1. Launch “Remove Roles and Features”

Open Server Manager, select “Manage” from the menu, and then select “Remove Roles and Features”.
Press Next on the welcome page.
On the next screen, select the server from the server pool. By default the local server is selected, but you can also run the add/remove-features task against another server by adding its IP to the server pool.

2. Select User Interface Features

Now, you can select or unselect the options under the user interface and infrastructure features as per your requirement.
To change the user interface from GUI mode to command mode:
For full command (core) mode: Uncheck both “Graphical Management Tools and Infrastructure” and “Server Graphical Shell”.
For graphical management tools and infrastructure: Uncheck only the “Server Graphical Shell” option.

3. Remove GUI Features

Select the features that you would like to remove. This will also automatically select any dependent features that need to be removed.
After the above step, the server will reboot. After the reboot, when the system comes back up, you’ll not see the GUI anymore. You’ll get only the command prompt with PowerShell.

15 aptitude Command Examples for Package Management in Linux

This article explains several aptitude command examples including the following:
  • Install a specific version of a package
  • Install multiple packages using pattern
  • Search for a package using pattern
  • Get packages under a section
  • Don’t update a specific package (Using hold and keep)
  • Mark a package with a specific install type
  • Perform system update
  • Perform safe upgrade

1. Basic Package Install

aptitude install is used to install packages along with their dependencies. For example, installing the package vim-gtk will also automatically install all its dependent packages.
# aptitude install vim-gtk
The following NEW packages will be installed:
libruby1.9.1{a} libyaml-0-2{a} tcl8.5{a} tcl8.5-lib{a} vim-gtk vim-gui-common{a} 
0 packages upgraded, 6 newly installed, 0 to remove and 317 not upgraded.
Need to get 6,360 kB of archives. After unpacking 19.0 MB will be used.
Do you want to continue? [Y/n/?] y
In the above output, aptitude will display the following:
  • List of all dependent packages that will be installed.
  • Total size of all the packages that will be downloaded, which is helpful to know how much data it will download from the repository.
  • Total disk size required after unpacking the packages.
  • At this stage, if you would like to continue the installation, press “y”.
Please note that you can also use apt-get command to manage packages as we discussed earlier.

2. Install Specific Version or Multiple Packages

It is also possible to install a particular version of a package as shown below. Specify the version number after the “=” sign.
# aptitude install "perl=5.10.1"
You can also install several packages matching a particular pattern, as shown below. The ~n prefix matches against the package name, so the following installs all packages whose name contains “xvnc”:
# aptitude install ~nxvnc

3. View Package Information

Get information about a particular package as shown below.
# aptitude show vim-gtk
Package: vim-gtk     
State: not installed
Version: 2:7.3.547-6ubuntu5
Priority: extra
Section: universe/editors
Maintainer: Ubuntu Developers 
Architecture: amd64
Uncompressed Size: 2,442 k
Depends: vim-gui-common (= 2:7.3.547-6ubuntu5), vim-common (=
         2:7.3.547-6ubuntu5), vim-runtime (= 2:7.3.547-6ubuntu5), libacl1 (>=
         2.2.51-8), libc6 (>= 2.15), libgdk-pixbuf2.0-0 (>= 2.22.0),
         libglib2.0-0 (>= 2.12.0), libgpm2 (>= 1.20.4), libgtk2.0-0 (>= 2.24.0),
         libice6 (>= 1:1.0.0), liblua5.1-0, libpango1.0-0 (>= 1.14.0),
         libperl5.14 (>= 5.14.2), libpython2.7 (>= 2.7), libruby1.9.1 (>=
         1.9.2.0), libselinux1 (>= 1.32), libsm6, libtinfo5, libx11-6, libxt6,
         tcl8.5 (>= 8.5.0)
Suggests: cscope, vim-doc, ttf-dejavu, gnome-icon-theme
Conflicts: vim-gtk
Provides: editor, gvim, vim, vim-lua, vim-perl, vim-python, vim-ruby, vim-tcl
Description: Vi IMproved - enhanced vi editor - with GTK2 GUI
 Vim is an almost compatible version of the UNIX editor Vi. 
..

4. Search for a Package using a Pattern

To see the list of packages available in the configured repositories, use the search option of aptitude along with a string pattern of the package name.
The following will display all the packages that have “xvnc” anywhere in the name:
# aptitude search xvnc
p   linuxvnc           - VNC server to allow remote access to a tty
p   linuxvnc:i386      - VNC server to allow remote access to a tty
p   xvnc4viewer        - Virtual network computing client software for X
p   xvnc4viewer:i386   - Virtual network computing client software for X

5. Display all Installed Packages

In order to list all the installed packages, use the search option as shown below:
# aptitude search '~i' | head
i   account-plugin-aim              - Messaging account plugin for AIM          
i   account-plugin-facebook         - GNOME Control Center account plugin for si
i   account-plugin-flickr           - GNOME Control Center account plugin for si
i   account-plugin-generic-oauth    - GNOME Control Center account plugin for si
i   account-plugin-google           - GNOME Control Center account plugin for si
i   account-plugin-jabber           - Messaging account plugin for Jabber/XMPP  
i   account-plugin-salut            - Messaging account plugin for Local XMPP (S
i   account-plugin-twitter          - GNOME Control Center account plugin for si
i   account-plugin-windows-live     - GNOME Control Center account plugin for si
i   account-plugin-yahoo            - Messaging account plugin for Yahoo!

6. Advanced Search for Packages

To display only broken packages on the system, do the following. Empty output, as here, indicates that there are no broken packages on the system:
# aptitude search '~b' | head
To find partially uninstalled packages, do the following:
# aptitude search '~c'
c   yelp                  - Help browser for GNOME
To display held packages, do the following:
# aptitude search '~ahold'
ih  python3 - interactive high-level object-oriented language (default python3 version)
To search for a given keyword in the package descriptions, do the following. This example searches for the text “vim” in the descriptions of the packages:
# aptitude search '~dvim'

7. Packages under a Section

To list packages under a particular section, do the following. As seen below, there are 968 packages available under the gnome package group:
# aptitude search '~sgnome' | wc -l
968
To display installed packages under a section, do the following:
# aptitude search '~i~sgnome'| wc -l
142
As seen above, 142 installed packages belong to the gnome package group. The count may also include removed packages whose configuration files still exist.

8. Uninstall a Package

To remove an installed package from the system as well as its orphaned dependencies, use the remove option along with the exact installed package name, as shown below:
# aptitude remove vim-gtk
The following packages will be REMOVED:  
  vim-gtk 
0 packages upgraded, 0 newly installed, 1 to remove and 317 not upgraded.
Need to get 0 B of archives. After unpacking 2,442 kB will be freed.
(Reading database ... 160189 files and directories currently installed.)
Removing vim-gtk ...
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vi (vi) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/view (view) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/ex (ex) in auto mode
update-alternatives: using /bin/nano to provide /usr/bin/editor (editor) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rvim (rvim) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/rview (rview) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vimdiff (vimdiff) in auto mode
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/vim (vim) in auto mode

9. Complete Removal of a Package

Use the purge option to perform a complete removal. This will uninstall a package and its orphaned dependencies along with its configuration files.
The following will uninstall the postgresql package along with its configuration files:
# aptitude purge postgresql

10. Don’t Update a Package – Hold it

To hold the current version of a package and prevent it from being upgraded, do the following:
# aptitude hold python3
As seen above, a hold has been applied on the python3 package. A hold cancels any future installation, removal, or upgrade of the package; aptitude safe-upgrade and aptitude full-upgrade will not touch it.
You can also place a hold while installing, by appending “=” to the package name (appending “:” instead applies keep, which only cancels currently scheduled actions):
# aptitude install perl=
Use unhold to roll back the hold applied on the package.

11. Don’t Update a Package – Keep it

To keep only the current version of a package when updates are scheduled, do the following:
# aptitude keep perl
The keep-all option applies the same to all installed packages.

12. Mark a Package with Install Type

Immediately after installing a package, you can mark it as either automatically or manually installed by appending an override specifier to the package name, as explained below.
To set the mark as automatic, do the following:
# aptitude install package+M

(or)

# aptitude install package&M
To set the mark as manual, do the following. This is the default option.
# aptitude install package&m
This is mainly used when you want to get a list of manually installed packages. The following displays the count of automatically installed packages:
# aptitude search '~M~i' | wc -l
130
The following displays the total count of manually installed packages:
# aptitude search '!~M~i' | wc -l
1556
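You can also flip this mark on an already-installed package with the markauto and unmarkauto actions; for example:
# aptitude markauto vim-gtk
# aptitude unmarkauto vim-gtk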

13. Refresh Available Packages List

To update the list of available packages from the repositories, do the following:
# aptitude update

14. Upgrade All Packages – Safe and Full Upgrade

safe-upgrade: To upgrade the installed packages to their latest versions (new packages might be installed to resolve dependencies), do the following:
# aptitude safe-upgrade
To prevent new packages from being installed, use --no-new-installs as shown below:
# aptitude safe-upgrade --no-new-installs
full-upgrade: To do a complete upgrade of all packages, and also to perform the installs and removals that safe-upgrade cannot do, do the following:
# aptitude full-upgrade

15. Clean aptitude Cache

To remove downloaded packages from the cache directory, do the following. By default, the cache directory is /var/cache/apt/archives:
# aptitude clean
Use autoclean to remove from the cache only those packages that can no longer be downloaded:
# aptitude autoclean