Monday, April 8, 2013

Top 25 Best Linux Performance Monitoring and Debugging Tools

I’ve compiled a list of 25 performance monitoring and debugging tools that will be helpful when you are working in a Linux environment. This list is not comprehensive or authoritative by any means.
However, it has enough tools for you to play around with and pick the one suitable for your specific debugging and monitoring scenario.

1. SAR

Using the sar utility you can do two things: 1) monitor system performance in real time (CPU, memory, I/O, etc.), and 2) collect performance data in the background on an ongoing basis and analyze the historical data to identify bottlenecks.
Sar is part of the sysstat package. The following are some of the things you can do using the sar utility.
  • Collective CPU usage
  • Individual CPU statistics
  • Memory used and available
  • Swap space used and available
  • Overall I/O activities of the system
  • Individual device I/O activities
  • Context switch statistics
  • Run queue and load average data
  • Network statistics
  • Report sar data from a specific time
  • and a lot more..
The following sar command displays the system CPU statistics 3 times (at 1-second intervals).
$ sar -u 1 3
The following "sar -b" command reports I/O statistics. "1 3" indicates that sar -b will be executed every 1 second for a total of 3 times.
$ sar -b 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:56:28 PM       tps      rtps      wtps   bread/s   bwrtn/s
01:56:29 PM    346.00    264.00     82.00   2208.00    768.00
01:56:30 PM    100.00     36.00     64.00    304.00    816.00
01:56:31 PM    282.83     32.32    250.51    258.59   2537.37
Average:       242.81    111.04    131.77    925.75   1369.90
More SAR examples: How to Install/Configure Sar (sysstat) and 10 Useful Sar Command Examples

2. Tcpdump

tcpdump is a network packet analyzer. Using tcpdump you can capture packets and analyze them for any performance bottlenecks.
The following tcpdump command example displays captured packets in ASCII.
$ tcpdump -A -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
14:34:50.913995 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 1457239478:1457239594(116) ack 1561461262 win 63652
E.....@.@..]..i...9...*.V...]...P....h....E...>{..U=...g.
......G..7\+KA....A...L.
14:34:51.423640 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 116:232(116) ack 1 win 63652
E.....@.@..\..i...9...*.V..*]...P....h....7......X..!....Im.S.g.u:*..O&....^#Ba...
E..(R.@.|.....9...i.*...]...V..*P..OWp........
Using tcpdump you can capture packets based on several custom conditions. For example, capture packets that flow through a particular port, capture TCP communication between two specific hosts, capture packets that belong to a specific protocol type, etc.
More tcpdump examples: 15 TCPDUMP Command Examples

3. Nagios

Nagios is an open source monitoring solution that can monitor pretty much anything in your IT infrastructure. For example, when a server goes down it can send a notification to your sysadmin team, when a database goes down it can page your DBA team, and when a web server goes down it can notify the appropriate team.
You can also set warning and critical threshold levels for various services to help you address issues proactively. For example, it can notify the sysadmin team when a disk partition becomes 80% full, giving the team enough time to add more space before the issue becomes critical.
Nagios also has a very good user interface from where you can monitor the health of your overall IT infrastructure.
The following are some of the things you can monitor using Nagios:
  • Any hardware (servers, switches, routers, etc)
  • Linux servers and Windows servers
  • Databases (Oracle, MySQL, PostgreSQL, etc)
  • Various services running on your OS (sendmail, nis, nfs, ldap, etc)
  • Web servers
  • Your custom application
  • etc.
More Nagios examples: How to install and configure Nagios, monitor remote Windows machine, and monitor remote Linux server.

4. Iostat

iostat reports CPU, disk I/O, and NFS statistics. The following are some iostat command examples.
iostat without any arguments displays information about CPU usage, and I/O statistics for all the partitions on the system, as shown below.
$ iostat
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.68    0.00    0.52    2.03    0.00   91.76

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             194.72      1096.66      1598.70 2719068704 3963827344
sda1            178.20       773.45      1329.09 1917686794 3295354888
sda2             16.51       323.19       269.61  801326686  668472456
sdb             371.31       945.97      1073.33 2345452365 2661206408
sdb1            371.31       945.95      1073.33 2345396901 2661206408
sdc             408.03       207.05       972.42  513364213 2411023092
sdc1            408.03       207.03       972.42  513308749 2411023092
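The Blk_read/s and Blk_wrtn/s columns are in blocks per second, where a block is a 512-byte sector. A quick conversion sketch (using sda's Blk_read/s figure from the sample output above) turns that into KB/s:

```shell
# iostat reports I/O in 512-byte blocks; convert blocks/s to KB/s.
# 1096.66 blocks/s * 512 bytes per block / 1024 bytes per KB
awk 'BEGIN { printf "%.2f KB/s\n", 1096.66 * 512 / 1024 }'
# prints: 548.33 KB/s
```

Alternatively, "iostat -k" makes iostat itself report the values in kilobytes.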
By default iostat displays I/O data for all the disks available in the system. To view statistics for a specific device (For example, /dev/sda), use the option -p as shown below.
$ iostat -p sda
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.68    0.00    0.52    2.03    0.00   91.76

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             194.69      1096.51      1598.48 2719069928 3963829584
sda2            336.38        27.17        54.00   67365064  133905080
sda1            821.89         0.69       243.53    1720833  603892838

5. Mpstat

mpstat reports processor statistics. The following are some mpstat command examples.
Option -A displays all the information that the mpstat command can display, as shown below. It is equivalent to the "mpstat -I ALL -u -P ALL" command.
$ mpstat -A
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011      _x86_64_        (4 CPU)

10:26:34 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:26:34 PM  all    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.99
10:26:34 PM    0    0.01    0.00    0.01    0.01    0.00    0.00    0.00    0.00   99.98
10:26:34 PM    1    0.00    0.00    0.01    0.00    0.00    0.00    0.00    0.00   99.98
10:26:34 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
10:26:34 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

10:26:34 PM  CPU    intr/s
10:26:34 PM  all     36.51
10:26:34 PM    0      0.00
10:26:34 PM    1      0.00
10:26:34 PM    2      0.04
10:26:34 PM    3      0.00

10:26:34 PM  CPU     0/s     1/s     8/s     9/s    12/s    14/s    15/s    16/s    19/s    20/s    21/s    33/s   NMI/s   LOC/s   SPU/s   PMI/s   PND/s   RES/s   CAL/s   TLB/s   TRM/s   THR/s   MCE/s   MCP/s   ERR/s   MIS/s
10:26:34 PM    0    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    7.47    0.00    0.00    0.00    0.00    0.02    0.00    0.00    0.00    0.00    0.00    0.00    0.00
10:26:34 PM    1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    4.90    0.00    0.00    0.00    0.00    0.03    0.00    0.00    0.00    0.00    0.00    0.00    0.00
10:26:34 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.04    0.00    0.00    0.00    0.00    0.00    3.32    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
10:26:34 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    ...
mpstat option -P ALL displays all the individual CPUs (or cores) along with their statistics, as shown below.
$ mpstat -P ALL
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011      _x86_64_        (4 CPU)

10:28:04 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:28:04 PM  all    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.99
10:28:04 PM    0    0.01    0.00    0.01    0.01    0.00    0.00    0.00    0.00   99.98
10:28:04 PM    1    0.00    0.00    0.01    0.00    0.00    0.00    0.00    0.00   99.98
10:28:04 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
10:28:04 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

6. Vmstat

vmstat reports virtual memory statistics. The following are some vmstat command examples.
vmstat by default displays the memory usage (including swap) as shown below.
$ vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0 305416 260688  29160 2356920    2    2     4     1    0    0  6  1 92  2  0

To execute vmstat every 2 seconds for a total of 10 times, do the following. After executing 10 times, it will stop automatically.
$ vmstat 2 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 537144 182736 6789320    0    0     0     0    1    1  0  0 100  0  0
 0  0      0 537004 182736 6789320    0    0     0     0   50   32  0  0 100  0  0
..
iostat and mpstat are part of the sysstat package (vmstat is typically provided by the procps package). Install the sysstat package to get iostat and mpstat working.
More examples: 24 iostat, vmstat and mpstat command Examples

7. PS Command

A process is a running instance of a program. Linux is a multitasking operating system, which means that more than one process can be active at once. Use the ps command to find out what processes are running on your system.
The ps command also gives you a lot of additional information about running processes, which will help you identify any performance bottlenecks on your system.
The following are a few ps command examples.
Use the -u option to display processes that belong to a specific username. When you have multiple usernames, separate them with a comma. The example below displays all processes owned by user wwwrun or postfix.
$ ps -f -u wwwrun,postfix
UID        PID  PPID  C STIME TTY          TIME CMD
postfix   7457  7435  0 Mar09 ?        00:00:00 qmgr -l -t fifo -u
wwwrun    7495  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
wwwrun    7496  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
wwwrun    7497  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
wwwrun    7498  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
wwwrun    7499  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
wwwrun   10078  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
wwwrun   10082  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
postfix  15677  7435  0 22:23 ?        00:00:00 pickup -l -t fifo -u
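When hunting for bottlenecks, it also helps to sort the process list by resource usage. A sketch using procps ps (the -o column names and --sort key are standard procps options):

```shell
# Show the 5 processes using the most resident memory, highest first.
# rss is resident set size in KB; --sort=-rss sorts descending.
ps -e -o pid,rss,comm --sort=-rss | head -6
```

Swap -rss for -pcpu to rank by CPU usage instead.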
The example below displays process IDs and commands in a hierarchy. --forest is an argument to the ps command which displays ASCII art of the process tree. From this tree, we can identify the parent process and the child processes it forked, recursively.
$ ps -e -o pid,args --forest
  468  \_ sshd: root@pts/7
  514  |   \_ -bash
17484  \_ sshd: root@pts/11
17513  |   \_ -bash
24004  |       \_ vi ./790310__11117/journal
15513  \_ sshd: root@pts/1
15522  |   \_ -bash
 4280  \_ sshd: root@pts/5
 4302  |   \_ -bash
More ps examples: 7 Practical PS Command Examples for Process Monitoring

8. Free

The free command displays information about the physical memory (RAM) and swap memory of your system.
In the example below, the total physical memory on this system is 1 GB. The values displayed are in KB.
# free
             total       used       free     shared    buffers     cached
Mem:       1034624    1006696      27928          0     174136     615892
-/+ buffers/cache:     216668     817956
Swap:      2031608          0    2031608
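The "-/+ buffers/cache" line is derived from the Mem: line: buffers and cached memory are subtracted from "used" (and added to "free"), because the kernel gives that memory back when applications need it. Checking the arithmetic with the numbers above:

```shell
# used - buffers - cached = memory actually used by applications
# 1006696 - 174136 - 615892 should match the 216668 shown above
echo $((1006696 - 174136 - 615892))
# prints: 216668
```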
The following example displays the total memory on your system, including RAM and swap.
In the following command:
  • option m displays the values in MB
  • option t displays the "Total" line, which is the sum of the physical and swap memory values
  • option o hides the buffers/cache line shown in the previous example
# free -mto
             total       used       free     shared    buffers     cached
Mem:          1010        983         27          0        170        601
Swap:         1983          0       1983
Total:        2994        983       2011

9. TOP

The top command displays all the running processes in the system, ordered by certain columns, and refreshes the information in real time.
You can kill a process without exiting from top. Once you’ve located a process that needs to be killed, press “k”, which will ask for the process ID and the signal to send. If you have the privilege to kill that particular PID, it will be killed successfully.
PID to kill: 1309
Kill PID 1309 with signal [15]:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1309 geek   23   0 2483m 1.7g  27m S    0 21.8  45:31.32 gagent
 1882 geek   25   0 2485m 1.7g  26m S    0 21.7  22:38.97 gagent
 5136 root    16   0 38040  14m 9836 S    0  0.2   0:00.39 nautilus
Use top -u to display only a specific user’s processes in the top command output.
$ top -u geek
While the top command is running, press u, which will ask for the username as shown below.
Which user (blank for all): geek
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1309 geek   23   0 2483m 1.7g  27m S    0 21.8  45:31.32 gagent
 1882 geek   25   0 2485m 1.7g  26m S    0 21.7  22:38.97 gagent
More top examples: 15 Practical Linux Top Command Examples

10. Pmap

pmap command displays the memory map of a given process. You need to pass the pid as an argument to the pmap command.
The following example displays the memory map of the current bash shell. In this example, 5732 is the PID of the bash shell.
$ pmap 5732
5732:   -bash
00393000    104K r-x--  /lib/ld-2.5.so
003b1000   1272K r-x--  /lib/libc-2.5.so
00520000      8K r-x--  /lib/libdl-2.5.so
0053f000     12K r-x--  /lib/libtermcap.so.2.0.8
0084d000     76K r-x--  /lib/libnsl-2.5.so
00c57000     32K r-x--  /lib/libnss_nis-2.5.so
00c8d000     36K r-x--  /lib/libnss_files-2.5.so
b7d6c000   2048K r----  /usr/lib/locale/locale-archive
bfd10000     84K rw---    [ stack ]
 total     4796K
pmap -x gives some additional information about the memory maps.
$  pmap -x 5732
5732:   -bash
Address   Kbytes     RSS    Anon  Locked Mode   Mapping
00393000     104       -       -       - r-x--  ld-2.5.so
003b1000    1272       -       -       - r-x--  libc-2.5.so
00520000       8       -       -       - r-x--  libdl-2.5.so
0053f000      12       -       -       - r-x--  libtermcap.so.2.0.8
0084d000      76       -       -       - r-x--  libnsl-2.5.so
00c57000      32       -       -       - r-x--  libnss_nis-2.5.so
00c8d000      36       -       -       - r-x--  libnss_files-2.5.so
b7d6c000    2048       -       -       - r----  locale-archive
bfd10000      84       -       -       - rw---    [ stack ]
-------- ------- ------- ------- -------
total kB    4796       -       -       -
To display the device information of the process maps, use “pmap -d pid”.

11. Netstat

The netstat command displays various network-related information such as network connections, routing tables, interface statistics, masqueraded connections, multicast memberships, etc.
The following are some netstat command examples.
List all ports (both listening and non-listening) using netstat -a as shown below.
# netstat -a | more
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 localhost:30037         *:*                     LISTEN
udp        0      0 *:bootpc                *:*                                

Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  2      [ ACC ]     STREAM     LISTENING     6135     /tmp/.X11-unix/X0
unix  2      [ ACC ]     STREAM     LISTENING     5140     /var/run/acpid.socket
Use the following netstat command to find out on which port a program is running.
# netstat -ap | grep ssh
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        1      0 dev-db:ssh           101.174.100.22:39213        CLOSE_WAIT  -
tcp        1      0 dev-db:ssh           101.174.100.22:57643        CLOSE_WAIT  -
Use the following netstat command to find out which process is using a particular port (the -p option, which requires root, adds the PID/program name).
# netstat -anp | grep ':80'
More netstat examples: 10 Netstat Command Examples

12. IPTraf

IPTraf is an IP network monitoring utility. The following are some of the main features of IPTraf:
  • It is a console-based (text-based) utility.
  • It displays the IP traffic crossing your network: TCP flags, packet and byte counts, ICMP and OSPF packet types, etc.
  • Displays extended interface statistics (including IP, TCP, UDP, ICMP, packet size and count, checksum errors, etc.)
  • The LAN module discovers hosts automatically and displays their activities
  • Protocol display filters to view selected protocol traffic
  • Advanced logging features
  • Apart from Ethernet interfaces it also supports FDDI, ISDN, SLIP, PPP, and loopback
  • You can also run the utility in full-screen mode. It also has a text-based menu.
More info: IPTraf Home Page. IPTraf screenshot.

13. Strace

Strace is used for debugging and troubleshooting the execution of an executable in a Linux environment. It displays the system calls used by a process, and the signals received by the process.
Strace monitors the system calls and signals of a specific program. It is helpful when you do not have the source code and would like to debug a program’s execution. strace gives you the execution sequence of a binary from start to end.
Trace Specific System Calls in an Executable Using Option -e
By default, strace displays all system calls for the given executable. The following example shows the output of strace for the Linux ls command.
$ strace ls
execve("/bin/ls", ["ls"], [/* 21 vars */]) = 0
brk(0)                                  = 0x8c31000
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
mmap2(NULL, 8192, PROT_READ, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb78c7000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY)      = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=65354, ...}) = 0
To display only a specific system call, use the strace -e option as shown below.
$ strace -e open ls
open("/etc/ld.so.cache", O_RDONLY)      = 3
open("/lib/libselinux.so.1", O_RDONLY)  = 3
open("/lib/librt.so.1", O_RDONLY)       = 3
open("/lib/libacl.so.1", O_RDONLY)      = 3
open("/lib/libc.so.6", O_RDONLY)        = 3
open("/lib/libdl.so.2", O_RDONLY)       = 3
open("/lib/libpthread.so.0", O_RDONLY)  = 3
open("/lib/libattr.so.1", O_RDONLY)     = 3
open("/proc/filesystems", O_RDONLY|O_LARGEFILE) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3
open(".", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY|O_CLOEXEC) = 3
More strace examples: 7 Strace Examples to Debug the Execution of a Program in Linux

14. Lsof

lsof stands for “list open files”, and it lists all the open files in the system. Open files include network connections, devices, and directories. The output of the lsof command has the following columns:
  • COMMAND – process name
  • PID – process ID
  • USER – username
  • FD – file descriptor
  • TYPE – node type of the file
  • DEVICE – device number
  • SIZE – file size
  • NODE – node number
  • NAME – full path of the file
To view all open files on the system, execute the lsof command without any parameters as shown below.
# lsof | more
COMMAND     PID       USER   FD      TYPE     DEVICE      SIZE       NODE NAME
init          1       root  cwd       DIR        8,1      4096          2 /
init          1       root  rtd       DIR        8,1      4096          2 /
init          1       root  txt       REG        8,1     32684     983101 /sbin/init
init          1       root  mem       REG        8,1    106397     166798 /lib/ld-2.3.4.so
init          1       root  mem       REG        8,1   1454802     166799 /lib/tls/libc-2.3.4.so
init          1       root  mem       REG        8,1     53736     163964 /lib/libsepol.so.1
init          1       root  mem       REG        8,1     56328     166811 /lib/libselinux.so.1
init          1       root   10u     FIFO       0,13                  972 /dev/initctl
migration     2       root  cwd       DIR        8,1      4096          2 /
skipped..
To view files opened by a specific user, use the lsof -u option as shown below.
# lsof -u ramesh
vi      7190 ramesh  txt    REG        8,1   474608   475196 /bin/vi
sshd    7163 ramesh    3u  IPv6   15088263               TCP dev-db:ssh->abc-12-12-12-12.
To list the users of a particular file, use lsof as shown below. In this example, it displays all users who are currently using vi.
# lsof /bin/vi
COMMAND  PID  USER    FD   TYPE DEVICE   SIZE   NODE NAME
vi      7258  root   txt    REG    8,1 474608 475196 /bin/vi
vi      7300  ramesh txt    REG    8,1 474608 475196 /bin/vi
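Much of what lsof reports comes from /proc. To see just the file descriptors of one process, you can also list /proc/<pid>/fd directly (shown here for the current shell):

```shell
# Each entry in /proc/<pid>/fd is a symlink from a descriptor number
# to the file, socket, or pipe it refers to.
ls -l /proc/self/fd
```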

15. Ntop

Ntop is just like top, but for network traffic: it is a network traffic monitor that displays network usage. You can also access ntop from a browser to get traffic information and network status.
The following are some of the key features of ntop:
  • Display network traffic broken down by protocols
  • Sort the network traffic output based on several criteria
  • Display network traffic statistics
  • Ability to store the network traffic statistics using RRD
  • Identify the users and the host OS
  • Ability to analyze and display IP traffic
  • Ability to work as a NetFlow/sFlow collector for routers and switches
  • Displays network traffic statistics similar to RMON
  • Works on Linux, MacOS and Windows
More info: Ntop home page

16. GkrellM

GKrellM stands for GNU Krell Monitors, or GTK Krell Meters. It is a GTK+-based monitoring program that monitors various system resources. The UI is stackable, i.e., you can add as many monitoring objects as you want, one on top of another. Just like any other desktop monitoring tool, it can monitor CPU, memory, file system, network usage, etc. Using plugins, you can also monitor external applications.
More info: GkrellM home page

17. w and uptime

While monitoring system performance, the w command will help you find out who is logged on to the system.
$ w
09:35:06 up 21 days, 23:28,  2 users,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM          LOGIN@   IDLE   JCPU   PCPU WHAT
root     tty1     :0            24Oct11  21days 1:05   1:05 /usr/bin/Xorg :0 -nr -verbose
ramesh   pts/0    192.168.1.10  Mon14    0.00s  15.55s 0.26s sshd: localuser [priv]
john     pts/0    192.168.1.11  Mon07    0.00s  19.05s 0.20s sshd: localuser [priv]
jason    pts/0    192.168.1.12  Mon07    0.00s  21.15s 0.16s sshd: localuser [priv]
For each user who is logged on, it displays the following info:
  • Username
  • tty info
  • Remote host ip-address
  • Login time of the user
  • How long the user has been idle
  • JCPU and PCPU
  • The command of the current process the user is executing
Line 1 of the w command output is similar to the uptime command output. It displays the following:
  • Current time
  • How long the system has been up and running
  • Total number of users currently logged on to the system
  • Load average for the last 1, 5, and 15 minutes
If you want only the uptime information, use the uptime command.
$ uptime
 09:35:02 up 106 days, 28 min,  2 users,  load average: 0.08, 0.11, 0.05
Please note that both the w and uptime commands get their information from the /var/run/utmp data file.
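The load averages shown by w and uptime come from /proc/loadavg, which you can read directly:

```shell
# The first three fields of /proc/loadavg are the 1-, 5- and
# 15-minute load averages.
cut -d' ' -f1-3 /proc/loadavg
```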

18. /proc

/proc is a virtual file system. For example, if you do “ls -l /proc/stat”, you’ll notice that it has a size of 0 bytes, but if you do “cat /proc/stat”, you’ll see some content inside the file.
Do an “ls -l /proc”, and you’ll see a lot of directories with just numbers. These numbers represent the process IDs; the files inside each numbered directory correspond to the process with that particular PID.
The following are the important files located under each numbered directory (for each process):
  • cmdline – command line of the process
  • environ – environment variables of the process
  • fd – contains the file descriptors, which are linked to the corresponding open files
  • limits – contains information about the limits that apply to the process
  • mounts – mount-related information
The following are the important links under each numbered directory (for each process):
  • cwd – Link to current working directory of the process.
  • exe – Link to executable of the process.
  • root – Link to the root directory of the process.
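The files above can be read with ordinary shell tools. A sketch using the shell’s own /proc/self entry (cmdline is NUL-separated, so tr is used to make it readable):

```shell
# Show the command line of the current process (NULs become spaces).
tr '\0' ' ' < /proc/self/cmdline; echo
# cwd is a symlink to the process's current working directory.
readlink /proc/self/cwd
```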
More /proc examples: Explore Linux /proc File System

19. KDE System Guard

This is also called KSysGuard. On Linux desktops that run KDE, you can use this tool to monitor system resources. Apart from monitoring the local system, it can also monitor remote systems.
If you are running KDE desktop, go to Applications -> System -> System Monitor, which will launch the KSysGuard. You can also type ksysguard from the command line to launch it.
This tool displays the following two tabs:
  • Process Table – Displays all active processes. You can sort, kill, or change the priority of processes from here
  • System Load – Displays graphs for CPU, memory, and network usage. These graphs can be customized by right-clicking on any of them
To connect to a remote host and monitor it, click the File menu -> Monitor Remote Machine -> specify the IP address of the host and the connection method (for example, ssh). It will then ask for the username/password on the remote machine. Once connected, it displays the system usage of the remote machine in the Process Table and System Load tabs.

20. GNOME System Monitor

On Linux desktops that run GNOME, you can use this tool to monitor processes, system resources, and file systems from a graphical interface. Apart from monitoring, you can also use this UI tool to kill a process or change the priority of a process.
If you are running GNOME desktop, go to System -> Administration -> System Monitor, which will launch the GNOME System Monitor. You can also type gnome-system-monitor from the command line to launch it.
This tool has the following four tabs:
  • System – Displays the system information including Linux distribution version, system resources, and hardware information.
  • Processes – Displays all active processes that can be sorted based on various fields
  • Resources – Displays CPU, memory and network usage
  • File Systems – Displays information about currently mounted file systems
More info: GNOME System Monitor home page

21. Conky

Conky is a system monitor for X. Conky displays information in the UI using what it calls objects. By default, more than 250 objects are bundled with conky, displaying various monitoring information (CPU, memory, network, disk, etc.). It supports IMAP, POP3, and several audio players.
You can monitor and display any external application by creating your own objects using scripting. The monitoring information can be displayed in various formats: text, graphs, progress bars, etc. This utility is extremely configurable.
More info: Conky screenshots

22. Cacti

Cacti is a PHP-based UI frontend for RRDTool. Cacti stores the data required to generate the graphs in a MySQL database.
The following are some high-level features of Cacti:
  • Ability to perform data gathering and store the data in a MySQL database (or round robin archives)
  • Several advanced graphing features are available (grouping of GPRINT graph items, auto-padding for graphs, manipulating graph data using CDEF math functions; all RRDTool graph items are supported)
  • The data source can gather local or remote data for the graph
  • Ability to fully customize Round robin archive (RRA) settings
  • User can define custom scripts to gather data
  • SNMP support (php-snmp, ucd-snmp, or net-snmp) for data gathering
  • Built-in poller helps to execute custom scripts, get SNMP data, update RRD files, etc.
  • Highly flexible graph template features
  • User friendly and customizable graph display options
  • Create different users with various permission sets to access the cacti frontend
  • Granular permission levels can be set for the individual user
  • and a lot more..
More info: Cacti home page

23. Vnstat

vnstat is a command line utility that displays and logs the network traffic of the interfaces on your system. It relies on the network statistics provided by the kernel, so vnstat doesn’t add any additional load to your system for monitoring and logging the network traffic.
vnstat without any arguments gives you a quick summary with the following info:
  • The last time the vnStat database located under /var/lib/vnstat/ was updated
  • When it started collecting statistics for a specific interface
  • The network statistics (bytes transmitted, bytes received) for the last two months and the last two days
# vnstat
Database updated: Sat Oct 15 11:54:00 2011

   eth0 since 10/01/11

          rx:  12.89 MiB      tx:  6.94 MiB      total:  19.82 MiB

   monthly
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
       Sep '11     12.90 MiB |    6.90 MiB |   19.81 MiB |    0.14 kbit/s
       Oct '11     12.89 MiB |    6.94 MiB |   19.82 MiB |    0.15 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated        29 MiB |      14 MiB |      43 MiB |

  daily
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
     yesterday      4.30 MiB |    2.42 MiB |    6.72 MiB |    0.64 kbit/s
         today      2.03 MiB |    1.07 MiB |    3.10 MiB |    0.59 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated         4 MiB |       2 MiB |       6 MiB |
Use “vnstat -t” or “vnstat --top10” to display the all-time top 10 traffic days.
$ vnstat --top10

 eth0  /  top 10

    #      day          rx      |     tx      |    total    |   avg. rate
   -----------------------------+-------------+-------------+---------------
    1   10/12/11       4.30 MiB |    2.42 MiB |    6.72 MiB |    0.64 kbit/s
    2   10/11/11       4.07 MiB |    2.17 MiB |    6.24 MiB |    0.59 kbit/s
    3   10/10/11       2.48 MiB |    1.28 MiB |    3.76 MiB |    0.36 kbit/s
    ....
   -----------------------------+-------------+-------------+---------------
More vnstat Examples: How to Monitor and Log Network Traffic using VNStat

24. Htop

htop is an ncurses-based process viewer. It is similar to top, but more flexible and user friendly. You can interact with htop using the mouse. You can scroll vertically to view the full process list, and horizontally to view the full command line of a process.
htop output consists of three sections: 1) header, 2) body, and 3) footer.
The header displays the following three bars, plus a few vital pieces of system information. You can change any of these from the htop setup menu.
  • CPU Usage: displays the percentage used as text at the end of the bar. The bar itself uses different colors: low-priority in blue, normal in green, kernel in red.
  • Memory Usage
  • Swap Usage
The body displays the list of processes sorted by %CPU usage. Use the arrow keys, Page Up, and Page Down to scroll through the processes.
The footer displays the htop menu commands.
More info: HTOP Screenshot and Examples

25. Socket Statistics – SS

ss stands for socket statistics. It displays information similar to the netstat command.
To display all listening sockets, run "ss -l" as shown below.
$ ss -l
Recv-Q Send-Q   Local Address:Port     Peer Address:Port
0      100      :::8009                :::*
0      128      :::sunrpc              :::*
0      100      :::webcache            :::*
0      128      :::ssh                 :::*
0      64       :::nrpe                :::*
The following displays only established connections.
$ ss -o state established
Recv-Q Send-Q   Local Address:Port   Peer Address:Port
0      52       192.168.1.10:ssh   192.168.2.11:55969    timer:(on,414ms,0)
The following displays socket summary statistics, i.e. the total number of sockets broken down by type.
$ ss -s
Total: 688 (kernel 721)
TCP:   16 (estab 1, closed 0, orphaned 0, synrecv 0, timewait 0/0), ports 11

Transport Total     IP        IPv6
*         721       -         -
RAW       0         0         0
UDP       13        10        3
TCP       16        7         9
INET      29        17        12
FRAG      0         0         0

How to Check and Repair MySQL Tables Using Mysqlcheck

When your MySQL table gets corrupted, use the mysqlcheck command to repair it.
The mysqlcheck command checks, repairs, optimizes, and analyzes tables.

1. Check a Specific Table in a Database

If your application gives an error message saying that a specific table is corrupted, execute the mysqlcheck command to check that one table.
The following example checks the employee table in the thegeekstuff database.
# mysqlcheck -c thegeekstuff employee -u root -p
Enter password:
thegeekstuff.employee    OK
You should pass the username/password to the mysqlcheck command; if not, you'll get the following error message.
# mysqlcheck -c thegeekstuff employee
mysqlcheck: Got error: 1045: Access denied for user 'root'@'localhost' (using password: NO) when trying to connect
Please note that the myisamchk command that we discussed a while back works similarly to the mysqlcheck command. However, the advantage of the mysqlcheck command is that it can be executed while the mysql daemon is running. So, using the mysqlcheck command, you can check and repair a corrupted table while the database is still running.

2. Check All Tables in a Database

To check all the tables in a particular database, don't specify the table name; just specify the database name.
The following example checks all the tables in the alfresco database.
# mysqlcheck -c alfresco  -u root -p
Enter password:
alfresco.JBPM_ACTION                               OK
alfresco.JBPM_BYTEARRAY                            OK
alfresco.JBPM_BYTEBLOCK                            OK
alfresco.JBPM_COMMENT                              OK
alfresco.JBPM_DECISIONCONDITIONS                   OK
alfresco.JBPM_DELEGATION                           OK
alfresco.JBPM_EVENT                                OK
..

3. Check All Tables and All Databases

To check all the tables in all the databases, use "--all-databases" along with the -c option as shown below.
# mysqlcheck -c  -u root -p --all-databases
Enter password:
thegeekstuff.employee                              OK
alfresco.JBPM_ACTION                               OK
alfresco.JBPM_BYTEARRAY                            OK
alfresco.JBPM_BYTEBLOCK                            OK
..
..
mysql.help_category
error    : Table upgrade required. Please do "REPAIR TABLE `help_category`" or dump/reload to fix it!
mysql.help_keyword
error    : Table upgrade required. Please do "REPAIR TABLE `help_keyword`" or dump/reload to fix it!
..
If you want to check all tables of a few databases, specify the database names using "--databases".
The following example checks all the tables in the thegeekstuff and alfresco databases.
# mysqlcheck -c  -u root -p --databases thegeekstuff alfresco
Enter password:
thegeekstuff.employee                              OK
alfresco.JBPM_ACTION                               OK
alfresco.JBPM_BYTEARRAY                            OK
alfresco.JBPM_BYTEBLOCK                            OK
..

4. Analyze Tables using Mysqlcheck

The following analyzes the employee table in the thegeekstuff database.
# mysqlcheck -a thegeekstuff employee -u root -p
Enter password:
thegeekstuff.employee   Table is already up to date
Internally, the mysqlcheck command uses the "ANALYZE TABLE" command. While mysqlcheck is executing the analyze command, the table is locked and available to other processes in read-only mode.

5. Optimize Tables using Mysqlcheck

The following optimizes the employee table in the thegeekstuff database.
# mysqlcheck -o thegeekstuff employee -u root -p
Enter password:
thegeekstuff.employee         OK
Internally, the mysqlcheck command uses the "OPTIMIZE TABLE" command. When you delete a lot of rows from a table, optimizing it helps reclaim the unused space and defragment the data file. This can improve performance on huge tables that have gone through several updates.

6. Repair Tables using Mysqlcheck

The following repairs the employee table in the thegeekstuff database.
# mysqlcheck -r thegeekstuff employee -u root -p
Enter password:
thegeekstuff.employee        OK
Internally, the mysqlcheck command uses the "REPAIR TABLE" command. This repairs corrupted MyISAM and ARCHIVE tables.

7. Combine Check, Optimize, and Repair Tables

Instead of checking and repairing separately, you can combine the check, optimize, and repair functionality using "--auto-repair" as shown below.
The following checks, optimizes, and repairs all the corrupted tables in the thegeekstuff database.
# mysqlcheck -u root -p --auto-repair -c -o thegeekstuff
You can also check, optimize, and repair all the tables across all your databases using the following command.
# mysqlcheck -u root -p --auto-repair -c -o --all-databases
If you want to know what the command is doing while it is checking, add the --debug-info option as shown below. This is helpful when you are checking a huge table.
# mysqlcheck --debug-info -u root -p --auto-repair -c -o thegeekstuff employee
Enter password:
thegeekstuff.employee  Table is already up to date

User time 0.00, System time 0.00
Maximum resident set size 0, Integral resident set size 0
Non-physical pagefaults 344, Physical pagefaults 0, Swaps 0
Blocks in 0 out 0, Messages in 0 out 0, Signals 0
Voluntary context switches 12, Involuntary context switches 9

8. Additional Useful Mysqlcheck Options

The following are some of the key options that you can use along with mysqlcheck.
  • -A, --all-databases : Check all the databases
  • -a, --analyze : Analyze tables
  • -1, --all-in-1 : Use one query per database, with the tables listed in a comma-separated way
  • --auto-repair : Repair the table automatically if it is corrupted
  • -c, --check : Check tables for errors
  • -C, --check-only-changed : Check only tables that have changed since the last check
  • -g, --check-upgrade : Check for version-dependent changes in the tables
  • -B, --databases : Check more than one database
  • -F, --fast : Check only tables that were not closed properly
  • --fix-db-names : Fix database names
  • --fix-table-names : Fix table names
  • -f, --force : Continue even when there is an error
  • -e, --extended : Perform an extended check on a table; this can take a long time to execute
  • -m, --medium-check : Faster than the extended check, but does most checks
  • -o, --optimize : Optimize tables
  • -q, --quick : Faster than the medium check
  • -r, --repair : Fix table corruption

13 Basic Linux System Calls Explained using a Fun Linux Virus Program

If you are interested in Linux system programming, you should learn the basic library and system calls. This article presents an example C program that covers a set of these calls and helps you understand their usage.

The example C code given below does the following:
  • Automatically opens up some terminals
  • Displays a message indicating whether the session is running as root or non-root
  • Displays the above message on all the open terminals
The following are the 13 important library and system calls covered in the example code below.
  1. memset() : This function fills the first n bytes of the memory area pointed to by s with the constant byte c.
  2. fopen() : This function opens the file whose name is the string pointed to by its first argument and associates a stream with it.
  3. getcwd() : This function returns a null-terminated string containing the absolute pathname of the current working directory of the calling process.
  4. getuid() : This function returns the real user ID of the calling process.
  5. snprintf() : This function produces output according to a format and writes it to a buffer.
  6. fwrite() : This function is used to write data to a stream.
  7. fflush() : This function forces a write of all user-space buffered data for a particular stream.
  8. fclose() : This function flushes the associated stream and closes the underlying file descriptor.
  9. system() : This function executes a shell command.
  10. sleep() : This function makes the calling process sleep until the specified number of seconds has elapsed or a signal arrives that is not ignored.
  11. opendir() : This function opens a directory stream.
  12. readdir() : This function reads the directory that was opened as a stream.
  13. atoi() : This function converts an ASCII string argument to an integer.
The following is the C code that shows how to use all of the above 13 system calls.
#include<stdio.h>
#include<stdlib.h>
#include<string.h>
#include<unistd.h>
#include<dirent.h>
#include<sys/types.h>
#include<pwd.h>

// A buffer to hold current working directory
char cwd[512];

void inform(char *path, char *binary_name)
{
    // Declare variables for file operations
    FILE *fp = NULL;

    // A counter to be used in loop
    unsigned int counter = 0;

    // A buffer to hold the information message
    char msg[1024];
    // memset function initializes the bytes
    // in the buffer 'msg' with NULL characters
    memset(msg, '\0', sizeof(msg));

    memset(cwd, '\0', sizeof(cwd));

    // Check for the path to be non NULL
    if(NULL == path)
    {
         printf("\n NULL path detected\n");
         return;
    }

    // fopen will open the file represented
    // by 'path' in read write mode.
    fp = fopen(path,"r+");

    if(!fp)
    {
        printf("\n Failed to open %s\n",path);
        return;
    }
    else
    {
        printf("\n Successfully opened %s\n",path);
    }

    // getcwd() gives us the current working directory
    // of the environment from which this binary was
    // executed
    if(NULL == getcwd(cwd,sizeof(cwd)))
    {
        printf("\n Failed to get current directory\n");
        return;
    }

    // getuid() returns the real user ID of the calling
    // process.
    // getuid() returns 0 for root and non zero for
    // any other user.
    if( 0 != getuid())
    {
        // This functions fills the buffer 'msg' with the formatted string by replacing %s in the harcoded string with the appropriate values
        snprintf(msg,sizeof(msg),"\n\n\nYOU ARE NOT ROOT!!!!!");
    }
    else
    {
       snprintf(msg, sizeof(msg),"\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nYOU ARE ROOT!!!!!!!!!!!!!!");
    }

   // Make sure the information is printed 25 times on each
   // open terminal
   for(counter=0;counter<25;counter++)
   {
       printf("\n fwrite()\n");
       // Write the information message on to the terminal
       fwrite(msg, strlen(msg), 1, fp);
       // Flush the message to the stdout of the terminal
       fflush(fp);
       // Wait for one second.
       sleep(1);
   }
   // close the file representing the terminal
   fclose(fp);

}

int main(int argc, char *argv[])
{
    // Since we will do some directory operations
    // So declare some variables for it.
    DIR *dp = NULL;
    struct dirent *ptr = NULL;

    // This variable will contain the path to
    // terminal
    char *path = NULL;

    // Used as a counter in loops
    int i =0;

    // Step1 :
    // Open 5 terminals each after 2 seconds
    // of delay.
    for(;i<5;i++)
    {
        // The system API executes a shell command
        // We try to execute two commands here
        // Both of these commands will open up
        // a terminal. We have used two commands
        // just in case one of them fails.
        system("gnome-terminal");
        system("/usr/bin/xterm");

        // This call is used to cause a delay in
        // program execution. The argument to this
        // function is the number of seconds for
        // which the delay is required
        sleep(2);
    }

    // Give user some 60 seconds before issuing
    // a information message.
    sleep(60);

    // Now, open the directory /dev/pts which
    // corresponds to the open command terminals.
    dp = opendir("/dev/pts");
    if(NULL == dp)
    {
        printf("\n Failed to open /dev/pts\n");
        return 0;
    }

    // Now iterate over each element in the
    // directory until all the elements are
    // iterated upon.
    while ( NULL != (ptr = readdir(dp)) )
    {
        // ptr->d_name gives the current device
        // name or the terminal name as a device.
        // All the numeric names correspond to
        // open terminals.

        // To check the numeric values we use
        // atoi().
        // Function atoi() converts the ascii
        // value into integer

        switch(atoi(ptr->d_name))
        {
            // Initialize 'path' accordingly

            case 0:path = "/dev/pts/0";
                   break;
            case 1:
                   path = "/dev/pts/1";
                   break;
            case 2:
                   path = "/dev/pts/2";
                   break;
            case 3:
                   path = "/dev/pts/3";
                   break;
            case 4:
                   path = "/dev/pts/4";
                   break;
            case 5:
                   path = "/dev/pts/5";
                   break;
            case 6:
                   path = "/dev/pts/6";
                   break;
            case 7:
                   path = "/dev/pts/7";
                   break;
            case 8:
                   path = "/dev/pts/8";
                   break;
            case 9:
                   path = "/dev/pts/9";
                   break;
            default:
                   break;
         }
         if(path)
         {
             // Call this function to throw some information.
             // Pass the path to terminal where the information
             // is to be sent and the binary name of this
             // program
             inform(path, argv[0]);
             // Before next iteration, make path point to
             // NULL
             path = NULL;
         }

    }

    sleep(60);

    return 0;
}
The above code is self-explanatory, as it contains adequate comments explaining what each system call does. If you are new to Linux system programming, this code gives enough exposure to the usage of all these important functions. For more details and advanced usage, read their man pages carefully.
This code is a simulation of a basic fun virus program. Once you compile and execute the above C program, it will do the following. This code was tested on Linux Mint, but it should work on all Ubuntu derivatives.
  • The user will see 5 terminals opening up one by one, each after a 2-second delay.
  • While the user is wondering what just happened, all of the open terminals will slowly start getting repeated information about the login being root or non-root.
  • Please note that debug logging is enabled in the code for learning purposes; comment out the debug printf's and then execute it if you want to have some fun.

Understand UNIX / Linux Inodes Basics with Examples

Several countries provide a unique identification number (for example, the social security number in the USA) to the people who live in that country. This makes it easier to identify an individual uniquely, and for various government agencies and financial institutions to handle the necessary paperwork for an individual.
Similar to the social security number, there is a concept of Inode numbers, which uniquely identify every file on Linux or *nix systems.

Inode Basics

An Inode number points to an Inode. An Inode is a data structure that stores the following information about a file :
  • Size of file
  • Device ID
  • User ID of the file
  • Group ID of the file
  • The file mode information and access privileges for owner, group and others
  • File protection flags
  • The timestamps for file creation, modification, etc.
  • Link counter to determine the number of hard links
  • Pointers to the blocks storing file’s contents
Please note that the above list is not exhaustive. Also, the name of the file is not stored in Inodes (we will come to that later).
When a file is created inside a directory, it is assigned a file name and an Inode number. These two entries are associated with every file in a directory. A user might think that the directory contains the complete file and all of its related information, but that is not always the case. So we see that a directory associates a file name with its Inode number.
When a user accesses a file or any information related to it, the file name is used, but internally the file name is first mapped to its Inode number. Then, through that Inode number, the corresponding Inode is accessed. There is a table (the Inode table) that provides this mapping of Inode numbers to the respective Inodes.

Why no file-name in Inode information?

As pointed out earlier, there is no entry for the file name in the Inode; rather, the file name is kept as a separate entry parallel to the Inode number. The reason for separating the file name from the other information related to the same file is to allow hard links. Once all the other information is separated from the file name, we can have various file names that point to the same Inode.
For example :
$ touch a

$ ln a a1

$ ls -al
drwxr-xr-x 48 himanshu himanshu 4096 2012-01-14 16:30 .
drwxr-xr-x 3 root root 4096 2011-03-12 06:24 ..
-rw-r--r-- 2 himanshu family 0 2012-01-14 16:29 a
-rw-r--r-- 2 himanshu family 0 2012-01-14 16:29 a1
In the above output, we created a file 'a' and then created a hard link 'a1'. Now when the command 'ls -al' is run, we can see the details of both 'a' and 'a1', and that the two files are indistinguishable. Look at the second column in the output: it specifies the number of hard links to the file. In this case, it has the value '2' for both files.
Note that hard links cannot span different file systems, and they cannot be created for directories.

When are Inodes created?

As we now know, an Inode is a data structure that contains information about a file. Since data structures occupy storage, an obvious question arises: when are Inodes created in a system? Well, space for Inodes is allocated when the operating system or a new file system is installed and does its initial structuring. So we can see that, in a file system, the maximum number of Inodes, and hence the maximum number of files, is fixed.
Now, the above concept brings up another interesting fact. A file system can run out of space in two ways :
  • No space for adding new data is left
  • All the Inodes are consumed.
The first way is pretty obvious, but we need to look at the second. Yes, it is possible to have free storage space and still be unable to add any new data to the file system, because all the Inodes are consumed. This can happen when a file system contains a very large number of very small files. That consumes all the Inodes, and although there would be free space from a hard-disk point of view, from the file system's point of view no Inode is available to store any new file.
The above use-case is possible but rarely encountered, because on a typical system the average file size is more than 2KB, which makes it more prone to running out of hard disk space first. Nevertheless, there is an algorithm that decides the number of Inodes in a file system; it takes into consideration the size of the file system and the average file size. The user can tweak the number of Inodes while creating the file system.

Commands to access Inode numbers

Following are some commands to access the Inode numbers for files :

1) ls -i Command

As we explained earlier in our Unix LS Command: 15 Practical Examples article, the flag -i is used to print the Inode number for each file.
$ ls -i
1448240 a 1441807 Desktop 1447344 mydata 1441813 Pictures 1442737 testfile 1448145 worm
1448240 a1 1441811 Documents 1442707 my_ls 1442445 practice 1442739 test.py
1447139 alpha 1441808 Downloads 1447278 my_ls_alpha.c 1441810 Public 1447099 Unsaved Document 1
1447478 article_function_pointer.txt 1575132 google 1447274 my_ls.c 1441809 Templates 1441814 Videos
1442390 chmodOctal.txt 1441812 Music 1442363 output.log 1448800 testdisk.log 1575133 vlc
See that the Inode numbers for 'a' and 'a1' are the same, since we created 'a1' as a hard link.

2) df -i Command

The df -i command displays the Inode information of the file system.
$ df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda1            1875968  293264 1582704   16% /
none                  210613     764  209849    1% /dev
none                  213415       9  213406    1% /dev/shm
none                  213415      63  213352    1% /var/run
none                  213415       1  213414    1% /var/lock
/dev/sda2            7643136  156663 7486473    3% /home
The flag -i is used for displaying Inode information.

3) stat Command

The stat command displays file statistics, including the Inode number of the file.
$ stat a
File: `a'
Size: 0 Blocks: 0 IO Block: 4096 regular empty file
Device: 805h/2053d Inode: 1448240 Links: 2
Access: (0644/-rw-r--r--) Uid: ( 1000/himanshu) Gid: ( 1001/ family)
Access: 2012-01-14 16:30:04.871719357 +0530
Modify: 2012-01-14 16:29:50.918267873 +0530
Change: 2012-01-14 16:30:03.858251514 +0530

Example Usage Scenario of an Inode number

  1. Suppose there exists a file with a special character in its name, for example: "ab*
  2. If you try to remove it normally using the rm command, you will not be able to.
  3. However, using the Inode number of this file, you can remove it.
Let's see these steps in an example:
1) Check if the file exists:
$ ls -i
1448240 a 1447274 my_ls.c
1448240 a1 1442363 output.log
1448239 "ab* 1441813 Pictures
1447139 alpha
So we have a file named "ab* in this directory.
2) Try to remove it normally:
$ rm "ab*
> ^C
$ rm "ab*
> ^C
$
See that I tried a couple of times to remove the file, but could not.
3) Remove the file using Inode number:
As we discussed earlier in our find command examples article, you can search for a file using inode number and delete it.
$ find . -inum 1448239 -exec rm -i {} \;
rm: remove regular empty file `./"ab*'? y
$ ls -i
1448240 a 1447274 my_ls.c
1448240 a1 1442363 output.log
1447139 alpha 1441813 Pictures
So we used the find command, specifying the Inode number of the file we needed to delete, and the file got deleted. We could also have deleted the file by typing rm \"ab* instead of the more complicated find command above, but I used find to demonstrate one of the uses of Inode numbers.

6 Nagios Command Line Options Explained with Examples

1. Start Nagios Daemon Using nagios -d

Typically you would execute “service nagios start” to start the Nagios daemon, which really calls the /etc/rc.d/init.d/nagios script.
You’ll see the following line inside the /etc/rc.d/init.d/nagios script for the Nagios startup:

$NagiosBin -d $NagiosCfgFile
So, you can also manually start Nagios daemon as shown below.
# /usr/local/nagios/bin/nagios -d /usr/local/nagios/etc/nagios.cfg
The advantage of manually starting the Nagios daemon is that you can run two Nagios instances on one server. If you'd like to run a small test instance where you can play around with various configuration files and Nagios options, create a nagios-test.cfg that points to different configuration object directories than nagios.cfg, and then start the test instance using the nagios-test.cfg file as shown below.
# /usr/local/nagios/bin/nagios -d /usr/local/nagios/etc/nagios-test.cfg
If you are new to Nagios, first install Nagios, and configure it to monitor a Linux server.

2. Verify Nagios Configurations Using nagios -v

Anytime you make changes to the configuration files, before you restart the Nagios daemon, verify the configuration changes (for syntax errors, and other invalid configuration errors) using nagios -v option as shown below.
# /usr/local/nagios/bin/nagios -v  /usr/local/nagios/etc/nagios.cfg
Reading configuration data...
Read main config file okay...

Processing object config file '/usr/local/nagios/etc/objects/commands.cfg'...
Processing object config file '/usr/local/nagios/etc/objects/contacts.cfg'...
Processing object config file '/usr/local/nagios/etc/objects/timeperiods.cfg'...
Processing object config file '/usr/local/nagios/etc/objects/templates.cfg'...
Processing object config directory '/usr/local/nagios/etc/servers'...
Read object config files okay...

Running pre-flight check on configuration data...

Checking services...Checked 450 services.
Checking hosts...   Checked 135 hosts.
Checking contacts...Checked 12 contacts.
..

Checking for circular paths between hosts...
Checking for circular host and service dependencies...

Total Warnings: 0
Total Errors:   0
Things look okay - No serious problems were detected during the pre-flight check
If this finds any issues, it will give a proper message about them. At the end, it will also display the total count of warnings and errors. Make sure both say 0 here.

3. Display Processing Info and Scheduling Info using nagios -s

When you have huge configuration files with several objects, Nagios might take a little longer to start. Using the nagios -s option, you can see how much time Nagios spends processing configuration files. This also gives an approximate estimate of how much time you might save by caching the configuration objects. How to cache configuration objects during startup is explained in the next item.
# /usr/local/nagios/bin/nagios -s  /usr/local/nagios/etc/nagios.cfg
I’ve split the output of the above command into multiple sections as shown below.
The object configuration processing times section displays the following information. In this example, the total time taken to process the configuration objects is way less than a second, so caching the objects might not give you any visible difference, even though it says you could save 4.36% by caching them. When you have a huge configuration file, you'll definitely see higher numbers here.
OBJECT CONFIG PROCESSING TIMES  (* = Potential for precache savings with -u option)
----------------------------------
Read:                 0.002094 sec
Resolve:              0.000046 sec  *
Recomb Contactgroups: 0.000019 sec  *
Recomb Hostgroups:    0.000012 sec  *
Dup Services:         0.000017 sec  *
Recomb Servicegroups: 0.000001 sec  *
Duplicate:            0.000004 sec  *
Inherit:              0.000003 sec  *
Recomb Contacts:      0.000000 sec  *
Sort:                 0.000001 sec  *
Register:             0.000142 sec
Free:                 0.000021 sec
                      ============
TOTAL:                0.002360 sec  * = 0.000103 sec (4.36%) estimated savings
Configuration verification times section displays the amount of time it will take to verify the configuration during startup.
CONFIG VERIFICATION TIMES          (* = Potential for speedup with -x option)
----------------------------------
Object Relationships: 0.000102 sec
Circular Paths:       0.000001 sec  *
Misc:                 0.000117 sec
                      ============
TOTAL:                0.000220 sec  * = 0.000001 sec (0.5%) estimated savings
The event scheduling times section displays the amount of time it will take while processing the various events mentioned below.
EVENT SCHEDULING TIMES
-------------------------------------
Get service info:        0.000084 sec
Get host info info:      0.000023 sec
Get service params:      0.000009 sec
Schedule service times:  0.000124 sec
Schedule service events: 0.010329 sec
Get host params:         0.000001 sec
Schedule host times:     0.000029 sec
Schedule host events:    0.000003 sec
                         ============
TOTAL:                   0.010602 sec
The following section displays both host and service scheduling information.
HOST SCHEDULING INFORMATION
---------------------------
Total hosts:                     3
Total scheduled hosts:           3
Host inter-check delay method:   SMART
Average host check interval:     300.00 sec
Host inter-check delay:          100.00 sec
Max host check spread:           30 min
First scheduled check:           Sun Nov 27 10:40:44 2011
Last scheduled check:            Sun Nov 27 10:44:04 2011

SERVICE SCHEDULING INFORMATION
-------------------------------
Total services:                     8
Total scheduled services:           8
Service inter-check delay method:   SMART
Average service check interval:     600.00 sec
Inter-check delay:                  75.00 sec
Interleave factor method:           SMART
Average services per host:          2.67
Service interleave factor:          3
Max service check spread:           30 min
First scheduled check:              Sun Nov 27 10:44:29 2011
Last scheduled check:               Sun Nov 27 10:53:14 2011
Finally, the performance suggestions section lists possible performance tuning suggestions for your specific configuration files.
PERFORMANCE SUGGESTIONS
-----------------------
I have no suggestions - things look okay.

4. Pre-cache Nagios Config Objects using nagios -p

When you have a big configuration file with several objects, you might save enough time during Nagios startup by caching the configuration objects.
The precache configuration information will be stored in the /usr/local/nagios/var/objects.precache file. If you’ve never created the pre-cache configuration files before, this file will not be present. If you want to change the location of the precache file, change the precached_object_file directive in the nagios.cfg file.
To create the pre-cache configuration files, use -p option as shown below.
# /usr/local/nagios/bin/nagios -pv /usr/local/nagios/etc/nagios.cfg
After the above command, the objects.precache file will be created. If you view this file, you can see all the Nagios object definitions listed. As it says at the beginning of this precache file, do not modify it manually. If you'd like to modify any Nagios object, modify the appropriate configuration file and regenerate the pre-cache file.
# more /usr/local/nagios/var/objects.precache

5. Use Pre-cached Nagios Config Objects using nagios -u

After creating the pre-cache objects as shown above, stop the Nagios daemon and start it using the -u option as shown below. Instead of reading the Nagios configuration files again, it will simply use the cached objects that were created earlier in the /usr/local/nagios/var/objects.precache file.
# /usr/local/nagios/bin/nagios -ud /usr/local/nagios/etc/nagios.cfg

6. Skip Circular Path Check using nagios -x

During startup, Nagios checks that your object definitions don't contain any circular paths, to make sure it can't end up in a deadlock situation. If you have a lot of configuration objects, the circular path check might take some time.
If you have a working Nagios configuration that you are sure doesn't have any circular paths, you can instruct Nagios to skip this check during startup using nagios -x as shown below.
/usr/local/nagios/bin/nagios -xd /usr/local/nagios/etc/nagios.cfg
For faster Nagios startup, use both the -u and -x options together as shown below, which will use the pre-cached objects and skip the circular path check.
/usr/local/nagios/bin/nagios -uxd /usr/local/nagios/etc/nagios.cfg
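Putting sections 4-6 together, the restart workflow after a configuration change can be sketched as follows (the paths are the defaults used in this article; adjust them for your install):

```shell
# Sketch: rebuild the pre-cache, then start Nagios from it.
# Assumes the default install paths from this article.
nagios=/usr/local/nagios/bin/nagios
cfg=/usr/local/nagios/etc/nagios.cfg
if [ -x "$nagios" ]; then
    "$nagios" -pv "$cfg"      # verify the config and regenerate objects.precache
    "$nagios" -uxd "$cfg"     # start the daemon from the pre-cache, skipping circular-path checks
else
    echo "Nagios binary not found at $nagios"
fi
```

Remember that a stale pre-cache silently hides config changes, which is why the -pv step belongs in the same workflow as the -uxd start.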

Linux Time Command Examples

There are times when you might want to profile your program on parameters such as:
  • Time taken by program in user mode
  • Time taken by program in kernel mode
  • Average memory usage by the program
  • etc
On Linux we have a utility 'time' that is designed specifically for this purpose. The 'time' utility takes a program name as input and displays information about the resources used by that program. Also, if the command exits with a non-zero status, this utility displays a warning message and the exit status.

The syntax of 'time' is:
/usr/bin/time [options] program [arguments]
In the above syntax, 'options' refers to a set of optional flags/values that can be passed to the 'time' utility to set or unset particular functionality. The following are the available time command options:
  • -v, --verbose : This option is passed when a detailed description of the output is required.
  • --quiet : This option prevents the 'time' utility from reporting the status of the program.
  • -f, --format : This option lets the user control the output format of the 'time' utility.
  • -p, --portability : This option sets the output format to the following, which conforms to POSIX:
    real %e
    user %U
    sys %S
  • -o FILE, --output=FILE : This option lets the user redirect the output of the 'time' utility to FILE. By default, the 'time' utility overwrites FILE.
  • -a, --append : This option makes the 'time' utility append the information to FILE rather than overwriting it.
When the 'time' command is run, it produces output like the following:
# /usr/bin/time ls
anaconda-ks.cfg  bin  install.log  install.log.syslog  mbox
0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 3888maxresident)k
0inputs+0outputs (0major+304minor)pagefaults 0swaps
As we can see above, apart from executing the command, the last two lines of the output are the resource information that ‘time’ command outputs.
Note: In the above example, the 'time' command was run without any options, so this is the default output generated by the 'time' command, which is not formatted very readably.
As we can see from the output, the default format of the output generated is:
%Uuser %Ssystem %Eelapsed %PCPU (%Xtext+%Ddata %Mmax)k
%Iinputs+%Ooutputs (%Fmajor+%Rminor)pagefaults %Wswaps

The Format Option

This option lets the user control the output generated by the 'time' command. In the previous section we discussed the default output format. In this section, we will learn how to specify customized formats.
The format string usually consists of `resource specifiers’ interspersed with plain text. A percent sign (`%’) in the format string causes the following character to be interpreted as a resource specifier.
A backslash (`\’) introduces a `backslash escape’, which is translated into a single printing character upon output. `\t’ outputs a tab character, `\n’ outputs a newline, and `\\’ outputs a backslash. A backslash followed by any other character outputs a question mark (`?’) followed by a backslash, to indicate that an invalid backslash escape was given.
Other text in the format string is copied verbatim to the output. time always prints a newline after printing the resource use information, so normally format strings do not end with a newline character (or `\n').
For example :
$ /usr/bin/time -f "\t%U user,\t%S system,\t%x status" date
Sun Jan 22 17:46:58 IST 2012
 0.00 user, 0.00 system, 0 status
So we see that in the above example, we changed the output by supplying a custom format string.

Resources

As discussed above, the 'time' utility displays information about the resource usage of a program. This section lists the resources that can be tracked by this utility and their corresponding specifiers.
From the man page :
  • C – Name and command line arguments of the command being timed.
  • D – Average size of the process’s unshared data area, in Kilobytes.
  • E – Elapsed real (wall clock) time used by the process, in [hours:]minutes:seconds.
  • F – Number of major, or I/O-requiring, page faults that occurred while the process was running. These are faults where the page has actually migrated out of primary memory.
  • I – Number of file system inputs by the process.
  • K - Average total (data+stack+text) memory use of the process, in Kilobytes.
  • M - Maximum resident set size of the process during its lifetime, in Kilobytes.
  • O - Number of file system outputs by the process.
  • P - Percentage of the CPU that this job got. This is just user + system times divided by the total running time. It also prints a percentage sign.
  • R - Number of minor, or recoverable, page faults. These are pages that are not valid (so they fault) but which have not yet been claimed by other virtual pages. Thus the data in the page is still valid but the system tables must be updated.
  • S - Total number of CPU-seconds used by the system on behalf of the process (in kernel mode), in seconds.
  • U - Total number of CPU-seconds that the process used directly (in user mode), in seconds.
  • W - Number of times the process was swapped out of main memory.
  • X - Average amount of shared text in the process, in Kilobytes.
  • Z - System’s page size, in bytes. This is a per-system constant, but varies between systems.
  • c - Number of times the process was context-switched involuntarily (because the time slice expired).
  • e - Elapsed real (wall clock) time used by the process, in seconds.
  • k - Number of signals delivered to the process.
  • p - Average unshared stack size of the process, in Kilobytes.
  • r - Number of socket messages received by the process.
  • s - Number of socket messages sent by the process.
  • t - Average resident set size of the process, in Kilobytes.
  • w - Number of times that the program was context-switched voluntarily, for instance while waiting for an I/O operation to complete.
  • x - Exit status of the command.
So we can see that there is a long list of resources whose usage can be tracked by the ‘time’ utility.
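Several of the specifiers above can be combined into one format string. The following is a small sketch that reports maximum resident set size, page faults, and wall-clock time for a command (it assumes GNU time is installed at /usr/bin/time and falls back gracefully if it is not; note that 'time' writes its report to stderr, not stdout):

```shell
# Combine %M (max RSS), %F (major faults), %R (minor faults), %e (wall time).
# GNU time prints its report on stderr, so we capture stderr and discard
# the timed command's own stdout.
if [ -x /usr/bin/time ]; then
    out=$(/usr/bin/time -f "maxrss=%M KB, majflt=%F, minflt=%R, wall=%e s" ls / 2>&1 >/dev/null)
else
    out="GNU time not installed"
fi
echo "$out"
```

Capturing the report into a variable like this is handy when you want to log resource usage from a script rather than read it interactively.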

Why /usr/bin/time? (Instead of just time)

Let's run 'time' without the full /usr/bin/time path and see what happens.
$ time -f "\t%U user,\t%S system,\t%x status" date
-f: command not found 

real 0m0.255s
user 0m0.230s
sys 0m0.030s
As seen from the output above, the 'time' command, when used without the complete path (/usr/bin/time), spits out an error for the '-f' flag. Also, the output format is neither the one we specified in the command nor the default format we discussed earlier. This can lead to confusion about how the output got generated.
When the 'time' command is executed without the complete path (/usr/bin/time), it is the bash shell's own built-in 'time' (a shell reserved word) that is executed.
  • Use ‘man time’ to view the man page of /usr/bin/time
  • Use ‘help time’ to view the information about the bash time built-in.

How to Install Oracle VM VirtualBox and Create a Virtual Machine

Oracle VM VirtualBox is open source virtualization software that you can install on various x86 systems. You can install Oracle VM VirtualBox on top of Windows, Linux, Mac, or Solaris. Once you install VirtualBox, you can create virtual machines that run guest operating systems like Windows, Linux, Solaris, etc.
On a high-level Oracle VM VirtualBox is similar to VMware. Oracle got this VirtualBox technology from Sun.
This article covers the basic installation of VirtualBox and how to install a guest OS on it.

If you are interested in VMware, use this guide: How to Create VMware Virtual Machine and Install Guest OS using vSphere Client.
The following are the basic terms you should be aware of before we go further:
  • Host – The physical machine where you are going to install VirtualBox
  • Guest – The machines created using VirtualBox (virtual machines)
  • Guest Additions – A set of software components that come with VirtualBox to improve guest performance and provide some additional features

1. Installing VirtualBox

This article explains how to install VirtualBox on a Debian based system.
First, add any one of the following mirrors, based on your distribution, to /etc/apt/sources.list
deb http://download.virtualbox.org/virtualbox/debian oneiric contrib
deb http://download.virtualbox.org/virtualbox/debian natty contrib
deb http://download.virtualbox.org/virtualbox/debian maverick contrib non-free
deb http://download.virtualbox.org/virtualbox/debian lucid contrib non-free
deb http://download.virtualbox.org/virtualbox/debian karmic contrib non-free
deb http://download.virtualbox.org/virtualbox/debian hardy contrib non-free
deb http://download.virtualbox.org/virtualbox/debian squeeze contrib non-free
deb http://download.virtualbox.org/virtualbox/debian lenny contrib non-free
Next, download the public key and register it with apt-key for signature verification
wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc -O- | sudo apt-key add -
Finally, install VirtualBox as shown below.
sudo apt-get update
sudo apt-get install dkms
sudo apt-get install virtualbox-4.1
After successful installation, a command named "virtualbox" will be available. You can also access VirtualBox from the "Application -> System Tools" menu.

2. Creating a Virtual Machine ( For Guest OS)

Open Application -> System Tools -> VirtualBox ( Command name is “virtualbox” )
Click Machine -> New. This will launch a “Create New Virtual Machine” wizard. Click Next.
Enter the name of the Guest machine as you desire and choose the Operating system and Version that you are planning to install as follows, and click “Next”.

Enter the RAM size that you want to provide to your Guest machine as follows.

Now it will ask you to choose your “Virtual Hard Disk” for installing the guest OS as follows.

Since this is the first time we are installing, click “Create New Hard disk”.
The "Create New Virtual Disk" wizard will open. Click Next.
Now we need to choose whether the disk should be a "Dynamically expanding disk" or "Fixed-size storage".
Remember, what the guest machine sees as its "hard disk" is actually a file residing on the host machine. Whenever the guest machine writes to disk, the data is written into that file on the host machine.
If we select "Fixed-size storage" and choose a size of 10GB, a 10GB file will be created on the host machine (by default under .VirtualBox/Guest-Machine/Guest-Machine.vdi).
If we select "Dynamically expanding disk", .VirtualBox/Guest-Machine/Guest-Machine.vdi will initially be a small file, but it will grow as the guest machine writes data to disk.
Choose "Dynamically expanding disk" and click Next.

Enter the maximum size that you want to allocate for the guest machine.
Click Finish. Now a file named "Guest-Machine.vdi" will be created under ".VirtualBox/Guest-Machine/".
Click “Finish” to complete the creation of Virtual Machine.

Now a new “Virtual Machine” is created and it will be in “power off” state.
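The wizard steps above can also be scripted with the VBoxManage command-line tool that ships with VirtualBox. The following is a hypothetical sketch (the VM name "Guest-Machine", the OS type, and the sizes are example values, not anything this article's screenshots used):

```shell
# Sketch: create and register a VM, set its RAM, and create a dynamic disk.
# Guards against VirtualBox not being installed.
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage createvm --name Guest-Machine --ostype Debian --register
    VBoxManage modifyvm Guest-Machine --memory 512                  # RAM in MB
    VBoxManage createhd --filename Guest-Machine.vdi --size 10240   # dynamic ~10GB disk
    status="created"
else
    status="VBoxManage not found - install VirtualBox first"
fi
echo "$status"
```

Scripting the creation this way is useful when you need to provision several identical guests, something the GUI wizard makes tedious.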

3. Installing OS in a Virtual machine

We can install any OS (personally tested with Windows and Linux) on a virtual machine. We can install the OS in a virtual machine by two methods:
  • Through the OS DVD
  • Through an ISO image of the OS
Here we will cover the installation using an ISO image, although using a DVD is very similar.
Make sure that the ISO file of your distribution is present on the host machine.
Launch “virtualbox”. Select the newly created virtual machine. Click “Settings”.
Now a new window will open, listing the settings groups in the left panel and the actual settings on the right, as follows.

Select "System". In the right panel, ensure that the boot order is correct (similar to setting the boot order in the BIOS).
Use the "Move Up" or "Move Down" button to change the boot order; make sure CD/DVD is selected as the "First boot device" and click "Ok".
The next step is to map the “ISO file” of your distribution to the virtual CD/DVD device.
Under "Settings", go to "Storage"; the following screen will appear.

Click the "CD icon" and choose the ISO file of the OS; here I used "Debian-Lenny".
The following screen will appear once you have chosen the ISO file. Click "Ok".

Now select the virtual machine, and click “Start”. It will start to boot from the CD/DVD which is mapped to the ISO file.

The OS installation is similar to installing an OS in a physical machine.
Once the OS is installed successfully, change the "Boot Order" to boot from the HDD, and click "Start".
Now you can start using the virtual machine like any other machine.
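The ISO-mapping and boot steps also have VBoxManage equivalents. This is a hypothetical sketch: the VM name "Guest-Machine", the controller name "IDE Controller" (the name the GUI wizard typically creates), and the ISO path debian.iso are all example values:

```shell
# Sketch: attach an ISO to the virtual DVD drive and boot the VM headless.
# Guards against VirtualBox not being installed.
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage storageattach Guest-Machine --storagectl "IDE Controller" \
        --port 0 --device 0 --type dvddrive --medium debian.iso
    VBoxManage startvm Guest-Machine --type headless
    msg="started"
else
    msg="VBoxManage not found - install VirtualBox first"
fi
echo "$msg"
```

The --type headless option boots the guest without opening a GUI window, which is handy on a remote host; use --type gui for the interactive installer screens described above.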

10 Things You (and Your Boss) Can Do To Change Your World

Most of your waking hours are spent at work. While at work, you spend most of your time working on the projects assigned to you by your boss. Complaining about your work and boss will not make you happy, even when you think your boss is difficult, doesn't understand your point of view, doesn't give you freedom, etc.
These are 10 things you can do at work that will make both you and your boss happy. Surprisingly, these 10 things are relatively easy to do, and mostly require a change in the mental attitude with which you approach things.

1. Show up to Work on Time

Don't be the last one to show up to work, and the first one to leave. It makes you look like a slacker. If you are enthusiastic about your work, show up early, preferably before your boss, and leave after your boss leaves. Don't mistake this suggestion to mean you should come early and stay late just for show. You have to show up early because you genuinely want to contribute to your company's growth.
If you are a boss: You have to show up to work on time before you can expect others to show up on time. You have to lead by example. You should be the first person to come to work, and the last one to leave. It is OK if your subordinates don't show up on time occasionally. Just like you, they also have days when they might have to take care of some personal issues.

2. Appreciate Your Boss

Just like you, your boss is also working hard. You might have only one project that you are currently working on. But your boss might be responsible to his boss for delivering multiple projects on time and within budget. When things go wrong in a project, your boss might be taking a lot of heat from his boss, which you might not even know about. Once in a while, genuinely appreciate your boss. When your boss assigns you an exciting project that you really want to work on, genuinely thank your boss for assigning that project to you.
If you are a boss: When your subordinates complete a project, appreciate them. Just a simple 'Thank You' might be enough. They need to feel that their work is getting recognized. Ask them what the most challenging and interesting things they did on the project were. Just showing interest in the details of their work will make them feel appreciated. When they complete a high-profile project that is visible to the whole company, give all the credit to them. Don't take any credit for yourself. They did the job, and they deserve to be recognized.

3. Go the Extra Mile Before Asking for Promotion or Raise

Don't just sit and complain that you are not getting the raise or promotion you deserve. Nobody gets a raise or promotion for the things they are supposed to do, i.e. if you do only the tasks of your current role, you don't deserve a raise or promotion. You should be doing additional projects, or doing the job of your next role before you get there. If you are a developer, you should be doing what a team lead does before you ask for a team lead promotion. If you are a team lead, you should be doing the tasks of a manager before you ask for a manager promotion.
Don't just do the bare minimum tasks necessary to complete a project. Go the extra mile and do additional things that add value to the project, which nobody expected you to do. Take time to think about all the additional things you can do on a project that will bring value to your organization and your customers. Put in additional hours to do them. When you deliver more than what people expect, you'll definitely get recognized.
If you are a boss: Give the necessary promotion and bonus even before your subordinate asks for it. Approximately only 2 out of 100 employees will genuinely go the extra mile. They are your super stars. Treat them well and pay them well. Do everything possible within your power, to keep them happy. Your success depends on them.

4. Finish Your Projects on Time

Your boss doesn't assign you a project just to keep you busy. Every project that you complete moves your company forward, even when you think the project is not significant. Several small successful projects will eventually have a huge impact on the overall growth of the company. So, put your full heart into everything you do at work, and try to finish all your projects on time. If everybody around you is completing their projects on time, try to complete your project ahead of time. Stay one step ahead of everybody else at work; it will make you feel good, and everybody will notice your contributions. When you consistently deliver all your projects on time or ahead of time, you'll definitely get recognized.
If you are a boss: When a project takes longer to complete than you anticipated, don't jump to the conclusion that it is because your subordinate is not capable of delivering the project on time. There might be various reasons for the delay. Perhaps you didn't have realistic expectations for the project, or the project expanded in scope. Sit with your team and understand the reasons for the delay. If your subordinate has been delivering projects consistently on time and slipped on one project, just give them a break, and don't make it an issue.

5. Ask for Help from Your Boss

When you are stuck on a project, get your boss involved and ask for advice. Even when the problem is too technical and your boss can't solve it, still ask for help. There are a few advantages to this: your boss will appreciate that you got him involved in the project, and your boss might even assign additional resources to help you solve the problem. Most bosses might not want to hear from you when things are going well, but they definitely want to hear from you when things are not working out, which helps them take appropriate action to get the project back on track.
If you are a boss: Ask your subordinates for suggestions on how to improve various processes. Find out from them whether you can do anything to help them do their jobs more effectively. When they know that you are asking sincerely, you might be surprised by the kind of answers you get. It might take only a few minutes to satisfy their request, which might make them extremely productive and happy. Also, get them involved early in the project life cycle and ask for their suggestions on how the project should be executed.

6. Help Your Boss Proactively

Your boss might assign you one project, but your boss might be working on multiple projects. If that is the case, find out what other projects your boss is working on, and see how you can proactively help without being asked. Perhaps you can do some research on new technologies that might be helpful in executing an upcoming project. Create a report and send your boss your findings on how this new technology might help in those future projects. You can even do simple things like installing a few useful add-ons in your boss's browser that might make your boss more productive. Your boss will be very thankful for this small help you did without being asked.
If you are a boss: When you assign a project to your subordinates, make sure you give them all the resources they need to successfully complete it. If there is something in the project that you think you can finish effectively, offer that help. Don't just delegate all the tasks in a big project. Make sure to take ownership of some of the tasks in a project, assign your name to them, and deliver them on time. When your team sees that you are delivering your tasks on time, they'll make sure to finish the project on time.

7. Help Others at Work

If you see one of your colleagues struggling with a task, offer them a helping hand. When you help someone else finish their task, don't cc your boss in that email thread. Don't go to your boss and explain how you went out of your way to help the other person. Just help someone else without expecting anything in return. Make it a habit to constantly help someone at work. If you do this consistently, when you are stuck on an issue, they'll voluntarily help you even before you ask them.
If you are a boss: Constantly pay attention to your team and projects, and see if they need any help. Occasionally, assign one of your resources to help another team. When your team sees that you are helping other teams you are not responsible for, without being asked, your team will also get motivated and do everything they can to make your project successful.

8. Believe in Company’s Mission and Vision

Every company has a mission and vision. Most companies do a good job of making it clear and communicating it to the entire organization. You should truly understand what your company stands for. If you don't believe in the mission of your company, you are wasting your time working there. When your values match the values of your company, and when you believe in your company's mission and vision, you'll be extremely productive and go to work full of enthusiasm. If you don't believe in your company's mission, vision, and values, you should start looking for a new company whose mission resonates with you.
If you are a boss: If your company mission statement is fuzzy and not written in simple words, create a mission statement for your own team. Make sure you believe in it first, before you expect your team to believe in it. Even when your company mission statement is clear, you should still create a mission statement for your own team that is in alignment with the overall mission of the company, but at the same time is specific to your team and motivates your team.

9. When Working from Home, Do the Work

When your boss allows you to telecommute, make sure you really work. Don't take advantage of working from home to finish your personal tasks that day. You should be thankful that you don't have to deal with the stress of driving to work that day. Make sure to put in additional hours on the day you work from home and finish a few additional tasks. When working from home, don't send an email first thing in the morning to someone at work cc-ing your boss, and another last thing at night cc-ing your boss. It is very lame.
If you are a boss: Make sure to allow your team to work from home. For the most part, they'll be extremely productive when working from home. Even if you know that they are not really "working" from home, it is OK, as long as they deliver all their projects on time.

10. Finish the TPS Report and Move on

Finally, finish the TPS report (or whatever report your boss is asking for) on time. Even when your boss asks you to finish some totally useless report that nobody will ever review, just finish it, submit it on time, and move on to the next project. If running the TPS report takes only 1 hour, don't spend 2 hours talking to your colleagues about how lame your boss is for requesting the TPS report.
If you are a boss: Don't ask your subordinates to run any useless TPS reports. It is better for them to browse the internet and learn something than to run your lame TPS report.