Tuesday, February 24, 2015

How to Use JMeter for Performance Load Testing Your Web Application

JMeter is a desktop application that can be used to perform functional testing and load testing.
While JMeter itself is a pure Java application, it can be used to load test any kind of web application, including those written in PHP, .NET, etc.

The original intent of the application was to test the performance of Apache Tomcat, which is basically a web server.
Over the years, JMeter has slowly evolved, with user interface improvements and additional features that make it a viable performance testing and load testing tool for enterprise web applications.

What is JMeter?

JMeter is part of the Apache open source project family.
It was originally written to test the performance of web servers, and it has since evolved into an automated testing tool with test data generation, as well as a functional testing tool for web applications, file servers, web servers, and even databases. We’ll explore the important features of JMeter in this tutorial.
It can be configured to simulate any number of users and threads targeting a particular web server or application. It generates a simulated load on your web application to measure its performance. Moreover, you can run several iterations with loops to get an average result, add assertions, and view graphical and statistical representations of the test results.
As you can see from the following screenshot, various configurations can be done while creating a test plan. You can create multiple threads and test fragments, configure the individual elements, set timers, set up pre- and post-processors to run application-specific routines, and add assertions and listeners to your test plan.
JMeter Add Test Plan
A test plan, in this context, is the point where you start setting the test configurations that are specific to your application.
Right-clicking the “Test Plan” item reveals a menu. Hovering over the “Add” item shows the basic test elements you can create for a test plan.
These test elements are created once and later executed by the tester. JMeter, as per the provided configuration, creates the simulated load/stress and test data, produces graphical stats, and asserts the criteria.
Let’s see an example of how we can use a few of the “Add” sub-menu items to create a test plan and execute it to analyze the performance of a web application.
As you can imagine, the performance of a web application or website depends on various factors, like your internet connection bandwidth, the hosting server, the web server used, and even how the site is developed in terms of the scripting practices followed.

Create a Thread Group under Test Plan

Right-click the “Test Plan” item and navigate to Add -> Threads -> Thread Group.
A thread can be visualized as the equivalent of a user accessing a web server in a real-time environment.
Click on the “Thread Group” item to create it under your test plan.
A thread group will be created, nested under the test plan in the left panel. Click on the thread group to view the various options and parameters you can set as per your load testing and performance testing requirements, as shown below.
JMeter View Thread Group
You can give the thread group a name, and create any number of thread groups to simulate different scenarios.
For example, you may want to know the performance of your web application with 100 concurrent users, 500 concurrent users, etc.
So here you can create different thread groups with different ramp-up periods and loop counts. Just specify the number of user threads you want to generate in the “Number of Threads” text box. In the screenshot above it is set to 10.
The ramp-up period, as the name suggests, is the total amount of time taken by all of the threads in a thread group to start up and hit the target. For example, 10 threads with a ramp-up period of 100 seconds means a new thread starts every 10 seconds.
It should be set within the web server’s maximum capability, or you can play around with it to find the ideal time for different scenarios.
As you can see in the screenshot above, we can also select the “Action to be taken after a Sampler error”. This tells JMeter what to do when a launched thread is unable to complete the request/response cycle for any reason, so that the affected sample would give improper results.
We can repeat the sampling multiple times by giving a numeric loop count, or keep sampling indefinitely by selecting the “Forever” option.
You can also schedule the created thread group to run automatically at a given time by selecting the scheduler option, as shown in the screenshot below.
JMeter Scheduler

Add Request Sampler under Thread Group

Right-click the created thread group in the left panel and navigate to Add -> Sampler -> HTTP Request. Now click on “HTTP Request” to add a sampler of this type.
Similarly, you can go through the many options there and explore JMeter’s ability to create different types of request/connection threads for various protocols, targeting different types of servers, as shown in the following screenshot.
In the following example, we are creating an HTTP request sampler because our intent is to test a website, or rather the underlying web server’s performance.
JMeter Add HTTP Sampler
Now you can view the newly added “HTTP Request” sampler under the thread group. Click on it to view the configuration panel. Here we are just setting our target domain to example.com for demonstration purposes, as shown below.
JMeter View HTTP Request

Add Listener under Thread Group

A listener is added to track the respective thread group and view the performance stats in a graphical view, or to record them to a log file.
If you want to view the summary report, do the following: right-click the thread group in the left panel and navigate to Add -> Listener -> Summary Report. Now click on “Summary Report” to add it under the thread group.
You can now see a “Summary Report” added under the thread group in the left panel, and clicking on it shows a blank report in the main content area.

Execute the Thread Group for Load Testing

Select the appropriate thread group, and press Ctrl + R or click the green run button in the shortcuts panel above, as shown below. You can now see counters in the top right corner displaying the current number of threads launched, and the number of thread samples that could not complete the sampling cycle because of errors.
In the summary table in the main content area you will be able to see the summary report.
JMeter Add Listener Summary Report
In the above summary report, the Average, Min, and Max times are shown in milliseconds.
When you execute the test again without clearing this report, it will keep adding the new samples to the total summary. Sometimes you may want to start a brand new test run, ignoring the previous results. To do that, discard the previous performance data by clicking the Clear button in the shortcut panel.
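Once you save a test plan like this to a .jmx file, you can also run it without the GUI, which is usually better for heavier load runs since the GUI itself consumes resources. A minimal sketch, assuming the plan was saved as testplan.jmx and you want the raw results in results.jtl (both file names are hypothetical):

jmeter -n -t testplan.jmx -l results.jtl

Here -n runs JMeter in non-GUI mode, -t points to the test plan file, and -l specifies the file to log the sample results to; the .jtl file can later be loaded back into a listener such as the Summary Report.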

Sunday, February 22, 2015

Quick Apache Hadoop Admin Command Reference Examples

If you are working with Hadoop, you’ll realize there are several shell commands available to manage your Hadoop cluster.
This article provides a quick, handy reference to all the Hadoop administration commands.

If you are new to big data, read the "Introduction to Hadoop" article to understand the basics.

1. Hadoop Namenode Commands

Command – Description
hadoop namenode -format – Format the HDFS filesystem from the NameNode
hadoop namenode -upgrade – Upgrade the NameNode
start-dfs.sh – Start the HDFS daemons
stop-dfs.sh – Stop the HDFS daemons
start-mapred.sh – Start the MapReduce daemons
stop-mapred.sh – Stop the MapReduce daemons
hadoop namenode -recover -force – Recover NameNode metadata after a cluster failure (may lose data)

2. Hadoop fsck Commands

Command – Description
hadoop fsck / – Filesystem check on HDFS
hadoop fsck / -files – Display files during the check
hadoop fsck / -files -blocks – Display files and blocks during the check
hadoop fsck / -files -blocks -locations – Display files, blocks, and their locations during the check
hadoop fsck / -files -blocks -locations -racks – Also display the network topology for DataNode locations
hadoop fsck / -delete – Delete corrupted files
hadoop fsck / -move – Move corrupted files to the /lost+found directory
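For example, a quick cluster health check is to run fsck against the entire namespace and scan its summary for problem counters (a minimal sketch; the grep pattern assumes the usual wording of the fsck summary lines):

hadoop fsck / | grep -i -E 'corrupt|missing|under.replicated'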

3. Hadoop Job Commands

Command – Description
hadoop job -submit <job-file> – Submit the job
hadoop job -status <job-id> – Print the job status and completion percentage
hadoop job -list all – List all jobs
hadoop job -list-active-trackers – List all available TaskTrackers
hadoop job -set-priority <job-id> <priority> – Set the priority of a job. Valid priorities: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW
hadoop job -kill-task <task-id> – Kill a task
hadoop job -history – Display job history, including job details and failed and killed jobs

4. Hadoop dfsadmin Commands

Command – Description
hadoop dfsadmin -report – Report filesystem info and statistics
hadoop dfsadmin -metasave file.txt – Save the NameNode’s primary data structures to file.txt
hadoop dfsadmin -setQuota 10 /quotatest – Set a quota of 10 names on the directory /quotatest
hadoop dfsadmin -clrQuota /quotatest – Clear the quota on a directory
hadoop dfsadmin -refreshNodes – Re-read the hosts and exclude files to update which DataNodes are allowed to connect to the NameNode. Mostly used to commission or decommission nodes
hadoop fs -count -q /mydir – Check the quota usage on the directory /mydir
hadoop dfsadmin -setSpaceQuota 100M /mydir – Set a space quota of 100M on the HDFS directory /mydir
hadoop dfsadmin -clrSpaceQuota /mydir – Clear the space quota on an HDFS directory
hadoop dfsadmin -saveNamespace – Back up metadata (fsimage and edits). Put the cluster in safe mode before running this command.
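Putting a few of these together, a typical quota workflow is to cap a directory, confirm the quota with a count, and then remove it (commands taken from the table above; /quotatest is just a sample directory):

hadoop dfsadmin -setQuota 10 /quotatest
hadoop fs -count -q /quotatest
hadoop dfsadmin -clrQuota /quotatest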

5. Hadoop Safe Mode (Maintenance Mode) Commands

The following dfsadmin commands let the cluster enter or leave safe mode, which is also called maintenance mode. In this mode, the NameNode does not accept any changes to the namespace, and it does not replicate or delete blocks.
Command – Description
hadoop dfsadmin -safemode enter – Enter safe mode
hadoop dfsadmin -safemode leave – Leave safe mode
hadoop dfsadmin -safemode get – Get the current safe mode status
hadoop dfsadmin -safemode wait – Wait until HDFS finishes data block replication
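As noted in the dfsadmin table above, -saveNamespace should be run in safe mode, so a metadata backup typically looks like this (a minimal sequence built from the commands above):

hadoop dfsadmin -safemode enter
hadoop dfsadmin -saveNamespace
hadoop dfsadmin -safemode leave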

6. Hadoop Configuration Files

File – Description
hadoop-env.sh – Sets environment variables for Hadoop
core-site.xml – Parameters for the entire Hadoop cluster
hdfs-site.xml – Parameters for HDFS and its clients
mapred-site.xml – Parameters for MapReduce and its clients
masters – Host machine(s) for the secondary NameNode
slaves – List of slave hosts

7. Hadoop mradmin Commands

Command – Description
hadoop mradmin -safemode get – Check the JobTracker status
hadoop mradmin -refreshQueues – Reload the MapReduce queue configuration
hadoop mradmin -refreshNodes – Reload the active TaskTrackers
hadoop mradmin -refreshServiceAcl – Force the JobTracker to reload the service ACLs
hadoop mradmin -refreshUserToGroupsMappings – Force the JobTracker to reload user-to-group mappings

8. Hadoop Balancer Commands

Command – Description
start-balancer.sh – Balance the cluster
hadoop dfsadmin -setBalancerBandwidth <bandwidthinbytes> – Adjust the bandwidth used by the balancer
hadoop balancer -threshold 20 – Consider the cluster balanced once each DataNode’s utilization is within 20% of the cluster average

9. Hadoop Filesystem Commands

Command – Description
hadoop fs -mkdir mydir – Create a directory (mydir) in HDFS
hadoop fs -ls – List files and directories in HDFS
hadoop fs -cat myfile – View the contents of a file
hadoop fs -du – Check disk space usage in HDFS
hadoop fs -expunge – Empty the trash on HDFS
hadoop fs -chgrp hadoop file1 – Change the group membership of a file
hadoop fs -chown huser file1 – Change the ownership of a file
hadoop fs -rm file1 – Delete a file in HDFS
hadoop fs -touchz file2 – Create an empty file
hadoop fs -stat file1 – Check the status of a file
hadoop fs -test -e file1 – Check if the file exists on HDFS
hadoop fs -test -z file1 – Check if the file is empty on HDFS
hadoop fs -test -d file1 – Check if file1 is a directory on HDFS
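The -test options print nothing; they report through the shell exit status, which makes them handy in scripts. For example (a minimal sketch using the entries above):

hadoop fs -test -e file1 && echo "file1 exists"
hadoop fs -test -d file1 || echo "file1 is not a directory"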

10. Additional Hadoop Filesystem Commands

Command – Description
hadoop fs -copyFromLocal <source> <destination> – Copy from the local filesystem to HDFS
hadoop fs -copyFromLocal file1 data – e.g.: Copies file1 from the local FS to the data directory in HDFS
hadoop fs -copyToLocal <source> <destination> – Copy from HDFS to the local filesystem
hadoop fs -copyToLocal data/file1 /var/tmp – e.g.: Copies file1 from the HDFS data directory to /var/tmp on the local FS
hadoop fs -put <source> <destination> – Copy from a remote location to HDFS
hadoop fs -get <source> <destination> – Copy from HDFS to a remote directory
hadoop distcp hdfs://192.168.0.8:8020/input hdfs://192.168.0.8:8020/output – Copy data from one cluster to another using the cluster URLs
hadoop fs -mv file:///data/datafile /user/hduser/data – Move a data file from the local directory to HDFS
hadoop fs -setrep -w 3 file1 – Set the replication factor for file1 to 3
hadoop fs -getmerge mydir bigfile – Merge the files in the mydir directory and download the result as one big file

Apache Hadoop Fundamentals – HDFS and MapReduce Explained with a Diagram

This is the first article in our new ongoing Hadoop series.

In a traditional non-distributed architecture, data is stored on one server and any client program accesses that central data server to retrieve it. The non-distributed model has a few fundamental issues. In this model, you mostly scale vertically, by adding more CPU, more storage, etc. The architecture is also not reliable: if the main server fails, you have to go back to a backup to restore the data. And from a performance point of view, it will not return results quickly when you are running a query against a huge data set.


Hadoop is open source software for distributed computing that can query a large data set and get the results faster using a reliable and scalable architecture.
In a Hadoop distributed architecture, both data and processing are distributed across multiple servers. The following are some of the key points to remember about Hadoop:
  • Each and every server offers local computation and storage, i.e. when you run a query against a large data set, every server in this distributed architecture executes the query on its local machine against its local data set. Finally, the result sets from all these local servers are consolidated.
  • In simple terms, instead of running a query on a single server, the query is split across multiple servers and the results are consolidated. This means that the results of a query on a larger dataset are returned faster.
  • You don’t need a powerful server. Just use several less expensive commodity servers as the individual Hadoop nodes.
  • High fault tolerance. If any node fails in the Hadoop environment, it will still return the dataset properly, as Hadoop takes care of replicating and distributing the data efficiently across multiple nodes.
  • A simple Hadoop implementation can use just two servers, but you can scale up to several thousand servers without any additional effort.
  • Hadoop is written in Java, so it can run on any platform.
  • Please keep in mind that Hadoop is not a replacement for your RDBMS. You’ll typically use Hadoop for unstructured data.
  • Originally Google started using the distributed computing model based on GFS (Google File System) and MapReduce. Later, Nutch (open source web search software) was rewritten using MapReduce. Hadoop was branched out of Nutch as a separate project. Now Hadoop is a top-level Apache project that has gained tremendous momentum and popularity in recent years.

HDFS

HDFS stands for Hadoop Distributed File System, which is the storage system used by Hadoop. The following is a high-level architecture that explains how HDFS works.

The following are some of the key points to remember about the HDFS:
  • In the above diagram, there is one NameNode and multiple DataNodes (servers); b1, b2, etc. indicate data blocks.
  • When you dump a file (or data) into HDFS, it is stored in blocks on the various nodes in the Hadoop cluster. HDFS creates several replicas of each data block and distributes them around the cluster in a way that is reliable and allows fast retrieval. A typical HDFS block size is 128MB, and each and every data block is replicated to multiple nodes across the cluster.
  • Hadoop internally makes sure that a node failure never results in data loss.
  • There is one NameNode that manages the file system metadata.
  • There are multiple DataNodes (these are the real cheap commodity servers) that store the data blocks.
  • When you execute a query from a client, it first reaches out to the NameNode to get the file metadata, and then reaches out to the DataNodes to get the real data blocks.
  • Hadoop provides a command line interface for administrators to work on HDFS (see the examples after this list).
  • The NameNode comes with a built-in web server from which you can browse the HDFS filesystem and view some basic cluster statistics.
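For example, from that command line interface you can list directories and check disk usage (standard HDFS shell commands; the /user/hduser path is hypothetical):

hadoop fs -ls /user/hduser
hadoop fs -du /user/hduser

The NameNode web UI mentioned above was served on port 50070 by default in Hadoop releases of this era, so you can point a browser at http://your-namenode:50070/ to browse the filesystem.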

MapReduce


The following are some of the key points to remember about MapReduce:
  • MapReduce is a parallel programming model that is used to retrieve data from the Hadoop cluster.
  • In this model, the library handles a lot of the messy details that programmers don’t need to worry about. For example, the library takes care of parallelization, fault tolerance, data distribution, load balancing, etc.
  • The model splits the tasks and executes them on the various nodes in parallel, thus speeding up the computation and retrieving the required data from a huge dataset quickly.
  • This provides a clear abstraction for programmers, who just have to implement (or use) two functions: map and reduce (see the shell analogy after this list).
  • The data is fed into the map function as key/value pairs to produce intermediate key/value pairs.
  • Once the mapping is done, all the intermediate results from the various nodes are reduced to create the final output.
  • The JobTracker keeps track of all the MapReduce jobs running on the various nodes. It schedules the jobs, tracks all the map and reduce tasks running across the nodes, and if any of those tasks fails, it reallocates the task to another node. In simple terms, the JobTracker is responsible for making sure that a query on a huge dataset runs successfully and that the data is returned to the client in a reliable manner.
  • The TaskTracker performs the map and reduce tasks assigned by the JobTracker. The TaskTracker also constantly sends a heartbeat message to the JobTracker, which helps the JobTracker decide whether to delegate a new task to that particular node.
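As a rough illustration of the map -> shuffle/sort -> reduce flow promised above, the classic word-count example can be mimicked on a single machine with a plain Unix pipeline. This is only an analogy, not Hadoop itself (words.txt is a hypothetical input file):

tr -s ' ' '\n' < words.txt | sort | uniq -c

Here tr plays the map step (emit each word as a key, one per line), sort plays the shuffle/sort step (group identical keys together), and uniq -c plays the reduce step (count each group).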
We’ve only scratched the surface of Hadoop. This is just the first article in our ongoing series on Hadoop. In future articles of this series, we’ll explain how to install and configure a Hadoop environment, how to write MapReduce programs to retrieve data from the cluster, and how to effectively maintain a Hadoop infrastructure.

Saturday, February 7, 2015

7 Steps to Build an RPM Package from Source on CentOS / RedHat

Sometimes you might have access to an open source application’s source code but not an RPM file to install it on your system.
In that situation, you can either compile the source code and install the application from source, or build an RPM file from the source code yourself and use that RPM file to install the application.
There might also be a situation where you want to build a custom RPM package for an application that you developed.
This tutorial explains how to build an RPM package from source code.

In order to build RPMs, you will need the source code, which usually means a compressed tar file that also includes the SPEC file.
The SPEC file typically contains instructions on how to build the RPM, what files are part of the package, and where they should be installed.
RPM performs the following tasks during the build process.
  1. Executes the commands and macros mentioned in the %prep section of the spec file.
  2. Checks the contents of the file list.
  3. Executes the commands and macros in the %build section of the spec file. Macros from the file list are also executed at this step.
  4. Creates the binary package file.
  5. Creates the source package file.
Once RPM executes the above steps, it creates the binary package file and the source package file.
The binary package file contains the compiled application files, along with any additional information needed to install or uninstall the package.
It is usually built with all the platform-specific options for installing the package. Binary package files contain complete applications or libraries of functions compiled for a particular architecture. The source package usually consists of the original compressed tar file, the spec file, and any patches required to create the binary package file.
Let us see how to create simple source and binary RPM packages using a tar file.
If you are new to RPM packages, you may first want to understand how to use the rpm command to install, upgrade, and remove packages on CentOS or RedHat.

1. Install rpm-build Package

To build an RPM file based on a spec file, we need to use the rpmbuild command.
The rpmbuild command is part of the rpm-build package. Install it as shown below.
# yum install rpm-build
rpm-build depends on the following packages. If you don’t have them installed already, yum will automatically install these dependencies for you.
elfutils-libelf
rpm
rpm-libs
rpm-python

2. RPM Build Directories

rpm-build automatically creates the following directory structure, which is used during the RPM build.
# ls -lF /root/rpmbuild/
drwxr-xr-x. 2 root root 4096 Feb  4 12:21 BUILD/
drwxr-xr-x. 2 root root 4096 Feb  4 12:21 BUILDROOT/
drwxr-xr-x. 2 root root 4096 Feb  4 12:21 RPMS/
drwxr-xr-x. 2 root root 4096 Feb  4 12:21 SOURCES/
drwxr-xr-x. 2 root root 4096 Feb  4 12:21 SPECS/
drwxr-xr-x. 2 root root 4096 Feb  4 12:21 SRPMS/
Note: The above directory structure applies to both CentOS and RedHat when using the rpm-build package. You can also use the /usr/src/redhat directory, but then you need to change the %_topdir parameter accordingly during the RPM build. If you are doing this on SuSE Enterprise Linux, use the /usr/src/packages directory.
If you want to use your own directory structure instead of /root/rpmbuild, you can use one of the following options:
  • Use the --buildroot option and specify the custom directory during the rpmbuild
  • Set the %_topdir macro in the rpmrc file or the rpmmacros file (an example follows this list)
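For example, to point rpmbuild at a custom build tree, you can set the %_topdir macro in ~/.rpmmacros and create the expected subdirectories (a minimal sketch; /home/james/mybuild is a hypothetical directory):

# echo '%_topdir /home/james/mybuild' >> ~/.rpmmacros
# mkdir -p /home/james/mybuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}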

3. Download Source Tar File

Next, download the source tar file for the package that you want to build and save it under the SOURCES directory.
For this example, I’ve used the source code of the icecast open source application, which is server software for streaming multimedia. But the steps are exactly the same for building an RPM of any other application; you just have to download the corresponding source code for the RPM that you are trying to build.
# cd /root/rpmbuild/SOURCES/

# wget http://downloads.xiph.org/releases/icecast/icecast-2.3.3.tar.gz

# ls -l
-rw-r--r--. 1 root root 1161774 Jun 11  2012 icecast-2.3.3.tar.gz

4. Create the SPEC File

In this step, we direct the RPM build process by creating a spec file. The spec file usually consists of the following sections:
  1. Preamble – The preamble section contains information about the package being built and defines any dependencies of the package. In general, the preamble consists of entries, one per line, that start with a tag followed by a colon and then some information.
  2. %prep – In this section, we prepare the software for the building process. Any previous builds are removed during this process and the source (.tar) file is expanded, etc. One more key thing to understand is that there are pre-defined macros available as shortcuts for various build steps; you may use these macros when you try to build complex packages. In the example below, I have used a macro called %setup, which removes any previous builds, untars the source files, and changes the ownership of the files. You could also use sh scripts under the %prep section to perform these actions, but the %setup macro simplifies the process by using predefined sh scripts.
  3. %description – The description section usually contains a description of the package.
  4. %build – This is the section responsible for performing the build. Usually the %build section is an sh script.
  5. %install – The %install section is also executed as an sh script, just like %prep and %build. This is the step used for the installation.
  6. %files – This section contains the list of files that are part of the package. If a file is not listed in the %files section, it won’t be available in the package. Complete paths are required, and you can set the attributes and ownership of the files in this section.
  7. %clean – This section instructs RPM to clean up any files that are not part of the application’s normal build area. For example, if the application creates a temporary directory structure in /tmp/ as part of its build, it will not be removed by default. By adding an sh script in the %clean section, that directory can be removed after the build process is completed.
Here is the SPEC file that I created for the icecast application to build an RPM file.
# cat /root/rpmbuild/SPECS/icecast.spec
Name:           icecast
Version:        2.3.3
Release:        0
Summary:        Xiph Streaming media server that supports multiple formats.
Group:          Applications/Multimedia
License:        GPL
URL:            http://www.icecast.org/
Vendor:         Xiph.org Foundation team@icecast.org
Source:         http://downloads.us.xiph.org/releases/icecast/%{name}-%{version}.tar.gz
Prefix:         %{_prefix}
Packager:  James
BuildRoot:      %{_tmppath}/%{name}-root

%description
Icecast is a streaming media server which currently supports Ogg Vorbis
and MP3 audio streams. It can be used to create an Internet radio
station or a privately running jukebox and many things in between.
It is very versatile in that new formats can be added relatively
easily and supports open standards for communication and interaction.

%prep
%setup -q -n %{name}-%{version}

%build
CFLAGS="$RPM_OPT_FLAGS" ./configure --prefix=%{_prefix} --mandir=%{_mandir} --sysconfdir=/etc

make

%install
[ "$RPM_BUILD_ROOT" != "/" ] && rm -rf $RPM_BUILD_ROOT

make DESTDIR=$RPM_BUILD_ROOT install
rm -rf $RPM_BUILD_ROOT%{_datadir}/doc/%{name}

%clean
[ "$RPM_BUILD_ROOT" != "/" ] && rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root)
%doc README AUTHORS COPYING NEWS TODO ChangeLog
%doc doc/*.html
%doc doc/*.jpg
%doc doc/*.css
%config(noreplace) /etc/%{name}.xml
%{_bindir}/icecast
%{_prefix}/share/icecast/*

%changelog

In this file, under the %prep section, you may have noticed the macro “%setup -q -n %{name}-%{version}”. This macro executes the following commands in the background.

cd /usr/src/redhat/BUILD
rm -rf icecast
gzip -dc /usr/src/redhat/SOURCES/icecast-2.3.3.tar.gz | tar -xvvf -
if [ $? -ne 0 ]; then
  exit $?
fi
cd icecast
cd /usr/src/redhat/BUILD/icecast
chown -R root.root .
chmod -R a+rX,g-w,o-w .
In the %build section, you can see that CFLAGS sets the compiler flags, and the configure options define the prefix used during installation, the man page directory, and the sysconfdir under which the system configuration files need to be copied.
Below that line, the make utility determines the list of files that need to be compiled and compiles them appropriately.
In the %install section, the line below %install that runs “make install” takes the binaries compiled in the previous step and installs or copies them to the appropriate locations so they can be accessed.

5. Create the RPM File using rpmbuild

Once the SPEC file is ready, you can start building your RPM with the rpmbuild command. The -ba option used below performs all the phases of the build process, producing both the binary and source packages. If you see any errors during this phase, you need to resolve them before re-attempting. The errors will usually be about missing library dependencies, which you can download and install as necessary.
# cd /root/rpmbuild/SPECS

# rpmbuild -ba icecast.spec
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.Kohe4t
+ umask 022
+ cd /root/rpmbuild/BUILD
+ cd /root/rpmbuild/BUILD
+ rm -rf icecast-2.3.3
+ /usr/bin/gzip -dc /root/rpmbuild/SOURCES/icecast-2.3.3.tar.gz
+ /bin/tar -xf -
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ cd icecast-2.3.3
+ /bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ exit 0
Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.ynm7H7
+ umask 022
+ cd /root/rpmbuild/BUILD
+ cd icecast-2.3.3
+ CFLAGS='-O2 -g'
+ ./configure --prefix=/usr --mandir=/usr/share/man --sysconfdir=/etc
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking for gcc... gcc
..
..
..
Wrote: /root/rpmbuild/SRPMS/icecast-2.3.3-0.src.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/icecast-2.3.3-0.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.dzahrv
+ umask 022
+ cd /root/rpmbuild/BUILD
+ cd icecast-2.3.3
+ '[' /root/rpmbuild/BUILDROOT/icecast-2.3.3-0.x86_64 '!=' / ']'
+ rm -rf /root/rpmbuild/BUILDROOT/icecast-2.3.3-0.x86_64
+ exit 0
Note: If you are using SuSE Linux and rpmbuild is not available, try using “rpm -ba” to build the rpm package.
During the above rpmbuild run, you might notice the following error messages:
Error 1: XSLT configuration could not be found
checking for xslt-config... no
configure: error: XSLT configuration could not be found
error: Bad exit status from /var/tmp/rpm-tmp.8J0ynG (%build)
RPM build errors:
    Bad exit status from /var/tmp/rpm-tmp.8J0ynG (%build)
Solution 1: Install libxslt-devel
For xslt-config, you need to install the libxslt-devel package as shown below.
yum install libxslt-devel
This will also install the following dependencies:
  • libgcrypt
  • libgcrypt-devel
  • libgpg-error-devel
Error 2: libvorbis Error
checking for libvorbis... configure: error: must have Ogg Vorbis v1.0 or above installed
error: Bad exit status from /var/tmp/rpm-tmp.m4Gk3f (%build)
Solution 2: Install libvorbis-devel
For Ogg Vorbis v1.0 or above, install the libvorbis-devel package as shown below:
yum install libvorbis-devel
This will also install the following dependencies:
  • libogg
  • libogg-devel
  • libvorbis

6. Verify the Source and Binary RPM Files

Once the rpmbuild completes, you can verify that the source RPM and binary RPM were created in the directories shown below.
# ls -l /root/rpmbuild/SRPMS/
-rw-r--r-- 1 root root 1162483 Aug 25 15:46 icecast-2.3.3-0.src.rpm

# ls -l /root/rpmbuild/RPMS/x86_64/
-rw-r--r--. 1 root root 349181 Feb  4 12:54 icecast-2.3.3-0.x86_64.rpm
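Before installing, you can also list the contents of the binary package to confirm that they match the %files section of the spec (rpm -qpl queries the file list of an uninstalled package file):

# rpm -qpl /root/rpmbuild/RPMS/x86_64/icecast-2.3.3-0.x86_64.rpm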

7. Install the RPM File to Verify

As a final step, you can install the binary RPM to verify that it installs successfully and that all the dependencies are resolved.
# rpm -ivvh /root/rpmbuild/RPMS/x86_64/icecast-2.3.3-0.x86_64.rpm
D: ============== /root/rpmbuild/RPMS/x86_64/icecast-2.3.3-0.x86_64.rpm
D: loading keyring from pubkeys in /var/lib/rpm/pubkeys/*.key
D: couldn't find any keys in /var/lib/rpm/pubkeys/*.key
D: loading keyring from rpmdb
D: opening  db environment /var/lib/rpm cdb:mpool:joinenv
D: opening  db index       /var/lib/rpm/Packages rdonly mode=0x0
D:  read h#     210 Header sanity check: OK
D: added key gpg-pubkey-c105b9de-4e0fd3a3 to keyring
D: Using legacy gpg-pubkey(s) from rpmdb
D: Expected size:       349181 = lead(96)+sigs(180)+pad(4)+data(348901)
D:   Actual size:       349181
D: ========== relocations
D:      added binary package [0]
D: found 0 source and 1 binary packages
D: ========== +++ icecast-2.3.3-0 x86_64/linux 0x2
..
..
After the above installation, you can verify that your custom-built RPM file was installed successfully, as shown below.
# rpm -qa icecast
icecast-2.3.3-0.x86_64
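If this was only a test installation, you can remove the package again and confirm it is gone:

# rpm -e icecast
# rpm -qa icecast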

Setup and Configure an FTP Server in IIS

The first thing you’ll need to set up your own FTP server in Windows is to make sure you have Internet Information Services (IIS) installed. Remember, IIS only comes with the Pro, Professional, Ultimate, or Enterprise versions of Windows.
In Windows Vista and earlier, click on Start, Control Panel and go to Add/Remove Programs. Then click on Add/Remove Windows Components. For Windows 7 and higher, click on Programs and Features from Control Panel and then click on Turn Windows features on or off.
add remove programs
turn features off
In the components wizard, scroll down until you see IIS in the list and check it. Before you click Next, though, make sure you click on Details and then check File Transfer Protocol (FTP) Service.
iis
file transfer protocol
For Windows 7 and up, go ahead and click on the box next to Internet Information Services and FTP Server. You also need to make sure you check the Web Management Tools box otherwise you won’t be able to manage IIS from Administrative Tools later on. For FTP, you need to check the FTP Service box otherwise you won’t have the option to create an FTP server.
iis install
Click OK and then click Next. Windows will go ahead and install the necessary IIS files along with the FTP service. You may be asked to insert your Windows XP or Windows Vista disc at this point. You shouldn’t need a disc for Windows 7 or higher.

Setup and configure IIS for FTP

Once IIS has been installed, you may have to restart your computer. Now we want to go ahead and open the IIS configuration panel to set up the FTP server. So go to Start, then Control Panel and click on Administrative Tools. You should now see an icon for Internet Information Services.
admin tools iis
When you open IIS in Vista or earlier for the first time, you’ll only see your computer name in the left-hand menu. Go ahead and click the + symbol next to the computer name and you’ll see a couple of options like Web Sites, FTP Sites, etc. We’re interested in FTP Sites, so expand that out also. You should see Default FTP Site; click on it.
ftp site
You’ll notice after you click on the default FTP site that there are a couple of buttons at the top that look like VCR buttons: Play, Stop, and Pause. If the Play button is greyed out, that means the FTP server is active. Your FTP server is now up and running! You can actually connect to it via your FTP client software. I use SmartFTP, but you can use whatever you like best.
For Windows 7 and higher, you’ll see a different look to IIS. Firstly, there is no play button or anything like that. Also, you’ll see a bunch of configuration options right on the home screen for authentication, SSL settings, directory browsing, etc.
ftp config iis
To start the FTP server here, you have to right-click on Sites and then choose Add FTP Site.
add ftp site
This opens the FTP wizard where you start by giving your FTP site a name and choosing the physical location for the files.
new ftp site
Next, you have to configure the bindings and SSL. Bindings are basically what IP addresses you want the FTP site to use. You can leave it at All Unassigned if you don’t plan on running any other website. Keep the Start FTP site automatically box checked and choose No SSL unless you understand certificates.
bindings and ssl
Lastly, you have to setup authentication and authorization. You have to choose whether you want Anonymous or Basic authentication or both. For authorization, you choose from All Users, Anonymous users or specific users.
iis authentication
You can actually access the FTP server locally by opening Explorer and typing in ftp://localhost. If all worked well, you should see the folder load with no errors.
ftp localhost
If you have an FTP program, you can do the same thing. Open the FTP client software and type in localhost as the host name and choose Anonymous for the login. Connect and you should now see the folder.
localhost
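Windows also ships a command-line FTP client, so you can run the same anonymous check from a command prompt (ftp.exe is built into Windows; log in as “anonymous” when it prompts for a user name):

C:\> ftp localhost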
OK, so now we’ve got the site up and running! Now where do you drop the data you want to share? In IIS, the default FTP site is actually located in C:\Inetpub\ftproot. You can dump data in there, but what if you already have data located somewhere else and don’t want to move it to inetpub?
In Windows 7 and higher, you can pick any location you want via the wizard, but it’s still only one folder. If you want to add more folders to the FTP site, you have to add virtual directories. For now, just open the ftproot directory and dump some files into it.
ftp root directory
Now refresh your FTP client and you should now see your files listed! So you now have an up and running FTP server on your local computer. So how would you connect from another computer on the local network?
In order to do this, you’ll have to open up the Windows Firewall to allow FTP connections to your computer; otherwise all external computers will be blocked. You can do this by going to Start, Control Panel, clicking on Windows Firewall and then clicking on the Advanced Tab.
windows firewall
Under the Network Connection Settings section, make sure all of the connections are checked in the left list and then click on the Settings button. You’ll now be able to open certain ports on your computer based on the service your computer is providing. Since we are hosting our own FTP server, we want to check off FTP Server.
ftp services
A little popup window will appear with some settings that you can change, just leave it as it is and click OK. Click OK again at the main Windows Firewall window.
In Windows 7 and higher, the process is different for opening the firewall port. Open Windows Firewall from the Control Panel and then click on Advanced Settings on the left hand side. Then click on Inbound Rules and scroll down till you see FTP Server (FTP Traffic-In), right click on it and choose Enable Rule.
firewall ftp rule
Then click on Outbound Rules and do the same thing for FTP Server (FTP Traffic-Out). You have now opened up the firewall for FTP connections. Now try to connect to your FTP site from a different computer on your network. You’ll need to get the IP address of the computer first before you can connect into it from a different computer.
Go to Start, click Run and type in CMD. Type IPCONFIG and jot down the number for IP Address:
ip address
In your FTP client on the other computer, type in the IP address you just wrote down and connect anonymously. You should now be able to see all of your files, just like you did with the FTP client on the local computer. Again, you can also go to Explorer and just type in ftp://ipaddress to connect.
Now that the FTP site is working, you can add as many folders as you like for FTP purposes. In this way, when a user connects, they specify a path that will connect to one specific folder.
Back in IIS, right click on Default FTP Site and choose New, and then Virtual Directory.
virtual directory
In Windows 7, you right-click on the site name and choose Add Virtual Directory.
add virtual directory
When you create a virtual directory in IIS, you’re basically creating an alias that points to a folder on the local hard drive. So in the wizard, the first thing you’ll be asked for is an alias name. Make it something simple and useful like “WordDocs” or “FreeMovies”, etc.
virtual directory alias
Click Next and now browse to the path where you want the alias to refer to. So if you have a bunch of movies you want to share, browse to that folder.
ftp server
Click Next and choose whether you want it as Read access only or Read and Write access. If you simply want to share files, check Read. If you want people to be able to upload files to your computer, choose Read and Write.
read write
Click Next and then click Finish! Now you’ll see your new virtual directory below the default FTP site. In Windows 7 and up, the process is reduced to one dialog shown below:
add virtual folder
You can connect to it using your FTP client by putting “/Test” or “/NameOfFolder” in the Path field. In Explorer, you would just type in ftp://ipaddress/aliasname.
ftp connection
Now you’ll only see the files that are in the folder that we created the alias for.
anonymous