Tuesday, July 16, 2013

What is NoSQL?

Here is a nice video by Martin Fowler on `Introduction to NoSQL`. At the end he talks about polyglot databases. It's not as if NoSQL is going to replace RDBMSs and RDBMSs will vanish forever; NoSQL and RDBMSs each have their own space, meet different requirements, and will coexist. Fowler explains it very succinctly, and his talk is also fun to watch.




It's interesting to note that he doesn't mention Spanner anywhere. For those too impatient to read the Google paper on Spanner, here are some articles on Spanner (1, 2) and a video (1).

I am a firm believer that a clear understanding of the core concepts is essential before diving into a new technology. NoSQL Distilled: A Brief Guide to the Emerging World of Polyglot Persistence is a very nice book for those getting started with NoSQL. I am also starting a new page with links on NoSQL and will keep it updated as I come across interesting information.

 

Monday, July 15, 2013

CDH3 Vs CDH4

Starting in CDH 4.2, YARN/MapReduce 2 (MR2) includes an even more powerful Fair Scheduler. In addition to doing nearly all that it could do in MapReduce 1 (MR1), the YARN Fair Scheduler can schedule non-MapReduce jobs, schedule based on fine-grained memory instead of slots, and support hierarchical queues. In this post, you’ll learn what the Fair Scheduler’s role is and how it fulfills it, what it means to be a YARN “scheduler,” and dive into its new features and how to get them running on your cluster.

YARN/MR2 vs. MR1

YARN uses updated terminology to reflect that it no longer manages resources just for MapReduce. From YARN's perspective, a MapReduce job is an application. YARN schedules containers for map and reduce tasks to live in. What were referred to as pools in the MR1 Fair Scheduler are now called queues, for consistency with the Capacity Scheduler. An excellent and deeper explanation is available here.

How Does it Work?

How a Hadoop scheduler functions can often be confusing, so we’ll start with a short overview of what the Fair Scheduler does and how it works.
A Hadoop scheduler is responsible for deciding which tasks get to run where and when to run them. The Fair Scheduler, originally developed at Facebook, seeks to promote fairness between schedulable entities by awarding free space to those that are the most underserved. (Cloudera recommends the Fair Scheduler for its wide set of features and ease of use, and Cloudera Manager sets it as the default. More than 95% of Cloudera’s customers use it.)

In Hadoop, the scheduler is a pluggable piece of code that lives inside the ResourceManager (the JobTracker, in MR1), the central execution-managing service. The ResourceManager constantly receives updates from the NodeManagers that sit on each node in the cluster, which say, "What's up, here are all the tasks I was running that just completed, do you have any work for me?" The ResourceManager passes these updates to the scheduler, and the scheduler then decides what new tasks, if any, to assign to that node.
How does the scheduler decide? For the Fair Scheduler, it’s simple: every application belongs to a “queue”, and we give a container to the queue that has the fewest resources allocated to it right now. Within that queue, we offer it to the application that has the fewest resources allocated to it right now. The Fair Scheduler supports a number of features that modify this a little, like weights on queues, minimum shares, maximum shares, and FIFO policy within queues, but the basic idea remains the same.
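
The queues, weights, and minimum shares mentioned above are declared in the Fair Scheduler's allocation file (its location is set by the yarn.scheduler.fair.allocation.file property). Here is a rough sketch only: the queue names are invented, /etc/hadoop/conf/ is just the usual CDH config directory, and the exact element syntax varies between Hadoop/CDH releases, so check the Fair Scheduler documentation for your version.

$vi /etc/hadoop/conf/fair-scheduler.xml

<allocations>
  <!-- production gets twice the share of adhoc when both queues are busy -->
  <queue name="production">
    <weight>2.0</weight>
    <minResources>4096 mb</minResources>
  </queue>
  <queue name="adhoc">
    <weight>1.0</weight>
  </queue>
</allocations>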

Beyond MapReduce

In MR1, the Fair Scheduler was purely a MapReduce scheduler. If you wanted to run multiple parallel computation frameworks on the same cluster, you would have to statically partition resources — or cross your fingers and hope that the resources given to a MapReduce job wouldn’t also be given to something else by that framework’s scheduler, causing OSes to thrash. With YARN, the same scheduler can manage resources for different applications on the same cluster, which should allow for more multi-tenancy and a richer, more diverse Hadoop ecosystem.

Scheduling Resources, Not Slots

A big change in the YARN Fair Scheduler is how it defines a “resource”. In MR1, the basic unit of scheduling was the “slot”, an abstraction of a space for a task on a machine in the cluster. Because YARN expects to schedule jobs with heterogeneous task resource requests, it instead allows containers to request variable amounts of memory and schedules based on those. Cluster resources no longer need to be partitioned into map and reduce slots, meaning that a large job can use all the resources in the cluster in its map phase and then do so again in its reduce phase. This allows for better utilization of the cluster, better treatment of tasks with high resource requests, and more portability of jobs between clusters — a developer no longer needs to worry about a slot meaning different things on different clusters; rather, they can request concrete resources to satisfy their jobs’ needs. Additionally, work is being done (YARN-326) that will allow the Fair Scheduler to schedule based on CPU requirements and availability as well.
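
As a concrete illustration of memory-based requests, an MR2 job can ask for differently sized map and reduce containers through the standard per-task memory properties in mapred-site.xml (the values below are arbitrary examples, not recommendations):

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>

The scheduler then hands out containers of those sizes instead of fixed map and reduce slots.
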
An implementation detail of this change that prevents applications from starving under this new flexibility is the notion of reserved containers. Imagine two jobs are running, each with enough tasks to saturate more than the entire cluster. One job wants each of its mappers to get 1GB, and the other wants its mappers to get 2GB. Suppose the first job starts and fills up the entire cluster. Whenever one of its tasks finishes, it will leave open a 1GB slot. Even though the second job deserves the space, a naive policy will give it to the first one, because that is the only job with tasks that fit. This could cause the second job to be starved indefinitely.
To prevent this unfortunate situation, when space on a node is offered to an application that cannot immediately use it, the application reserves that space, and no other application can be allocated a container on that node until the reservation is fulfilled. Each node may have only one reserved container. The total reserved memory is reported in the ResourceManager UI; a high number means it may take longer for new jobs to get space.
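
If you want to try the Fair Scheduler on your own YARN cluster (Cloudera Manager can configure this for you; the snippet below is just a minimal manual sketch), the ResourceManager is pointed at it in yarn-site.xml:

<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>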

Hadoop single-node cluster with ease

***CentOS 6.4 (32-bit/64-bit), configuring CDH4***
Single-node cluster:

* Install CentOS from the ISO image in a VM (e.g., VMware Workstation).

* Download Java 1.6 (the JDK) for your CentOS architecture, 32-bit or 64-bit

e.g.: 32-bit
          jdk-6u43-linux-i586-rpm.bin

* Give executable permission to jdk-6u43-linux-i586-rpm.bin

$chmod 755 jdk-6u43-linux-i586-rpm.bin

* Install Java as the root user (from the directory where the installer was downloaded)

#./jdk-6u43-linux-i586-rpm.bin

* Export JAVA_HOME
#export JAVA_HOME=/usr/java/jdk1.6.0_43
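
To confirm the installation, check the version from the same shell (the JDK RPM usually links java into the default PATH as well, but going through $JAVA_HOME avoids any ambiguity):

#$JAVA_HOME/bin/java -version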

* Download the required Hadoop version
e.g.: hadoop-1.0.3.tar.gz

* Switch to the other (non-root) user   [ To add a user:
                                               switch to the root user and run
                                               #adduser <username>, then set its password with #passwd <username> ]

#su <username>
password:<password>

* Extract the Hadoop tarball
$tar -zxvf hadoop-1.0.3.tar.gz

* Make Hadoop recognize Java
$cd <hadoop installed location>/conf
$vi hadoop-env.sh
and add the following line:
export JAVA_HOME=/usr/java/jdk1.6.0_43     # the directory where Java was installed
save and quit

* Configure HADOOP_HOME to point to the Hadoop installation directory and export it (done in .bashrc below)

* Go to your home directory
$cd ~

Open the .bashrc file in vi and add the following lines:
export HADOOP_HOME=<hadoop installed location>
export PATH=$PATH:$HADOOP_HOME/bin
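
Reload the shell configuration and do a quick sanity check that the hadoop command now resolves from anywhere:

$source ~/.bashrc
$hadoop version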

Note:
* Add the user to the sudoers file:
switch to the root user, open the /etc/sudoers file, and add the following line
<username> ALL=(ALL)    NOPASSWD:ALL

*Make update-alternatives work
Add the following line to the .bashrc file in your home directory:

export PATH=$PATH:/sbin:/usr/sbin:/usr/local/sbin

*Make jps work

Go to your home directory: $cd ~

Open the .bashrc file and add the following line:

export PATH=$PATH:/usr/java/jdk1.6.0_43/bin

*Set all the configurations

$vi /home/training/hadoop/hadoop-1.0.3/conf/core-site.xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:8020</value>
</property>

$vi /home/training/hadoop/hadoop-1.0.3/conf/mapred-site.xml

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:8021</value>
</property>

$vi /home/training/hadoop/hadoop-1.0.3/conf/hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
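
Note that each <property> block above goes inside the <configuration> element of its file; a complete core-site.xml, for example, ends up looking like this:

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>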

*Format the NameNode [format it only when you set up the cluster for the first time]

$hadoop namenode -format

*Start all the services

$/home/training/hadoop/hadoop-1.0.3/bin/start-all.sh
                                      or
    You can also run start-all.sh directly from anywhere, since $HADOOP_HOME/bin is on your PATH.

*Open a browser and check whether the services started or not

        http://localhost:50070 (NameNode)         or                  http://localhost:50030 (JobTracker)
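
You can also confirm from the command line that all five daemons came up. The process IDs below are just placeholders; what matters is seeing NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker listed:

$jps
2481 NameNode
2565 DataNode
2692 SecondaryNameNode
2770 JobTracker
2854 TaskTracker
2931 Jps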


*Installing Eclipse on the node

change to the root user and do the following (commands below run from the # prompt)

download eclipse
   eg.: eclipse-java-europa-winter-linux-gtk.tar.gz

create a directory named eclipse under /home/training

copy the downloaded file into the eclipse directory and untar it

tar -zxvf eclipse-java-europa-winter-linux-gtk.tar.gz

*Change permissions on the eclipse directory

chmod -R +r /home/training/eclipse

*Create an Eclipse executable on the /usr/bin path

touch /usr/bin/eclipse

chmod 755 /usr/bin/eclipse


## Open eclipse file with your favourite editor ##
nano -w /usr/bin/eclipse

## Paste following content to file ##
#!/bin/sh
export ECLIPSE_HOME="/home/training/eclipse"

$ECLIPSE_HOME/eclipse $*


*Bring eclipse icon on desktop

## Create following file, with our favourite editor ##
/usr/share/applications/eclipse.desktop

## Add following content to file and save ##
[Desktop Entry]
Encoding=UTF-8
Name=Eclipse
Comment=Eclipse SDK 4.2.1
Exec=eclipse
Icon=/home/training/eclipse/icon.xpm
Terminal=false
Type=Application
Categories=GNOME;Application;Development;
StartupNotify=true

After successful installation, go to Applications -> Programming -> Eclipse, right-click, and choose "Add this launcher to desktop".

Launch Eclipse by double-clicking the Eclipse icon on the desktop.

Click New Project --> select MapReduce Project --> click Configure Hadoop install directory and give <hadoop install location>.

Thursday, July 11, 2013

Hadoop use cases

Use Cases

From time to time we come across interesting scenarios and use cases for Big Data that have a direct positive impact (weather forecasting) or a negative impact (user surveillance) on our lives. Here are some of the ones we found too interesting not to share.

If you are involved in any interesting Big Data scenarios, please let me know and I will add them to this page. You can get my email from the `About Me` section of this blog.

Infrastructure

Managing sewage like traffic thanks to data - http://gigaom.com/data/managing-sewage-like-traffic-thanks-to-data/

Media

How India’s favorite TV show uses data to change the world - http://gigaom.com/cloud/how-indias-favorite-tv-show-uses-data-to-change-the-world/

Process a Million Songs with Apache Pig - http://www.cloudera.com/blog/2012/08/process-a-million-songs-with-apache-pig/

Health Care

Neural Network for Breast Cancer Data Built on Google App Engine - http://googleappengine.blogspot.in/2012/08/neural-network-for-breast-cancer-data.html

Processing Rat Brain Neuronal Signals Using A Hadoop Computing Cluster – Part I - http://www.cloudera.com/blog/2012/07/processing-rat-brain-neuronal-signals-using-a-hadoop-computing-cluster-part-i/

Big Data in Genomics and Cancer Treatment - http://hortonworks.com/blog/big-data-in-genomics-and-cancer-treatment/

Big data is the next big thing in health IT - http://radar.oreilly.com/2012/02/health-it-big-data.html

Big data and DNA: What business can learn from junk genes - http://gigaom.com/cloud/big-data-and-dna-what-business-can-learn-from-junk-genes/

UC Irvine Medical Center: Improving Quality of Care with Apache Hadoop - http://hortonworks.com/blog/improving-quality-of-care-with-apache-hadoop-at-uc-irvine-medical-center/

Lessons from Anime and Big Data (Ghost in the Shell) - http://hortonworks.com/blog/lessons-from-anime-and-big-data-ghost-in-the-shell/

6 Big Data Analytics Use Cases for Healthcare IT - http://www.cio.com.au/article/459879/6_big_data_analytics_use_cases_healthcare_it/ 

IT Infrastructure

Hadoop for Archiving Email
 - http://www.cloudera.com/blog/2011/09/hadoop-for-archiving-email/
 - http://www.cloudera.com/blog/2012/01/hadoop-for-archiving-email-part-2/

The Data Lifecycle, Part One: Avroizing the Enron Emails
 - http://hortonworks.com/blog/the-data-lifecycle-part-one-avroizing-the-enron-emails/
 - http://hortonworks.com/blog/the-data-lifecycle-part-two-mining-avros-with-pig-consuming-data-with-hive

Fraud Detection & Crime

Using Hadoop for Fraud Detection and Prevention - http://www.cloudera.com/blog/2010/08/hadoop-for-fraud-detection-and-prevention/

Big data thwarts fraud - http://radar.oreilly.com/2011/02/big-data-fraud-protection-payment.html

Hadoop : Your Partner in Crime - http://hortonworks.com/blog/hadoop-your-partner-in-crime/

Ad Platform

Why Europe’s Largest Ad Targeting Platform Uses Hadoop - http://www.cloudera.com/blog/2010/03/why-europes-largest-ad-targeting-platform-uses-hadoop/

Travel

Big data is stealth travel site's secret weapon - http://gigaom.com/cloud/hopper-travel-search/

Education

Big Data in Education
- http://hortonworks.com/blog/data-in-education-part-i/
- http://hortonworks.com/blog/big-data-in-education-part-2-of-2/

Network Security

Introducing Packetpig

- http://hortonworks.com/blog/big-data-security-part-one-introducing-packetpig/
- http://hortonworks.com/blog/big-data-security-part-two-introduction-to-packetpig/

Enterprise and security

- http://gigaom.com/data/6-ways-big-data-is-helping-reinvent-enterprise-security/

Others

Adopting Apache Hadoop in the Federal Government - http://www.cloudera.com/blog/2011/04/adopting-apache-hadoop-in-the-federal-government/

10 ways big data is changing everything - http://gigaom.com/2012/03/11/10-ways-big-data-is-changing-everything/

Biodiversity Indexing: Migration from MySQL to Hadoop - http://www.cloudera.com/blog/2011/06/biodiversity-indexing-migration-from-mysql-to-hadoop/

The ecosystem hub

Ecosystem

The Big Data ecosystem is evolving at a very rapid pace and it's difficult to keep track of the changes. The ecosystem provides a lot of choices (open source vs proprietary, free vs commercial, batch vs streaming). For a newbie, it not only takes a good amount of time and effort to get familiar with a framework, but it's also perplexing to figure out where to start.

Hadoop has gotten a lot of attention and many start with Hadoop, but Hadoop is not the solution for everything. Take graph processing: Hama and Giraph (though still incubating) are better suited for it than Hadoop. This page attempts to give an idea of the ecosystem around Big Data.

Here are some of the useful articles/blogs to get started with the Hadoop ecosystem.
 
Sqoop

HBase

Giraph

Oozie

Flume

Pig

Monday, July 8, 2013

The Hadoop questions hub

1. Where is your client located in the Hadoop network?

2. What is the difference between NameNode federation and NameNode high availability?

3. What is Elastic MapReduce in Hadoop?

4. List the differences between Hive and Pig. Where would you use which?

5. How is a TaskTracker failure notified?

Saturday, July 6, 2013

The Hadoop question hub

1. Can't I use VSAM for distributed storage and process it in parallel to attain the features of Hadoop?

2. What is the internal algorithm of HDFS?

3. How can you scale Hadoop's efficiency in data processing?

4. Compare and contrast SAP HANA and Hadoop.

5. Can I apply filters while uploading data into Hadoop?

6. List the scenarios in which your jobs failed.