





HIVE in Apache Hadoop

0 votes
104 views

Introduction:-

Hive is a data warehouse infrastructure tool for processing structured data (relational data, usually text) in Hadoop. Hive runs on top of Apache Hadoop to summarize large amounts of data (Big Data), and it executes most of its operations as MapReduce jobs.
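To see what "executes its operations as MapReduce jobs" means conceptually, here is a rough sketch of the map, shuffle, and reduce phases in plain Python (a toy word count, not Hive's actual engine):

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in the input split
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by their key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts collected for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big", "data warehouse"]
result = reduce_phase(shuffle(map_phase(lines)))
print(result)  # {'big': 2, 'data': 2, 'warehouse': 1}
```

Hive generates jobs of this shape automatically from HiveQL queries, so you never write the map and reduce functions by hand.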

Prerequisites:-

  • Core Java (basic knowledge)
  • MySQL (database)
  • Hadoop (Hadoop Distributed File System)
  • Linux (operating system)

First of all, format the NameNode using the Hadoop command:

  • $ hadoop namenode -format

Then start all the Hadoop daemons:

  • $ start-all.sh

Once this command completes, check whether all the daemons have started using:

  • $ jps

Now start Hive using the command:

  • $ hive

Hive then starts up, and you can fire all your queries in the Hive shell: you can create databases, tables, and external tables, and load data into tables.

Creating a database. The database holds all of our tables, whose contents are saved in the Hive warehouse directory:

  • create database Prwatech;

Creating a table. A table has a schema: a set of uniquely named, typed columns that describe each field of the data:

  • create table txnrecords(txnno INT, txndate STRING, custno INT, amount DOUBLE, category STRING, product STRING, city STRING, state STRING, spendby STRING) row format delimited fields terminated by ',' stored as textfile;
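The nine comma-separated columns above map onto a line of the input file position by position. This can be illustrated in plain Python with a made-up sample row (the real data lives in txns1.txt):

```python
# Column names taken from the CREATE TABLE statement above
columns = ["txnno", "txndate", "custno", "amount", "category",
           "product", "city", "state", "spendby"]

# A hypothetical row in the comma-delimited format the table expects
sample_line = "1,06-26-2011,4007024,40.33,Exercise,Weightlifting,Austin,Texas,credit"

# Split on the field terminator (',') and pair values with column names
record = dict(zip(columns, sample_line.split(",")))
print(record["category"], record["amount"])  # Exercise 40.33
```

In Hive, `row format delimited fields terminated by ','` performs exactly this positional split when the table is read.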

Loading data into the table. The comma-delimited file matching the table we created is loaded into it (note that LOAD DATA INPATH takes an HDFS path and moves the file into the table's warehouse directory):

  • LOAD DATA INPATH '/txns1.txt' OVERWRITE INTO TABLE txnrecords;
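Once the data is loaded, a typical Hive aggregation such as `SELECT category, SUM(amount) FROM txnrecords GROUP BY category` can be mimicked over a few made-up rows in plain Python (a sketch of what the query computes, not Hive itself):

```python
from collections import defaultdict

# Hypothetical (category, amount) pairs standing in for rows of txnrecords
rows = [("Exercise", 40.25), ("Team Sports", 20.00), ("Exercise", 9.75)]

# GROUP BY category, summing the amount column per group
totals = defaultdict(float)
for category, amount in rows:
    totals[category] += amount

print(dict(totals))  # {'Exercise': 50.0, 'Team Sports': 20.0}
```

Hive would compile this GROUP BY into a MapReduce job, with the category as the shuffle key and the summation done in the reducers.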
posted Feb 27 by Nitesh Kumar


Related Articles
0 votes

This article informs online readers about Hadoop technology, which can help them manage and store Big Data. Today most data management platforms use this technique to upload and store sensitive data, so the problem of managing online information can be solved with ease. The technique is also very cost-effective, so every web-based company can reap its benefits.

Introduction

Yes! If you want to do Big Data management, then Hadoop technology can help you well. Every day, millions of images, photos, and videos are uploaded to the internet on social networking sites like YouTube and Facebook. You may wonder where all this content is stored after so many years of use. With the growth in the number of social media websites, where to store uploaded data has become a serious issue. For this reason, social networking sites like Twitter, Facebook, and YouTube make use of Hadoop for efficient data management.

Know the History of Hadoop

The history of Hadoop is unique in itself. The technology was introduced to computer and internet giants in 2005, and it took its name from a toy elephant belonging to the son of its inventor. It is now managed by the Apache Software Foundation. By 2008, Hadoop was in use at Yahoo and at other web-based companies such as the New York Times and Facebook. The Hadoop architecture is built around a common file management system (HDFS) and an engine named MapReduce. These components help internet servers and programmers store online data in a safe way.

Is the Demand for Hadoop Growing?

You may ask us: is the demand for Hadoop growing in the internet world? Our answer is yes. Today the Hadoop market has grown to a value of more than $2 billion, and international organizations estimate that it may grow further through 2020. As Hadoop's use grows, companies like Amazon and Hortonworks may also reap the benefits. For this reason the platform is available today at a low and affordable cost. It has been predicted that in the coming years many more companies will adopt this technology to store data on big internet servers.

What Are the Features of Hadoop Servers?

Hadoop servers handle failures automatically. They can also replicate additional copies of the stored data whenever needed. You may be amazed to know how much progress the data management industry has made: retail companies, government agencies, software firms, and hospitals all use Hadoop servers to store sensitive data.


...