Category Archives: apache-pig

How to Query Data Associated With Minimum/Maximum in Pig

I'm looking for the coldest hour for each day. My data looks like this:


I grouped on each day to find the Min Temp with this code:

minTemps = FOREACH gdate2 GENERATE group AS day, MIN(removeDash.temp) AS minTemp;

which gives this output:


My problem: I also need the hour at which the minimum temperature occurred. How can I get the hour as well?
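One common pattern is a nested FOREACH that sorts each group by temperature and keeps only the first row, so the hour column travels with the minimum. A sketch, assuming the grouped relation's inner bag (here called removeDash, per the MIN expression above) has hour and temp fields:

```pig
-- For each day, order the group's rows by temperature and keep the first,
-- so the hour stays attached to the minimum temperature.
coldest = FOREACH gdate2 {
    sorted = ORDER removeDash BY temp ASC;
    top1   = LIMIT sorted 1;
    GENERATE group AS day, FLATTEN(top1.hour) AS hour, FLATTEN(top1.temp) AS minTemp;
};
```

Since top1 holds exactly one tuple, the two FLATTENs produce a single row per day.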

Unit Testing Pig Script with streaming to a python script

I have a Pig script and I am using org.apache.pig.pigunit.PigTest. In the Pig script I am streaming data through two scripts (a Python script and an awk script). This works in my functional tests and when I run it manually, but it does not work in my unit test.

These are the function definitions:


This gives me the error:

org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias bid_recommendation_model_1. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING

If I remove the SHIP statement (which I can have my unit tests do for testing), I instead get "unable to open iterator," but with no explanation.

Basically, all the Pig script does is load the data, stream it through those scripts, and store the union, e.g.:

py_output = STREAM input THROUGH PY as (...);
awk_output = STREAM input THROUGH AWK as (...);
result = UNION py_output, awk_output;
STORE result INTO ...;
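For context, the omitted DEFINEs presumably look something like the sketch below; the command lines and file names here are assumptions, not the actual definitions. Note that PigUnit runs in local mode, so shipped files must be resolvable on the local filesystem:

```pig
-- Hypothetical DEFINEs for the two streaming commands (names are assumed).
-- SHIP copies the scripts to the task nodes; under PigUnit's local mode
-- the relative paths must exist in the test's working directory.
DEFINE PY  `python transform.py`  SHIP('transform.py');
DEFINE AWK `awk -f transform.awk` SHIP('transform.awk');
```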

Apache Pig – Determining and loading the latest dataset in a directory

I have an hdfs location with several timestamped directories and I need my pig script to pick up the latest one. For example

/projects/ABC/dailydata/20170203/ #<---- pig should pick this one

What I've tried and got working is below, but I'm wondering if there's a cleaner way to get the latest timestamp:

sh hdfs dfs -ls /projects/ABC/dailydata/ | tail -1
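One slightly more robust variant is to sort the listing explicitly instead of relying on the output order of ls, and capture the result in a preprocessor parameter. A sketch, assuming the directory names are all zero-padded yyyyMMdd timestamps (so lexicographic sort matches chronological order):

```pig
-- Take the last path field of each listing line, sort, keep the newest
-- (assumes zero-padded yyyyMMdd directory names).
%declare LATEST `hdfs dfs -ls /projects/ABC/dailydata/ | awk '{print $NF}' | sort | tail -1`

data = LOAD '$LATEST' USING PigStorage();
```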


Parsing JSON input in Pig with the Elephant Bird JsonLoader

I'm trying to parse the input below (there are 2 records in this input) using the Elephant Bird JSON loader.

[{"node_disk_lnum_1":36,"node_disk_xfers_in_rate_sum":136.40000000000001,"node_disk_bytes_in_rate_22": 187392.0, "node_disk_lnum_7": 13}]

[{"node_disk_lnum_1": 36, "node_disk_xfers_in_rate_sum": 105.2,"node_disk_bytes_in_rate_22": 123084.8, "node_disk_lnum_7":13}]

Here is my syntax:

register '/home/data/Desktop/elephant-bird-pig-4.1.jar';

a = LOAD '/pig/tc1.log' USING 
com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad') as (json:map[]);

b = FOREACH a GENERATE flatten(json#'node_disk_lnum_1') AS node_disk_lnum_1,
    flatten(json#'node_disk_xfers_in_rate_sum') AS node_disk_xfers_in_rate_sum,
    flatten(json#'node_disk_bytes_in_rate_22') AS node_disk_bytes_in_rate_22,
    flatten(json#'node_disk_lnum_7') AS node_disk_lnum_7;


Result of describe b:

b: {node_disk_lnum_1: bytearray,node_disk_xfers_in_rate_sum: bytearray,node_disk_bytes_in_rate_22: bytearray,node_disk_lnum_7: bytearray}

c = FOREACH b GENERATE node_disk_lnum_1;


c: {node_disk_lnum_1: bytearray}


Expected Result:

36, 136.40000000000001, 187392.0, 13

36, 105.2, 123084.8, 13

It throws the error below:

2017-02-06 01:05:49,337 [main] INFO - Pig features used in the script: UNKNOWN
2017-02-06 01:05:49,386 [main] INFO - Key [pig.schematuple] was not set... will not generate code.
2017-02-06 01:05:49,387 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2017-02-06 01:05:49,390 [main] INFO org.apache.pig.newplan.logical.rules.ColumnPruneVisitor - Map key required for a: $0->[node_disk_lnum_1, node_disk_xfers_in_rate_sum, node_disk_bytes_in_rate_22, node_disk_lnum_7]

2017-02-06 01:05:49,395 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2017-02-06 01:05:49,398 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2017-02-06 01:05:49,398 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2017-02-06 01:05:49,425 [main] INFO - Pig script settings are added to the job
2017-02-06 01:05:49,426 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2017-02-06 01:05:49,428 [main] ERROR - ERROR 2998: Unhandled internal error. com/twitter/elephantbird/util/HadoopCompat

Please help; what am I missing?
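A NoClassDefFoundError-style message naming com/twitter/elephantbird/util/HadoopCompat usually means only the elephant-bird-pig jar was registered; that class lives in Elephant Bird's supporting artifacts. A sketch of registering them as well (the file names and paths here are assumptions based on the version above):

```pig
-- HadoopCompat lives outside elephant-bird-pig, so register the
-- supporting jars too (paths/versions below are assumptions).
register '/home/data/Desktop/elephant-bird-core-4.1.jar';
register '/home/data/Desktop/elephant-bird-hadoop-compat-4.1.jar';
register '/home/data/Desktop/elephant-bird-pig-4.1.jar';
```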

How to find whether a column with lots of words has a genuine email id or not in Apache Pig?

I have a column that contains paragraphs of text. It has 10,000 rows, and I need to find which rows contain a genuine email address. I've used columnname matches '(.)@(.).(.*)', which also gave me outputs like '@nelson' and '.... @kumar...' that I don't need. I only need genuine email addresses. Please let me know how to find these in such huge paragraphs using Apache Pig.

Thanks :)
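A tighter pattern helps here: require at least one valid local-part character before the '@', a domain, and a multi-letter TLD. A sketch, with assumed relation and field names, and a deliberately simplified (not RFC-complete) address pattern:

```pig
-- Keep only rows whose text contains a plausible email address:
-- word chars (plus . _ % + -) before '@', a domain, a dot, a 2+ letter TLD.
-- MATCHES must cover the whole string, hence the leading/trailing '.*'.
emails = FILTER paragraphs BY (text MATCHES '.*[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}.*');
```

This rejects fragments like '@nelson' because the local part, domain, and TLD are all mandatory.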

Which file format is available on Apache Pig? [duplicate]


I'm new to Apache Pig.

I'm not sure which input file formats are available in Pig.

For example, Parquet, Text, Avro, RCFile and SequenceFile are available on Impala. (See: How Impala Works with Hadoop File Formats)

I guess text files are okay, because the data-loading example uses a .log file. (See: Getting Started.) I also found the AvroStorage page, so Avro is available.

And then, what about Parquet, RCFile, SequenceFile, and more? Or am I mistaken about something?

Please advise me, thanks.
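For reference, loaders exist for most of these formats, though some live outside the built-ins. A sketch; the exact class names and required jar registrations depend on your Pig and library versions:

```pig
-- Text: built in.
a = LOAD 'data.log' USING PigStorage('\t');

-- Avro: AvroStorage is built in on recent Pig; older versions use piggybank.
b = LOAD 'data.avro' USING AvroStorage();

-- Parquet: via the parquet-pig jar (class name varies by version).
c = LOAD 'data.parquet' USING org.apache.parquet.pig.ParquetLoader();

-- SequenceFile: via piggybank.
d = LOAD 'data.seq' USING org.apache.pig.piggybank.storage.SequenceFileLoader();
```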

Difference Between COGROUP and GROUP in Apache Pig in the latest versions

I am not finding any difference between COGROUP and GROUP in the latest versions (I have setups from 0.11.0 to 0.16.0) of Apache Pig Latin.

I think that for readability, GROUP is used on one relation and COGROUP on two or more relations; otherwise GROUP could be used everywhere COGROUP is used.

Please let me know if there is a special use case where COGROUP should be used rather than GROUP.

Thanks --Murali
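Syntactically the two keywords are indeed interchangeable; the case GROUP cannot express on a single relation is aligning several relations on a key, where COGROUP produces one bag per input relation. A minimal sketch with made-up relation names:

```pig
-- COGROUP aligns two relations on a key, producing one bag per relation
-- for each key value (an outer-join-like shape, unlike a single GROUP).
owners = LOAD 'owners' AS (owner:chararray, pet:chararray);
pets   = LOAD 'pets'   AS (pet:chararray, weight:int);

both = COGROUP owners BY pet, pets BY pet;
-- Schema: group, {bag of owners tuples}, {bag of pets tuples}
DESCRIBE both;
```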

Pig Script to compute Min, Max of emp timings

I am new to Pig and the Hadoop world. The problem I have may be simple, but I am not able to proceed.

So I have the data below, which is basically swipe-in data for a day. I need to compute the total number of hours an employee spends in a day, i.e. the difference between the first in-time (the time he reaches the office) and the last out-time (the last swipe of the day), using Pig.

EmpID In_Time Out_Time
1     9:00     10:00
2     8:00     11:00
3     10:00    12:00
1     11:00    13:00
1     14:00    18:00
2     12:00    18:00
3     13:00    18:00

So I wrote the script below, but it doesn't seem to give the right result.

grunt> emprec = load '/emptime/emptime' using PigStorage() as (empid:int,in:chararray,out:chararray);
grunt> aggdata = group emprec by empid;
grunt> emptime = foreach aggdata generate group as empid, MIN(emprec.in), MAX(emprec.out);

I don't seem to get the right results with this script.

The result I need is:

Intermediate result (for my understanding):

EmpID  In_Time   Out_Time
1      9:00      18:00
2      8:00      18:00
3      10:00     18:00

The final output needed is the difference Out_Time - In_Time:

EmpID  Total_Time
1      9:00
2      10:00
3      8:00

I have written the last line to get the min and max times, so I can subtract the two and get the total time spent in the office.

Please note: in case you want to assume the time as int or any other format, please do so, as this is just an example.
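One complication is that MIN/MAX on chararray compare lexicographically, so "9:00" sorts after "10:00". A sketch of the whole computation, assuming times are HH:MM strings on a 24-hour clock, converted to minutes since midnight so the comparison and subtraction are numeric (the fields are renamed in_t/out_t since IN is a Pig keyword):

```pig
emprec = LOAD '/emptime/emptime' USING PigStorage()
         AS (empid:int, in_t:chararray, out_t:chararray);

-- Convert HH:MM to minutes since midnight so MIN/MAX compare numerically.
mins = FOREACH emprec GENERATE empid,
    ((int)SUBSTRING(in_t, 0, INDEXOF(in_t, ':', 0)) * 60 +
     (int)SUBSTRING(in_t, INDEXOF(in_t, ':', 0) + 1, (int)SIZE(in_t)))  AS in_m,
    ((int)SUBSTRING(out_t, 0, INDEXOF(out_t, ':', 0)) * 60 +
     (int)SUBSTRING(out_t, INDEXOF(out_t, ':', 0) + 1, (int)SIZE(out_t))) AS out_m;

agg = GROUP mins BY empid;

-- Last swipe-out minus first swipe-in, reported in whole hours.
total = FOREACH agg GENERATE group AS empid,
        (MAX(mins.out_m) - MIN(mins.in_m)) / 60 AS hours;
```

On the sample data this yields 9, 10, and 8 hours for employees 1, 2, and 3.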

Thanks in Advance

Regards, Chetan