doop-0.20-mapreduce/hadoop-examples.jar pi 10 1000
Number of Maps = 10
Samples per Map = 1000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
13/03/26 15:48:04 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/03/26 15:48:05 INFO mapred.FileInputFormat: Total input paths to process : 10
13/03/26 15:48:05 INFO mapred.JobClient: Running job: job_201303261534_0001
13/03/26 15:48:06 INFO mapred.JobClient: map 0% reduce 0%
13/03/26 15:48:11 INFO mapred.JobClient: map 20% reduce 0%
13/03/26 15:48:13 INFO mapred.JobClient: map 40% reduce 0%
13/03/26 15:48:15 INFO mapred.JobClient: map 60% reduce 0%
13/03/26 15:48:16 INFO mapred.JobClient: map 80% reduce 0%
13/03/26 15:48:18 INFO mapred.JobClient: map 100% reduce 26%
13/03/26 15:48:21 INFO mapred.JobClient: map 100% reduce 100%
13/03/26 15:48:21 INFO mapred.JobClient: Job complete: job_201303261534_0001
13/03/26 15:48:21 INFO mapred.JobClient: Counters: 33
13/03/26 15:48:21 INFO mapred.JobClient:   File System Counters
13/03/26 15:48:21 INFO mapred.JobClient:     FILE: Number of bytes read=226
13/03/26 15:48:21 INFO mapred.JobClient:     FILE: Number of bytes written=2016361
13/03/26 15:48:21 INFO mapred.JobClient:     FILE: Number of read operations=0
13/03/26 15:48:21 INFO mapred.JobClient:     FILE: Number of large read operations=0
13/03/26 15:48:21 INFO mapred.JobClient:     FILE: Number of write operations=0
13/03/26 15:48:21 INFO mapred.JobClient:     HDFS: Number of bytes read=2390
13/03/26 15:48:21 INFO mapred.JobClient:     HDFS: Number of bytes written=215
13/03/26 15:48:21 INFO mapred.JobClient:     HDFS: Number of read operations=31
13/03/26 15:48:21 INFO mapred.JobClient:     HDFS: Number of large read operations=0
13/03/26 15:48:21 INFO mapred.JobClient:     HDFS: Number of write operations=3
13/03/26 15:48:21 INFO mapred.JobClient:   Job Counters
13/03/26 15:48:21 INFO mapred.JobClient:     Launched map tasks=10
13/03/26 15:48:21 INFO mapred.JobClient:     Launched reduce tasks=1
13/03/26 15:48:21 INFO mapred.JobClient:     Data-local map tasks=10
13/03/26 15:48:21 INFO mapred.JobClient:     Total time spent by all maps in occupied slots (ms)=19270
13/03/26 15:48:21 INFO mapred.JobClient:     Total time spent by all reduces in occupied slots (ms)=10129
13/03/26 15:48:21 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/03/26 15:48:21 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/03/26 15:48:21 INFO mapred.JobClient:   Map-Reduce Framework
13/03/26 15:48:21 INFO mapred.JobClient:     Map input records=10
13/03/26 15:48:21 INFO mapred.JobClient:     Map output records=20
13/03/26 15:48:21 INFO mapred.JobClient:     Map output bytes=180
13/03/26 15:48:21 INFO mapred.JobClient:     Input split bytes=1210
13/03/26 15:48:21 INFO mapred.JobClient:     Combine input records=0
13/03/26 15:48:21 INFO mapred.JobClient:     Combine output records=0
13/03/26 15:48:21 INFO mapred.JobClient:     Reduce input groups=2
13/03/26 15:48:21 INFO mapred.JobClient:     Reduce shuffle bytes=280
13/03/26 15:48:21 INFO mapred.JobClient:     Reduce input records=20
13/03/26 15:48:21 INFO mapred.JobClient:     Reduce output records=0
13/03/26 15:48:21 INFO mapred.JobClient:     Spilled Records=40
13/03/26 15:48:21 INFO mapred.JobClient:     CPU time spent (ms)=4110
13/03/26 15:48:21 INFO mapred.JobClient:     Physical memory (bytes) snapshot=2668306432
13/03/26 15:48:21 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=8668086272
13/03/26 15:48:21 INFO mapred.JobClient:     Total committed heap usage (bytes)=2210988032
13/03/26 15:48:21 INFO mapred.JobClient:   org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
13/03/26 15:48:21 INFO mapred.JobClient:     BYTES_READ=240
Job Finished in 16.69 seconds
Estimated value of Pi is 3.14080000000000000000
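For reference, the pi example estimates pi by scattering points over a unit square and counting the fraction that lands inside the inscribed quarter circle, whose area is pi/4 of the square: each of the 10 map tasks evaluates its 1000 samples, and the single reducer totals the inside/outside counts, giving pi ~ 4 * inside / total. Working backward from the output above, 3.1408 = 4 * 7852 / 10000, so 7,852 of the 10,000 points fell inside. The sketch below is a minimal single-process illustration of that idea in plain Java; it is not the shipped PiEstimator, which uses a Halton sequence (quasi-Monte Carlo) rather than java.util.Random, so its result will wander slightly around the value reported above.

// Simplified, standalone sketch of the estimate the pi job computes.
// Plain pseudo-random sampling stands in for the example's Halton sequence.
import java.util.Random;

public class PiSketch {
    public static void main(String[] args) {
        int numMaps = 10;          // "Number of Maps = 10"
        int samplesPerMap = 1000;  // "Samples per Map = 1000"
        long inside = 0;
        Random rnd = new Random();

        // Each "map" checks its share of points in the unit square and
        // counts how many fall inside the inscribed quarter circle.
        for (int m = 0; m < numMaps; m++) {
            for (int s = 0; s < samplesPerMap; s++) {
                double x = rnd.nextDouble();
                double y = rnd.nextDouble();
                if (x * x + y * y <= 1.0) {
                    inside++;
                }
            }
        }

        // The single reducer sums the per-map counts; the inside ratio
        // approximates pi/4, so multiply by 4 for the estimate.
        long total = (long) numMaps * samplesPerMap;
        double pi = 4.0 * inside / total;
        System.out.printf("Estimated value of Pi is %.20f%n", pi);
    }
}

Because only 10,000 samples are drawn, the estimate is accurate to roughly two decimal places; rerunning the job with more maps or more samples per map tightens the result at the cost of a longer-running job.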