A parallel programming model for processing extremely large data sets. First, a Map step transforms each input record into intermediate key-value pairs (tuples); a second Reduce step then combines all values that share a key into a smaller set of result tuples. First introduced by Google, MapReduce is a concept central to Hadoop.
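The two-step model can be sketched with the word-count example commonly used to illustrate MapReduce. This is a minimal single-process sketch, not any framework's API; the function names (`map_words`, `shuffle`, `reduce_counts`) are illustrative only:

```python
from collections import defaultdict
from itertools import chain

# Map step: turn one input record (a line of text) into (word, 1) pairs.
def map_words(line):
    return [(word, 1) for word in line.split()]

# Shuffle step: group all intermediate values by their key,
# so each key's values arrive together at the Reduce step.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce step: combine each key's values into a single result tuple.
def reduce_counts(key, values):
    return (key, sum(values))

lines = ["the quick brown fox", "the lazy dog"]
mapped = chain.from_iterable(map_words(line) for line in lines)
result = dict(reduce_counts(k, v) for k, v in shuffle(mapped).items())
print(result)  # {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

In a real framework such as Hadoop, the Map and Reduce calls run in parallel across many machines, and the shuffle is performed by the framework over the network.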