Pointwise vs. Pairwise vs. Listwise Learning to Rank
At a high level, pointwise, pairwise, and listwise approaches differ in how many documents the loss function considers at a time when training the model.
Pointwise approaches look at a single document at a time in the loss function. They essentially take a single document and train a classifier or regressor to predict how relevant it is to the current query. The final ranking is achieved by simply sorting the result list by these document scores. For pointwise approaches, the score for each document is independent of the other documents in the result list for the query.
All the standard regression and classification algorithms can be directly used for pointwise learning to rank.
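To make this concrete, here is a minimal sketch of the pointwise recipe: fit an ordinary regressor on per-document feature vectors and graded relevance labels, then rank a new query's documents by sorting on their individually predicted scores. The feature values, labels, and document names below are made up for illustration.

```python
# Pointwise learning to rank: each (query, document) feature vector is
# scored independently; the ranking is just a sort on those scores.

def fit_linear_regressor(X, y, lr=0.1, epochs=500):
    """Plain least-squares linear regression trained by gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            pred = sum(w_j * x_j for w_j, x_j in zip(w, x_i)) + b
            err = pred - y_i
            w = [w_j - lr * err * x_j for w_j, x_j in zip(w, x_i)]
            b -= lr * err
    return w, b

def score(w, b, x):
    """Predicted relevance of one document, independent of all others."""
    return sum(w_j * x_j for w_j, x_j in zip(w, x)) + b

# Toy training data: feature vectors with graded relevance labels
# (higher = more relevant). Entirely illustrative values.
X_train = [[0.9, 0.1], [0.7, 0.4], [0.2, 0.8], [0.1, 0.2]]
y_train = [3.0, 2.0, 1.0, 0.0]
w, b = fit_linear_regressor(X_train, y_train)

# Rank a new query's result list by sorting on per-document scores.
docs = {"d1": [0.8, 0.3], "d2": [0.3, 0.9], "d3": [0.1, 0.1]}
ranking = sorted(docs, key=lambda d: score(w, b, docs[d]), reverse=True)
```

Any off-the-shelf regressor (or a classifier's probability output) can stand in for the hand-rolled one here; the defining property is that the loss touches one document at a time.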
Pairwise approaches look at a pair of documents at a time in the loss function. Given a pair of documents, they try to come up with the optimal ordering for that pair and compare it to the ground truth. The goal for the ranker is to minimize the number of inversions in the ranking, i.e., cases where a pair of results is in the wrong order relative to the ground truth.
Pairwise approaches work better in practice than pointwise approaches because predicting relative order is closer to the nature of ranking than predicting a class label or a relevance score. Some of the most popular Learning to Rank algorithms, like RankNet, LambdaRank, and LambdaMART, are pairwise approaches.
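The two ideas above can be sketched directly: the inversion count measures how many pairs the model orders incorrectly, and the RankNet-style pairwise loss, -log σ(sᵢ - sⱼ) for a pair where document i is known to be more relevant than document j, penalizes exactly those pairs. The scores and labels below are made-up toy values.

```python
import math

def pairwise_logistic_loss(s_i, s_j):
    """RankNet-style loss for a pair where doc i should outrank doc j:
    -log sigmoid(s_i - s_j). Small when s_i >> s_j, large when inverted."""
    return math.log(1.0 + math.exp(-(s_i - s_j)))

def count_inversions(scores, relevance):
    """Pairs that the model's scores order incorrectly relative to the
    ground-truth relevance labels."""
    inversions = 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if relevance[i] > relevance[j] and scores[i] <= scores[j]:
                inversions += 1
    return inversions

# Toy result list: model scores vs. ground-truth graded relevance.
# The second and third documents are ordered incorrectly -> 1 inversion.
scores = [2.1, 0.5, 1.7]
relevance = [3, 2, 1]
n_inversions = count_inversions(scores, relevance)
```

Note that the loss only ever sees a score *difference* for one pair at a time; the absolute score of any single document is irrelevant, which is what distinguishes this from the pointwise setup.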
Listwise approaches directly look at the entire list of documents and try to come up with the optimal ordering for it. There are two main sub-techniques for doing listwise Learning to Rank:
- Direct optimization of IR measures such as NDCG, e.g., SoftRank and AdaRank.
- Minimizing a loss function that is defined based on understanding the unique properties of the kind of ranking you are trying to achieve, e.g., ListNet and ListMLE.
Listwise approaches can get fairly complex compared to pointwise or pairwise approaches.
Originally published on Quora