Python Requests Tutorial — GET and POST Requests in Python

In this Requests tutorial article, you will learn the basics of the requests module in Python, from making simple GET and POST requests to streaming downloads, sending cookies and headers, and working with session objects.

Let us begin this “Requests Tutorial” blog by first checking out what the Requests Module actually is.

What Is the Requests Module?

Requests is a Python module that you can use to send all kinds of HTTP requests. It is an easy-to-use library with many features, ranging from passing parameters in URLs to sending custom headers and performing SSL verification. In this tutorial, you will learn how to use this library to send simple HTTP requests in Python.

Requests allows you to send HTTP/1.1 requests. You can add headers, form data, multipart files, and parameters with simple Python dictionaries, and access the response data in the same way.

Installing The Requests module

To install requests, simply:

$ pip install requests

Or, if you absolutely must:

$ easy_install requests

Making a GET Request

It is fairly straightforward to send an HTTP request using Requests. You start by importing the module and then making the request. Check out the example:

import requests 
req = requests.get('')

So, all the information is stored somewhere, correct?

Yes, it is stored in a Response object, which we have named req here.

Let’s say, for example, you want the encoding of a web page so that you can verify it or use it elsewhere. This can be done using the req.encoding property.

You can also extract other details, such as the status code of the request. This can be done using the req.status_code property.

req.encoding # returns 'utf-8' 
req.status_code # returns 200

We can also access the cookies that the server sent back. This is done using req.cookies, as straightforward as that! Similarly, you can get the response headers as well. This is done by making use of req.headers.

Do note that the req.headers property returns a case-insensitive dictionary of the response headers. So, what does this imply?

It means that req.headers['Content-Length'], req.headers['content-length'], and req.headers['CONTENT-LENGTH'] will all return the value of just the 'Content-Length' response header.
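You can see this case-insensitive behaviour without making any request at all, since Requests exposes the dictionary class it uses internally (requests.structures.CaseInsensitiveDict); the header value below is made up for illustration:

```python
from requests.structures import CaseInsensitiveDict

# Build a headers dictionary by hand, just like the one req.headers returns
headers = CaseInsensitiveDict({'Content-Length': '512'})

# All three lookups hit the same underlying 'Content-Length' entry
print(headers['Content-Length'])   # 512
print(headers['content-length'])   # 512
print(headers['CONTENT-LENGTH'])   # 512
```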

We can also check whether the response obtained is a well-formed HTTP redirect that could have been processed automatically, using the req.is_redirect property. It returns True or False based on the response obtained.
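As a sketch of what is_redirect looks at, you can build Response objects by hand (an internal detail of the library, used here only for illustration): a redirect needs both a 3xx status code and a Location header. The target path below is hypothetical.

```python
from requests.models import Response

redirect = Response()
redirect.status_code = 301
redirect.headers['Location'] = '/moved-here'  # hypothetical redirect target

plain = Response()
plain.status_code = 200  # a plain OK response, no Location header

print(redirect.is_redirect)  # True
print(plain.is_redirect)     # False
```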

You can also get the time elapsed between sending the request and getting back a response using another property. Take a guess? Yes, it is the req.elapsed property.

Remember the URL that you initially passed to the get() function? Well, it can be different from the final URL of the response for many reasons, including redirects.

And to see the actual response URL, you can use the req.url property.

import requests
req = requests.get('')

req.encoding # returns 'utf-8'
req.status_code # returns 200
req.elapsed # returns datetime.timedelta(0, 1, 666890)
req.url # returns ''
req.history # returns [<Response [301]>, <Response [301]>]
req.headers['Content-Type'] # returns 'text/html; charset=utf-8'

Getting all this information about the webpage is nice, but you most probably want to access the actual content, correct?

If the content you are accessing is text, you can always use the req.text property to access it. The content is decoded to Unicode using the encoding Requests inferred; you can inspect or override that encoding through the req.encoding property, as we discussed earlier.

In the case of non-text responses, you can access them very easily as well: req.content gives you the response body as raw bytes. The module automatically decodes gzip and deflate transfer-encodings for us, which can be very helpful when you are dealing directly with media files. You can also access the JSON-encoded content of the response, if it exists, using the req.json() function.
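As a small illustration of req.json() and req.content, here is a Response constructed by hand (again an internal of the library, not something you would normally do) with a made-up JSON body:

```python
from requests.models import Response

resp = Response()
resp.status_code = 200
resp._content = b'{"image": "forest", "size_kb": 185}'  # fabricated body for the demo

data = resp.json()       # parses the JSON-encoded content into a dict
print(data['image'])     # forest
print(resp.content[:8])  # b'{"image' plus one more byte -- the raw bytes behind req.content
```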

Pretty simple and a lot of flexibility, right?

Also, if needed, you can get the raw response from the server using req.raw. Do keep in mind that you have to pass stream=True in the request to be able to read the raw response.

Some files that you download from the internet using the Requests module can be huge, and in such cases it is not wise to load the whole response or file into memory at once. Instead, it is recommended that you download the file in pieces, or chunks, using the iter_content(chunk_size=1, decode_unicode=False) method.

This method iterates over the response data chunk_size bytes at a time. When stream=True has been set on the request, it avoids reading the whole body into memory at once for large responses.

Do note that the chunk_size parameter can be either an integer or None. When set to an integer, it determines the number of bytes that are read into memory at once.

When chunk_size is set to None and stream is set to True, the data is read as it arrives, in whatever size the chunks are received. When chunk_size is set to None and stream is set to False, all the data is returned as a single chunk.

Downloading An Image Using Requests Module

So let’s download an image of a forest on Pixabay using the Requests module we just learned about.

This is the code that you will need to download the image:

import requests
req = requests.get('path/to/forest.jpg', stream=True)
with open('Forest.jpg', 'wb') as fd:
    for chunk in req.iter_content(chunk_size=50000):
        fd.write(chunk)
        print('Received a Chunk')

Note that ‘path/to/forest.jpg’ stands in for the actual image URL; you can put the URL of any other image here to download something else instead. In this example, the image file is about 185 KB in size and we have set chunk_size to 50,000 bytes.

This means that the “Received a Chunk” message should be printed four times in the terminal. The last chunk will be just 39,350 bytes, because that is what remains of the file after the first three 50,000-byte chunks.
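You can verify this chunk arithmetic without any network traffic by wrapping an in-memory buffer in a Response object (setting the internal raw attribute by hand, purely for illustration); 189,350 bytes stands in for the roughly 185 KB file:

```python
import io
from requests.models import Response

resp = Response()
resp.raw = io.BytesIO(b'x' * 189350)  # pretend this is the downloaded image

# iter_content reads chunk_size bytes at a time from the underlying stream
sizes = [len(chunk) for chunk in resp.iter_content(chunk_size=50000)]
print(sizes)  # [50000, 50000, 50000, 39350] -- four chunks, the last one partial
```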

Requests also allows you to pass parameters in a URL. This is particularly helpful when you are searching a website for results such as a tutorial or a specific image. You can provide these query strings as a dictionary of strings using the params keyword of the GET request. Check out this easy example:

import requests

query = {'q': 'Forest', 'order': 'popular', 'min_width': '800', 'min_height': '600'}
req = requests.get('', params=query)

req.url # returns ''
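To see exactly how the params dictionary ends up in the URL, you can prepare a request locally without ever sending it; the example.com address below is just a placeholder:

```python
from requests import Request

query = {'q': 'Forest', 'order': 'popular'}

# prepare() builds the final URL (and headers/body) without making a network call
prepared = Request('GET', 'https://example.com/search', params=query).prepare()
print(prepared.url)  # https://example.com/search?q=Forest&order=popular
```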

Next up in this “Requests Tutorial” blog, let us look at how we can make a POST request!

Making a POST Request

Making a POST request is just as easy as making a GET request: you simply use the post() function instead of get().

This can be useful when you are automatically submitting forms. For example, the following code will download the whole Wikipedia page on Nanotechnology and save it on your PC.

import requests
req = requests.post('', data={'search': 'Nanotechnology'})
with open('Nanotechnology.html', 'wb') as fd:
    for chunk in req.iter_content(chunk_size=50000):
        fd.write(chunk)
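You can also inspect how post() encodes the data dictionary as a form body by preparing such a request locally, again with a placeholder URL:

```python
from requests import Request

# A form-style POST: the data dict is URL-encoded into the request body
prepared = Request('POST', 'https://example.com/w/index.php',
                   data={'search': 'Nanotechnology'}).prepare()

print(prepared.body)                     # search=Nanotechnology
print(prepared.headers['Content-Type'])  # application/x-www-form-urlencoded
```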

Sending Cookies and Headers

As previously mentioned, you can access the cookies and headers that the server sends back to you using req.cookies and req.headers. Requests also allows you to send your own custom cookies and headers with a request. This can be helpful when you want to, say, set a custom user agent for your request.

To add HTTP headers to a request, you can simply pass them in a dict to the headers parameter. Similarly, you can also send your own cookies to a server using a dict passed to the cookies parameter.

import requests

url = ''

headers = {'user-agent': 'your-own-user-agent/0.0.1'}
cookies = {'visit-month': 'February'}

req = requests.get(url, headers=headers, cookies=cookies)

Cookies can also be passed in a cookie jar (a RequestsCookieJar object), which provides a more complete interface, allowing you to scope cookies to particular domains and paths.

Check out this example below:

import requests

jar = requests.cookies.RequestsCookieJar()
jar.set('first_cookie', 'first', domain='', path='/cookies')
jar.set('second_cookie', 'second', domain='', path='/extra')
jar.set('third_cookie', 'third', domain='', path='/cookies')

url = ''
req = requests.get(url, cookies=jar)


req.text
# returns '{ "cookies": { "first_cookie": "first", "third_cookie": "third" }}'
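The path scoping can be demonstrated locally as well, since RequestsCookieJar.get() accepts domain and path filters (the example.com domain below is a stand-in):

```python
import requests

jar = requests.cookies.RequestsCookieJar()
jar.set('first_cookie', 'first', domain='example.com', path='/cookies')
jar.set('second_cookie', 'second', domain='example.com', path='/extra')

# Only cookies whose path matches the filter are returned
print(jar.get('first_cookie', path='/cookies'))   # first
print(jar.get('second_cookie', path='/cookies'))  # None
```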

Next up on this “Requests Tutorial” blog, let us look at session objects!

Session Objects

Sometimes it is useful to preserve certain parameters across multiple requests. The Session object does exactly that. For example, it will persist cookie data across all requests made using the same session.

The Session object uses urllib3’s connection pooling. This means that the underlying TCP connection will be reused for all the requests made to the same host.

This can significantly boost performance. You can also use all the methods of the main Requests API with the Session object.

Sessions are also helpful when you want to send the same data across all requests. For example, if you decide to send a cookie or a user-agent header with all the requests to a given domain, you can use Session objects.

Here is an example:

import requests

ssn = requests.Session()
ssn.cookies.update({'visit-month': 'February'})

reqOne = ssn.get('')
print(reqOne.text) # prints information about the "visit-month" cookie

reqTwo = ssn.get('', cookies={'visit-year': '2017'})
print(reqTwo.text) # prints information about the "visit-month" and "visit-year" cookies

reqThree = ssn.get('')
print(reqThree.text) # prints information about the "visit-month" cookie only

As you can see, the “visit-month” session cookie is sent with all three requests. However, the “visit-year” cookie is sent only during the second request; it does not appear in the third request either. This confirms that cookies or other data set on an individual request are not sent with subsequent session requests.
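Session-level defaults can also be inspected directly on the Session object, without sending anything; the header and cookie values here are made up:

```python
import requests

ssn = requests.Session()
ssn.headers.update({'user-agent': 'your-own-user-agent/0.0.1'})
ssn.cookies.set('visit-month', 'February')

# These defaults are merged into every request made through ssn
print(ssn.headers['user-agent'])       # your-own-user-agent/0.0.1
print(ssn.cookies.get('visit-month'))  # February
```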


The concepts discussed in this tutorial should help you make basic requests to a server by passing specific headers, cookies, or query strings.

This will be very handy when you are trying to scrape some web pages for information. Now, you should also be able to automatically download music files and wallpapers from different websites once you have figured out a pattern in the URLs.

I hope you have enjoyed this post on the Requests tutorial. If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, DevOps, and Ethical Hacking, you can refer to Edureka’s official site.

Do look out for other articles in this series which will explain the various other aspects of Python and Data Science.

1. Python Tutorial

2. Python Programming Language

3. Python Functions

4. File Handling in Python

5. Python Numpy Tutorial

6. Scikit Learn Machine Learning

7. Python Pandas Tutorial

8. Matplotlib Tutorial

9. Tkinter Tutorial

10. PyGame Tutorial

11. OpenCV Tutorial

12. Web Scraping With Python

13. PyCharm Tutorial

14. Machine Learning Tutorial

15. Linear Regression Algorithm from scratch in Python

16. Python for Data Science

17. Python Regex

18. Loops in Python

19. Python Projects

20. Machine Learning Projects

21. Arrays in Python

22. Sets in Python

23. Multithreading in Python

24. Python Interview Questions

25. Java vs Python

26. How To Become A Python Developer?

27. Python Lambda Functions

28. How Netflix uses Python?

29. What is Socket Programming in Python

30. Python Database Connection

31. Golang vs Python

32. Python Seaborn Tutorial

33. Python Career Opportunities

Originally published at on December 20, 2018.



Aayushi Johari

A technology enthusiast who likes writing about different technologies including Python, Data Science, Java, etc. and spreading knowledge.