Python GET request times for specific website, but website is not down

  • Thread starter: Sam Hill (Guest)
When I try to make a GET request in Python like so:

Code:
import requests

url = "https://www.apartments.com/" # any path on this domain has this issue

req = None
try:
    req = requests.get(url, timeout=10, allow_redirects=True) # tried a variety of headers to pretend it's a browser, nothing changes
except requests.exceptions.Timeout:
    print("Timed out")

if req:
    print(req.text)

It just times out. Without a timeout it hangs forever, as if the server were down. But if you open the URL in a browser, the page loads just fine, so the web server isn't actually down. cURL gives the same result as the Python request.
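
In case it helps narrow down where the hang happens: requests also accepts a separate connect/read timeout tuple, so a variation like the sketch below (which I haven't run beyond the basic test above) would at least show whether it's the TCP/TLS connection or the response itself that never arrives.

Code:
import requests

url = "https://www.apartments.com/"

try:
    # timeout=(connect, read): the first number limits the TCP/TLS handshake,
    # the second limits how long to wait for the response after connecting
    resp = requests.get(url, timeout=(5, 15), allow_redirects=True)
    print(resp.status_code)
except requests.exceptions.ConnectTimeout:
    print("Stalled while connecting")
except requests.exceptions.ReadTimeout:
    print("Connected, but never got a response")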

I have never had this happen before. I'm a web developer, so I've worked with the requests module a lot in the past, as well as with my own web servers, and it was my understanding that unless the server is down, it should always return something. If I were just missing a specific header or cookie, I'd expect at least a 403 error.
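
For reference, the browser-mimicking headers I tried were along these lines (the exact values here are just an example, not the literal strings I used), and the request still hangs until the timeout either way:

Code:
import requests

url = "https://www.apartments.com/"

# Example of the kind of browser-like headers I passed; the behavior is unchanged.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
    "Referer": "https://www.google.com/",
}

req = requests.get(url, headers=headers, timeout=10, allow_redirects=True)
print(req.status_code)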

Other websites work perfectly fine as well, and I run into this issue on different machines on different networks, so the issue probably isn't with my firewall or connection or anything. How can I fix this?