
ALLOWED_HOSTS = ("yourdomain.com",)

or ALLOWED_HOSTS = ("*",), and then restart Apache (service apache2 restart).

@HalilKaskavalci: Note that the question specifies gunicorn/nginx, not Apache2. Personally, I would recommend avoiding ALLOWED_HOSTS = ("*",).
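For the gunicorn/nginx setup the question describes, a minimal sketch of the safer approach (hostnames are placeholders), after which you would restart gunicorn rather than Apache:

# settings.py -- list the exact hosts you serve instead of "*"
ALLOWED_HOSTS = ["yourdomain.com", "www.yourdomain.com"]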

django - Bad request 400: nginx / gunicorn - Stack Overflow

django nginx gunicorn

As stated by Maxim Dounin in the comments above:

When nginx returns 400 (Bad Request) it will log the reason into error log, at "info" level. Hence an obvious way to find out what's going on is to configure error_log to log messages at "info" level and take a look into error log when testing.

What if there is nothing in the error log after a 400? :(

@NicolaeSurdu Make sure debug logging is turned on in nginx. You can do this by appending debug to the line that defines your error log in your site's conf file, so you'd have something like server { error_log /path/to/error.log debug; (see the sketch below).

I have the same issue as holms and Nicolae.
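To spell that out, a minimal sketch of the relevant server block (paths and names are placeholders); note that the debug level only produces output if nginx was built with --with-debug, whereas info works on any build:

server {
    listen 80;
    server_name example.com;
    # raise log verbosity while reproducing the 400, then dial it back down
    error_log /var/log/nginx/error.log info;
}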

How to fix nginx throws 400 bad request headers on any header testing ...

nginx

I can't speak for nginx as I'm running this on apache2, but here's what solved the issue:

ALLOWED_HOSTS

With vagrant, you might find it useful to accept all vagrantshare.com subdomains, so just add '.vagrantshare.com' (note the leading dot) to the ALLOWED_HOSTS list.

Not sure if it is really necessary, but I also updated the modification time of the wsgi.py file:

touch wsgi.py

As I'm using apache2, I needed to restart the service.

sudo service apache2 restart
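In other words, the settings.py line would look something like this (domain taken from the example above):

ALLOWED_HOSTS = ['.vagrantshare.com']  # the leading dot matches vagrantshare.com and every subdomain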

django - Bad request 400: nginx / gunicorn - Stack Overflow

django nginx gunicorn

Yes, changing the error_log to debug level as Emmanuel Joubaud suggested worked out (edit /etc/nginx/sites-enabled/default):

error_log /var/log/nginx/error.log debug;

Then, after restarting nginx, I got the following in the error log for my Python application running under uwsgi:

2017/02/08 22:32:24 [debug] 1322#1322: *1 connect to unix:///run/uwsgi/app/socket, fd:20 #2
        2017/02/08 22:32:24 [debug] 1322#1322: *1 connected
        2017/02/08 22:32:24 [debug] 1322#1322: *1 http upstream connect: 0
        2017/02/08 22:32:24 [debug] 1322#1322: *1 posix_memalign: 0000560E1F25A2A0:128 @16
        2017/02/08 22:32:24 [debug] 1322#1322: *1 http upstream send request
        2017/02/08 22:32:24 [debug] 1322#1322: *1 http upstream send request body
        2017/02/08 22:32:24 [debug] 1322#1322: *1 chain writer buf fl:0 s:454
        2017/02/08 22:32:24 [debug] 1322#1322: *1 chain writer in: 0000560E1F2A0928
        2017/02/08 22:32:24 [debug] 1322#1322: *1 writev: 454 of 454
        2017/02/08 22:32:24 [debug] 1322#1322: *1 chain writer out: 0000000000000000
        2017/02/08 22:32:24 [debug] 1322#1322: *1 event timer add: 20: 60000:1486593204249
        2017/02/08 22:32:24 [debug] 1322#1322: *1 http finalize request: -4, "/?" a:1, c:2
        2017/02/08 22:32:24 [debug] 1322#1322: *1 http request count:2 blk:0
        2017/02/08 22:32:24 [debug] 1322#1322: *1 post event 0000560E1F2E5DE0
        2017/02/08 22:32:24 [debug] 1322#1322: *1 post event 0000560E1F2E5E40
        2017/02/08 22:32:24 [debug] 1322#1322: *1 delete posted event 0000560E1F2E5DE0
        2017/02/08 22:32:24 [debug] 1322#1322: *1 http run request: "/?"
        2017/02/08 22:32:24 [debug] 1322#1322: *1 http upstream check client, write event:1, "/"
        2017/02/08 22:32:24 [debug] 1322#1322: *1 http upstream recv(): -1 (11: Resource temporarily unavailable)
Invalid HTTP_HOST header: 'www.mysite.local'. You may need to add u'www.mysite.local' to ALLOWED_HOSTS.
        [pid: 10903|app: 0|req: 2/4] 192.168.221.2 () {38 vars in 450 bytes} [Wed Feb  8 22:32:24 2017] GET / => generated 54098 bytes in 55 msecs (HTTP/1.1 400) 4 headers in 135 bytes (1 switches on core 0)

And adding www.mysite.local to ALLOWED_HOSTS in settings.py fixed the issue :)

ALLOWED_HOSTS = ['www.mysite.local']

How to fix nginx throws 400 bad request headers on any header testing ...

nginx

This does not appear to be an nginx issue, as it's unlikely that nginx is returning empty pages, which are usually the classic Tomcat signature.

It would appear that setting up the header size may depend on the connector that you're using:

We are using the HTTP connector, though.

So, have you tried changing maxHttpHeaderSize as per tomcat.apache.org/tomcat-6.0-doc/config/http.html? I'm not too sure why you mention max-http-header-size in your question instead; perhaps that's a setting further down the line in your Tomcat application. (See the connector sketch after this thread.)

We've tried both but are still having the problem. Strangely, hitting tomcat directly gives no problem. Just from nginx to tomcat.

@cnaut, it should be relatively easy to tell whether the error comes from nginx or Tomcat; can you definitively confirm either way? Also, are you sure you're sending the misbehaving cookie payload when you hit it directly? Can you reproduce the issue with a shell script (e.g., with curl), i.e., not through a browser? Have you tried doing a tcpdump to see where the buck stops?
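For reference, a sketch of where maxHttpHeaderSize would go on the HTTP/1.1 connector in conf/server.xml, assuming Tomcat 6 and an illustrative 64 KB limit:

<!-- conf/server.xml: raise the request-header limit on the HTTP/1.1 connector -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxHttpHeaderSize="65536" />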

400 bad request on nginx proxy to tomcat but not on static content - S...

tomcat nginx cookies proxy

Have you double-checked that the websocket port is not blocked by any firewall?

Yes, I have. The only port that is accessible from remote hosts is port 80. The server port 8080 is only reachable locally from the nginx server. The blog post you mention, which I had read before, uses only port 8020 from the internet (reachable by everyone) and port 8010 for internal use. I've added the server code for you above. Wait a minute ^^

web services - nginx as webserver incl. socket.io and node.js / ws:// ...

node.js web-services sockets nginx socket.io

1: You almost certainly want an HTTP reverse proxy like nginx to deal with "spoon feeding" slow or stupid client programs; some of your users will be on slow connections. The reverse proxy can usually wait for the request to be fully received before contacting the application, then slurp the full response from the application quickly (so that it can move on to other requests), and then feed it back to the client as slowly as they require (see the sketch after this list). If you're using a reverse proxy anyway, there's not much reason to also consider a TCP-level load balancer, since the reverse proxy can already solve that problem. This is especially true since TCP load balancers aren't application aware and can't skip upstream hosts that are "reachable" but "sick"; they will happily proxy for servers that are returning "500 Internal Server Error" responses to health-check requests. They're usually needed only at the extreme edges of your network, at very high load.

2: Which application container is right for you depends as much on the application as it does on the shape of your workload. To take advantage of async containers like Tornado, your application must be written in a special way and can't use all of the nice/convenient frameworks that are available for WSGI in general; on the other hand, you will need them for some features like long polling and especially WebSocket, which are not practical (or even possible) in things like uWSGI.

But not all containers are created equal; many speak only HTTP, which is not a CPU-friendly protocol. Containers like uWSGI are designed so that only the reverse proxy has to do the HTTP parsing work; from there on out, easily parsed binary protocols are passed from one process to the next.

3: WebSocket is still very new, and support in Python is sparse. The most mature options seem to be the implementations available in Tornado and in Twisted; neither can be hosted in uWSGI, and neither can be proxied behind nginx. There are other reverse proxies that can handle WebSocket, though; Varnish, for example.
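A minimal sketch of the "spoon feeding" setup from point 1, assuming a WSGI server on a placeholder local port (both buffering directives are on by default in recent nginx; they are spelled out here for clarity):

upstream app {
    server 127.0.0.1:8000;            # gunicorn/uwsgi listening locally (placeholder)
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app;
        proxy_request_buffering on;   # read the whole request before touching the app
        proxy_buffering on;           # slurp the full response, then drip-feed slow clients
        proxy_set_header Host $host;
    }
}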

Ad 2: I understand the difference between sync and async programming. The question was about using containers (like gunicorn + tornado workers) rather than solo WSGI servers (tornado).

The solo WSGI servers are, without exception, not useful for any sort of async processing; you can't use them for WebSocket, and they are unlikely to be a good fit for long polling.

Sorry for being misleading, but I always assume some load-balancer server (nginx/haproxy) in every scenario, so the solo WSGI server isn't alone. So the question is: is it worth having e.g. nginx -> gunicorn + tornado worker instead of nginx -> tornado? Thanks for your engagement!

Neither nginx nor uWSGI (nor both together) will help with WebSocket! To put a WebSocket service behind the same port as either, a secondary reverse proxy is needed, one that can distinguish traffic for the WebSocket application from the HTTP/1.1 traffic. Varnish can do this, but it's not quite as versatile in other ways as nginx: it can't speak fastcgi/scgi/uwsgi and it can't serve static files. You probably need all of Varnish, Nginx, Tornado, and uWSGI.

nginx - HA deploy for Python wsgi application - Stack Overflow

python nginx wsgi haproxy

I assume that you are using the add_header directive to add CORS headers in the nginx configuration.

The nginx modules reference states about add_header:

Adds the specified field to a response header provided that the response code equals 200, 201, 204, 206, 301, 302, 303, 304, or 307.

A workaround is the ngx_headers_more module, whose more_set_headers directive is not limited to those response codes:
more_set_headers 'Access-Control-Allow-Origin: $http_origin';

more_set_headers 'Access-Control-Allow-Headers: Content-Type, Origin, Accept, Cookie';
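A sketch of where those directives might live, assuming nginx was built with the ngx_headers_more module and a placeholder upstream:

location /api/ {
    more_set_headers 'Access-Control-Allow-Origin: $http_origin';
    more_set_headers 'Access-Control-Allow-Headers: Content-Type, Origin, Accept, Cookie';
    proxy_pass http://127.0.0.1:8000;
}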

Cors Blocks Request with status 403 on Nginx - Stack Overflow

nginx cors http-status-code-403

Make sure nginx passes the Host header on to the backend:

proxy_set_header        Host            $host;

Additionally, I would also pass these values (for example) so you have access to the client IP of the request:

proxy_set_header        X-Real-IP       $remote_addr;
proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
Also make sure server_name is set:

server_name     your_domain.com www.your_domain.com;

Last but not least try to set your environment like this (solution in this case):

os.environ['DJANGO_SETTINGS_MODULE'] = "app.settings_prod"

Thanks, but it's still not working. The nginx logs just show the 400. I'm really clueless here, because I did the same thing on the development server (including gunicorn) and everything works.

I also set the server name. Still, not working.

Are you running multiple sites on this server? Then you would have to set the environ like os.environ['DJANGO_SETTINGS_MODULE'] = "app.settings_prod" in your wsgi file...

That did it! I am only running one site on the server, but I tried setting the environ just in case, and it works. I have no idea why, though.
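For reference, a sketch of what that wsgi.py might look like, with the hypothetical project name "app" from the comment above:

# wsgi.py -- pin the settings module explicitly
import os

from django.core.wsgi import get_wsgi_application

os.environ['DJANGO_SETTINGS_MODULE'] = "app.settings_prod"
application = get_wsgi_application()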

python - Django + Gunicorn + Nginx: Bad Request (400) in Debug=True - ...

python django nginx gunicorn

Nginx will return status code 400 if it receives a header field larger than the configured large_client_header_buffers.

A request header field cannot exceed the size of one buffer as well, or the 400 (Bad Request) error is returned to the client. Buffers are allocated only on demand. By default, the buffer size is equal to 8K bytes. If after the end of request processing a connection is transitioned into the keep-alive state, these buffers are released.

So you just need to create a curl request with a header larger than 8k. Here's an example using a bit of python to generate the header variable to pass into curl:

(nginx)macbook:nginx joeyoung$ myheader=$(python -c "print 'A'*9000")
(nginx)macbook:nginx joeyoung$ curl -vvv --header "X-MyHeader: $myheader" http://my.example.website.com
...
> 
< HTTP/1.1 400 Bad Request
< Server: nginx/1.4.7
< Date: Wed, 02 Sep 2015 22:37:29 GMT
< Content-Type: text/html
< Content-Length: 248
< Connection: close
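Conversely, if you want to move that threshold rather than trip it, a sketch of the directive with illustrative values:

http {
    # up to 4 "large" header buffers of 16k each; a single request header
    # line may not exceed 16k or nginx answers 400
    large_client_header_buffers 4 16k;
}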

python - How to intentionally cause a 400 Bad Request in Nginx? - Stac...

python rest nginx postman http-status-code-400

Solved by adding: proxy_set_header Host $host; under the location directive in nginx.conf
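i.e. something along these lines, with a placeholder upstream address:

location / {
    proxy_pass http://127.0.0.1:8000;   # wherever gunicorn is actually listening
    proxy_set_header Host $host;
}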

python - Bad Request (400) Nginx + Gunicorn + Django + FreeBSD - Stack...

python django nginx freebsd gunicorn

There is no solution at the HTTP level, but it is possible at the TCP level. See the answer I chose in another question:

linux - Can PHP (with Apache or Nginx) check HTTP header before POST r...

php linux apache http file-upload

In your settings.py, temporarily open up ALLOWED_HOSTS:

ALLOWED_HOSTS = ['*']

And then, somewhere in your view, try to print out and inspect the output of this:

print(request.META['HTTP_HOST']) # or print(request.get_host())

Then, according to that output, set the host (just its domain part, as a list entry) in your ALLOWED_HOSTS.
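A sketch of such a throwaway debug view (names are hypothetical; remove it once you know the host value):

# views.py -- temporary view for discovering the Host header Django sees
from django.http import HttpResponse

def debug_host(request):
    host = request.get_host()              # the value Django validates against ALLOWED_HOSTS
    print(request.META['HTTP_HOST'], host)
    return HttpResponse(host)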

python - Django Bad Request(400) Error in Deployment with Apache/NginX...

python django apache python-2.7 nginx

And check the port in your proxy_pass:

root@RDE-1.3:~# curl -I http://179.188.3.54
HTTP/1.1 200 OK
Server: nginx/1.1.19
Date: Tue, 24 Feb 2015 15:46:10 GMT
Content-Type: text/html; charset=utf-8
Connection: keep-alive
X-Frame-Options: SAMEORIGIN

root@RDE-1.3:~#
root@RDE-1.3:~#
root@RDE-1.3:~# curl -I http://179.188.3.54:8000


curl: (7) couldn't connect to host
Also double-check ALLOWED_HOSTS in settings.py, for example:

ALLOWED_HOSTS = [
    '.example.com',  # Allow domain and subdomains
    '.example.com.',  # Also allow FQDN and subdomains
]

A list of strings representing the host/domain names that this Django site can serve. This is a security measure to prevent an attacker from poisoning caches and password reset emails with links to malicious hosts by submitting requests with a fake HTTP Host header, which is possible even under many seemingly-safe web server configurations.

Values in this list can be fully qualified names (e.g. 'www.example.com'), in which case they will be matched against the request's Host header exactly (case-insensitive, not including port). A value beginning with a period can be used as a subdomain wildcard: '.example.com' will match example.com, www.example.com, and any other subdomain of example.com. A value of '*' will match anything; in this case you are responsible to provide your own validation of the Host header (perhaps in a middleware; if so this middleware must be listed first in MIDDLEWARE_CLASSES).

If the Host header (or X-Forwarded-Host if USE_X_FORWARDED_HOST is enabled) does not match any value in this list, the django.http.HttpRequest.get_host() method will raise SuspiciousOperation.

When DEBUG is True or when running tests, host validation is disabled; any host will be accepted. Thus it's usually only necessary to set it in production.

This validation only applies via get_host(); if your code accesses the Host header directly from request.META you are bypassing this security protection.

I tried both and I still have the same problem.

Updated with the curl output. It looks like you have the wrong port. How do you run Django? python manage.py runserver 0.0.0.0:8000

Bad request (400) using nginx on ubuntu with Django - Stack Overflow

django ubuntu nginx server ubuntu-server

Okay so I got this same error but I finally figured it out. (At least for me). I pray to god this works for you because I wasted a day on this.

If you're like me you used http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html as a tutorial to get the basic setup. I got uwsgi to work over HTTP and it seemed to work over a TCP socket. As soon as I tried to hook up nginx, I kept getting 400 errors. The tutorial specifically says to create a file named my_site.conf and link that to sites-enabled. Well, if you check sites-enabled you should see a file named default. Notice this file isn't named default.conf. Try renaming my_site.conf to my_site and make sure to re-link.
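A sketch of that rename/relink, assuming the Debian/Ubuntu directory layout and the file name from the tutorial:

sudo mv /etc/nginx/sites-available/my_site.conf /etc/nginx/sites-available/my_site
sudo rm -f /etc/nginx/sites-enabled/my_site.conf            # drop the stale symlink, if any
sudo ln -sf /etc/nginx/sites-available/my_site /etc/nginx/sites-enabled/my_site
sudo nginx -t && sudo service nginx restart                 # check the config, then restart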

I'm sorry, I forgot to mention this, but I have already done that. In the meantime I have rewritten the entire project (using pyramid + uwsgi). The fascinating thing is that the same uwsgi config, with minor changes (paths, directory names and stuff like that), works with pyramid flawlessly.

I used those tutorials first, and then anything I could find on the matter: docs.djangoproject.com/en/1.7/howto/deployment/wsgi/uwsgi and uwsgi-docs.readthedocs.org/en/latest/tutorials/. So I ended up mixing stuff together that I thought might just make it work, but nothing did. After weeks of frustration with this, I just gave up.

Django uWSGI NGINX Bad Request 400 - Stack Overflow

django nginx uwsgi

!!!     You should now run your application with the WSGI interface
!!!     installed with your project. Ex.:
!!!
!!!         gunicorn myproject.wsgi:application

NGINX is working fine; gunicorn just isn't receiving your request.

WARNING: This command is deprecated. I thought it was just deprecated and thus still works; otherwise it would be an error message.

@smarber you should try simply using gunicorn to run this.

ps aux | grep -i gunicorn
sudo netstat -lputn
username    16231  0.0  0.5  50856 12140 pts/0    S+   15:20   0:00 /home/username/Documents/projects/env/bin/python /home/username/Documents/projects/env/bin/gunicorn myproject.wsgi -b localhost:8333 username    16236  0.0  0.8  59000 16580 pts/0    S+   15:20   0:00 /home/username/Documents/projects/env/bin/python /home/bilou/Documents/django/projects/env/bin/gunicorn myproject.wsgi -b localhost:8333 username    16319  0.0  0.0   7848   904 pts/8    S+   15:23   0:00 grep --color=auto -i gunicorn
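For comparison, a sketch of starting gunicorn the non-deprecated way, bound to the same local port nginx proxies to (module name and port are taken from the ps output above; the worker count is illustrative):

gunicorn myproject.wsgi:application --bind localhost:8333 --workers 3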

python - django running with gunicorn and nginx: 400 bad request - Sta...

python django curl nginx gunicorn

You can set DEBUG = True on your server, restart the uwsgi service and check Django's debug output in your browser. The fact that you don't see any errors with Django's development server doesn't mean the error is related to the nginx or uwsgi services.

Thanks for your answer. I actually asked that question 2 years ago, so that Django project doesn't exist anymore. I did get it to work with the current versions, though. I still don't know what was wrong with the old one, but since it just works nowadays, I tend not to care what went wrong 2 years ago anymore.

Django uWSGI NGINX Bad Request 400 - Stack Overflow

django nginx uwsgi