
As mentioned in the other answers, here and here, the cache can be cleared by using:

$templateCache.removeAll();

However, as gatoatigrado pointed out in the comments, this only appears to work if the HTML template was served without any cache headers.

app.run(['$templateCache', function ($templateCache) {
    $templateCache.removeAll();
}]);
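
If you only need to evict a single template rather than the whole cache, $templateCache also exposes remove(); a minimal sketch, where 'views/home.html' is a hypothetical template path:

app.run(['$templateCache', function ($templateCache) {
    // evict one cached partial by the URL it was fetched with
    $templateCache.remove('views/home.html');
}]);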

You may be adding cache headers in a variety of ways, but here are a couple of solutions that work for me, one for IIS and one for nginx.

IIS
<location path="scripts/app/views">
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="DisableCache" />
    </staticContent>
  </system.webServer>
</location>

Nginx
location ^~ /scripts/app/views/ {
    expires -1;
}
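
Whichever server you use, you can verify that the headers actually changed with something like (the URL is just an example):

curl -I http://localhost/scripts/app/views/home.html

and check for a Cache-Control: no-cache line in the response.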

I just realised that the question mentions a dev machine, but hopefully this may still help somebody...

Yes, even though this doesn't answer the original question directly, it did in fact help me solve a caching issue on a live website.

AngularJS disable partial caching on dev machine - Stack Overflow

caching angularjs browser-cache

If the proxy_pass value doesn't contain variables, nginx will resolve domain names to IPs while loading the configuration and cache them until you restart/reload it. This is quite understandable from a performance point of view.
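
For contrast, a minimal sketch of the static case, where the name is resolved exactly once, at configuration load time:

server {
    location / {
        # resolved once when the configuration is loaded; the IP is cached
        proxy_pass http://foo.dnsalias.net;
    }
}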

But when a DNS record changes dynamically, this may not be what you want. So two options are available, depending on whether or not you have a commercial (Nginx Plus) subscription.

With the commercial subscription, use an upstream block and specify which domain name needs to be resolved periodically using a specific resolver. Record TTLs can be overridden using the valid=time parameter. The resolve parameter of the server directive will force the domain name to be resolved periodically.

http {    

    resolver X.X.X.X valid=5s;

    upstream dynamic {
        server foo.dnsalias.net resolve;
    }

    server {

        server_name www.example.com;

        location / {
            proxy_pass http://dynamic;
            ...
        }

    }

}

This feature was added in Nginx+ 1.5.12.

Without the commercial subscription, you will also need a custom resolver as in the previous solution. But to work around the unavailable upstream feature, you need to use a variable in your proxy_pass directive. That way nginx will use the resolver too, honoring the caching time specified with the valid parameter. For instance, you can use the domain name as a variable:

http {  

    resolver X.X.X.X valid=5s;

    server {

        server_name www.example.com;
        set $dn "foo.dnsalias.net"; 

        location / {
            proxy_pass http://$dn;
            ...
        }

    }

}

Sounds good, I will try this solution and give feedback then. Thanks a lot. (community version)

recv() failed (111: Connection refused) while resolving, resolver: X.X.X.X:53

@max54 That's another issue. It means nothing is listening at X.X.X.X:53, or a firewall is blocking you from reaching it. What's the output of dig foo.dnsalias.net @X.X.X.X ? You should set in this directive a list of DNS servers such as the ones in /etc/resolv.conf.

Xavier Lucas after some time it seems that it doesn't work; my IP is still wrong after it changed. I followed your second instructions, about the non-commercial version. Still need some help :/

Error with IP and Nginx as reverse proxy - Stack Overflow

nginx proxy dns reverse-proxy dyndns

Supposing there is a single server running nginx + php + mysql instances with some free RAM remaining, the easiest way to use that RAM to cache data is simply to increase the buffer caches of the mysql instances. Databases already use LRU-like mechanisms to handle their buffers.
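
For InnoDB, that usually just means raising innodb_buffer_pool_size in my.cnf; a minimal sketch, assuming a couple of GB of RAM are actually free:

[mysqld]
# let InnoDB cache data and index pages in RAM (managed with an LRU)
innodb_buffer_pool_size = 2G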

Now, if you need to move part of the processing away from the databases, then pre-caching may be an option. Before reaching for memcached/redis, a shared-memory cache integrated with php, such as APC, will be efficient provided only one server is involved (actually more efficient than redis/memcached in that case).
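
A minimal sketch of the usual APC read-through pattern (the key, TTL and loader function are made up):

<?php
$key  = 'report:42';
$data = apc_fetch($key, $hit);
if (!$hit) {
    $data = build_report(42);    // hypothetical expensive call
    apc_store($key, $data, 300); // keep the result for 5 minutes
}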

Both memcached and redis can be considered for remote caching (i.e. sharing the cache between various nodes). I would not rule out redis for this: it can easily be configured for the purpose. Both let you define a memory limit, and both handle the cache with LRU-like behavior.
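
In redis, for instance, that behavior is two lines of redis.conf:

# cap memory usage and evict the least-recently-used keys when full
maxmemory 256mb
maxmemory-policy allkeys-lru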

However, I would not use couchbase here, which is an elastic (i.e. supposed to be used on several nodes) NoSQL key/value store (i.e. not a cache). You could probably move some data from your mysql instances to a couchbase cluster, but using it just for caching is over-engineering IMO.

I have used APC, but the php cli scripts we're using with the web app can't access the same data. That's where memcached became the first next logical step. I was looking at couchbase because I read it was a nonvolatile (if needed) drop-in replacement for memcached more or less.

Couchbase should be considered since it has a plain-old-memcached mode of operation (called a bucket) but makes management easier, manages stats, etc. Full disclosure: I work for Couchbase.

caching - Memcached, Redis, or Couchbase - Stack Overflow

caching redis memcached couchbase

We've been using the old standard nginx -> mongrel stack for the last 18 months, and although it was not trivial to set up the first time around, it's proven flexible, and has dealt with some very high traffic sites for us. Nginx in particular has been absolutely rock solid and fast, and if you can get your app page-caching you can deal with a lot of requests.

Stuck mongrels have been an issue, so we use monit to kill them when they misbehave. Again, it was not totally trivial to set up, but we've used the same process on many many sites at this point.
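
For reference, a minimal monit sketch of the kind of rule we use (paths, port and limits are made up):

check process mongrel_8000 with pidfile /var/run/mongrel.8000.pid
    start program = "/usr/bin/mongrel_rails start -d -p 8000 -P /var/run/mongrel.8000.pid"
    stop program  = "/usr/bin/mongrel_rails stop -P /var/run/mongrel.8000.pid"
    # restart mongrels that balloon in memory
    if totalmem > 110 MB for 5 cycles then restart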

We haven't played with passenger yet, so perhaps it's easier and more stable, I'll defer to the other responders on that one, all I can say is that there's no reason at all you can't build a solid stack with nginx and mongrel.

ruby - Best practices for new Rails deployments on Linux? - Stack Over...

ruby-on-rails ruby linux deployment release-management

We thought of Redis as a load-takeoff for our project at work. We expected that using an nginx module called HttpRedis2Module, or something similar, would give us awesome speed, but when testing with ab we were proven wrong.

Maybe the module was bad, or maybe it was our layout, but it was a very simple task and it was actually faster to fetch the data with php and then stuff it into MongoDB. We're using APC as the caching system, and with that, php and MongoDB. It was much, much faster than the nginx Redis module.

My tip is to test it yourself; doing so will show you the results for your environment. We decided that using Redis was unnecessary in our project, as it would not make any sense.
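
Even something as blunt as ApacheBench against each variant tells you more than any general benchmark, e.g. (URL is hypothetical):

ab -n 10000 -c 100 http://localhost/page-under-test.php

Compare the requests-per-second line between setups.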

Interesting answer but not sure if it helps out the OP

Inserting to Redis and using it as cache was slower than using APC + PHP + MongoDB. But just the insertion to Redis was MUCH slower than inserting directly into MongoDB. Without APC I think they're pretty equal.

but it is webscale, mongodb will run around you in circles while you write. Nowadays I only write to /dev/null because that is the fastest.

caching - Memcached vs. Redis? - Stack Overflow

caching web-applications memcached redis

No, this is not yet possible; nginx 1.2 incorporates features from the 1.1.x development branch, which indeed includes HTTP/1.1 reverse proxying. Websocket connections are established using the HTTP/1.1 Upgrade header, but the fact that nginx now supports this kind of header does not mean it supports websockets (websockets are a different protocol, not HTTP). I tried this myself using the 1.1.x branch (which I found to be stable enough for my purpose) and it doesn't work without the tcp_module. There are two workarounds:

  • nginx with the tcp module: in this case I think you need an additional port for this module (never tried this myself).
  • put something else in front as a reverse proxy: I use HAProxy (which supports websockets) in front of nginx and node, so nginx simply acts as a static file server, not a proxy (see the sketch after this list). Varnish is another option, if you want additional caching.
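
A minimal HAProxy sketch of that split (names, ports and addresses are made up):

frontend http-in
    bind *:80
    # websocket handshakes carry an Upgrade header
    acl is_websocket hdr(Upgrade) -i WebSocket
    use_backend node_ws if is_websocket
    default_backend nginx_static

backend node_ws
    server node1 127.0.0.1:3000

backend nginx_static
    server web1 127.0.0.1:8080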

Thanks for the clarification Matthias. For me it was natural that nginx supports websocket proxying if it comes with HTTP/1.1 on board. I know that it's not the HTTP protocol, but still it seems I need to do some homework :)

As of this writing, nginx 1.3 has been released and supports websockets. It's a pretty simple configuration, which I've blogged about. Hope that helps.
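
The core of that configuration is the standard Upgrade/Connection header pair (the backend address is an example):

location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}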

nginx 1.2.0 - socket.io - HTTP/1.1 - Proxy websocket connections - Sta...

proxy nginx websocket socket.io reverse-proxy

Your projected growth of ~45 requests per second really isn't that intensive. I think a standard nginx load balancer in front of your web servers will handle everything. If your DB access isn't very intense, you will probably do fine with just one DB machine.
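
A load balancer of that sort is only a few lines of nginx configuration (addresses are made up):

upstream app_servers {
    server 10.0.0.1:8000;
    server 10.0.0.2:8000;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}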

I really think the most important thing is not to do any premature optimization. Deal with issues as they come, or else you may end up wasting a lot of time.

There are tons of caching, multiple server configurations, and load balancing tutorials.

Growing traffic is a standard problem; there is no lack of tutorials on these things.

apache - Django-based app might grow out of control - how can I scale ...

django apache caching scalability

There are several good reasons to stick another webserver in front of Node.js:

  • Not having to worry about privileges/setuid for the Node.js process. Only root can bind to port 80 typically. If you let nginx/Apache worry about starting as root, binding to port 80, and then relinquishing its root privileges, it means your Node app doesn't have to worry about it.
  • Serving static files like images, css, js, and html. Node may be less efficient compared to a proper static file web server (Node may also be faster in select scenarios, but that is unlikely to be the norm). On top of serving files more efficiently, you won't have to worry about handling ETags or cache-control headers the way you would if you were serving things out of Node. Some frameworks may handle this for you, but you would want to be sure. Regardless, it is still probably slower.
  • As Matt Sergeant mentioned in his answer, you can more easily display meaningful error pages or fall back onto a static site if your node service crashes. Otherwise users may just get a timed out connection.
  • Running another web server in front of Node may help to mitigate security flaws and DoS attacks against Node. For a real-world example, CVE-2013-4450 is prevented by running something like Nginx in front of Node.

I'll caveat the second bullet point by saying you should probably be serving your static files via a CDN, or from behind a caching server like Varnish. If you're doing this it doesn't really matter if the origin is Node or Nginx or Apache.

Caveat with nginx specifically: if you're using websockets, make sure to use a recent version of nginx (>= 1.3.13), since it only just added support for upgrading a connection to use websockets.

(For what it's worth, Express's express.static middleware does handle ETags and cache-control headers for static files.)
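
It is a one-liner if you go that route; the directory and max-age here are just examples:

app.use(express.static('public', { maxAge: '1d' }));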

pauljz, do you have benchmarks to back up "slower"? The articles @pawlakpp pointed out seem to say Node.js is much faster under load.

Benchmarks are useful and provide meaningful information, however the difference is not large enough in most applications to warrant concern over these numbers. Having Nginx/Apache in front of Node might be better, but in most situations it is simply overkill and adds unnecessary complexity. It all depends on your needs. My opinion is to make it work using NodeJS only and if the throughput is not to your liking, then consider adding a webserver in front.

It should be noted that if you're using just node directly, you can still bind to reserved ports such as :80 without running node as root by simply using authbind: thomashunter.name/blog/using-authbind-with-node-js

Using Node.js only vs. using Node.js with Apache/Nginx - Stack Overflo...

node.js

Render them all. Hide the ones that you don't need using CSS/JavaScript, which can be trivially initialized in any number of ways (JavaScript can read the URL used, query parameters, something in a cookie, etc.). This has the advantage of potentially playing much better with your cache (why cache three views and then have to expire them all simultaneously when you can cache one?), and can be used to present a better user experience.

For example, let's pretend you have a common tab bar interface with sub-navigation. If you render the content of all three tabs (i.e. it's written in the HTML) and hide two of them, switching between two tabs is trivial JavaScript and doesn't even hit your server. Big win! No latency for the user. No server load for you.
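
A minimal sketch of that switch, assuming the three pre-rendered panes share a class and have ids:

function showTab(id) {
    // hide every pre-rendered pane, then reveal the requested one
    document.querySelectorAll('.tab-pane').forEach(function (el) {
        el.style.display = 'none';
    });
    document.getElementById(id).style.display = 'block';
}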

Want another big win? You can use a variation on this technique to cheat on pages which might be 99% common across users but still contain user state. For example, you might have a front page of a site which is relatively common across all users but says "Hiya, Bob" when they're logged in. Put the non-common part ("Hiya, Bob") in a cookie. Have that part of the page be read in via JavaScript reading the cookie. Cache the entire page for all users, regardless of login status, in page caching. This is literally capable of slicing 70% of the accesses off from the entire Rails stack on some sites.
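
A minimal sketch of the cookie trick, assuming a 'username' cookie set at login and a #greeting element in the cached page:

document.addEventListener('DOMContentLoaded', function () {
    var m = document.cookie.match(/(?:^|; )username=([^;]*)/);
    if (m) {
        // patch the one user-specific bit into the fully cached page
        document.getElementById('greeting').textContent =
            'Hiya, ' + decodeURIComponent(m[1]);
    }
});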

Who cares if Rails can scale or not when your site is really Nginx serving static assets with new HTML pages occasionally getting delivered by some Ruby running on every thousandth access or so ;)

Upvotes for originality of thought. :-)

Note that this answer will fail for some users if you don't first test (or assume) JavaScript or Cookies, respectively. A better answer would be to use JavaScript with partial HTML or Varnish+ESI instead.

How do I implement Section-specific navigation in Ruby on Rails? - Sta...

ruby-on-rails ruby templates actionview

The server may be sending headers to the browser that cause it to keep using cached copies. The simple way to test this is to empty your browser cache.

If that fixes it, you need to study the HTTP headers you get from the server. The developer tools (a.k.a. F12 tools) in your browser of choice will expose the headers returned by the server. Then decide if you want to keep using these caching settings (good for speed) or change them (good for development).

And how do you adjust these headers, you ask? It depends on the server. Here is a link to the instructions for common servers:
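
Whatever the server, for development you generally want it to answer with something like:

Cache-Control: no-cache, no-store, must-revalidate

so the browser revalidates (or refetches) on every request instead of reusing its cached copy.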

An easier way to do this would be pressing CTRL+F5 in your browser

This isn't working for me. Tested with 4 different browsers. Emptied the cache on all of them. Hit CTRL+F5 more times than I could count. Asp.net development server just doesn't want to update with any markup/css/javascript changes. Stopping and starting the testing server is the only way to make it recognize changes.

Yeah, this fixes it for me, but when tweaking the CSS it seems a clunky solution. Which headers should I be looking at, and how do I change them?

asp.net - Visual Studio Development Server not updating css and javasc...

asp.net cassini

Are you using WEBrick in production? If so, you will need to set config.serve_static_assets = true, since WEBrick is no good at serving static assets. Other Ruby 'app servers' aren't ideal for serving static assets either, so you'll need to have Rails do that for the meantime. It's not an ideal set-up, though, since page caching won't work and all requests will hit your app.

Once you use a proper server for serving static assets, like Nginx or Apache, you will need to set config.serve_static_assets = false so that Rails leaves serving static assets to Nginx/Apache. That way, not all requests have to hit your Rails app, since caching will work.
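
With Nginx, for example, the usual asset-pipeline snippet is a single location block (a sketch; adjust the path to your deploy layout):

location ~ ^/assets/ {
    # fingerprinted assets never change, so let clients cache them forever
    expires max;
    add_header Cache-Control public;
}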

Since you're building a Rails engine, you don't need to worry about that since that is the responsibility of the one who is deploying the Rails app. You won't have control over their config.

Rails Engine asset images are not compiled - Stack Overflow

ruby-on-rails asset-pipeline production-environment rails-engines

Basically, instead of using the purge module, they just show you how to delete the nginx cache files directly, so you can write a simple script that takes URLs and purges them.

The naming convention of the cache files is based on the variables set in the fastcgi_cache_key directive: pass that string through MD5 hashing to get the file name, derive the directory from the hash, and delete the file.
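
A minimal sketch of that, assuming fastcgi_cache_key "$scheme$request_method$host$request_uri", levels=1:2, and a cache at /var/cache/nginx (all three come from your own config):

key="httpsGETexample.com/some/page"
hash=$(echo -n "$key" | md5sum | cut -d' ' -f1)
# with levels=1:2 the file sits at <last char>/<previous 2 chars>/<hash>
rm -f "/var/cache/nginx/${hash: -1}/${hash: -3:2}/${hash}"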

caching - Nginx reverse-proxy cache invalidation strategies - Stack Ov...

caching nginx reverse-proxy purge

TL;DR: You may be using ContainerDirectory without a HostDirectory, or you may need to update 03build.sh to build with the --no-cache=true flag.

After a bazillion hours, I finally fixed this for my use case. I am using CodePipeline to run CodeCommit, CodeBuild, and Elastic Beanstalk to create a continuous integration / continuous delivery solution in AWS with docker. The issue I ran into was that CodeBuild was successfully building and publishing new docker images to AWS ECR (EC2 Container Registry), and Elastic Beanstalk (EBS below) was correctly pulling down the new image, yet the docker image was never getting updated on the server.

After inspecting the entire process of how EBS builds the docker image (there's a really great article here, part 1 and here part 2 that gives an overview), I discovered the issue.

To add to the article, there is a 3-stage process in EBS on the EC2 instances that are spun up for deploying docker images.

This 3-stage process is a sequence of bash scripts located in /opt/elasticbeanstalk/hooks/appdeploy/.

The pre stage contains the following shell scripts:

  • 00clean_dir.sh - Cleans directory where source will be downloaded, removes docker containers and images (e.g. cleanup)
  • 01unzip.sh - Downloads source from S3 and unzips it
  • 03build.sh - This is where the magic happens: EC2 builds your docker image from your Dockerfile or Dockerrun.aws.json. After much testing, I realized this build script was building my updated image, but I modified it to include the --no-cache flag on docker build as well (see the sketch after this list).
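
The change to 03build.sh amounts to adding the flag to its build line, something like (the image tag is whatever the script already uses):

docker build --no-cache=true -t aws_beanstalk/staging-app .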

The enact stage is where my caching issue was actually occurring. The enact stage consists of:

  • 00run.sh - this is where docker run is executed against the image that was generated in the pre stage based on environment variables and settings in your Dockerrun.aws.json. This is what was causing the caching issue for me.
  • 01flip.sh - Converts from aws-staging to current-app and a lot of other stuff

When I would execute docker run against the image that was generated in the pre stage (03build.sh), I would see my updated changes. However, when the 00run.sh shell script executed, the old content would appear. After investigating the docker run command, I found it was executing

Docker command: docker run -d  -v null:/usr/share/nginx/html/ -v /var/log/eb-docker/containers/eb-current-app:/var/log/nginx  ca491178d076

The -v null:/usr/share/nginx/html/ is what was breaking it and causing it not to update. This was because my Dockerrun.aws.json file had

"Volumes": [
    {
      "ContainerDirectory": "/usr/share/nginx/html/"
    }
  ],

without a referenced host location. As a result, any future changes I made didn't get picked up.
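
If you actually need the volume, the mapping has to include both sides, along these lines (the host path is an example):

"Volumes": [
    {
      "HostDirectory": "/var/app/current/html",
      "ContainerDirectory": "/usr/share/nginx/html"
    }
  ],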

For my solution, I just removed the "Volumes" array, as all of my files are contained in the docker image I upload to ECR. Note: you may need to add --no-cache to 03build.sh as well.

amazon web services - elastic-beanstalk docker app not updating upon d...

amazon-web-services amazon-ec2 amazon-s3 docker elastic-beanstalk