When you have trouble replicating browser behavior with Scrapy, you generally want to look at what is being communicated differently when your browser talks to the website compared with when your spider does. Remember that a website is (almost always) designed to interact with web browsers, not to be nice to web crawlers.
In : request.headers
{...,
 'User-Agent': 'Scrapy/0.24.6 (+http://scrapy.org)'}
If you examine the headers your web browser sends when requesting the same page, you might see something like:
GET /blog/page/10/ HTTP/1.1
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Cookie: fealty_segment_registeronce=1; ... ... ...
Try changing the User-Agent in your request. This should allow you to get around the redirect.
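In Scrapy, one place to change it is the USER_AGENT setting in settings.py (or a spider's custom_settings). A minimal sketch, reusing the Chrome UA string from the browser capture above; any browser-like string should work:

```python
# settings.py -- project-wide override of Scrapy's default User-Agent.
# The string below is the Chrome UA from the browser capture above;
# substitute whatever UA you want your spider to present.
USER_AGENT = ('Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 '
              '(KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36')
```

You can also override it for a single request by passing headers={'User-Agent': ...} to scrapy.Request, which takes precedence over the project-wide setting.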
Thanks, changing USER_AGENT from the default 'Scrapy/0.24.6 (+http://scrapy.org)' to 'born_fitness' (or anything else) resolved the issue. Any idea why this happens only for some URLs (/page/10/ but not /page/8/), and why only for the User-Agent 'Scrapy/0.24.6 (+http://scrapy.org)'?
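One way to convince yourself that the User-Agent header is the only thing changing is to build the request yourself and inspect it before sending. A sketch using the standard library's urllib as an analogue (this is not Scrapy's API, and the URL below is illustrative):

```python
import urllib.request

# Browser-like UA taken from the capture above.
BROWSER_UA = ('Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 '
              '(KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36')

# Build (but do not send) a request carrying the custom header,
# so we can see exactly what would go over the wire.
req = urllib.request.Request('http://example.com/blog/page/10/',
                             headers={'User-Agent': BROWSER_UA})
print(req.get_header('User-agent'))  # urllib normalizes the header-key case
```

Issuing two otherwise-identical requests, one with the default Scrapy UA and one with a browser-like UA, and comparing the responses would confirm that the server is keying on that header alone.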