
If you want to do this without purchasing a hardware load balancer, you can use Windows Network Load Balancing: your clients point to a virtual IP, and requests are distributed across multiple servers inside your network. There are many load-balancing solutions that come at a price, but this one can be accomplished provided you have a Windows infrastructure with a couple of servers.
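As a rough sketch of what that server-side setup involves (a hedged example: the feature and cmdlet names come from the Windows Server NLB PowerShell module, but the interface name, node name, and virtual IP below are purely illustrative):

# On each server: install the NLB feature
Install-WindowsFeature NLB -IncludeManagementTools

# On the first node: create the cluster behind the virtual IP clients will use
New-NlbCluster -InterfaceName "Ethernet" -ClusterName "WcfCluster" -ClusterPrimaryIP 192.168.1.100

# Join the remaining servers to the cluster
Add-NlbClusterNode -NewNodeName "WEB02" -NewNodeInterface "Ethernet"

Clients then address the cluster IP (192.168.1.100 here), and NLB distributes the connections among the nodes.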

That's great for the server side, but how can it be done on the actual WCF client, especially as this won't work across connections, and the system needs to intelligently reassign the connection based on the user's interaction?

Can it support redirecting the user to a separate server after the initial connection is made? Why was this downvoted? It already got my upvote for a useful server-side setup for NLB.

@cgoddard Can it support redirecting the user to a separate server after the initial connection is made? Did you find any other solution?

Hosting the same WCF Service on multiple Servers for Load Balancing - ...

load-balancing wcf

You will need to define a machine key: http://orchardproject.net/docs/Setting-up-a-machine-key.ashx Other than that, if all the servers are working off the same share, you should be all set. There is no caching out of the box, but any modules that do caching will just cache everything on each server as needed. What will not work is triggered cache invalidation (http://orchard.codeplex.com/workitem/17361).
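For reference, the machine key goes in web.config and must be identical on every server in the farm, so that cookies and view state encrypted by one server validate on another. A minimal sketch (the key values are placeholders; generate your own):

<system.web>
  <!-- placeholder keys: generate real values and copy them to every server -->
  <machineKey validationKey="A1B2...generate-your-own...E9F0"
              decryptionKey="C3D4...generate-your-own...A5B6"
              validation="SHA1" decryption="AES" />
</system.web>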

Brill - thanks very much for the comprehensive answer, Bertrand. And great stuff on Orchard; I think I read someone describe the 'clean code' as very appealing. +1 for that.

asp.net - Orchard CMS on Load Balanced web servers - Stack Overflow

asp.net asp.net-mvc asp.net-mvc-3 load-balancing orchardcms

Firstly, your diagram assumes that the load balancer is acting as a (TCP) proxy, which is not always the case. Often Direct Routing (or Direct Server Return) is used, or Destination NAT is performed. In both cases the connection between the backend server and the client is direct, so in this case it is essentially the TCP handshake that is distributed amongst the backend servers. See the following for more info:

Obviously TCP proxies do exist (HAProxy being one), in which case the proxy manages both sides of the connection, so your app would need to be able to identify the client by the incoming IP/port (which would happen to be from the proxy rather than the client). The proxy will handle getting the messages back to the client.

Either way, it comes down to application design. I would imagine the tricky bit is having a common session store (a database of some kind, or a key=>value store such as Redis), so that when your app server says "I need to send a message to Frank" it can determine which backend server Frank is connected to (from the DB) and signal that server to send him the message. You reduce the problem of connections (from the same client) moving around between backend servers by having persistent connections (all load balancers can do this), or by using something intrinsically persistent like a websocket.
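To make the shared-store idea concrete, here is a minimal sketch using the PHP Redis extension (the conn:<user> key convention, server name, and TTL are illustrative, not part of any standard):

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// When Frank's connection lands on this backend, record which server holds it.
// The entry expires after an hour so stale mappings clean themselves up.
$redis->setex('conn:frank', 3600, 'backend-02');

// Later, any app server can look up where Frank is connected
// and signal that backend to deliver the message.
$server = $redis->get('conn:frank');   // "backend-02", or false if unknown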

This is probably a vast oversimplification as I have no experience with chat software. Obviously DB servers themselves can be distributed amongst several machines, for fault-tolerance and load balancing.

Thanks for the informative answer. I'll have to look into those topics to understand more. Do you know of any good resources for learning more about load balancing? Thanks!

How do you load balance TCP traffic? - Stack Overflow

tcp load-balancing

Memcached if you need to preserve state across several web servers (if you're load balanced and it's important that what's in the cache is the same for all servers).

APC if you just need fast local memory to read (and write) on a single (or each individual) server.

Remember APC can also compile and speed up your script execution time. So you could for example be using APC for increased execution performance, while using memcached for cache storage.
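To illustrate the split, a minimal sketch assuming both the Memcached and APC PHP extensions are installed (server address, keys, and TTLs are illustrative; with APCu the calls become apcu_store/apcu_fetch):

// Shared cache: the same value is visible from every load-balanced server.
$mc = new Memcached();
$mc->addServer('10.0.0.5', 11211);
$mc->set('site:motd', 'Hello, world', 300);     // 5-minute TTL
$motd = $mc->get('site:motd');

// Local cache: fast per-server memory, not shared across the farm.
apc_store('local:motd', 'Hello, world', 300);
$motd = apc_fetch('local:motd');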

APC can do cache storage too... can't it?

2014 update: PHP 5.5 natively includes Zend Optimizer Plus (which is only an opcode cache, so no user cache), and it seems APC will not be developed beyond PHP 5.4. However, there is now APCu (pecl.php.net/package/APCu), which took only the user-cache parts of APC.

php - Memcached vs APC which one should I choose? - Stack Overflow

php caching memcached apc

WCF requires security tokens to be passed over a secure transport if the message itself is not signed/encrypted. Since traffic is HTTP between your Big-IP and your individual web servers, you need a way for security tokens that you know are secured between the client and the Big-IP to still be passed to your server farm. There are a couple of ways to do that, depending on which version of WCF you're using:

If you're using WCF 4.0, you can just create a custom binding and set the AllowInsecureTransport property on the built-in SecurityBindingElement to signify that you don't care that the transport isn't secure.

If you're using WCF 3.5 you have to "lie" about security with a custom TransportSecurityBindingElement on the server side. You can read my old post about this here.

FWIW, they created a hotfix release for 3.5 SP1 that adds the AllowInsecureTransport to that version, but I don't know if your company will allow you to install custom hotfixes.
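For the WCF 4.0 route, the custom binding can also be expressed in config. A minimal sketch, assuming message credentials over an HTTP transport (the binding name and authenticationMode are illustrative; allowInsecureTransport is, to my understanding, the config counterpart of the property mentioned above):

<customBinding>
  <binding name="allowInsecureUserName">
    <security authenticationMode="UserNameOverTransport"
              allowInsecureTransport="true" />
    <textMessageEncoding />
    <httpTransport />
  </binding>
</customBinding>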

We could never get AllowInsecureTransport to work on the secure side bindings. Good luck.

The only problem is that the OP doesn't know what he wants. The title asks for load balancing with transport security, while the question asks for load balancing with message security.

TransportWithMessageCredential:

<security mode="TransportWithMessageCredential">
    <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
    <message clientCredentialType="Certificate" />
</security>

How does WCF + SSL working with load balancing? - Stack Overflow

wcf ssl load-balancing

This is round-robin DNS, a quite simple approach to load balancing. Usually DNS servers rotate/shuffle the DNS records for each incoming DNS request. Unfortunately it's not a real solution for fail-over: if one of the servers fails, some visitors will still be directed to the failed server.
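For illustration, round-robin DNS is nothing more than several A records for the same name in the zone file (names and addresses are examples):

www   300   IN   A   192.0.2.10
www   300   IN   A   192.0.2.11
www   300   IN   A   192.0.2.12

Resolvers receive all the records, typically in rotated order, so traffic spreads roughly evenly; the low TTL (300 seconds) limits how long a failed address keeps being handed out.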

do you have any pointers for handling failovers?

It depends on your expectations regarding recovery time. If the provided service can be down for some seconds or minutes, you can update the DNS accordingly (e.g. take out the IPs of failed systems). If this is not acceptable, DNS cannot help you and you have to use load balancers and a high-availability network structure.

dns - Is it possible that one domain name has multiple corresponding I...

dns

TraumaPony is right. Tons of servers and smart architecture for load balancing/caching, and voila, you can run a query in under 1 second. There are a lot of articles on the net describing Google's services architecture. I'm sure you can find them via Google :)

performance - How can Google be so fast? - Stack Overflow

performance algorithm

Say, for argument's sake, unsecure traffic stays on port 80 and secure traffic is forwarded to a random port x. Doesn't that mean going straight to domain:x bypasses the secure certificate? Users might not realize that they're sending communication unencrypted.

Users wouldn't see the addresses behind the firewall, as the firewall acts as a kind of proxy - it communicates with the machines behind it, or routes and forwards packets back and forth. (This isn't strictly correct, but it's an OK description of what the user would see.)

Check out the PCI DSS standards, as this leaves you vulnerable to having the traffic intercepted within the network.

asp.net - How to set up SSL in a load balanced environment? - Stack Ov...

asp.net wcf infrastructure load-balancing

Latency is killed by disk accesses, so it's reasonable to believe that all data used to answer queries is kept in memory. This implies thousands of servers, each replicating one of many shards. Therefore the critical path for search is unlikely to hit any of their flagship distributed-systems technologies (GFS, MapReduce or BigTable); crudely speaking, those will be used to process crawler results.

The handy thing about search is that there's no need to have either strongly consistent results or completely up-to-date data, so Google are not prevented from responding to a query because a more up-to-date search result has become available.

So a possible architecture is quite simple: front-end servers process the query, normalising it (possibly by stripping out stop words etc.), then distribute it to whatever subset of replicas owns that part of the query space (an alternative architecture is to split the data up by web pages, so that one of every replica set needs to be contacted for every query). Many, many replicas are probably queried, and the quickest responses win. Each replica has an index mapping queries (or individual query terms) to documents, which it can use to look up results in memory very quickly. If different results come back from different sources, the front-end server can rank them as it spits out the HTML.
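As a toy illustration of the normalise-then-look-up step (the stop-word list and index contents are made up, and a real system would shard this index across thousands of machines):

// Tiny in-memory inverted index: term => ids of documents containing it.
$index = [
    'load'      => [1, 4, 7],
    'balancing' => [4, 7, 9],
];

// Normalise: lower-case, split on non-word characters, drop stop words.
function normalise($query) {
    $stopWords = ['the', 'a', 'of', 'how'];
    $terms = preg_split('/\W+/', strtolower($query), -1, PREG_SPLIT_NO_EMPTY);
    return array_values(array_diff($terms, $stopWords));
}

// Documents matching every term: intersect the posting lists.
function search($index, $query) {
    $postings = [];
    foreach (normalise($query) as $term) {
        $postings[] = isset($index[$term]) ? $index[$term] : [];
    }
    if (!$postings) return [];
    $result = array_shift($postings);
    foreach ($postings as $p) {
        $result = array_intersect($result, $p);
    }
    return array_values($result);
}

print_r(search($index, 'How the load balancing?')); // [4, 7]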

Note that this is probably a long way different from what Google actually do - they will have engineered the life out of this system so there may be more caches in strange areas, weird indexes and some kind of funky load-balancing scheme amongst other possible differences.

performance - How can Google be so fast? - Stack Overflow

performance algorithm

It sounds like (though I could be wrong) you have multiple IIS servers, each maintaining its own state. If you're load balancing, there's no guarantee you'll hit the same server twice. So while server A has the state information you're looking for, server B will not.

Are you load balancing? If so, you might want to use SQL server for state management.
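A minimal sketch of what that looks like in web.config (the connection string is illustrative, and the state database has to be prepared first, e.g. with the aspnet_regsql.exe tool):

<system.web>
  <!-- illustrative connection string: point it at the shared state database -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=SQL01;Integrated Security=True"
                timeout="20" />
</system.web>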

c# - session state losing value - Stack Overflow

c# asp.net session

My solution (because the standard condition [$_SERVER['HTTPS'] == 'on'] does not work on servers behind a load balancer) is:

$isSecure = false;
if (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] == 'on') {
    // TLS terminated directly on this server.
    $isSecure = true;
}
elseif ((!empty($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
    || (!empty($_SERVER['HTTP_X_FORWARDED_SSL']) && $_SERVER['HTTP_X_FORWARDED_SSL'] == 'on')) {
    // TLS terminated at the load balancer / reverse proxy, which reports
    // the original scheme via an X-Forwarded-* request header.
    $isSecure = true;
}
$REQUEST_PROTOCOL = $isSecure ? 'https' : 'http';

HTTP_X_FORWARDED_PROTO: a de facto standard for identifying the originating protocol of an HTTP request, since a reverse proxy (load balancer) may communicate with a web server using HTTP even if the request to the reverse proxy is HTTPS. http://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Common_non-standard_request_headers

My issue got resolved with this solution. (PHP - AWS Elastic beanstalk)

This is the solution if you use load balancers.

What type of file are you sticking this in? I am assuming this is not in the .htaccess file?

php - How to find out if you're using HTTPS without $_SERVER['HTTPS'] ...

php https

What if you need to scale your app by load balancing more than one web server? You could install the full app on all web servers, but a better solution is to have the web servers talk to an application server.

But if there aren't any entity objects, they won't have very much to talk about.

I'm not saying that you shouldn't write monoliths if it's a simple, internal, short-lived application. But as soon as it gets moderately complex, or has to last a significant amount of time, you really need to think about a good design.

By splitting application logic from presentation logic and data access, and by passing DTOs between them, you decouple them, allowing them to change independently.
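As a minimal sketch of the DTO idea, in PHP for concreteness (the class and field names are made up): the presentation layer sees only a flat, serialisable object, so the data access behind it can change freely.

// A DTO is just dumb data crossing a layer boundary: no behaviour,
// no database handles, nothing the presentation layer can misuse.
class CustomerDto
{
    public $id;
    public $name;
    public $email;

    public function __construct($id, $name, $email)
    {
        $this->id = $id;
        $this->name = $name;
        $this->email = $email;
    }
}

// The application server maps whatever the data layer returns
// (a row, an ORM entity, a stored-procedure result) into the DTO.
function toCustomerDto(array $row)
{
    return new CustomerDto((int)$row['id'], $row['name'], $row['email']);
}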

Lots of people are bringing up de-coupling, and allowing one layer to change without affecting the other. Stored procedures do this better than any ORM! I can radically alter the data model, and as long as the procedures return the same data, nothing breaks.

In my opinion stored procedures AND an entity model are not mutually exclusive. Stored procedures can provide a mechanism to store your entity model. The question is: does your business logic work with the entities, or does it access the stored procedures directly?

sql - Why do we need entity objects? - Stack Overflow

sql database orm entities

I don't think serving a few images and static HTML from Node.js itself will ever be a bottleneck. Ideally, a front-end proxy like nginx is required if you need to load balance between multiple servers, and also for exposing your internal HTTP services as HTTPS traffic. If you don't have that requirement, it would be overkill IMHO.
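For the case where you do need it, a minimal nginx sketch covering both points, balancing across several Node instances and terminating HTTPS (ports, server_name, and certificate paths are illustrative):

upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;   # add more Node instances to balance across
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://node_app;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;    # let websocket upgrades through
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}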

I was thinking of that primarily because of decoupling, so I could have a websockets-only Node.js app. So doing both HTTP and websockets in one app is more than OK? I really don't want it to grow into a big ball of mud.

I know. Maybe the whole nginx idea from the start was a little over engineered... :)

Well, as far as decoupling is concerned, you can have your websocket traffic connect to a different port (the one socket.io runs on) from your simple HTML server's pages.

And if you really have to go for a proxy, there is a good option: github.com/nodejitsu/node-http-proxy

javascript - Nginx (serving html) and Node.js setup - Stack Overflow

javascript node.js nginx client-side

This is a pure algorithmic method of generating random but unique numbers without arrays, lists, permutations or heavy CPU load.

The latest version also allows setting the range of numbers; for example, unique random numbers in the range 0-1073741821.

  • Pixel wise video frames dissolving effect (fast and smooth)
  • Creating a secret "noise" fog over image for signatures and markers (steganography)
  • Data Object IDs for serialization of huge amount of Java objects via Databases
  • Address+value encryption (every byte is not just encrypted but also moved to a new, encrypted location in the buffer). This really made the cryptanalysis fellows mad at me :-)

Could that method work for a decimal value, e.g. scrambling a 3-digit decimal counter to always have a 3-digit decimal result?

As an example of the Xorshift algorithm: it's an LFSR, with all the related kinks (e.g. values more than k apart in the sequence can never occur together).
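To make the LFSR/Xorshift point concrete: a 32-bit xorshift generator started from any nonzero seed visits every value in 1..2^32-1 exactly once before the cycle repeats, which is exactly the unique, non-repeating property with O(1) state. A sketch in PHP (13/17/5 is the classic shift triple; to restrict the output to a smaller range, simply skip values that fall outside it):

// Xorshift32: full period over the nonzero 32-bit integers.
function xorshift32($x) {
    $x ^= ($x << 13) & 0xFFFFFFFF;   // masks keep the value in 32 bits
    $x ^= $x >> 17;
    $x ^= ($x << 5) & 0xFFFFFFFF;
    return $x;
}

$state = 0x12345678;                 // any nonzero 32-bit seed
for ($i = 0; $i < 5; $i++) {
    $state = xorshift32($state);
    printf("%u\n", $state);          // five distinct values, never repeating
}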

algorithm - Unique (non-repeating) random numbers in O(1)? - Stack Ove...

algorithm math random language-agnostic

If you're running a load balancer, you have to make sure your servers hit a common point for session data. By default PHP stores sessions in the local file system; that becomes a problem if your load balancer sends you from server A to server B, where that session file doesn't exist. You could set up a network share and make sure all web servers use it. For example, create an NFS share and then set session_save_path in code or within php.ini:

session_save_path('/your/nfs/share/here');

Another option is to write your own session handler that puts sessions into your database. You could then use something like memcached to store the sessions so that you won't hammer your DB every time you read your session data.
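A minimal sketch of the memcached variant, via a custom session handler (assuming the PHP Memcached extension; the key prefix and TTL are illustrative, and error handling is omitted):

// Every web server reads/writes sessions from the same shared store
// instead of its local file system.
class MemcachedSessionHandler implements SessionHandlerInterface
{
    private $mc;
    private $ttl;

    public function __construct(Memcached $mc, $ttl = 1440)
    {
        $this->mc = $mc;
        $this->ttl = $ttl;
    }

    public function open($path, $name) { return true; }
    public function close()            { return true; }

    public function read($id)
    {
        $data = $this->mc->get('sess:' . $id);
        return $data === false ? '' : $data;   // '' means "no session yet"
    }

    public function write($id, $data)
    {
        return $this->mc->set('sess:' . $id, $data, $this->ttl);
    }

    public function destroy($id)
    {
        $this->mc->delete('sess:' . $id);
        return true;
    }

    public function gc($max_lifetime) { return true; }  // TTLs handle expiry
}

$mc = new Memcached();
$mc->addServer('10.0.0.5', 11211);   // shared across all web servers
session_set_save_handler(new MemcachedSessionHandler($mc), true);
session_start();

The same interface can point at a database table instead; memcached just spares the DB a round trip on every request.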

@Marki555 Interesting link. I'd like to see how well it runs under something faster than traditional magnetic drives (i.e. AWS Provisioned IOPS). That question is almost 3 years old. And I do prefer Memcached myself but it's overkill for low traffic.

Why does this PHP script (called by AJAX) randomly not load SESSION co...

php ajax session session-variables

Do you have clustered servers with load balancing? If yes, you may want to enable sticky sessions, so that all requests from one session go to the same server.

If you have server clustering:

1) When you store the session in memory, you want to make sure all requests go to the same server; otherwise you would get this error on the server, but not locally.

2) If you just enable sticky sessions, this error might go away.

3) If the error is because of clustering, then you cannot reproduce it locally.

4) If you cannot enable sticky sessions on the load balancer, then you may have to either store the session in a file accessible to all clustered servers or store it in the DB.

The first part of your answer is a question to OP, which should be a comment. Given OP didn't answer the question, the second part of your answer is a guess, which should also be a comment. Only answer when the problem is clear, use comments otherwise.

@CodeCaster - While I agree that asking questions should be a comment, you can certainly answer based on a hypothesis, such as "if you are doing xxxx then do this". It's an answer, and even if it isn't a good answer for this question, it might answer other people's questions when they find this question as part of their search; the qualification of what it applies to helps.

@CodeCaster - I agree, in a perfect Q&A world... a lot of times the asker doesn't come back to the question for several days, though. Regardless, this is just one of the reasons I think the workflow of SO needs a rethink, but that's a different argument.

c# - Asp MVC session lost after RedirectToAction - Stack Overflow

c# asp.net asp.net-mvc-4 session

I would not recommend this. Balancing seems simple to most developers at first ("hey, I'll just keep track of each request and forward the next one to the next server in line, etc."), but in reality it is anything but trivial. You need to think about maintaining load quotas per server, handling servers that go down, etc.

If you're already running Server 2008 it's probably cheaper and easier (and far more performant) to use the NLB features of the OS instead of coming up with your own. This for example is a good walkthrough of setting up an NLB cluster.

Ultimately of course the approach is up to you, but I think using the right tools for the job is always a good idea. Re-inventing round robin IP clustering in a WCF service seems like a waste of time if you have that baked into the OS already.

Thanks @kprobst! In this particular edge case, I need to pretend as if each server is a Singleton, able to serve only 1 client at a time. If any particular server is busy, the load balancer will need to look for another server that is not busy. There is no ASP session or state to worry about. Does that make you think any differently?

@Hairgami_Master But how would you decide if a particular server is busy? That kind of complexity can get really old really quickly as you try to scale this to a real-world scenario. Like I said, if your balancing needs are relatively simple then I suppose your approach would work, but it's not something I would choose as a long-term solution.

I was hoping there might be a simple way to make an IHttpHandler a singleton... Something that would make a simple request to that handler fail if it was busy; something that I could easily detect and move on and try the next server if it was busy. Perhaps there isn't a way to do this. Thanks so much!

wcf - In ASP.NET, is there a way to write a custom Load Balancer? - St...

asp.net wcf load-balancing

I've used IoC containers (Spring.NET and StructureMap) in several production apps under high load (not Facebook/MySpace high, but enough to stress out a few servers).

In my experience, even before I started using IoC, the biggest perf concern is the database and the interaction with the database -- optimizing queries, indexes, using 2nd-level caching, etc.

If you have a database involved in your app, then whatever performance hit Windsor or any other container may cause will be infinitesimally small compared to a DB round trip.

This is like people who compare the performance hit of the new() operator vs. Activator.CreateInstance() at 1-10 ms, when a single DB round trip is usually an order of magnitude more expensive.

I would advise you not to worry about the small stuff and concentrate on the big stuff.

Also, I'd like to advise you to look at StructureMap, as I believe it has a lot more functionality than Windsor and doesn't have a lot of Windsor's shortcomings (i.e. holding on to references and requiring you to release them, etc).

-1. Resource acquisition is initialization; how would you verify the lifetimes of your objects when there's no metadata about the lifestyle that propagates across the resolve call? Or what about disposable objects being depended on by what you are resolving: how would you make sure to release those manually? It's not a shortcoming. In MVC, e.g., the release part is normally coupled to either the controller or the per-request lifestyle, and Windsor handles it for you!

I've used StructureMap and Windsor and with Windsor I have to do a lot more messing with lifecycles and configuring it. With StructureMap, it just works. It handles all the same scenarios, but in a more obvious way. Windsor makes you think about it more which isn't necessary 99% of the time.

.net - Castle Windsor are there any downsides? - Stack Overflow

.net asp.net dependency-injection inversion-of-control castle-windsor