A race condition occurs when two or more threads can access shared data and they try to change it at the same time. Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread scheduling algorithm, i.e. both threads are "racing" to access/change the data.

Problems often occur when one thread does a "check-then-act" (e.g. "check" if the value is X, then "act" to do something that depends on the value being X) and another thread does something to the value in between the "check" and the "act". For example:

```
if (x == 5) // The "Check"
{
    y = x * 2; // The "Act"

    // If another thread changed x in between "if (x == 5)" and "y = x * 2" above,
    // y will not be equal to 10.
}
```

The point being, y could be 10, or it could be anything, depending on whether another thread changed x in between the check and act. You have no real way of knowing.

In order to prevent race conditions from occurring, you would typically put a lock around the shared data to ensure only one thread can access the data at a time. That would look something like this:

```
// Obtain lock for x
if (x == 5)
{
    y = x * 2; // Now, nothing can change x until the lock is released.
               // Therefore y = 10
}
// Release lock for x
```

What does the other thread do when it encounters the lock? Does it wait? Error?

Yes, the other thread will have to wait until the lock is released before it can proceed. This makes it very important that the lock is released by the holding thread when it is finished with it. If it never releases it, then the other thread will wait indefinitely.

@Ian In a multithreaded system there will always be times when resources need to be shared. To say that one approach is bad without giving an alternative just isn't productive. I'm always looking for ways to improve, and if there is an alternative I will gladly research it and weigh the pros and cons.

@Despertar ...also, it's not necessarily the case that resources will always need to be shared in a multi-threaded system. For example, you might have an array where each element needs processing. You could partition the array, give each partition its own thread, and let the threads do their work completely independently of one another.

For a race to occur it's enough that a single thread attempts to change the shared data while the rest of the threads merely read it.

## multithreading - What is a race condition? - Stack Overflow

Your note about threads is worrisome. I'm pretty sure you have a race condition that can lead to a crash. If a thread deletes an object and then zeroes the pointer, another thread could make a call through that pointer between those two operations, leaving `this` non-null but no longer valid, resulting in a crash. Similarly, if a thread calls a method while another thread is in the middle of creating the object, you may also get a crash.

Short answer: you really need to use a mutex or something to synchronize access to this variable. You need to ensure that `this` is never null or you're going to have problems.

"You need to ensure that `this` is never null" - I think a better approach is to ensure that the left operand of `operator->` is never null :) but aside from that, I wish I could +10 this.

## c++ - Checking if this is null - Stack Overflow


I think it can be related to a race condition that occurs when several threads run simultaneously. The `static` keyword only restricts the scope of the variable, so it's not a solution -- use something like a mutex to exclude the race condition. As for keeping a variable across function calls within a connection, you'll need to store it in connection-related structures (see `request_rec->notes` or `request_rec->connection->notes`, for example).

Can you be more specific about "connection-related structures"? I've checked the Apache API docs and there is no `request_rec->connection->notes`. `request_rec->notes` doesn't fit my case.

Another comment: I don't think this is a race condition issue. The first test I made was to issue the same request twice from the browser. It takes me approximately 0.5 second to press F5 in the browser, and handling a request takes approximately 0.1 second or less. There is no intersection between the two requests, so there is no room for a race condition.

You looked at the v1.3 API; I pointed to v2.0. I think that with the 1.3 version you'll need to manage structures like connection "notes" by hand... As for the race condition, it seems you are right: there are no races.

## c - Static Variables in Apache module are initialized more then once? ...
