If you need to display all time units, e.g. "hours minutes seconds", not just "hours": say the time difference between two dates is 1 hour, 59 minutes, 20 seconds. This function will display "1h 59m 20s". First, the Swift 2 version:

extension NSDate {

    func offsetFrom(date:NSDate) -> String {

        let dayHourMinuteSecond: NSCalendarUnit = [.Day, .Hour, .Minute, .Second]
        let difference = NSCalendar.currentCalendar().components(dayHourMinuteSecond, fromDate: date, toDate: self, options: [])

        let seconds = "\(difference.second)s"
        let minutes = "\(difference.minute)m" + " " + seconds
        let hours = "\(difference.hour)h" + " " + minutes
        let days = "\(difference.day)d" + " " + hours

        if difference.day    > 0 { return days }
        if difference.hour   > 0 { return hours }
        if difference.minute > 0 { return minutes }
        if difference.second > 0 { return seconds }
        return ""
    }

}

And the Swift 3+ equivalent:

extension Date {

    func offsetFrom(date: Date) -> String {

        let dayHourMinuteSecond: Set<Calendar.Component> = [.day, .hour, .minute, .second]
        let difference = Calendar.current.dateComponents(dayHourMinuteSecond, from: date, to: self)

        let seconds = "\(difference.second ?? 0)s"
        let minutes = "\(difference.minute ?? 0)m" + " " + seconds
        let hours = "\(difference.hour ?? 0)h" + " " + minutes
        let days = "\(difference.day ?? 0)d" + " " + hours

        if let day = difference.day, day          > 0 { return days }
        if let hour = difference.hour, hour       > 0 { return hours }
        if let minute = difference.minute, minute > 0 { return minutes }
        if let second = difference.second, second > 0 { return seconds }
        return ""
    }

}
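
A quick usage sketch of the Date version (the interval below is illustrative):

let earlier = Date(timeIntervalSinceNow: -(1 * 3600 + 59 * 60 + 20)) // 1h 59m 20s ago
print(Date().offsetFrom(date: earlier)) // ≈ "1h 59m 20s"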

This is exactly what I need: simple, and it gets the job done.

ios - Getting the difference between two NSDates in (months/days/hours...

ios swift osx swift2 nsdate

This is an odd case. You're looking for the difference in calendar dates between two Dates when those dates are evaluated in a specific time zone.

I did some playing, and came up with code that works for dates that fall in the same year:

guard let nycTimeZone = TimeZone(abbreviation: "EST"),
  let nzTimeZone = TimeZone(abbreviation: "NZDT") else {
    fatalError()
}
var nycCalendar = Calendar(identifier: .gregorian)
nycCalendar.timeZone = nycTimeZone
var nzCalendar = Calendar(identifier: .gregorian)
nzCalendar.timeZone = nzTimeZone

let now = Date()

let nycDayOfYear = nycCalendar.ordinality(of: .day, in: .year, for: now)

let nzDayOfYear = nzCalendar.ordinality(of: .day, in: .year, for: now)

I'm using New York and Auckland, NZ as my time zones because, as of this writing, those zones are on different dates.

As of now (~12:00 PM on Feb 11, 2017, US Eastern Standard Time (UTC-5)) the code above gives

nycDayOfYear = 42

and

nzDayOfYear = 43

It would take some work to make that calculation work across year boundaries.

Curiously, the following code:

let nzDayOfEra = nzCalendar.ordinality(of: .day, in: .era, for: now)
let nycDayOfEra = nycCalendar.ordinality(of: .day, in: .era, for: now)

gives the same value for both NZ and NYC. I'm not sure why.

OK, I did some experimenting and got code that works. What I do is convert both dates to month/day/year date components using a calendar set to the local time zone of each location. Then I use the method dateComponents(_:from:to:) to calculate the difference between those 2 sets of DateComponents, in days:

import UIKit

guard let nycTimeZone = TimeZone(abbreviation: "EST"),
  let nzTimeZone = TimeZone(abbreviation: "NZDT") else {
    fatalError()
}
var nycCalendar = Calendar(identifier: .gregorian)
nycCalendar.timeZone = nycTimeZone
var nzCalendar = Calendar(identifier: .gregorian)
nzCalendar.timeZone = nzTimeZone

let now = Date()


let nycDateComponents = nycCalendar.dateComponents([.month, .day, .year], from: now)
let nzDateComponents = nzCalendar.dateComponents([.month, .day, .year], from: now)

let difference = Calendar.current.dateComponents([.day],
  from: nycDateComponents,
    to: nzDateComponents)

let daysDifference = difference.day

As of this writing that gives a daysDifference of 1. Since we're using the dateComponents(_:from:to:) function, it takes care of the math to calculate the number of days difference between the 2 month/day/year DateComponents.
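
Wrapped up as a reusable helper (a sketch; the function name and the fallback to 0 are my own choices, not from the original answer):

func calendarDayDifference(at instant: Date, from zone1: TimeZone, to zone2: TimeZone) -> Int {
    // Express the same instant as year/month/day in each zone's calendar...
    var cal1 = Calendar(identifier: .gregorian)
    cal1.timeZone = zone1
    var cal2 = Calendar(identifier: .gregorian)
    cal2.timeZone = zone2
    let comps1 = cal1.dateComponents([.year, .month, .day], from: instant)
    let comps2 = cal2.dateComponents([.year, .month, .day], from: instant)
    // ...then let Calendar do the day arithmetic between the two calendar dates.
    return Calendar.current.dateComponents([.day], from: comps1, to: comps2).day ?? 0
}

calendarDayDifference(at: Date(), from: nycTimeZone, to: nzTimeZone) // 1 as of this writing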

What you are actually doing is taking two dates and getting the difference between them with respect to the local time zone. The first thing you did was to remove the time zone information... This is exactly the result you would get if you had two datetime strings (including the time zone) and parsed them into your local time zone and then got the number of days between them.

ios - How to calculate days difference between two NSDate objects in d...

ios swift nsdate nsdatecomponents

An NSDate represents a moment in time. It has no time zone. Only string representations of dates have time zone information.

If you have the dates, just take the number of days between them. Don't worry about time zones.

let difference = calendar.components(.Day, fromDate: self, toDate: toDate!, options: [])
return difference.day

should be enough.
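
For reference, a minimal Swift 3+ equivalent of that snippet (the function name and the fallback to 0 are illustrative):

func daysBetween(_ from: Date, _ to: Date, calendar: Calendar = .current) -> Int {
    // dateComponents(_:from:to:) does the calendar-aware day arithmetic.
    return calendar.dateComponents([.day], from: from, to: to).day ?? 0
}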

Minor nitpicking: "just take the number of days ... Don't worry about time zones" may be misleading. The difference in days only makes sense with respect to a calendar and a time zone. That can of course be the default calendar and the local time zone.

"Only string representations of dates have time zone information." No, that's not exactly true. Consider converting an NSDate to DateComponents. The year/month/day/hour/minute values you get will be different depending on time zone. The expression of a Date object in a particular calendar depends on the time zone, regardless of wether that representation is numeric or string-based.

The OP is asking about an unusual problem: figuring out the difference in calendar dates for 2 different locations on Earth that are in different time zones. For example, right now it's Feb 13th in Auckland, NZ, but Feb 12th in New York City. The OP wants a result of 1 day difference in that case. He WANTS to worry about time zones.

ios - How to calculate days difference between two NSDate objects in d...

ios swift nsdate nsdatecomponents

If the camera has not moved and the background has not changed (as appears to be the case from the sample photo), the difference in global illumination is likely due (for the most part) to one of two factors: a camera with auto-exposure making a different choice of f/stop or exposure time when the subject is in the scene, or a time-varying light source within the exposure time window (e.g. 60 Hz line hum in the lamp). The latter can be excluded if the illuminator is a strobe (and enough time between shots is given for the strobe to recharge).

I say "for the most part" above because, with the subject taking up a large portion of the frame, light reflected from it does affect global illumination as well, but in your case this is likely a second order effect.

Your best approach is likely to better control the capture: at a minimum, disable auto-exposure in the camera, and use a ballasted light (if it's not a strobe).

If you cannot (or in addition), you should start with global histogram alignment rather than equalization. Global histogram equalization, as suggested by other posters, is likely to hurt, because the subject's pixel values will be part of the histogram, not just the background. However, if the camera is not moved, you can pre-identify regions of the image frame that are known to be background, and sample the histogram only from them in both the "background only" and "with subject" images. You can then find the values at, say, the top and bottom 5% of the dynamic range, and just apply a global scaling so they match.

I completely agree. Controlling the exposure will probably not work in this case because the object considerably affects the illumination of the setup, so adjusting for it seems to be the best approach. Sadly, I haven't found any tutorials or functions in OpenCV for aligning two images' histograms. I will update my post later with some code once I've hacked something together. If you have any resources on histogram alignment, I'd be glad to get some pointers!

The "alignment" I'd start with is a simple linear scaling. Compute the pixel values at the 5% and 95% of the histogram in both images: they define the "effective dynamic range" of the two images (of course you can play a bit with the boundaries, 5 and 95 are just good places to start), then compute the shift and scale that, when applied to image 2, make its effective dynamic range equal to that of image 1. You can try it in Gimp or Photoshop for a few sample images ("adjust levels..."), and see how well it'd work without having to write a single line of code.

I think I see what you mean, but I started by implementing what amounts to a look-up table following this procedure, and a simple shift by the average difference in brightness in the reference area. You can see the results in these pictures. Strangely, the simplest of those methods seems to have worked best. I marked your post as the accepted answer because this is evidently the right first step, although obscured details in the reference picture still make the distinction of the different regions difficult.

c++ - Find the differences between two noisy, lighting variant images ...

c++ opencv image-processing computer-vision

It's important to understand that there are two aspects to thread safety: (1) execution control, and (2) memory visibility. The first has to do with controlling when code executes (including the order in which instructions are executed) and whether it can execute concurrently, and the second to do with when the effects in memory of what has been done are visible to other threads. Because each CPU has several levels of cache between it and main memory, threads running on different CPUs or cores can see "memory" differently at any given moment in time because threads are permitted to obtain and work on private copies of main memory.

Using synchronized prevents any other thread from obtaining the monitor (or lock) for the same object, thereby preventing all code blocks protected by synchronization on the same object from executing concurrently. Synchronization also creates a "happens-before" memory barrier, causing a memory visibility constraint such that anything done up to the point some thread releases a lock appears to another thread subsequently acquiring the same lock to have happened before it acquired the lock. In practical terms, on current hardware, this typically causes flushing of the CPU caches when a monitor is acquired and writes to main memory when it is released, both of which are (relatively) expensive.

Using volatile, on the other hand, forces all accesses (read or write) to the volatile variable to occur to main memory, effectively keeping the volatile variable out of CPU caches. This can be useful for some actions where it is simply required that visibility of the variable be correct and order of accesses is not important. Using volatile also changes treatment of long and double to require accesses to them to be atomic; on some (older) hardware this might require locks, though not on modern 64 bit hardware. Under the new (JSR-133) memory model for Java 5+, the semantics of volatile have been strengthened to be almost as strong as synchronized with respect to memory visibility and instruction ordering (see http://www.cs.umd.edu/users/pugh/java/memoryModel/jsr-133-faq.html#volatile). For the purposes of visibility, each access to a volatile field acts like half a synchronization.

Under the new memory model, it is still true that volatile variables cannot be reordered with each other. The difference is that it is now no longer so easy to reorder normal field accesses around them. Writing to a volatile field has the same memory effect as a monitor release, and reading from a volatile field has the same memory effect as a monitor acquire. In effect, because the new memory model places stricter constraints on reordering of volatile field accesses with other field accesses, volatile or not, anything that was visible to thread A when it writes to volatile field f becomes visible to thread B when it reads f.

So, now both forms of memory barrier (under the current JMM) cause an instruction re-ordering barrier which prevents the compiler or run-time from re-ordering instructions across the barrier. In the old JMM, volatile did not prevent re-ordering. This can be important, because apart from memory barriers the only limitation imposed is that, for any particular thread, the net effect of the code is the same as it would be if the instructions were executed in precisely the order in which they appear in the source.

One use of volatile is when a shared but immutable object is recreated on the fly, with many other threads taking a reference to the object at a particular point in their execution cycle (for example, at the start of handling a message). The other threads need to begin using the recreated object as soon as it is published, but do not need the additional overhead of full synchronization and its attendant contention and cache flushing.

// Declaration
public class SharedLocation {
    static public SomeObject someObject=new SomeObject(); // default object
    }

// Publishing code
// Note: do not simply use SharedLocation.someObject.xxx(), since although
//       someObject will be internally consistent for xxx(), a subsequent 
//       call to yyy() might be inconsistent with xxx() if the object was 
//       replaced in between calls.
SharedLocation.someObject=new SomeObject(...); // new object is published

// Using code
private String getError() {
    SomeObject myCopy=SharedLocation.someObject; // gets current copy
    ...
    int cod=myCopy.getErrorCode();
    String txt=myCopy.getErrorText();
    return (cod+" - "+txt);
    }
// And so on, with myCopy always in a consistent state within and across calls
// Eventually we will return to the code that gets the current SomeObject.

Speaking to your read-update-write question specifically, consider the following unsafe code:

public void updateCounter() {
    if(counter==1000) { counter=0; }
    else              { counter++; }
    }

Now, with the updateCounter() method unsynchronized, two threads may enter it at the same time. Among the many permutations of what could happen, one is that thread-1 does the test for counter==1000 and finds it true and is then suspended. Then thread-2 does the same test and also sees it true and is suspended. Then thread-1 resumes and sets counter to 0. Then thread-2 resumes and again sets counter to 0 because it missed the update from thread-1. This can also happen even if thread switching does not occur as I have described, but simply because two different cached copies of counter were present in two different CPU cores and the threads each ran on a separate core. For that matter, one thread could have counter at one value and the other could have counter at some entirely different value just because of caching.

What's important in this example is that the variable counter was read from main memory into cache, updated in cache and only written back to main memory at some indeterminate point later when a memory barrier occurred or when the cache memory was needed for something else. Making the counter volatile is insufficient for thread-safety of this code, because the test for the maximum and the assignments are discrete operations, including the increment which is a set of non-atomic read+increment+write machine instructions, something like:

MOV EAX,counter
INC EAX
MOV counter,EAX

Volatile variables are useful only when all operations performed on them are "atomic", such as my example where a reference to a fully formed object is only read or written (and, indeed, typically it's only written from a single point). Another example would be a volatile array reference backing a copy-on-write list, provided the array was only read by first taking a local copy of the reference to it.
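
For completeness, one way to make that counter safe without a synchronized block is an atomic update from java.util.concurrent.atomic; this is a sketch, and the wrap-at-1000 rule simply mirrors the example above:

import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {
    private final AtomicInteger counter = new AtomicInteger();

    public void updateCounter() {
        // updateAndGet applies the lambda atomically (retrying on contention),
        // so the test and the write can never interleave between threads.
        counter.updateAndGet(c -> c == 1000 ? 0 : c + 1);
    }
}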

Thanks very much! The example with the counter is simple to understand. However, when things get real, it's a bit different.

Nice answer and a special +1 for asm!

"In practical terms, on current hardware, this typically causes flushing of the CPU caches when a monitor is acquired and writes to main memory when it is released, both of which are expensive (relatively speaking)." . When you say CPU caches, is it the same as Java Stacks local to each thread? or does a thread has its own local version of Heap? Apologize if i am being silly here.

@nishm It's not the same, but it would include the local caches of the threads involved.

@MarianPadzioch: An increment or decrement is NOT a read or a write, it's a read and a write; it's a read into a register, then a register increment, then a write back to memory. Reads and writes are individually atomic, but multiple such operations are not.

So, according to the FAQ, not only the actions made since a lock acquisition are made visible after unlock, but all actions made by that thread are made visible. Even actions made before the lock acquisition.

multithreading - Difference between volatile and synchronized in Java ...

java multithreading java-me synchronized volatile

I've just found out what's happening. The difference between the simple example and the real thing is the fact that there is an AddIn involved. It's quite a long story, but here it is.

As you can see from the code sample, I load the ConfClasses dll via reflection in order to avoid adding a reference to it. This is fine at runtime, but the designer complains, saying that it's not able to cast IConf to IConf. This happens because CoreClasses.dll is loaded from AppData\Local\Microsoft\VisualStudio\10.0\ProjectAssemblies when the designer starts, but ConfClasses.dll is loaded from my bin folder, and so are its references, so there are two versions of CoreClasses.dll and different versions of IConf.

To bypass the problem I developed an AddIn that, when an assembly is loaded with Assembly.Load at design time, adds a reference to that assembly, and then cleans up the references when the last designer window is closed. Everything was OK with VS2005, but using ProcMon.exe I found out that VS2010 added a new folder where the addins look for assemblies: Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\CommonExtensions\DataDesign. I copied my assembly there and everything works again. Now it's only a matter of finding a way to add things there manually.

c# - Where does Visual Studio look for assemblies? - Stack Overflow

c# visual-studio-2010 windows-forms-designer

$startDate = "2015-11-15 11:40:44pm";
$endDate = "2015-11-22 10:50:48am";  // You had 50:88 here? That's not an existing time

$startEpoch = strtotime($startDate);
$endEpoch = strtotime($endDate);

$difference = $endEpoch - $startEpoch;

The script above converts the start and end date to epoch time (seconds since January 1 1970 00:00:00 GMT). Then it does the maths and gets the difference between them.

Since years and months aren't static values, I haven't added them in the script below.

$minute = 60; // A minute in seconds
$hour = $minute * 60; // An hour in seconds
$day = $hour * 24; // A day in seconds

$daycount = 0; // Counts the days
$hourcount = 0; // Counts the hours
$minutecount = 0; // Counts the minutes

while ($difference >= $day) { // While the difference is still at least a day
    $difference -= $day; // Takes 1 day from the difference
    $daycount += 1; // Add 1 to days
}

// Now it continues with what's left
while ($difference >= $hour) { // While the difference is still at least an hour
    $difference -= $hour; // Takes 1 hour from the difference
    $hourcount += 1; // Add 1 to hours
}

// Now it continues with what's left
while ($difference >= $minute) { // While the difference is still at least a minute
    $difference -= $minute; // Takes 1 minute from the difference
    $minutecount += 1; // Add 1 to minutes
}

// What remains are the seconds
echo $daycount . " days ";
echo $hourcount . " hours ";
echo $minutecount . " minutes ";
echo $difference . " seconds ";
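
For comparison, PHP's built-in DateTime and DateInterval classes can produce the same breakdown without manual loops (a sketch using the same sample dates):

$start = new DateTime("2015-11-15 11:40:44pm");
$end   = new DateTime("2015-11-22 10:50:48am");

$interval = $start->diff($end); // DateInterval holding the absolute difference
// %a = total days; %h/%i/%s = leftover hours/minutes/seconds
echo $interval->format('%a days %h hours %i minutes %s seconds');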

How to calculate the difference between two dates with time using PHP?...

php date datetime time