
import java.nio.ByteBuffer;
import sun.misc.Cleaner;
import sun.nio.ch.DirectBuffer;

// Immediately frees the native memory behind a direct or mapped buffer (pre-Java 9 internals).
public static void clean(ByteBuffer bb) {
    if (bb == null) return;
    Cleaner cleaner = ((DirectBuffer) bb).cleaner();
    if (cleaner != null) cleaner.clean();
}

Using it can make a big difference if you are discarding direct or memory-mapped ByteBuffers fairly quickly.
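
For example, a minimal sketch of how the helper above might be used to unmap a file-backed buffer right after use (this assumes a pre-Java 9 JVM where sun.misc.Cleaner and sun.nio.ch.DirectBuffer are accessible; the file name is only an illustration):

import sun.misc.Cleaner;
import sun.nio.ch.DirectBuffer;

import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class CleanExample {

    // The helper from the answer above.
    public static void clean(ByteBuffer bb) {
        if (bb == null) return;
        Cleaner cleaner = ((DirectBuffer) bb).cleaner();
        if (cleaner != null) cleaner.clean();
    }

    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("data.bin", "r");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer mapped = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            // ... read from the buffer ...
            clean(mapped); // release the mapping now instead of waiting for GC
        }
    }
}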

One of the reasons for using the Cleaner to do this is that you can have multiple views of the underlying memory, e.g. created with slice(), and the Cleaner keeps a count of those resources.

Hmmh. I was looking at Cleaner, but wasn't sure whether access rights would let me call it directly... but if so, this would definitely be a good way to go. Thanks -- maybe I was too hasty in discarding this approach!

I have used this approach in a number of projects. It's better than using reflection, IMHO.

sun.nio.ch.DirectBuffer (like sun.misc.Cleaner) can no longer be used by default in Java 9; see JEP 260.

The Java 9+ replacement is sun.misc.Unsafe.invokeCleaner(ByteBuffer).
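
A hedged sketch of how that replacement is typically invoked on Java 9 and later (the Unsafe instance is obtained reflectively through its theUnsafe field; the class name here is just illustrative):

import java.lang.reflect.Field;
import java.nio.ByteBuffer;

public class InvokeCleanerExample {
    public static void main(String[] args) throws Exception {
        ByteBuffer direct = ByteBuffer.allocateDirect(1024);

        // Grab the shared Unsafe instance reflectively (works on Java 9+,
        // where the jdk.unsupported module still exposes sun.misc).
        Field f = sun.misc.Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        sun.misc.Unsafe unsafe = (sun.misc.Unsafe) f.get(null);

        // Frees the native memory backing the direct buffer.
        // The buffer must not be touched afterwards.
        unsafe.invokeCleaner(direct);
    }
}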

java - Examples of forcing freeing of native memory direct ByteBuffer ...

java memory memory-management garbage-collection

Using sun.misc.Unsafe directly is hardly possible, because the base address of the allocated native memory is a local variable of the java.nio.DirectByteBuffer constructor.

Actually, you can force the freeing of native memory with the following code:

import sun.misc.Cleaner;

import java.lang.reflect.Field;
import java.nio.ByteBuffer;

...

// Works up to Java 8: reflectively pull the private "cleaner" field out of
// the DirectByteBuffer and run it to release the native memory right away.
public static void main(String[] args) throws Exception {
    ByteBuffer direct = ByteBuffer.allocateDirect(1024);
    Field cleanerField = direct.getClass().getDeclaredField("cleaner");
    cleanerField.setAccessible(true);
    Cleaner cleaner = (Cleaner) cleanerField.get(direct);
    cleaner.clean();
}

Right, this is why I asked; the whole orchestration of the Cleaner paired with its callback to the Deallocator is complex. I will try out this approach, assuming the Cleaner type is accessible (I noticed that the thing it calls isn't).

java - Examples of forcing freeing of native memory direct ByteBuffer ...

java memory memory-management garbage-collection

Basically, what you want follows the same semantics as I/O streams: just as you need to close a stream exactly once, you need to free the memory exactly once. So you could write your own wrapper around the native calls, making early freeing of memory possible (see the sketch after this exchange).

Yes, at a very high level -- but have you looked at the details of exactly how? That's where the trouble starts: there is no method to do that with (direct) ByteBuffers, no close() method. And even under the hood, things are rather complicated, unfortunately.

Yes, but you can change the game by implementing your own buffer using Unsafe. (Which probably works pretty darn well with padded memory to avoid false sharing, but that's a whole different ball game.)

True enough. And in my case I fully control all access, so the underlying raw storage is never exposed and there should be no danger of dangling references.
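
As a rough sketch of the wrapper idea discussed above (the class and method names are purely illustrative, and it relies on the pre-Java 9 sun.misc.Cleaner / sun.nio.ch.DirectBuffer internals): wrap the direct buffer in an AutoCloseable so callers free the native memory exactly once, with the same semantics as closing a stream.

import sun.misc.Cleaner;
import sun.nio.ch.DirectBuffer;

import java.nio.ByteBuffer;

// Illustrative wrapper, not a library class: owns a direct buffer and frees
// its native memory exactly once, with stream-like close() semantics.
public class OwnedDirectBuffer implements AutoCloseable {

    private ByteBuffer buffer;

    public OwnedDirectBuffer(int capacity) {
        this.buffer = ByteBuffer.allocateDirect(capacity);
    }

    public ByteBuffer buffer() {
        if (buffer == null) throw new IllegalStateException("already closed");
        return buffer;
    }

    @Override
    public void close() {
        if (buffer == null) return;           // idempotent, like closing a stream twice
        Cleaner cleaner = ((DirectBuffer) buffer).cleaner();
        if (cleaner != null) cleaner.clean(); // frees the native memory immediately
        buffer = null;                        // drop the reference so it can't be reused
    }
}

With try-with-resources the native memory is then released deterministically, e.g. try (OwnedDirectBuffer b = new OwnedDirectBuffer(1024)) { b.buffer().putInt(42); }.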

java - Examples of forcing freeing of native memory direct ByteBuffer ...

java memory memory-management garbage-collection

Just guessing: you might be using a non-default malloc implementation when running inside the JVM, one that is tuned to the specific needs of the JVM and produces more overhead than the general-purpose malloc in your normal libc implementation.

That would be my guess too, but I'm not giving you the +1 until there is evidence that this really is the case.

Why does a native library use 1.5 times more memory when used by java ...

java linux memory native

I was used to the 64 MB that the Sun Java implementations used as the default maximum heap size, but I used OpenJDK 1.6 for testing. OpenJDK uses a fraction of the physical memory if no maximum heap size is explicitly specified; in my case one fourth. I used a 4 GB machine, and one fourth of that is 1 GB. There it is: the difference between C and Java.

Sadly this behavior isn't documented anywhere. I found it by looking at the OpenJDK source code (arguments.cpp):

// If the maximum heap size has not been set with -Xmx,
// then set it as fraction of the size of physical memory,
// respecting the maximum and minimum sizes of the heap.
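
A quick way to see which default the JVM actually picked on a given machine is to print the maximum heap size from inside a small program (a minimal sketch; the class name is illustrative):

public class DefaultHeap {
    public static void main(String[] args) {
        // With no -Xmx, this prints the ergonomically chosen maximum heap,
        // e.g. roughly one fourth of physical RAM on the OpenJDK build above.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}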

Why does a native library use 1.5 times more memory when used by java ...

java linux memory native

Java needs contiguous memory for its heap, so it reserves the maximum heap size as virtual memory up front. However, this doesn't consume physical memory and might not even consume swap. I would check how much your resident memory increases by.

Libraries called by the JVM can still free memory they allocate, though.

One possibility is that there's a Java thread running concurrently with the native code and they are fragmenting memory. The OP says there are a lot of small allocations.

The maximum heap size for Java is set to 60 MB or so, so that can't be the reason for consuming 1 GB more memory. Still: resident memory is 3 GB, like the C program.

It could be using 1 GB of virtual memory by the time all the shared libraries are included. The whole point of virtual memory is that you don't need to worry about how the address space is used; only the resident memory actually consumes physical memory. Why do you care how much address space it uses? It's like caring how big the integers in your program are.

Why does a native library use 1.5 times more memory when used by java ...

java linux memory native

It seems you're thinking that a stack overflow error is like a buffer overflow in native programs, where there is a risk of writing into memory that was not allocated for the buffer and thus corrupting other memory locations. That's not the case at all.

The JVM has a fixed amount of memory allocated for each thread's stack, and if an attempt to call a method happens to fill that memory, the JVM throws an error, just as it would if you tried to write at index N of an array of length N. No memory corruption can happen; the stack cannot write into the heap.

A StackOverflowError is to the stack what an OutOfMemoryError is to the heap: it simply signals that there is no more memory available.

StackOverflowError: The Java Virtual Machine implementation has run out of stack space for a thread, typically because the thread is doing an unbounded number of recursive invocations as a result of a fault in the executing program.
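
To see that nothing is corrupted when this happens, here is a tiny sketch that exhausts the stack through unbounded recursion and then simply carries on:

public class Overflow {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse(); // no base case: eventually exhausts the thread's stack
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // The JVM detected that the stack was full and unwound it safely.
            System.out.println("Stack overflowed at depth " + depth);
        }
        System.out.println("Still running; the heap is untouched.");
    }
}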

Just a note: java.lang.StackOverflowError is an Error, like OutOfMemoryError, and is not an Exception. Error and Exception both extend Throwable. It's really bad practice to catch an Error/Throwable instead of an Exception.

You got your answer in all the other answers. You get the error when the stack is full, whatever the way you got into that situation. But I've never seen that happen without recursion, given the large size of the stack.

HotSpot has reliable and safe stack overflow detection, but I wonder if such behavior is mandated by the JVM spec?

java - What actually causes a Stack Overflow error?

java jvm stack-overflow

By default the JVM touches pages incrementally during application execution. To change this behavior, use the -XX:+AlwaysPreTouch option; in that case the JVM pre-touches the heap during JVM initialization (every page of the heap is demand-zeroed).

Most likely your memory is swapped. You can check swap usage by using the following instructions.

In this case you're dealing with overcommitment. Memory overcommit is an OS feature that allows the use of more memory space than the physical machine actually has. Native stacks are lazy enough to avoid touching the backing physical memory, so as a result you have 12.1g of virtual memory and 3.7g of RSS. For further, more detailed analysis you can use pmap -x <java pid>.

I have written a very simple Java program whose only work is to create threads

That's why the memory allocated for the threads was not used. Just do something useful in your threads. For example:

public class CreateThread {
  private static int threadNumber = 0;

  public static int doRecursiveCall(int i) throws StackOverflowError {
      return doRecursiveCall(++i);
  }

  public static void warmupNativeStack() {
      int sideEffect;
      try {
          sideEffect = doRecursiveCall(0);
      } catch (StackOverflowError ignored) {
          return;
      }
      System.out.println("Never happend  " + sideEffect);
  }

  public static void main(String[] args) {
      while (true) {
          new Thread(new Runnable() {

              @Override
              public void run() {
                  warmupNativeStack();
                  System.out.println("Thread " + threadNumber++);
                  while (true) {
                      try {
                          Thread.sleep(20000);
                      } catch (InterruptedException e) {
                          Thread.currentThread().interrupt();
                      }
                  }
              }
          }).start();
      }
    }
}

Why is the error OutOfMemory instead of StackOverflow?

Actually, you're in luck, because in real life everything ends with the OOM killer. In your case a new Java thread is requested from the OS. The OS tries to create a new native thread, which requires memory to be allocated for the thread, but the allocation fails due to the lack of physical memory. In real life your Java application will be killed by the Linux kernel (OOM killer). Stack overflow is used for other purposes (see the other answer).

What does the MAX VIRT depend on before my process is killed by the OS?

How can I estimate the maximum number of threads from the OS free memory before startup and the JVM startup parameters?

It depends on many factors, and JVM parameters don't play any role here. For example, you have to know the maximum depth of the call stack in your application - check the thread count with your example and with mine.

Thank you. I used the top command, and it shows that my swap size is zero. Then what could make my total stack memory larger than the total system free memory?

Please provide the output from 'free' and the internal memory usage from -XX:NativeMemoryTracking=summary.

Thank you so much for your detailed explanation. I have added some additional description in UPDATE 1. Is it possible to estimate the maximum thread number before I run the process, from the system free memory before startup and the JVM startup parameters?

A JVM startup parameter will not consume the memory which is declared in...

jvm out-of-memory jvm-hotspot

Your expectations are misplaced. You can't expect System.gc() to be more than a hint to the garbage collector, and calling finalize() yourself is entirely incorrect.

I guess you've not read the question carefully; the problem is not on the Java side. Even if I remove finalize() and System.gc(), there is still some memory leakage!

c++ - How to know whether it is a memory leak or not when calling nati...

java c++ memory-leaks jni

Objects being shown as GC root: Native Stack turned out to be a problem of the Eclipse debugger. When the application was started without the debugger (i.e. with 'run' instead of 'debug'), the problem disappeared. This was also the reason that I couldn't find the code where the objects were used in JNI inside my project (because they weren't).

Finding the real memory leak without debugging was a bit hard, but using Eclipse MAT and its 'Find Leak Suspects' feature helped a lot!

java - Memory leak shown in MAT as GC root: Native Stack - Stack Overf...

java memory-leaks jni out-of-memory

What exactly is the problem? Is the OS swapping? Perhaps you only need to read the accepted answer to this question: Virtual Memory Usage from Java under Linux, too much memory used

The problem is that once the RES consumes all the memory, the kernel kills the process. Thanks for the link, it looks interesting.

linux - Java native memory leak on Debian - Stack Overflow

java linux memory native memory-leaks

Just got the following information: this is a limitation imposed by my host provider. This has nothing to do with programming or Linux.

Here's the answer: the host provider Nuxit.com sells virtual servers under the denomination "dedicated server". They virtualize fake servers using software called "Parallels Virtuozzo Containers", which lets them control how processes run and impose limits on threads, memory, and so on. It took me a week to find out, but I have changed hosts and now my program finally works fine.

@Joel: I am not able to create more than 375 threads per process on an Amazon EC2 machine. Is this limit imposed by Amazon? forums.aws.amazon.com/thread.jspa?threadID=86751

I'm pretty sure this limit is imposed by Amazon. If you need more threads, rent a real dedicated server and not a virtual private server or a cloud instance. You can also try to contact their support and ask for more thread slots. Good luck.

linux - Java Memory error: unable to create new native thread - Stack ...

java linux multithreading memory debian

I am starting to suspect that the "Native POSIX Thread Library" is missing.

>getconf GNU_LIBPTHREAD_VERSION
NPTL 2.13

If not, the Debian installation is messed up. I am not sure how to fix that, but installing Ubuntu Server seems like a good move...

For ulimit -n 100000 (open file descriptors), the following program should be able to handle 32,000 threads or so.

package test;

import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.concurrent.Semaphore;

public class Test {

    final static Semaphore ss = new Semaphore(0);


    static class TT implements Runnable {

        @Override
        public void run() {
            try {
                Socket t = new Socket("localhost", 47111);
                InputStream is = t.getInputStream();
                for (;;) {
                    is.read();
                }

            } catch (Throwable t) {
                System.err.println(Thread.currentThread().getName() + " : abort");
                t.printStackTrace();
                System.exit(2);
            }

        }
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        try {

            Thread t = new Thread() {
                public void run() {
                    try {
                        ArrayList<Socket> sockets = new ArrayList<Socket>(50000);
                        ServerSocket s = new ServerSocket(47111,1500);
                        ss.release();

                        for (;;) {
                            Socket t = s.accept();
                            sockets.add(t);
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                        System.exit(1);

                    }
                }
            };


            t.start();
            ss.acquire();


            for (int i = 0; i < 30000; i++) {

                Thread tt = new Thread(new TT(), "T" + i);
                tt.setDaemon(true);
                tt.start();
                System.out.println(tt.getName());
                try {
                    Thread.sleep(1);
                } catch (InterruptedException e) {
                    return;
                }
            }

            for (;;) {
                System.out.println();
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    return;
                }
            }

        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}

Added info above. I also tested with OpenJDK 1.6 and got the same results.

getconf GNU_LIBPTHREAD_VERSION
NPTL 2.7

linux - Java Memory error: unable to create new native thread - Stack ...

java linux multithreading memory debian

I'm posting references that can be useful for those who experience the same problems and the same lack of information on the topic.

GC_FOR_ALLOC is not a sign of a problem:

  • Android applications start with a small heap which grows (up to a point) when applications require more and more memory, and a GC_FOR_ALLOC is done before increasing the size of the heap. In this case GC_FOR_ALLOC is perfectly normal.
  • If you allocate memory faster than the concurrent GC has time to free it up, GC_FOR_ALLOC is inevitable. And there's nothing inherently wrong with allocating memory faster than the concurrent GC can free up memory.

Also, a very useful source of information is the video by Patrick Dubroy, Google I/O 2011: Memory management for Android Apps. It gave me more understanding of what is happening in my application and what I need to analyze in my code. Pay special attention to the questions and answers at 53:05 and 55:42.

java - Android native libraries and memory consumption - Stack Overflo...

java android debugging memory heap

This article gives some good information about hunting down native memory issues and explains how you run out of native memory.

Java native memory usage - Stack Overflow

java memory native out-of-memory

I had lots of trouble with that setting (Java on 32-bit systems - msw and others), and it was all solved by reserving just under 1 GB of RAM for the JVM.

Otherwise, as stated, the actual occupied memory in the system for that process would go over 2 GB; at that point I was getting 'silent deaths' of the process: no errors, no warnings, just the process terminating very quietly.

I got more stability and performance running several JVMs (each with under 1 GB of RAM) on the same system.

windows - Java memory usage with native processes - Stack Overflow

java windows memory-management

As for point 3: if the machine crashes and there are any pages that were not flushed to disk, they are lost. Another issue is the waste of address space: mapping a file to memory consumes address space (and requires a contiguous area), and on 32-bit machines that is a bit limited. But you said about 100 MB, so it should not be a problem. One more thing: expanding the size of the mmapped file requires some work.
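
On the unflushed-pages concern, MappedByteBuffer.force() can be used to push dirty pages out to the file at points you choose. A minimal sketch (the file name and the 100 MB size are just illustrative):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class FlushExample {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("data.bin", "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, 100L * 1024 * 1024);
            map.putInt(0, 42);   // write through the mapping
            map.force();         // ask the OS to flush dirty pages to the file now
        }
    }
}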

By the way, this SO discussion can also give you some insights.

Actually 100s of MB - so up to a Gig per file. And some deployments of the application have multiple such files! I'll edit to be clearer.

java - Performance / stability of a Memory Mapped file - Native or Map...

java performance file-io jni production

There are different factors that you need to take into account, especially with a language like Java. Java runs on a virtual machine and garbage collection is handled by the Java runtime. There is considerable effort involved (I would imagine) in using the Java Invocation Interface to switch to and execute a native method within the native library: there has to be a way to allocate space on the stack, switch to the native code, execute the native method, and switch back to the Java virtual machine, and perhaps, somehow, the space on the stack was not freed up. That's what I would be inclined to think.

Why does a native library use 1.5 times more memory when used by java ...

java linux memory native

It is hard to say without seeing some code, but I think at the heart of the problem is that there are two heaps in your application which need to be maintained: the standard Java heap for Java object allocations (maintained by the JVM), and the C heap, which is maintained by calls to malloc/free.

Why does a native library use 1.5 times more memory when used by java ...

java linux memory native