
You can use a pointer-to-const-pointer to avoid memory management issues:

__attribute__((objc_precise_lifetime)) NSString *b = [[NSString alloc] initWithString:@"hello"];
NSString *const*a;
a = &b;

You need to use objc_precise_lifetime to keep b alive for the whole scope (otherwise ARC may release b after its last use).

EDIT: This can also be used (but be aware that you have to manage your double pointer yourself):

NSString *b = [[NSString alloc] initWithString:@"hello"];
NSString *__strong*a;
a = &b;

But I don't think this is a good idea, because ARC will not "memory manage" a pointer to a pointer (this is why ** defaults to __autoreleasing).
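
For context, the same default shows up in the familiar out-parameter pattern. A minimal sketch (the method name and error domain are made up for illustration): under ARC, a parameter written as NSError ** is implicitly treated as NSError *__autoreleasing *.

- (BOOL)doWorkReturningError:(NSError **)error   // treated as NSError *__autoreleasing *error
{
    if (error) {
        *error = [NSError errorWithDomain:@"ExampleDomain" code:1 userInfo:nil];
    }
    return NO;
}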

Awesome, thanks. One last question! Do you know of any good references to read up on ARC's behaviour with pointers to pointers? I.e., how did you know about the default there?

objective c - Converting Project To ARC With Double Indirection Pointe...

objective-c pointers automatic-ref-counting unsafe-unretained

The garbage collector is a part of the Java Virtual Machine that gets rid of objects which are no longer being used by a Java application. It is a form of automatic memory management.

When a typical Java application is running, it is creating new objects, such as Strings and Files, but after a certain time, those objects are not used anymore. For example, take a look at the following code:

for (File f : files) {
    String s = f.getName();
}

In the above code, the String s is being created on each iteration of the for loop. This means that in every iteration, a little bit of memory is being allocated to make a String object.

Going back to the code, we can see that once a single iteration is executed, in the next iteration, the String object that was created in the previous iteration is not being used anymore -- that object is now considered "garbage".

Eventually, we'll start getting a lot of garbage, and memory will be used for objects which aren't being used anymore. If this keeps going on, eventually the Java Virtual Machine will run out of space to make new objects.

That's where the garbage collector steps in.

The garbage collector looks for objects which aren't being used anymore and gets rid of them, freeing up the memory so other new objects can use that piece of memory.

In Java, memory management is taken care of by the garbage collector, but in other languages such as C, one needs to perform memory management on one's own using functions such as malloc and free. Memory management is one of those things in which it is easy to make mistakes, which can lead to what are called memory leaks -- places where memory is not reclaimed when it is no longer in use.

Automatic memory management schemes like garbage collection make it so the programmer does not have to worry so much about memory management issues, so he or she can focus more on developing the applications they need to develop.

Would it be true to say that a Java application running on a computer then has two garbage collection functionalities -- one within the Java Virtual Machine, and one on the actual machine that runs Windows (or any other OS)?

No, usually Java applications run only if a JRE is present, so only the JVM's garbage collection functionality is required! @Lealo

What is the garbage collector in Java? - Stack Overflow

java garbage-collection

From the XML package's webpage, it seems that the author, Duncan Temple Lang, has quite extensively described certain memory management issues. See this page: "Memory Management in the XML Package".

Honestly, I'm not proficient in the details of what's going on here with your code and the package, but I think you'll either find the answer in that page, specifically in the section called "Problems", or in direct communication with Duncan Temple Lang.

Update 1. An idea that might work is to use the multicore and foreach packages (i.e. listResults = foreach(ix = 1:N) %dopar% {your processing; return(listElement)}). I think that for Windows you'll need doSMP, or maybe doRedis; under Linux, I use doMC. In any case, by parallelizing the loading, you'll get faster throughput. The reason I think you may get some benefit in memory usage is that forking R could lead to different memory cleanup, as each spawned process gets killed when complete. This isn't guaranteed to work, but it could address both memory and speed issues.
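
For what it's worth, a rough sketch of what that could look like (the files vector, N, and the XPath expression are placeholders, and the backend package depends on your platform):

library(foreach)
library(doMC)        # Linux; use doSMP or doRedis on Windows
library(XML)

registerDoMC(cores = 4)

listResults <- foreach(ix = 1:N) %dopar% {
  doc <- xmlParse(files[ix])                       # parse one file per task
  out <- xpathSApply(doc, "//record", xmlValue)    # "your processing"
  free(doc)                                        # release the C-level document
  out                                              # returned as listResults[[ix]]
}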

Note, though: doSMP has its own idiosyncrasies (i.e. you may still have some memory issues with it). There have been other Q&As on SO that mentioned some issues, but I'd still give it a shot.

I'll second this, I've also had memory problems with the XML package.

@Iterator: thanks for the pointer. I've looked at the doc, but missed the "problems" section somehow. That seems to be the cause. I'll try my luck with directly contacting Duncan and post the outcome here.

@HongOoi: did you manage to work around the issues you were experiencing? If so, I'd be glad to hear how ;-)

I've contacted Duncan and he has been very committed to narrowing down the cause of all this. I'll keep you updated.

I used the Trojan horse approach with another package that had memory leaks. I set up workers, did the work there, and closed the process. No more problems due to memory leaking.

r - Serious Memory Leak When Iteratively Parsing XML Files - Stack Ove...

xml r memory-leaks web-scraping bigdata

Pretty much all of you are off base if you are talking about the Microsoft heap. Synchronization is handled effortlessly, as is fragmentation.

The current preferred heap is the LFH (Low Fragmentation Heap). It is the default in Vista+ OSes and can be enabled on XP via gflags without much trouble.

It is easy to avoid any locking/blocking/contention/bus-bandwidth issues and the lot with the

HEAP_NO_SERIALIZE

option during HeapAlloc or HeapCreate. This will allow you to create/use a heap without entering into an interlocked wait.

I would recommend creating several heaps with HeapCreate, and defining a macro, perhaps, mallocx(enum my_heaps_set, size_t);

would be fine; of course, you need realloc and free to also be set up as appropriate. If you want to get fancy, make free/realloc auto-detect which heap handle to use on their own by evaluating the address of the pointer, or even add some logic to allow malloc to identify which heap to use based on its thread id, building a hierarchy of per-thread heaps and shared global heaps/pools.
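
A minimal sketch of that idea, assuming each heap is only ever touched from one thread (the enum values and the mallocx/freex names are just for illustration, not a fixed API):

#include <windows.h>

enum my_heaps_set { HEAP_PARSER, HEAP_RENDER, MY_HEAP_COUNT };
static HANDLE g_heaps[MY_HEAP_COUNT];

void init_heaps(void)
{
    int i;
    for (i = 0; i < MY_HEAP_COUNT; ++i)
        g_heaps[i] = HeapCreate(HEAP_NO_SERIALIZE, 0, 0); /* growable, no internal locking */
}

#define mallocx(which, size) HeapAlloc(g_heaps[(which)], HEAP_NO_SERIALIZE, (size))
#define freex(which, ptr)    HeapFree(g_heaps[(which)],  HEAP_NO_SERIALIZE, (ptr))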

Here's a nice article on some dynamic memory management issues, with some even nicer references. To instrument and analyze heap activity.

The LFH trades allocation speed for low fragmentation, so we can't be all that wrong...

performance - Memory Allocation/Deallocation Bottleneck? - Stack Overf...

performance optimization memory-management garbage-collection malloc

You appear to be using the address of a variable as a unique tag, so there are no memory management/ownership issues here. To do the address comparison cast the address of the variable to void:

if ((void *)&cashBalanceKeyPath == context)

That seems to give the compiler all it needs without any bridge cast.

Right. In fact, the question author didn't share the original error/warning message that led to doing the bridge cast. Maybe it was just incompatible pointers?

@iMartin: I think the original error message is in the question title.

ios - Implicit conversion of a non-Objective-C pointer type void* to N...

ios objective-c cocoa automatic-ref-counting

Where I am not sure about retain cycles is in something like this

[[NSNotificationCenter defaultCenter]addObserverForName: //...

Yes, there can be memory management issues associated with calling addObserverForName:. As I explain in my book:

You might want to read the rest of the discussion in my book for actual examples and solutions.
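
For reference, the usual mitigation looks roughly like this (a sketch, not the book's example; the observerToken property and handleNote: method are hypothetical). The token returned by addObserverForName: should also be removed, typically in dealloc:

__weak typeof(self) weakSelf = self;
self.observerToken = [[NSNotificationCenter defaultCenter]
    addObserverForName:@"SomeNotification"
                object:nil
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
                [weakSelf handleNote:note];   // no strong capture of self, so no retain cycle
            }];
// ... later, e.g. in dealloc:
// [[NSNotificationCenter defaultCenter] removeObserver:self.observerToken];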

Fixing this solved my memory leak. Your book also provided me with additional insight. Thanks!

objective c - When Do Blocks Cause Retain Cycles When Using Self? - St...

objective-c cocoa-touch objective-c-blocks

As @Ben S said, it's the retainCount method. However, you're asking the wrong question, because:

Important: Typically there should be no reason to explicitly ask an object what its retain count is (see retainCount). The result is often misleading, as you may be unaware of what framework objects have retained an object in which you are interested. In debugging memory management issues, you should be concerned only with ensuring that your code adheres to the ownership rules.

So here's the real question: why do you need to know?

Put a break point on the dealloc method. Also, check this link out: cocoadev.com/index.pl?NSZombieEnabled

You want to know, because you're trying to debug your application. You can't tell someone, "you don't need to know, because you should just do it right, in which case you won't care". Maybe they shouldn't need the code in their production release, but that's not the same thing as not needing to know the ref count, as a tool to debug why you're having problems. Bad answer.

@Nate that's what Instruments is for...

Don't get me wrong. Instruments is great. But, it has limitations, and there's no reason a developer should be limited to just using Instruments. What if the problem doesn't crop up in your desktop test environment? What if you want to deploy devices to the field, and use logging to give you something to analyze later? Or maybe there's something location-based about your app, or about the problem code, that requires you taking the device out of your office. You may not be able to bring your development workstation into the field. The poster doesn't need to justify asking the question.

objective c - How to get the reference count of an NSObject? - Stack O...

objective-c cocoa

The use of retainCount is not recommended.

This method is of no value in debugging memory management issues. Because any number of framework objects may have retained an object in order to hold references to it, while at the same time autorelease pools may be holding any number of deferred releases on an object, it is very unlikely that you can get useful information from this method

For future reference, I'm going to add some links to help you understand how memory works in iOS. Even if you use ARC, this is a must-know (remember that ARC is NOT a garbage collector).

And, of course, once you understand how memory works, it is time to learn how to profile it with Instruments:

Thanks, but I still want to know how to invoke -retainCount or use the -dealloc selector under ARC.

You don't, you use Instruments to see where your memory issues may be. I know what you want to do, but this path is futile and you'll just end up back at this point in the future. Seriously take the time to learn Instruments and its capabilities and you will be able to find any memory issues through using it.

-retainCount is just a bad idea, period. DO NOT USE -retainCount. Instead, as @ColinWheeler suggests, use Instruments; it has excellent debugging tools for refcounted objects, and it's well worth learning to take advantage of them.

Always a +1 for the advocation of PROPER tooling.

objective c - How to enforce using `-retainCount` method and `-dealloc...

objective-c automatic-ref-counting

Again and again we see questions asking about reading a text file to process it line-by-line, that use variations of read, or readlines, which pull the entire file into memory in one action.

read

Opens the file, optionally seeks to the given offset, then returns length bytes (defaulting to the rest of the file). [...]

readlines

Reads the entire file specified by name as individual lines, and returns those lines in an array. [...]

Pulling in a small file is no big deal, but there comes a point where memory has to be shuffled around as the incoming data's buffer grows, and that eats CPU time. In addition, if the data consumes too much space, the OS has to get involved just to keep the script running and starts spooling to disk, which will bring a program to its knees. On an HTTPd (web host) or anything needing fast response, it'll cripple the entire application.

Slurping is usually based on a misunderstanding of the speed of file I/O, or on thinking that it's better to read the file and then split the buffer than it is to read it a single line at a time.

Save this as "test.sh":

echo Building test files...

yes "abcdefghijklmnopqrstuvwxyz 123456890" | head -c 1000       > kb.txt
yes "abcdefghijklmnopqrstuvwxyz 123456890" | head -c 1000000    > mb.txt
yes "abcdefghijklmnopqrstuvwxyz 123456890" | head -c 1000000000 > gb1.txt
cat gb1.txt gb1.txt > gb2.txt
cat gb1.txt gb2.txt > gb3.txt

echo Testing...

ruby -v

echo
for i in kb.txt mb.txt gb1.txt gb2.txt gb3.txt
do
  echo
  echo "Running: time ruby readlines.rb $i"
  time ruby readlines.rb $i
  echo '---------------------------------------'
  echo "Running: time ruby foreach.rb $i"
  time ruby foreach.rb $i
  echo
done

rm [km]b.txt gb[123].txt

It creates five files of increasing sizes. 1K files are easily processed, and are very common. It used to be that 1MB files were considered big, but they're common now. 1GB is common in my environment, and files beyond 10GB are encountered periodically, so knowing what happens at 1GB and beyond is very important.

Save this as "readlines.rb". It doesn't do anything but read the entire file line-by-line internally, and append it to an array that is then returned, and seems like it'd be fast since it's all written in C:

lines = File.readlines(ARGV.shift).size
puts "#{ lines } lines read"

Save this as "foreach.rb":

lines = 0
File.foreach(ARGV.shift) { |l| lines += 1 }
puts "#{ lines } lines read"

sh ./test.sh
Building test files...
Testing...
ruby 2.1.2p95 (2014-05-08 revision 45877) [x86_64-darwin13.0]
Running: time ruby readlines.rb gb3.txt
81081082 lines read

real    2m7.260s
user    1m57.410s
sys 0m7.007s
---------------------------------------
Running: time ruby foreach.rb gb3.txt
81081082 lines read

real    0m33.116s
user    0m30.790s
sys 0m2.134s

Notice how readlines runs twice as slow each time the file size increases, while foreach slows linearly. At 1MB, we can already see there's something affecting the "slurping" I/O that doesn't affect reading line-by-line. And, because 1MB files are very common these days, it's easy to see they'll slow the processing of files over the lifetime of a program if we don't think ahead. A couple of seconds here or there aren't much when they happen once, but if they happen multiple times a minute it adds up to a serious performance impact by the end of a year.

I ran into this problem years ago when processing large data files. The Perl code I was using would periodically stop as it reallocated memory while loading the file. Rewriting the code to not slurp the data file, and instead read and process it line-by-line, cut the run time from over five minutes to less than one and taught me a big lesson.

"slurping" a file is sometimes useful, especially if you have to do something across line boundaries, however, it's worth spending some time thinking about alternate ways of reading a file if you have to do that. For instance, consider maintaining a small buffer built from the last "n" lines and scan it. That will avoid memory management issues caused by trying to read and hold the entire file. This is discussed in a Perl-related blog "Perl Slurp-Eaze" which covers the "whens" and "whys" to justify using full file-reads, and applies well to Ruby.

I see what you did here. It's a good idea and will help Ruby newbies. I recommend adding some suggestions as to what they should do.

I'll be growing this answer over time, but in the meantime others can add additional answers.

For 1MB, I cannot see "something affecting the slurping..." -- the numbers are only about 3% apart. You need a better test setup to reliably measure this 3%. And if you care about these 3%, you should probably use C anyway.

What about the file system cache? That ultimately might help foreach, making each read from the cache without the overhead of allocating memory for the whole file.

It might help foreach just as it will slurping, as long as the cache is bigger than the file. If the file is larger than the cache it'll force an additional read, which could affect slurping a little, but I don't think it'd affect it nearly as much as exceeding the available memory and making the system start paging.

ruby - Why is "slurping" a file not a good practice? - Stack Overflow

ruby io slurp

My experience is that graphical languages can do a good job of the 'plumbing' part of programming, but the ones I've used actively get in the way of algorithmics. If your algorithms are very simple, that might be OK.

On the other hand, I don't think C++ is great for your situation either. You'll spend more time tracking down pointer and memory management issues than you do in useful work.

If your robot can be controlled using a scripting language (Python, Ruby, Perl, whatever), then I think that would be a much better choice.

If there's no scripting option for your robot, and you have a C++ geek on your team, then consider having that geek write bindings to map your C++ library to a scripting language. This would allow people with other specialities to program the robot more easily. The bindings would make a good gift to the community.

If LabVIEW allows it, use its graphical language to plumb together modules written in a textual language.

robotics - Textual versus Graphical Programming Languages - Stack Over...

robotics labview graphical-language

A better way to manage this is to use std::vector, which is designed exactly for this use case. For most compilers, using vector rather than raw pointers will have zero overhead, and it means you do not run into all the memory management and safety issues that come with raw pointers. To use it, all you need to do is replace your float * member with a vector:

#include <vector>

class Foo {
    std::vector<float> a;
};

Your entire allocateMemory function can then be replaced by simply calling a.resize(size). Thanks to the wonder of RAII, the memory for a will be automatically, safely, and consistently released when your Foo object is destroyed.
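
A minimal sketch of what that looks like (allocateMemory and the member name a are from the question; the rest is illustrative):

#include <cstddef>
#include <vector>

class Foo {
public:
    void allocateMemory(std::size_t size) {
        a.resize(size);      // (re)sizes the buffer; no manual new/delete needed
    }
private:
    std::vector<float> a;    // storage is released automatically when Foo is destroyed
};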

Some other general comments: you seem to be writing very C-like code, rather than using C++ idioms:

  • new and delete should be used rather than malloc and free.
  • You do not need to put void in empty argument lists.
  • Use nullptr rather than NULL for null pointers.
  • You should almost never need to use new and delete or manual pointer management. Look into std::unique_ptr and std::shared_ptr for managing memory. C++11 and the STL have a number of very useful tools which take the pain out of manual memory and resource management. Use them!

It might be worth checking out something like Herb Sutter's Elements of Modern C++ Style for an overview of the advantages of writing modern C++, rather than trying to stick to C with a few extra bits thrown on top.

c++ - malloc and free within constructor and desctructor - Stack Overf...

c++ constructor malloc free

IBOutlet UILabel *fooLabel; declares a fooLabel variable along with an outlet for your Interface Builder nib file.

UILabel *fooLabel; as above without the outlet for Interface Builder.

@property (nonatomic, retain) IBOutlet UILabel *fooLabel; declares a property fooLabel and an outlet for your nib file. If you synthesize this property with @synthesize fooLabel, it will create getter and setter methods for the property. The (retain) attribute tells the synthesized setter method to retain your new value before releasing the old one.

iphone - Why IBOutlet retain count is 2 - Stack Overflow

iphone cocoa-touch

Blocks are created and stored on the stack. So the block will be destroyed when the method that created the block returns.

If a block becomes an instance variable, ARC copies the block from the stack to the heap. You can explicitly copy a block with the copy message. Your block is now a heap-based block instead of a stack-based block, and you have to deal with some memory management issues. The block itself will keep a strong reference to any objects it references. Declare a __weak pointer outside the block and then reference that pointer within the block to avoid retain cycles.
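
In code, that advice looks roughly like this (a sketch; the completionBlock property and doSomething method are made up for illustration):

__weak typeof(self) weakSelf = self;
self.completionBlock = ^{                          // block is copied to the heap when stored
    __strong typeof(weakSelf) strongSelf = weakSelf;
    [strongSelf doSomething];                      // self is captured weakly, so no retain cycle
};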

objective c - iOS blocks and strong/weak references to self - Stack Ov...

ios objective-c objective-c-blocks

This is one of the things I specialize in, so here you go. There is a whole school of programming around this, but the basic rules I follow are:

1) Use FIXED-LENGTH structures for things with a "constant" layout. These are things like the flag bits of the file, bytes indicating the # of sub-records, etc. Put as much of the file contents into these structures as you can- they are very efficient especially when combined with a good I/O system.

You do this using the preprocessor directive #pragma pack(1) to align the struct to byte boundaries:

#include <cstdint>   // for uint32_t

#ifdef WINDOWS
#pragma pack(push)
#endif
#pragma pack(1)

struct FixedSizeHeader {
   uint32_t FLAG_BYTES[1];   // All members are fixed-size for a reason
   char   NAME[20];
};

#ifdef WINDOWS
#pragma pack(pop)
#endif
#ifdef LINUX
#pragma pack()
#endif

2) Create a base class, a pure interface, with a name like "Serializable". It is your high-level API for staging entire file objects into and out of raw memory.

class Serializable { // Yes, the name comes from Java. The idea, however, predates it
public:
   virtual ~Serializable() {}  // Virtual destructor so implementations can be deleted through the interface
   // Choose your buffer type- char[], std::string, custom
   virtual bool WriteToBinary(char* buffer) const = 0;
};

NOTE: To support a static "Load" you will need all your "Serializable"s to have an additional static function. There are several (very different) ways to support that, none of which the language alone will enforce since C++ doesn't have "virtual static".

3) Create your aggregate classes for managing each file type. They should have the same name as the file type. Depending on file structure, each may in turn contain more "aggregator" classes before you get down to the fixed structures.

class GameResourceFile : public Serializable
{
private:
    // Operator= and the copy ctor should point to the same data for files,
    // since that is what you get with FILE*
protected:
    // Actual member variables- allows specialized (derived) file types direct access
    FixedSizeHeader* hdr;     // You don't have to use pointers here
    ContentManager*  innards; // Another aggregator- implements "Serializable"

    GameResourceFile(FixedSizeHeader* hdr, ContentManager* innards)
       : hdr(hdr), innards(innards) {}
    virtual ~GameResourceFile() { delete hdr; delete innards; }
public:
    virtual bool WriteToBinary(char* outBuffer) const 
    {
        // For fixed portions, use this
        memcpy(outBuffer, hdr, sizeof(FixedSizeHeader)); // This is why we 'pack'
        outBuffer += sizeof(FixedSizeHeader);            // Improve safety...
        return innards->WriteToBinary(outBuffer);
    }

    // C++ doesn't enforce this, but you can via convention
    static GameResourceFile* Load(const char* filename)
    {
        // Load file into a buffer- You'll want your own code here
        // Now that's done, we have a buffer
        char* srcContents;   // <- must be pointed at the loaded file's bytes before the copies below
        FixedSizeHeader* hdr = new FixedSizeHeader();
        memcpy(hdr, srcContents, sizeof(FixedSizeHeader));
        srcContents += sizeof(FixedSizeHeader);

        ContentManager* innards = ContentManager::Load( srcContents); // NOT the file
        if(!innards) {
           delete hdr;   // avoid leaking the header when content loading fails
           return 0;
        }
        return new GameResourceFile(hdr, innards);
    }
};

Notice how this works- each piece is responsible for serializing itself into the buffer, until we get to "primitive" structures that we can add via memcpy() (you can make ALL the components 'Serializable' classes). If any piece fails to add, the call returns "false" and you can abort.

I STRONGLY recommend using a pattern like "referenced object" to avoid the memory management issues. However, even if you don't, you now provide users a nice, one-stop-shopping method to load data objects from files:

GameResourceFile* resource = GameResourceFile::Load("myfile.game");
if(!resource) { // Houston, we have a problem
   return -1;
}

The best thing yet is to add all low-level manipulation and retrieval APIs for that kind of data to "GameResourceFile". Then any low-level state machine coordination for committing changes to disk & such is all localized to 1 object.

Any tips or links for doing this in C?

visual c++ - C++ Custom Binary Resource File - Stack Overflow

c++ visual-c++ resources resourcebundle