    // Exposed interface for adding a data point to the stored data
    - (void) addDatum:(double_t)datum
    {
        [self addToCache:datum];
    }

    - (void) addToCache:(double_t)datum
    {
        if (cache == nil)
        {
            // This is temporary. Ideally, cache is separate from the main store, but
            // is appended to the main store periodically - and then cleared for reuse.
            cache = [NSMutableData dataWithData:[self dataSet]];
        }
        [cache appendBytes:&datum length:sizeof(double_t)];
        // Periodic copying of cache to dataSet could happen here...
    }

    // Called at end of sampling.
    - (void) wrapup
    {
        [self setDataSet:[NSData dataWithData:cache]]; // force a copy to alert Core Data of the change
        cache = nil;
    }
As for the MVC question: I would suggest that the data (the model) is managed by the Model. Views and controllers can ask the Model for data (or subsets of data) in order to display it, but ownership of the data stays with the Model. In my case, which may be similar to yours, there were times when the Model returned abridged data sets (using the Douglas-Peucker algorithm). The views and controllers were none the wiser that points were being dropped, even though their requests to the Model may have played a role in that (graph scaling factors, etc.).
Here is a snippet of code from my Data class, which extends NSManagedObject. For a filesystem solution, NSFileHandle's -writeData: and its methods for tracking the file offset might allow similar (or better) management controls.
I have to use a mutable array because the data is continuously coming in (and I may have to prune values occasionally). I would like the ability to save to disk from time to time, instead of waiting for the user to close the app (and then calling NSArchiver to write the file). However, I'm not quite sure how I would go about dumping it incrementally, short of just rewriting the whole file each time I save. As for the model (since I can't find an authoritative answer on analysis app design), I've decided that my model is going to implement all of the methods to process its own data.
I usually hide implementation details as far "down" as possible. The less that the rest of the application knows about complicated algorithms that could change, the better.
If you are simply writing data to disk, then I don't think you need NSArchiver. During sampling, you could write bytes, track the file offset (the end of the file), and periodically write (append) more bytes. When it comes time to read the data back in, you could either read it all into an NSData object or, if the data is too big, manage your own "paging".
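At the C level, that approach looks roughly like the sketch below (the same pattern NSFileHandle's seek-to-end and write calls give you; the function names are made up for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

/* Append a batch of samples to the file and return the new end offset. */
long append_samples(FILE *f, const double *samples, size_t count)
{
    fseek(f, 0, SEEK_END);                 /* always write at the end */
    fwrite(samples, sizeof(double), count, f);
    fflush(f);                             /* push the batch to disk */
    return ftell(f);                       /* track the file offset */
}

/* Read everything back in one shot; caller frees. Element count via *n. */
double *read_samples(FILE *f, size_t *n)
{
    fseek(f, 0, SEEK_END);
    long bytes = ftell(f);
    *n = (size_t)bytes / sizeof(double);   /* byte length / element size */
    double *buf = malloc((size_t)bytes);
    fseek(f, 0, SEEK_SET);
    *n = fread(buf, sizeof(double), *n, f);
    return buf;
}
```

Because each batch is appended at the tracked end offset, nothing already on disk is ever rewritten, which is the incremental-dump behavior being asked about. "Paging" would just mean seeking to `index * sizeof(double)` and reading a window instead of the whole file.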
NSData's -bytes returns a pointer to the raw data within an NSData object. Core Data supports NSData as one of its attribute types. If you know the size of each element in the data, then you can use -length to calculate the number of elements, etc.
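In C terms, that arithmetic is just a cast and a division. A hypothetical sketch, where the void pointer and byte count stand in for what -bytes and -length would return:

```c
#include <stddef.h>

/* Recover a typed view of a raw byte buffer, as you would from
   NSData's -bytes and -length. Returns the element count. */
size_t doubles_in_buffer(const void *bytes, size_t length,
                         const double **out)
{
    *out = (const double *)bytes;    /* -bytes: pointer to raw storage */
    return length / sizeof(double);  /* -length divided by element size */
}
```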
On the sampling side, I would suggest using vector<> as you collect data and, intermittently, copying the data to an NSData attribute and saving. Note: I ran into a bit of a problem with this approach (see "Truncated Core Data NSData objects"), which I attribute to Core Data not recognizing changes made to an NSData attribute when it is backed by an NSMutableData object and that mutable object's data is changed.
The Objective-C++ class wrapper of vector<> that I wrote includes an encodeWithCoder: method that copies the vector's block into an NSData object to be encoded, and an initWithCoder: to bring it back out. Both work. Using vector<> looked easier to me than writing an Objective-C class to make NSMutableData act like an array (maybe that is possible). All I need are mutable arrays of short, float, and double with good performance for floating-point-heavy algorithms. I was also concerned about whether it is good design practice to put all of my analysis code into the NSManagedObject model.
While vector<> is great for handling the data you are sampling (because of its support for dynamically resizing its underlying storage), you may find that straight C arrays are sufficient (or even better) for data that is already stored. This adds a level of complexity, but it avoids a copy for data arrays whose size is already known and static.
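And if you ever want vector<>-style growth without pulling in C++, a realloc-backed buffer is the usual C equivalent. A minimal sketch with made-up names, doing roughly what vector<double> does under the hood:

```c
#include <stdlib.h>

/* A minimal growable array of doubles: double the capacity on overflow,
   so appends are amortized O(1), like vector<double>::push_back. */
typedef struct {
    double *data;
    size_t  count;
    size_t  capacity;
} DoubleBuf;

int dbuf_push(DoubleBuf *b, double v)
{
    if (b->count == b->capacity) {
        size_t cap = b->capacity ? b->capacity * 2 : 64;
        double *p = realloc(b->data, cap * sizeof(double));
        if (p == NULL)
            return -1;               /* out of memory; buffer unchanged */
        b->data = p;
        b->capacity = cap;
    }
    b->data[b->count++] = v;
    return 0;
}

void dbuf_free(DoubleBuf *b)
{
    free(b->data);
    b->data = NULL;
    b->count = b->capacity = 0;
}
```

The `data` member is a plain contiguous double array, so it can be handed directly to the static-size C-array code (or wrapped in an NSData) with no copy.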