
@Paul Lynch's answer is great, but it changes the image's aspect ratio. If you don't want to change the aspect ratio and still want the new image to fit the new size, try this.

+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {

// calculate a new size whose aspect ratio matches the original image
CGFloat ratioW = image.size.width / newSize.width;
CGFloat ratioH = image.size.height / newSize.height;

CGFloat ratio = image.size.width / image.size.height;

CGSize showSize = CGSizeZero;
if (ratioW > 1 && ratioH > 1) { 

    if (ratioW > ratioH) { 
        showSize.width = newSize.width;
        showSize.height = showSize.width / ratio;
    } else {
        showSize.height = newSize.height;
        showSize.width = showSize.height * ratio;
    }

} else if (ratioW > 1) {

    showSize.width = newSize.width;
    showSize.height = showSize.width / ratio;

} else if (ratioH > 1) {

    showSize.height = newSize.height;
    showSize.width = showSize.height * ratio;

} else {

    // image already fits within newSize; keep its original size
    showSize = image.size;

}

//UIGraphicsBeginImageContext(newSize);
// In the next line, pass 0.0 to use the current device's pixel scaling factor (and thus account for Retina resolution).
// Pass 1.0 to force exact pixel size.
UIGraphicsBeginImageContextWithOptions(showSize, NO, 0.0);
[image drawInRect:CGRectMake(0, 0, showSize.width, showSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
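
For example, a minimal usage sketch (the class name ImageUtils and the asset name are placeholders, not from the answer):

UIImage *photo = [UIImage imageNamed:@"photo"]; // hypothetical asset
// scales to fit 100x100 points while keeping the original aspect ratio
UIImage *thumbnail = [ImageUtils imageWithImage:photo scaledToSize:CGSizeMake(100.0, 100.0)];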

ios - The simplest way to resize an UIImage? - Stack Overflow

ios uiimage resize

I have updated the code above to account for retina resolution images:

- (UIImage *) changeColorForImage:(UIImage *)mask toColor:(UIColor*)color {

CGImageRef maskImage = mask.CGImage;
CGFloat width = mask.scale * mask.size.width;
CGFloat height = mask.scale * mask.size.height;
CGRect bounds = CGRectMake(0,0,width,height);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapContext = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGContextClipToMask(bitmapContext, bounds, maskImage);
CGContextSetFillColorWithColor(bitmapContext, color.CGColor);    
CGContextFillRect(bitmapContext, bounds);

CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(bitmapContext);
CGContextRelease(bitmapContext);
CGColorSpaceRelease(colorSpace);

UIImage *result = [UIImage imageWithCGImage:mainViewContentBitmapContext scale:mask.scale orientation:UIImageOrientationUp];
CGImageRelease(mainViewContentBitmapContext);
return result;
}
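
A minimal usage sketch (the asset name is hypothetical; any image whose alpha channel defines the shape will work):

UIImage *mask = [UIImage imageNamed:@"icon-mask"]; // hypothetical asset
UIImage *tinted = [self changeColorForImage:mask toColor:[UIColor redColor]];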

An earlier revision of this code did not respect the Create Rule of memory management and failed to release the color space. Don't forget to call CGColorSpaceRelease(colorSpace) to avoid leaks!


uiimage - How do I change a partially transparent image's color in iOS...

ios uiimage quartz-2d retina-display

Now that we've established you're using Auto Layout in your project, I've made a demo app to show how you can change the image view's image and adjust its height. Auto Layout will do much of this for you automatically, but the catch is that the photos you'll be using come from the user's gallery, so they're likely to be very large.

The trick is to keep a reference to the NSLayoutConstraint for the image view's height. When you change your image, you need to adjust its constant to the correct height given the fixed width.
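
For instance, a minimal sketch of that adjustment (the outlet name imageViewHeightConstraint is hypothetical, and image is the freshly picked UIImage):

// imageViewHeightConstraint is an assumed IBOutlet to the image view's height constraint
CGFloat fixedWidth = CGRectGetWidth(self.imageView.bounds);
CGFloat aspectRatio = image.size.height / image.size.width;
self.imageViewHeightConstraint.constant = fixedWidth * aspectRatio;
[self.view layoutIfNeeded];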

Your other option is to set a contentMode on your image view: with UIViewContentModeScaleAspectFit the image will always appear in full, but it will be locked to the bounds of the image view, which can change based on the Auto Layout constraints you have.
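
That option is a single line (assuming the outlet is named imageView):

self.imageView.contentMode = UIViewContentModeScaleAspectFit;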

When you get a new image from the image picker, all you need to do is change the frame of the image view according to the UIImage size.

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
  image = [info objectForKey:UIImagePickerControllerOriginalImage];
  [self.imageView setImage:image];
  self.imageView.frame = CGRectMake(self.imageView.frame.origin.x,
                                    self.imageView.frame.origin.y,
                                    image.size.width,
                                    image.size.height);
  [self dismissViewControllerAnimated:YES completion:NULL];
}

Of course the resulting frame can be very large, since it uses the dimensions of the original image.

So let's lock the width to 280 points... this way we can always have the full image on screen in portrait mode.

So, assuming your image is 1000x800 and our image view is 280x100, we can calculate the correct height for the image view while keeping its width like this:

CGSize imageSize        = CGSizeMake(1000.0, 800.0);
CGSize imageViewSize    = CGSizeMake(280.0, 100.0);

CGFloat correctImageViewHeight = (imageViewSize.width / imageSize.width) * imageSize.height;

self.imageView.frame = CGRectMake(  self.imageView.frame.origin.x,
                                    self.imageView.frame.origin.y,
                                    CGRectGetWidth(self.imageView.bounds),
                                    correctImageViewHeight);

I tried your first snippet and it won't change the width/height unless I add [self.imageView setTranslatesAutoresizingMaskIntoConstraints:YES];

Will attempt to recover by breaking constraint  <NSLayoutConstraint:0x71ad740 UIImageView:0x71ac340.trailing == UIView:0x71ac5f0.trailing>  Break on objc_exception_throw to catch this in the debugger. The methods in the UIConstraintBasedLayoutDebugging category on UIView listed in <UIKit/UIView.h> may also be helpful.

Right, you're using Auto Layout. That is a very important detail.

ios - Change UIImageView size to match image with AutoLayout - Stack O...

ios ios6 uiimageview autolayout

You can modify the colors of an image in interesting ways programmatically using CIFilter, if that's what you're asking.
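
For example, a minimal sketch using the built-in CIColorMonochrome filter (the source asset name is a placeholder, not from the answer):

#import <CoreImage/CoreImage.h>

UIImage *source = [UIImage imageNamed:@"photo"]; // hypothetical asset
CIImage *input = [CIImage imageWithCGImage:source.CGImage];

CIFilter *mono = [CIFilter filterWithName:@"CIColorMonochrome"];
[mono setValue:input forKey:kCIInputImageKey];
[mono setValue:[CIColor colorWithRed:1.0 green:0.0 blue:0.0] forKey:@"inputColor"];
[mono setValue:@1.0 forKey:@"inputIntensity"];

CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:mono.outputImage fromRect:mono.outputImage.extent];
UIImage *tinted = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);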

ios - Is that possible to change the color for an image programmatical...

ios objective-c uiimage

I agree with Matt, you should check out CIFilter for future image modification. But if you are looking for a quick code sample, here is how I did it. It works pretty well for me; simply call it like this:

[self useColor:[UIColor redColor] forImage:WHATEVER_IMAGE];

 - (UIImage *)useColor:(UIColor *)color forImage:(UIImage *)image
 {
     if(!color)
         return image;

     NSUInteger width = CGImageGetWidth([image CGImage]);
     NSUInteger height = CGImageGetHeight([image CGImage]);
     CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

     NSUInteger bytesPerPixel = 4;
     NSUInteger bytesPerRow = bytesPerPixel * width;
     NSUInteger bitsPerComponent = 8;
     NSUInteger bitmapByteCount = bytesPerRow * height;

     unsigned char *rawData = (unsigned char*) calloc(bitmapByteCount, sizeof(unsigned char));

     CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
     CGColorSpaceRelease(colorSpace);

     CGContextDrawImage(context, CGRectMake(0, 0, width, height), [image CGImage]);

     // NOTE: this assumes the UIColor is in an RGB color space (e.g. created with colorWithRed:green:blue:alpha:)
     CGColorRef cgColor = [color CGColor];
     const CGFloat *components = CGColorGetComponents(cgColor);
     float r = components[0] * 255.0;
     float g = components[1] * 255.0;
     float b = components[2] * 255.0;
     //float a = components[3]; // not needed

     int byteIndex = 0;

     while (byteIndex < bitmapByteCount)
     {
         int oldR = rawData[byteIndex];
         int oldG = rawData[byteIndex + 1];
         int oldB = rawData[byteIndex + 2];
         int oldA = rawData[byteIndex + 3];
         if(oldR != 0 || oldG != 0 || oldB != 0 || oldA != 0)
         {
             rawData[byteIndex] = r;
             rawData[byteIndex + 1] = g;
             rawData[byteIndex + 2] = b;
         }

         byteIndex += 4;
     }

     // create the final image, then release the intermediate CGImage to avoid a leak
     CGImageRef newCGImage = CGBitmapContextCreateImage(context);
     UIImage *result = [UIImage imageWithCGImage:newCGImage scale:image.scale orientation:image.imageOrientation];
     CGImageRelease(newCGImage);

     CGContextRelease(context);
     free(rawData);

     return result;
 }

ios - Is that possible to change the color for an image programmatical...

ios objective-c uiimage

You can't assign a file name to photo library images. iOS assigns a name (an asset URL) to images saved to the photo library, and we can't change that.

If you need to save images to the photo library you can use ALAssetsLibrary:

#import <AssetsLibrary/AssetsLibrary.h>

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[image CGImage]
                          orientation:(ALAssetOrientation)[image imageOrientation]
                      completionBlock:^(NSURL *assetURL, NSError *error) {
    if (error)
    {
        // Error
    }
    else
    {
        // Success
    }
}];
[library release];
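
ALAssetsLibrary was later deprecated in favor of the Photos framework; a minimal sketch of the equivalent save (assuming photo-library authorization has already been granted):

#import <Photos/Photos.h>

[[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
    [PHAssetChangeRequest creationRequestForAssetFromImage:image];
} completionHandler:^(BOOL success, NSError *error) {
    if (!success) {
        NSLog(@"Error saving image: %@", error);
    }
}];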

iphone - saving images with UIImagePickerController - Stack Overflow

iphone ios5 uiimage uiimagepickercontroller

If your HTML didn't change and the only change was the image, you should use UIWebView's reload message instead of loading the request again.

- (void)reloadWebView:(id)sender
{
    if (self.loaded) {
        [self.webView reload];
    } else {
        NSString *html = [[NSBundle mainBundle] pathForResource:@"web" ofType:@"html"];
        NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL fileURLWithPath:html]];
        [self.webView loadRequest:request];
        self.loaded = YES;
    }
}

You don't even need any manipulations with cache.

That's not true; the only reason we are asking how to clear the cache is because we have already reloaded the HTML file and the images did NOT change. You are using the loadRequest method, which actually has a way to clear the cache... I was using the loadHTMLString method, which has no way to clear the cache (since my HTML file was being generated on the spot in a string)... I guess I'd have to save the string as an actual .html file in my bundle and then use loadRequest with its clear-cache option.

But according to the person who asked this question, even using loadRequest's clear-cache option (cachePolicy:NSURLRequestReloadIgnoringLocalAndRemoteCacheData) it still didn't update the PNG files that were changed in the bundle from within the app!

The key point is in the reload message. It's not an issue related to caching, because in my test project I didn't do anything with caches and everything works fine. When I replace an old image with a new one and then call reload, the image in the web view changes. I don't think it's related to the way you load your data either, but you should check it.

Unfortunately I already tried reload and it didn't work /: As of now I've just ended up renaming all 108 images each time a change is made, updating those new image name references throughout the whole project, then reloading them all and deleting the old images with the old names (each image is only about 8x8 pixels so it's not too bad).

Well, I did the tests and have to say you're right. The web view doesn't refresh images when the HTML is loaded from a string. It's very strange behaviour. Since it works as needed when you load a file into the web view, I think it's better to use that approach instead of yours. It would be much easier to save the string in the temp directory and just reload the web views when you've changed your images.
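
A minimal sketch of that suggestion (the htmlString variable is assumed; image references inside the HTML would need to be absolute file URLs or resolvable relative to the temp file):

NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"generated.html"];
[htmlString writeToFile:path atomically:YES encoding:NSUTF8StringEncoding error:NULL];

NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL fileURLWithPath:path]];
[self.webView loadRequest:request];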

ios - Clear UIWebView cache when use local image file - Stack Overflow

ios objective-c caching uiwebview nsdocument

If, after considering George's answer, you still need to modify the colours, you could try using Core Image filters.

The following example adjusts the hue of a UIImageView called screenImage by an angle given in angle.

- (void)rebuildFilterChain:(double)angle {
  UIImage *uiImage = [UIImage imageNamed:@"background.jpg"];
  CIImage *image = [CIImage imageWithCGImage:[uiImage CGImage]];

  CIFilter *hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"];
  [hueAdjust setDefaults];
  [hueAdjust setValue:image forKey: @"inputImage"];
  [hueAdjust setValue: [NSNumber numberWithFloat:angle] forKey: @"inputAngle"];
  self.resultImage = [hueAdjust valueForKey: @"outputImage"];

  // a CIContext is needed to render the output; one is created here for brevity
  CIContext *context = [CIContext contextWithOptions:nil];
  CGImageRef cgImage = [context createCGImage:self.resultImage fromRect:self.resultImage.extent];
  screenImage.image = [UIImage imageWithCGImage:cgImage];
  CGImageRelease(cgImage);
}

The full list of Core Image filters is on the Apple site.

ios - Is it possible to change the color of the image? - Stack Overflo...

ios cocoa uiimageview uiimage uicolor

Apply a mask from the image before adding the color.

CGSize imageSize = [imageView.image size];
CGRect imageExtent = CGRectMake(0,0,imageSize.width,imageSize.height);

// Create a context containing the image.
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
[imageView.image drawAtPoint:CGPointMake(0, 0)];

// Setup a clip region using the image
CGContextSaveGState(context);
CGContextClipToMask(context, imageExtent, imageView.image.CGImage);

[color set];
CGContextFillRect(context, imageExtent);

// Draw the hue on top of the image.
CGContextSetBlendMode(context, kCGBlendModeHue);
[color set];

UIBezierPath *imagePath = [UIBezierPath bezierPathWithRect:imageExtent];
[imagePath fill];

CGContextRestoreGState(context); // remove clip region

// Retrieve the new image.
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
imageView.image= result;

UIGraphicsEndImageContext();

@maddy thanks for the help. Actually I want to make it general code, not for any specific image. For example you have shirt, cap etc. transparent images; I just want to change the hue of the opaque part only. I've attached two images.

Hey @rmaddy, thanks, it's working, but I'm trying to fix an issue where the masked image comes out rotated 180 degrees. I used CGContextRotateCTM but then it's not changing the hue. Can you please help?

The rotation will happen around the origin which is in the corner, not the center. So you will need to translate in addition to rotate.
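
A minimal sketch of that translate-rotate-translate pattern (assuming the same context and imageSize as in the answer above):

// rotate 180 degrees around the image's centre instead of the origin
CGContextTranslateCTM(context, imageSize.width / 2.0, imageSize.height / 2.0);
CGContextRotateCTM(context, M_PI);
CGContextTranslateCTM(context, -imageSize.width / 2.0, -imageSize.height / 2.0);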

I think it's only possible through CIFilter, but that doesn't change according to the selected color; it only changes according to deltaHueRadians as a float. But I want to change it by a selected color. Any suggestion to fix it?

ios - How to change hue of UIIMAGE having transparent area? - Stack Ov...

ios objective-c uiimage hue

Core Image color filters work great for this kind of task, and I find them slightly more straightforward than the Core Graphics (CG...) classes: they let you adjust the RGB and alpha characteristics of the image. I have been using them to change the white background of a QR code to a color. The RGBA of white is (1, 1, 1, 1); in your case I believe you have to reverse the colour. Just check Apple's Core Image documentation: there are a few dozen filters available, and CIColorMatrix is just one of them.

CIImage *beginImage = [CIImage imageWithCGImage:image.CGImage];

CIContext *context = [CIContext contextWithOptions:nil];

CIFilter *filtercd = [CIFilter filterWithName:@"CIColorMatrix"  //rgreen
                                keysAndValues: kCIInputImageKey, beginImage, nil];
[filtercd setValue:[CIVector vectorWithX:0 Y:1 Z:1 W:0] forKey:@"inputRVector"]; // 5
[filtercd setValue:[CIVector vectorWithX:1 Y:0 Z:1 W:0] forKey:@"inputGVector"]; // 6
[filtercd setValue:[CIVector vectorWithX:1 Y:1 Z:0 W:0] forKey:@"inputBVector"]; // 7
[filtercd setValue:[CIVector vectorWithX:0 Y:0 Z:0 W:1] forKey:@"inputAVector"]; // 8
[filtercd setValue:[CIVector vectorWithX:1 Y:1 Z:0 W:0] forKey:@"inputBiasVector"]; 
CIImage *doutputImage = [filtercd outputImage];


CGImageRef cgimgd = [context createCGImage:doutputImage fromRect:[doutputImage extent]];
UIImage *newImgd = [UIImage imageWithCGImage:cgimgd];

filterd.image = newImgd; // "filterd" is presumably the answerer's UIImageView outlet

CGImageRelease(cgimgd);

ios - changing the color of image in iphone sdk - Stack Overflow

ios objective-c image cocoa-touch uicolor

Why do you use an image? If you have a background color, that means you are using a flat color, not actually an image. So you can set the background color of the button as well as the title color. If you are doing this just to have rounded corners, don't. Just import <QuartzCore/QuartzCore.h> and do this:

button.layer.cornerRadius = 8;
button.layer.masksToBounds = YES;
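
And for the flat color itself (the colors here are just an example):

[button setBackgroundColor:[UIColor redColor]];
[button setTitleColor:[UIColor whiteColor] forState:UIControlStateNormal];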

Thank you. Actually, I have an image which consists of text (like glittering) with a height of 50, and I don't know which font they are using. It looks stylish. The text color is red. I am wondering whether it is possible to change that particular text color in the image. It is placed in a UIImageView.

ios - Is it possible to change the color of the image? - Stack Overflo...

ios cocoa uiimageview uiimage uicolor

Can I suggest you look into using OpenCV? It's an open source image manipulation library, and it's got an iOS port too. There are plenty of blog posts about how to use it and set it up.

It has a whole heap of functions that will help you do a good job of what you're attempting. You could do it just using CoreGraphics, but the end result isn't going to look nearly as good as OpenCV would.

It was originally developed at Intel, and as you might expect it does a pretty good job at things like edge detection and object tracking. I remember reading a blog about how to separate a certain color from a picture with OpenCV - the examples showed a pretty good result. See here for an example. From there I can't imagine it would be a massive job to actually change the separated color to something else.

iphone - How to change a particular color in an image? - Stack Overflo...

iphone objective-c image-processing core-graphics pixel

To replace an individual pixel's colour you need to:

  • Get the coordinates of the touched point in the image coordinate system
  • Get the x and y position of the pixel to change
  • Create a bitmap context and replace the given pixel's components with your new color's components.

First of all, to get the coordinates of the touched point in the image coordinate system you can use a category method that I wrote on UIImageView. This will return a CGAffineTransform that will map a point from view coordinates to image coordinates depending on the content mode of the view.

@interface UIImageView (PointConversionCatagory)

@property (nonatomic, readonly) CGAffineTransform viewToImageTransform;
@property (nonatomic, readonly) CGAffineTransform imageToViewTransform;

@end

@implementation UIImageView (PointConversionCatagory)

-(CGAffineTransform) viewToImageTransform {

    UIViewContentMode contentMode = self.contentMode;

    // failure conditions. If any of these are met  return the identity transform
    if (!self.image || self.frame.size.width == 0 || self.frame.size.height == 0 ||
        (contentMode != UIViewContentModeScaleToFill && contentMode != UIViewContentModeScaleAspectFill && contentMode != UIViewContentModeScaleAspectFit)) {
        return CGAffineTransformIdentity;
    }

    // the width and height ratios
    CGFloat rWidth = self.image.size.width/self.frame.size.width;
    CGFloat rHeight = self.image.size.height/self.frame.size.height;

    // whether the image will be scaled according to width
    BOOL imageWiderThanView = rWidth > rHeight;

    if (contentMode == UIViewContentModeScaleAspectFit || contentMode == UIViewContentModeScaleAspectFill) {

        // The ratio to scale both the x and y axis by
        CGFloat ratio = ((imageWiderThanView && contentMode == UIViewContentModeScaleAspectFit) || (!imageWiderThanView && contentMode == UIViewContentModeScaleAspectFill)) ? rWidth:rHeight;

        // The x-offset of the inner rect as it gets centered
        CGFloat xOffset = (self.image.size.width-(self.frame.size.width*ratio))*0.5;

        // The y-offset of the inner rect as it gets centered
        CGFloat yOffset = (self.image.size.height-(self.frame.size.height*ratio))*0.5;

        return CGAffineTransformConcat(CGAffineTransformMakeScale(ratio, ratio), CGAffineTransformMakeTranslation(xOffset, yOffset));
    } else {
        return CGAffineTransformMakeScale(rWidth, rHeight);
    }
}

-(CGAffineTransform) imageToViewTransform {
    return CGAffineTransformInvert(self.viewToImageTransform);
}

@end

There's nothing too complicated here, just some extra logic for scale aspect fit/fill, to ensure the centering of the image is taken into account. You could skip this step entirely if you were displaying your image 1:1 on screen.

Next, you'll want to get the x and y position of the pixel to change. This is fairly simple: you just want to use the above category property viewToImageTransform to get the point in the image coordinate system, and then use floor to make the values integral.

UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageViewWasTapped:)];
tapGesture.numberOfTapsRequired = 1;
[imageView addGestureRecognizer:tapGesture];

...

-(void) imageViewWasTapped:(UIGestureRecognizer*)tapGesture {

    if (!imageView.image) {
        return;
    }

    // get the pixel position
    CGPoint pt = CGPointApplyAffineTransform([tapGesture locationInView:imageView], imageView.viewToImageTransform);
    PixelPosition pixelPos = {(NSInteger)floor(pt.x), (NSInteger)floor(pt.y)};

    // replace image with new image, with the pixel replaced
    imageView.image = [imageView.image imageWithPixel:pixelPos replacedByColor:[UIColor colorWithRed:0 green:1 blue:1 alpha:1.0]];
}

Finally, you'll want to use another of my category methods imageWithPixel:replacedByColor: to get out your new image with a replaced pixel with a given color.

/// A simple struct to represent the position of a pixel
struct PixelPosition {
    NSInteger x;
    NSInteger y;
};

typedef struct PixelPosition PixelPosition;

@interface UIImage (UIImagePixelManipulationCatagory)

@end

@implementation UIImage (UIImagePixelManipulationCatagory)

-(UIImage*) imageWithPixel:(PixelPosition)pixelPosition replacedByColor:(UIColor*)color {

    // components of replacement color in a 255 UInt8 format (fairly standard bitmap format);
    // assumes an RGBA colour, e.g. one created with colorWithRed:green:blue:alpha:
    const CGFloat* colorComponents = CGColorGetComponents(color.CGColor);
    UInt8* color255Components = calloc(sizeof(UInt8), 4);
    for (int i = 0; i < 4; i++) color255Components[i] = (UInt8)round(colorComponents[i]*255.0);

    // raw image reference
    CGImageRef rawImage = self.CGImage;

    // image attributes
    size_t width = CGImageGetWidth(rawImage);
    size_t height = CGImageGetHeight(rawImage);
    CGRect rect = {CGPointZero, {width, height}};

    // image format
    size_t bitsPerComponent = 8;
    size_t bytesPerRow = width*4;

    // the bitmap info
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;

    // data pointer  stores an array of the pixel components. For example (r0, b0, g0, a0, r1, g1, b1, a1 .... rn, gn, bn, an)
    UInt8* data = calloc(bytesPerRow, height);

    // get new RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create bitmap context
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);

    // draw image into context (populating the data array while doing so)
    CGContextDrawImage(ctx, rect, rawImage);

    // get the index of the pixel (4 components times the x position plus the y position times the row width)
    NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));

    // set the pixel components to the color components
    data[pixelIndex] = color255Components[0]; // r
    data[pixelIndex+1] = color255Components[1]; // g
    data[pixelIndex+2] = color255Components[2]; // b
    data[pixelIndex+3] = color255Components[3]; // a

    // get image from context
    CGImageRef img = CGBitmapContextCreateImage(ctx);

    // clean up
    free(color255Components);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(data);

    UIImage* returnImage = [UIImage imageWithCGImage:img];
    CGImageRelease(img);

    return returnImage;
}

@end

What this does is first get out the components of the color you want to write to one of the pixels, in a 255 UInt8 format. Next, it creates a new bitmap context, with the given attributes of your input image.

The important bit of this method is:

// get the index of the pixel (4 components times the x position plus the y position times the row width)
NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));

// set the pixel components to the color components
data[pixelIndex] = color255Components[0]; // r
data[pixelIndex+1] = color255Components[1]; // g
data[pixelIndex+2] = color255Components[2]; // b
data[pixelIndex+3] = color255Components[3]; // a

What this does is get out the index of a given pixel (based on the x and y coordinate of the pixel) then uses that index to replace the component data of that pixel with the color components of your replacement color.

Finally, we get out an image from the bitmap context and perform some cleanup.

After running your project on my phone, the code clearly works, so I've marked the question as answered. However, when I ran the same code in my own project, it was met with a couple of errors which I listed in an edit to my main post. Though you have helped me answer the question, I would be very grateful if you could help me solve the issue, as I suspect its cause is quite straightforward.

@IsaacA What's the image you're using? Sounds like it's in an unsupported format (3 components, 16 bits-per-component, 64 bits-per-pixel). You'll either have to convert it to a supported format (4 components, 8 bits per component, 32 bits per pixel) or I'll have to add some extra code to handle it. A link to your image would be great for debugging it though.

@IsaacA Actually, after refreshing myself on the supported bitmap pixel formats it appears 64 bits per pixel is only available on OS X, not iOS. So you'll need to convert your image to an 8 bit per component format. You should be able to do this (fairly) easily in an image editor. In Photoshop, you can select it when creating a new file under "Color Mode: RGB, 8 bit"

I actually tried making a 32-bit .tif image, but this too produced peculiar errors in the Xcode logs, as well as causing strange colour shifts when I exported the image. Oddly, when I instead produced an 8-bit JPEG, this resulted in your code working perfectly! As is, I expect 8-bit will likely be all I require for my intended purposes. Thanks a lot for the extra help :)
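
As an alternative to converting the asset in an image editor, one could normalise the image in code by redrawing it into a standard 8-bit-per-component RGBA bitmap; a minimal sketch (not from the answer above):

// redraw an arbitrary UIImage into a standard 8-bit-per-component RGBA bitmap
UIImage *normalizedImage(UIImage *input) {
    UIGraphicsBeginImageContextWithOptions(input.size, NO, input.scale);
    [input drawInRect:CGRectMake(0, 0, input.size.width, input.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}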

ios - How to change colour of individual pixel of UIImage/UIImageView ...

ios objective-c uiimageview uiimage

Try this if you have an image:

- (void)setBackgroundImage:(UIImage *)image forState:(UIControlState)state;

// or, for a simple built-in highlight effect:
button.showsTouchWhenHighlighted = YES;
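
If you only have a colour, a common workaround (not part of the answer above) is to render a 1x1 solid-colour image and pass it to setBackgroundImage:forState:; a minimal sketch:

// render a 1x1 image filled with the given colour
UIImage *imageFromColor(UIColor *color) {
    CGRect rect = CGRectMake(0, 0, 1, 1);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

// usage: show a different background colour while the button is highlighted
[button setBackgroundImage:imageFromColor([UIColor darkGrayColor]) forState:UIControlStateHighlighted];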

I tried playing around with showsTouchWhenHighlighted but it didn't help. I don't want to use setBackgroundImage:forState:. I was in fact trying to use the backgroundColor to not use any image.

ios - How to change the background color of a UIButton while it's high...

ios objective-c xcode uibutton