The Euclidean distance of each point on the grid from the target point p can be efficiently computed with:

`dist <- sqrt(rowSums(mapply(function(x,y) (x-y)^2, grid, p)))`

The inner mapply call produces a matrix the same size as grid, where each entry is the squared difference from the target point in that dimension; rowSums and sqrt then efficiently combine these into the Euclidean distance.

In this case you are including anything within sqrt(2) Euclidean distance of the target point:

```grid[dist < 1.5,]
#    Var1 Var2
# 16    1    4
# 17    2    4
# 18    3    4
# 21    1    5
# 22    2    5
# 23    3    5
# 26    1    6
# 27    2    6
# 28    3    6```

The use of mapply (operating over dimensions) and rowSums makes this much more efficient than an approach that loops through individual points on the grid, computing the distance to the target point. To see this, consider a slightly larger example with 1000 randomly distributed points in three dimensions:

The vectorized approach is 500 times faster than the approach that loops through the rows.
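For readers who want the same vectorized-versus-looped contrast outside R, here is a hedged NumPy sketch of the idea (an analogue written for illustration, not the original benchmark code; the 500x figure comes from the R timing):

```python
import numpy as np

def dist_vectorized(grid, p):
    # Square the per-dimension differences, sum across columns, take sqrt.
    return np.sqrt(((grid - p) ** 2).sum(axis=1))

def dist_loop(grid, p):
    # Same computation, one grid point at a time (much slower at scale).
    return np.array([np.sqrt(((row - p) ** 2).sum()) for row in grid])

rng = np.random.default_rng(144)
grid = rng.normal(size=(1000, 3))   # 1000 random points in 3 dimensions
p = rng.normal(size=3)              # the target point

assert np.allclose(dist_vectorized(grid, p), dist_loop(grid, p))
near = grid[dist_vectorized(grid, p) < 1.5]   # points within the cutoff
```

Both functions return identical distances; only the vectorized one avoids a Python-level loop over points.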

This approach can be used in cases where you have many more points (1 million in this example):

```set.seed(144)
grid <- data.frame(x=rnorm(1000000), y=rnorm(1000000), z=rnorm(1000000))
p <- data.frame(x=rnorm(1), y=rnorm(1), z=rnorm(1))
lim <- 1.5
system.time(vectorized(grid, p, lim))
#    user  system elapsed
#   3.466   0.136   3.632```

ahhh i see. the way mapply works is such that your function (x-y)^2 will simply take the distance between grid and p regardless of the number of dimensions?

@road_to_quantdom it loops through the columns one by one, computing the resulting vector for each.

## How to pull points that are within a certain distance away in R? - Sta...

r euclidean-distance

The 'classic' way of measuring this is to break the image up into some canonical number of sections (say a 10x10 grid), compute a histogram of RGB values inside each cell, and compare corresponding histograms. This type of algorithm is preferred because of both its simplicity and its invariance to scaling and (small!) translation.

Isn't this similar to doing a single histogram for the whole image, but with the added drawbacks of not being resilient to mirroring and rotation?

2 histograms from 2 halves of an image will have better matching precision than 1 histogram of the whole. Though it has the drawbacks you mentioned, it depends on what problem you are solving.
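A minimal Python sketch of the cell-histogram idea (a simplification of the approach described above: it histograms all channel values per cell together rather than keeping separate RGB histograms, and scores with histogram intersection):

```python
import numpy as np

def grid_histograms(img, cells=10, bins=8):
    """Split the image into cells x cells sections and histogram each one."""
    h, w, _ = img.shape
    feats = []
    for i in range(cells):
        for j in range(cells):
            cell = img[i * h // cells:(i + 1) * h // cells,
                       j * w // cells:(j + 1) * w // cells]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))   # normalize per cell
    return np.concatenate(feats)

def similarity(img_a, img_b, cells=10, bins=8):
    fa = grid_histograms(img_a, cells, bins)
    fb = grid_histograms(img_b, cells, bins)
    # Histogram intersection per cell, averaged over all cells: 1.0 = identical.
    return np.minimum(fa, fb).sum() / (cells * cells)
```

Because each cell is histogrammed independently, small translations only perturb a few cells, which is where the (limited) translation invariance comes from.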

## algorithm - How can I measure the similarity between two images? - Sta...

algorithm language-agnostic image image-processing

In the past, when desktop machines had a single CPU, parallelization only applied to "special" parallel hardware. But these days desktops usually have from 2 to 8 cores, so parallel hardware is now the standard. That's a big difference, and therefore it is not just about which problems suggest parallelism, but also how to apply parallelism to a wider set of problems than before.

In order to take advantage of parallelism, you usually need to recast your problem in some way. Parallelism changes the playground in many ways:

• You get data coherence and locking problems. So you need to organize your problem so that you have semi-independent data structures which can be handled by different threads, processes, and computation nodes.
• Parallelism can also introduce nondeterminism into your computation, if the relative order in which the parallel components do their jobs affects the results. You may need to protect against that, and define a parallel version of your algorithm which is robust against different scheduling orders.
• When you transcend intra-motherboard parallelism and get into networked / cluster / grid computing, you also get the issues of network bandwidth, network going down, and the proper management of failing computational nodes. You may need to modify your problem so that it becomes easier to handle the situations where part of the computation gets lost when a network node goes down.

Here is a link to a Herb Sutter article where he talks about how when we have more potential to parallelise it allows us to think about the same problem in novel ways (including redefining the original scope of the problem): drdobbs.com/cpp/205900309
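As a minimal sketch of the first two bullets (the names and chunking scheme are my own, for illustration): split the data into semi-independent chunks so no locking is needed, and make the combining step order-insensitive so scheduling cannot change the result:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Each chunk is self-contained: no shared state, so no locking needed.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Semi-independent data structures: one chunk per worker.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(process_chunk, chunks)
    # Combining partial results by addition is order-independent, so the
    # relative scheduling order of the workers cannot affect the answer.
    return sum(partials)

assert parallel_sum_of_squares(list(range(1000))) == sum(x * x for x in range(1000))
```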

## concurrency - What challenges promote the use of parallel/concurrent a...

concurrency erlang parallel-processing python-stackless stackless

One of the answers above mentions handling different pixel densities but suggests computing the swipe parameters by hand. It is worth noting that you can actually obtain scaled, reasonable values from the system using the ViewConfiguration class:

```final ViewConfiguration vc = ViewConfiguration.get(getContext());
final int swipeMinDistance = vc.getScaledPagingTouchSlop();
final int swipeThresholdVelocity = vc.getScaledMinimumFlingVelocity();
final int swipeMaxOffPath = vc.getScaledTouchSlop();
// (there is also vc.getScaledMaximumFlingVelocity() one could check against)```

I noticed that using these values makes the "feel" of the fling more consistent between the application and the rest of the system.

`swipeMinDistance = vc.getScaledPagingTouchSlop()`
`swipeMaxOffPath = vc.getScaledTouchSlop()`

Awkwardly, getScaledTouchSlop gives me a very small offset, for example only 24 pixels on a 540-pixel-high screen; that's very hard to stay within with a finger. :S

## android - Fling gesture detection on grid layout - Stack Overflow

android listener gesture-recognition

First, compute the x and y range for each rectangle (because you have a torus geometry do it mod gridsize).

```x1 = x = 0, x2 = x + w = 20
y1 = y = 0, y2 = y + h = 20```
```x3 = 495, x4 = 505 mod 500 = 5
y3 = 0,   y4 = 10```

Create the x and y "regions" for each rectangle:

```Rectangle-1: x-regions: (0, 20)
             y-regions: (0, 20)

Rectangle-2: x-regions: (495, 500), (0, 5)
             y-regions: (0, 10)```

If, on both axes, some region of one rectangle has a non-null intersection with some region of the other, then your rectangles overlap. Here the (0, 20) x-region of Rectangle-1 and the (0, 5) x-region of Rectangle-2 have a non-null intersection, and so do the (0, 20) and (0, 10) y-regions.
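The steps above can be sketched directly in Python (function names are mine; rectangles are (x, y, w, h) tuples):

```python
def axis_regions(start, length, size):
    """Return the interval(s) covered on one axis, mod the grid size."""
    start %= size
    end = start + length
    if end <= size:
        return [(start, end)]
    return [(start, size), (0, end - size)]   # wraps around the torus edge

def intervals_overlap(a, b):
    return a[0] < b[1] and b[0] < a[1]        # non-null intersection

def torus_rects_overlap(r1, r2, size):
    """Rectangles overlap iff their regions intersect on both axes."""
    def axis_hit(p1, l1, p2, l2):
        return any(intervals_overlap(a, b)
                   for a in axis_regions(p1, l1, size)
                   for b in axis_regions(p2, l2, size))
    return (axis_hit(r1[0], r1[2], r2[0], r2[2])
            and axis_hit(r1[1], r1[3], r2[1], r2[3]))

# The example from the text: a 500-wide grid, Rectangle-1 at (0,0) sized
# 20x20 and Rectangle-2 at (495,0) sized 10x10, which wraps around.
assert torus_rects_overlap((0, 0, 20, 20), (495, 0, 10, 10), 500)
```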

## math - Calculate overlap between two rectangles on x/y grid? - Stack O...

math grid coordinates overlap euclidean-distance

Don't give cols a width. They already have a width computed by Bootstrap. That's the core of the grid system and what makes it responsive.

If you want to give something a width or height, do it within the col:

```<div class="container">
    <div class="row">
        <div class="col-...">
            <div id="page">
                <div id="article"></div>
                <div id="footer"></div>
            </div>
        </div>
    </div>
</div>```

## html - Bootstrap grid gives black line - Stack Overflow

After a day of searching, I have a working solution. Everything is in how you start the node. So first, do the usual:

`java -jar lib/selenium-server-standalone-2.20.0.jar -role hub`

Then start the node like this:

`java -jar lib/selenium-server-standalone-2.20.0.jar -role webdriver -hub http://localhost:4444/grid/register -browser browserName="chrome",version=ANY,platform=WINDOWS,maxInstances=5 -Dwebdriver.chrome.driver=lib\chromedriver.exe`

More specifically: you have to start the node with the -browser parameter and add the -D parameter specifying the full path to the ChromeDriver executable.

My huge thanks goes to John Naegle, who answered a similar question here on SO regarding Internet Explorer - see here

Many thanks for the solution, after a night of trying to get it to work. For Linux (CentOS), you need both the chromedriver and a Chrome installation. That also took me a while to figure out :(

## webdriver - Selenium Grid does not run Chrome on another computer - St...

selenium webdriver selenium-grid selenium-chromedriver

Instead of defining Status as a function in the model, add it in model.parse as a computed field. Ex:

```schema: {
    parse: function (d) {
        $.each(d, function (idx, elem) {
            elem.Status = helper.GetStatus(elem.EntriesStatus);
        });
        return d;
    }
}```

And then in the template remove ():

`template: kendo.template("# if (HasError) { # <strong class='clrRed'>#= Status #</strong> # } else { # #= Status # # } #"),`

## javascript - Kendo Grid Filter on a dynamic column - Stack Overflow

javascript kendo-ui kendo-grid

Assuming your computer generates and tests one million combinations per second, it would need about 195 years to process the whole set of possibilities.

You definitely need to use some faster method, basically relying on cutting non-promising search paths. The answer by Tom Zych contains a useful hint.

I think it's actually 4002 * 4001 * 4000 * 3999, not 4002^4, but 2.56 * 10^4 is about right either way.

@cwallenpoole 10^14, not 10^4. Anyway, that would be the answer in the case of combinations without duplications, but the problem does not exclude repetitions (see examples I added to my answer).

ACK! Yes. That was a typo. It's 10^14. I thought the math behind a limited, unordered set was (Possibilities!)/((max valid possibilities)!). In this case, that means 4002*4001*4000*3999.

@cwallenpoole If you count ordered subsets, i.e. sequences with no repetitions from an N-word dictionary, then yes: you can choose any of N words as the first one, then any of the remaining N-1 as the second one, and so on. However, repetitions are not forbidden here, so at each choice you have all N words in the dictionary to choose from.

python performance array

I actually ended up using Delaunay triangulation to break down the fields into 3-dimensional X,Y,Z surfaces with an identifier. Then, given a set of (Identity, Z) pairs, I form a field line from each surface, and from these lines compute the polygon formed from the shortest edges between lines. This gives me an area of potential x,y coordinates.

## Bicubic Interpolation for Non-regular grids? - Stack Overflow

grid interpolation bicubic non-uniform-distribution

I have been stuck on the same issue. For computing distances, you may want to use the Gower transformation. If you have non-continuous data, you could use an overlap function, which I have not managed to find in R yet (this paper). Here is what I found for the computation problem:

To compute the distances on a very large dataset with too many N observations to be computationally feasible, it is possible to apply the solution used in this recent paper (this one). They propose a smart way to proceed: they create a new dataset, where each new row is a possible combination of values over the d attributes in the original dataset. This gives a new matrix with M < N observations, for which the distance matrix can be computed feasibly. They "create a grid of all possible cases, with their corresponding distances (of each from each other) and used this grid to create our clusters, to which we subsequently assigned our observations".

I tried to reproduce that in R, making use of this answer with library(plyr). In the following I will use just 4 observations, but it should work with N observations, as long as the combinations you produce reduce the memory requirement.

```id <- c(1,2,3,4)
a <- c(1,1,0,1)
b <- c(0,1,0,0)
c <- c(3,2,1,3)
d <- c(1,0,1,1)
Mydata <- as.data.frame(cbind(id, a,b,c,d))
Mydata
id a b c d
1  1 0 3 1
2  1 1 2 0
3  0 0 1 1
4  1 0 3 1

require(plyr)
Mydata_grid <-  count(Mydata[,-1])
Mydata_grid
a b c d freq
1 0 3 1  2
1 1 2 0  1
0 0 1 1  1```

Where freq is the frequency of the combination in the original Mydata. Then I just apply whatever distance measure I prefer to Mydata_grid. In this case my data are categorical, so I apply Jaccard (which I don't know is correct for the data in the example; maybe I should have used an overlap/matching function, but I have not found one in R yet).

```require(vegan)
dist_grid <- vegdist(Mydata_grid, method="jaccard")
d_matrix <- as.matrix(dist_grid)
d_matrix
1         2          3
1 0.0000000 0.5714286  0.6666667
2 0.5714286 0.0000000  0.5000000
3 0.6666667 0.5000000  0.0000000```
```clusters_d <- hclust(dist_grid, method="ward.D2")
cluster <- cutree(clusters_d, k = 2) # k= number of clusters
cluster
1 2 1```

which is the vector that assigns each combination to a cluster. Now it is enough to go back to the original sample, and it is done. For doing this, just do

`Mydata_cluster <- cbind(Mydata_grid, cluster, Mydata_grid$freq)`

and then expand the sample to the original dimension using rep

```Mydata_cluster_full <- Mydata_cluster[rep(row.names(Mydata_cluster), Mydata_cluster$freq), 1:(ncol(Mydata_cluster)-1)]
Mydata_cluster_full
a b c d freq cluster
0 0 1 1    1       1
1 0 3 1    2       2
1 0 3 1    2       2
1 1 2 0    1       1```

You can also add back the original id vector and remove the freq column:

```Mydata_cluster_full$id <- id
Mydata_cluster_full$freq <- NULL

a b c d freq cluster id
0 0 1 1    1       1  1
1 0 3 1    2       2  2
1 0 3 1    2       2  3
1 1 2 0    1       2  4```

Unless you are unlucky, this process will reduce the amount of memory needed to compute your distance matrix to a feasible level.
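The dedup-then-expand trick is language-agnostic; here is a hedged Python/pandas sketch of the same idea (toy cluster labels and a plain mismatch count stand in for the real clustering and Jaccard distance):

```python
import numpy as np
import pandas as pd

# Toy categorical data, mirroring the 4-observation example above
Mydata = pd.DataFrame({'id': [1, 2, 3, 4],
                       'a': [1, 1, 0, 1], 'b': [0, 1, 0, 0],
                       'c': [3, 2, 1, 3], 'd': [1, 0, 1, 1]})

attrs = ['a', 'b', 'c', 'd']
# Collapse duplicate attribute combinations, keeping their frequency
grid = Mydata.groupby(attrs).size().reset_index(name='freq')

# Pairwise distances on the unique combinations only: M x M with M < N,
# instead of N x N over all observations (here a simple mismatch count)
X = grid[attrs].to_numpy()
dist = (X[:, None, :] != X[None, :, :]).sum(axis=2)

# Toy cluster labels for the unique combinations, then expand each
# combination back to its original frequency
grid['cluster'] = range(1, len(grid) + 1)
full = grid.loc[grid.index.repeat(grid['freq'])].reset_index(drop=True)
```

The distance matrix is computed over 3 unique rows instead of 4 observations; on real data with many duplicated combinations the saving is much larger.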

## macos - R distance matrix and clustering for mixed and large dataset? ...

r macos bigdata cluster-analysis distance

I can usually resolve this problem when a computer is under my control, but it's more of a nuisance when working with a grid. When a grid is not homogeneous, not all libraries may be installed, and my experience has often been that a package wasn't installed because a dependency wasn't installed. To address this, I check the following:

• Is Java installed? Are the Java class paths correct?
• Check that the package was installed by the admin and available for use by the appropriate user. Sometimes users will install packages in the wrong places or run without appropriate access to the right libraries. .libPaths() is a good check.
• Check ldd results for R, to be sure about shared libraries
• It's good to periodically run a script that just loads every package needed and does some little test. This catches the package issue as early as possible in the workflow. This is akin to build testing or unit testing, except it's more like a smoke test to make sure that the very basic stuff works.
• If packages can be stored in a network-accessible location, are they? If they cannot, is there a way to ensure consistent versions across the machines? (This may seem OT, but correct package installation includes availability of the right version.)
• Is the package available for the given OS? Unfortunately, not all packages are available across platforms. This goes back to the previous point. If possible, try to find a way to handle a different OS by switching to an appropriate flavor of a package or switch off the dependency in certain cases.

Having encountered this quite a bit, some of these steps become fairly routine. Although the last item might seem like a good starting point, these are listed in approximate order of the frequency that I use them.

Useful considerations to be sure, but more an answer for "Why do I get an error when installing a package".

@DWin: Maybe, but not really. I may have been unclear. These issues come up when a job grinds to a halt on a grid because a package wasn't installed. Maintaining software consistency on a grid isn't hard, but does require a good process for installation, maintenance, and debugging. These are just some of the items that come up in each phase, at least as they relate to the screeching sound that comes when a function isn't available. :)

## Error: could not find function ... in R - Stack Overflow

r function error-handling r-faq

Replace your singleton cache with a distributed cache.

One such cache could be JBoss Infinispan but I'm sure that other distributed cache and grid technologies exist, including commercial ones which are probably more mature at this point.

For singleton objects in general, I'm not sure. I think I'd try to not have singletons in the first place.

## java - Singleton in Cluster environment - Stack Overflow

java singleton websphere cluster-computing

There is software that estimates the probability that an image is porn, but this is not an exact science, as computers can't recognize what is actually in pictures (a picture is only a big set of values on a grid with no meaning). You can teach the computer what is porn and what is not by giving examples. This has the disadvantage that it will only recognize these or similar images.

Given the repetitive nature of porn, you have a good chance if you train the system while tolerating few false positives. For example, if you train the system with nude people, it may flag pictures of a beach with "almost" naked people as porn too.

Similar software is the Facebook face-recognition software that came out recently; it is just specialized in faces. The main principle is the same.

Technically, you would implement some kind of feature detector that utilizes Bayes filtering. A simple detector may look for features like the percentage of flesh-colored pixels, or it may just compute the similarity of the current image to a set of saved porn images.

This is of course not limited to porn, it's actually more a corner case. I think more common are systems that try to find other things in images ;-)
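A toy sketch of the "percentage of flesh colored pixels" feature mentioned above (the RGB thresholds are illustrative guesses, not a validated skin model; a real system would feed such features into a trained classifier):

```python
def flesh_pixel_fraction(pixels):
    """Fraction of (r, g, b) pixels in a crude, illustrative 'skin' range."""
    def is_flesh(r, g, b):
        # Hypothetical thresholds: reddish, reasonably bright pixels
        return r > 95 and g > 40 and b > 20 and r > g and r > b
    flesh = sum(1 for (r, g, b) in pixels if is_flesh(r, g, b))
    return flesh / max(len(pixels), 1)

def looks_suspicious(pixels, threshold=0.5):
    # Flag the image when most pixels are flesh-colored; this is exactly the
    # kind of feature that produces beach-photo false positives.
    return flesh_pixel_fraction(pixels) > threshold
```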

because it doesn't contain anything like an algorithm, recipe, or reference.

So it's not a valid answer to explain to the user asking the question that what he is trying to achieve is not really possible? Dude, you might be a little bit more relaxed...

It's also making a false statement "as computers can't recognize what is actually on pictures"

Because they can't. You can only learn to detect certain images, and the larger your DB of positive and negative cases is, the better; but in general you will never get a solution as accurate as a human, so you will end up with a huge number of false positives and negatives.

## spam prevention - What is the best way to programmatically detect porn...

spam-prevention

You're adding 10 panels which all have their own GridBagLayout, checkbox, and label. So each panel has its own grid, and the widths of the cells are computed independently, based on the components they contain.

If you want a well-aligned single grid, you should have a single panel using GridBagLayout, and add your 10 labels and 10 checkboxes to this unique panel.

Moreover, you should give a weightx > 0 to the label's constraint if you really want it to fill horizontally.

`weightx`

That will work only because all your checkboxes have the same size. If they differ in size, you'll need a single grid.

## java - Components in GridBagLayout are put to the center, without taki...

java swing
```import matplotlib.pyplot as plt

# Subplots are organized in a Rows x Cols grid
# Tot and Cols are known

Tot = number_of_subplots
Cols = number_of_columns

# Compute Rows required (ceiling division)

Rows = Tot // Cols
if Tot % Cols != 0:
    Rows += 1

# Create a Position index

Position = range(1, Tot + 1)```

The floor division gives the number of rows completely filled by subplots; one more row is added if 1 or 2 or ... Cols - 1 subplots still need a place.

```# Create main figure

fig = plt.figure(1)
for k in range(Tot):

    # add every single subplot to the figure with a for loop

    ax = fig.add_subplot(Rows, Cols, Position[k])
    ax.plot(x, y)      # Or whatever you want in the subplot

plt.show()```

Please note that you need the range Position to move the subplots into the right place.

## python - Dynamically add/create subplots in matplotlib - Stack Overflo...

python matplotlib
• Enhancing dynamic range and normalizing illumination

The point is to normalize the background to a seamless color first. There are many methods to do this. Here is what I have tried for your image: create a paper/ink cell table for the image (in the same manner as in the linked answer). So you select a grid cell size big enough to distinguish character features from background; for your image I chose 8x8 pixels. Divide the image into squares and compute the avg color and abs difference of color for each of them. Then mark the saturated ones (small abs difference) and set them as paper or ink cells according to their avg color in comparison to the whole-image avg color. Now just process all lines of the image, and for each pixel obtain the left and right paper cells and linearly interpolate between those values. That should lead you to the actual background color of that pixel, so just subtract it from the image. My C++ implementation for this looks like this:

```color picture::normalize(int sz,bool _recolor,bool _sbstract)
    {
    struct _cell
        {
        color col;
        int a[4],da,_paper;
        _cell(){}; _cell(_cell& x){ *this=x; }; ~_cell(){};
        _cell* operator = (const _cell *x) { *this=*x; return this; };
        /*_cell* operator = (const _cell &x) { ...copy... return this; };*/
        };
    int i,x,y,tx,ty,txs,tys,a0[4],a1[4],n,dmax;
    int x0,x1,y0,y1,q,qx,qy;
    color c;
    _cell **tab;
    // allocate grid table
    txs=xs/sz; tys=ys/sz; n=sz*sz; c.dd=0;
    if ((txs<2)||(tys<2)) return c;
    tab=new _cell*[tys];
    for (ty=0;ty<tys;ty++) tab[ty]=new _cell[txs];
    // compute grid table
    for (y0=0,y1=sz,ty=0;ty<tys;ty++,y0=y1,y1+=sz)
     for (x0=0,x1=sz,tx=0;tx<txs;tx++,x0=x1,x1+=sz)
        {
        for (i=0;i<4;i++) a0[i]=0;
        for (y=y0;y<y1;y++)
         for (x=x0;x<x1;x++)
            {
            dec_color(a1,p[y][x],pf);
            for (i=0;i<4;i++) a0[i]+=a1[i];
            }
        for (i=0;i<4;i++) tab[ty][tx].a[i]=a0[i]/n;
        enc_color(tab[ty][tx].a,tab[ty][tx].col,pf);
        tab[ty][tx].da=0;
        for (i=0;i<4;i++) a0[i]=tab[ty][tx].a[i];
        for (y=y0;y<y1;y++)
         for (x=x0;x<x1;x++)
            {
            dec_color(a1,p[y][x],pf);
            for (i=0;i<4;i++) tab[ty][tx].da+=abs(a1[i]-a0[i]);
            }
        tab[ty][tx].da/=n;
        }
    // compute max safe delta dmax = avg(delta)
    for (dmax=0,ty=0;ty<tys;ty++)
     for (tx=0;tx<txs;tx++)
      dmax+=tab[ty][tx].da;
    dmax/=(txs*tys);
    // select paper cells and compute avg paper color
    for (i=0;i<4;i++) a0[i]=0; x0=0;
    for (ty=0;ty<tys;ty++)
     for (tx=0;tx<txs;tx++)
      if (tab[ty][tx].da<=dmax)
        {
        tab[ty][tx]._paper=1;
        for (i=0;i<4;i++) a0[i]+=tab[ty][tx].a[i];
        x0++;
        }
      else tab[ty][tx]._paper=0;
    if (x0) for (i=0;i<4;i++) a0[i]/=x0;
    enc_color(a0,c,pf);
    // remove saturated ink cells from paper (small .da but wrong .a[])
    for (ty=1;ty<tys-1;ty++)
     for (tx=1;tx<txs-1;tx++)
      if (tab[ty][tx]._paper==1)
       if ((tab[ty][tx-1]._paper==0)
         ||(tab[ty][tx+1]._paper==0)
         ||(tab[ty-1][tx]._paper==0)
         ||(tab[ty+1][tx]._paper==0))
        {
        x=0; for (i=0;i<4;i++) x+=abs(tab[ty][tx].a[i]-a0[i]);
        if (x>dmax) tab[ty][tx]._paper=2;
        }
    for (ty=0;ty<tys;ty++)
     for (tx=0;tx<txs;tx++)
      if (tab[ty][tx]._paper==2)
       tab[ty][tx]._paper=0;
    // piecewise linear interpolation H-lines
    int ty0,ty1,tx0,tx1,d;
    if (_sbstract) for (i=0;i<4;i++) a0[i]=0;
    for (y=0;y<ys;y++)
        {
        ty=y/sz; if (ty>=tys) ty=tys-1;
        // first paper cell
        for (tx=0;(tx<txs)&&(!tab[ty][tx]._paper);tx++); tx1=tx;
        if (tx>=txs) continue; // no paper cell found
        for (;tx<txs;)
            {
            // next paper cell
            for (tx++;(tx<txs)&&(!tab[ty][tx]._paper);tx++);
            if (tx<txs) { tx0=tx1; x0=tx0*sz; tx1=tx; x1=tx1*sz; d=x1-x0; }
            else x1=xs;
            // interpolate
            for (x=x0;x<x1;x++)
                {
                dec_color(a1,p[y][x],pf);
                for (i=0;i<4;i++) a1[i]-=tab[ty][tx0].a[i]+(((tab[ty][tx1].a[i]-tab[ty][tx0].a[i])*(x-x0))/d)-a0[i];
                if (pf==_pf_s   ) for (i=0;i<1;i++) clamp_s32(a1[i]);
                if (pf==_pf_u   ) for (i=0;i<1;i++) clamp_u32(a1[i]);
                if (pf==_pf_ss  ) for (i=0;i<2;i++) clamp_s16(a1[i]);
                if (pf==_pf_uu  ) for (i=0;i<2;i++) clamp_u16(a1[i]);
                if (pf==_pf_rgba) for (i=0;i<4;i++) clamp_u8 (a1[i]);
                enc_color(a1,p[y][x],pf);
                }
            }
        }
    // recolor paper cells with avg color (remove noise)
    if (_recolor)
     for (y0=0,y1=sz,ty=0;ty<tys;ty++,y0=y1,y1+=sz)
      for (x0=0,x1=sz,tx=0;tx<txs;tx++,x0=x1,x1+=sz)
       if (tab[ty][tx]._paper)
        for (y=y0;y<y1;y++)
         for (x=x0;x<x1;x++)
          p[y][x]=c;
    // free grid table
    for (ty=0;ty<tys;ty++) delete[] tab[ty];
    delete[] tab;
    return c;
    }```

See the linked answer for more details. Here is the result for your input image after switching to gray-scale <0,765> and using pic1.normalize(8,false,true);

I first tried naive range thresholding: if all color channel values (R,G,B) are in the range <min,max>, the pixel is recolored to c1, otherwise to c0:

```void picture::treshold_AND(int min,int max,int c0,int c1) // all channels thresholding: c1 <min,max>, c0 (-inf,min)+(max,+inf)
    {
    int x,y,i,a[4],e;
    for (y=0;y<ys;y++)
     for (x=0;x<xs;x++)
        {
        dec_color(a,p[y][x],pf);
        for (e=1,i=0;i<3;i++) if ((a[i]<min)||(a[i]>max)){ e=0; break; }
        if (e) for (i=0;i<4;i++) a[i]=c1;
        else   for (i=0;i<4;i++) a[i]=c0;
        enc_color(a,p[y][x],pf);
        }
    }```
`pic1.treshold_AND(0,127,765,0);`

The gray noise is due to JPEG compression (PNG would be too big). As you can see the result is more or less acceptable.

In case this is not enough, you can divide your image into segments, compute a histogram for each segment (it should be bimodal), then find the color between the 2 maxima, which is your threshold value. The problem is that the background covers much more area, so the ink peak is relatively small and sometimes hard to spot on linear scales; see the full image histogram:

When you do this for each segment, it will be much better (as there will be much less background/text color bleeding around the thresholds), so the gap will be more visible. Also do not forget to ignore the small gaps (missing vertical lines in the histogram), as they are just related to quantization/encoding/rounding (not all gray shades are present in the image); you should filter out gaps smaller than a few intensities, replacing them with the average of the last and next valid histogram entries.
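A hedged Python sketch of the per-segment thresholding described above (my own simplification: it assumes the ink peak sits in the dark half of an 8-bit range and fills quantization gaps by interpolation):

```python
import numpy as np

def segment_threshold(gray_segment, bins=256):
    """Pick a threshold in the valley between the two histogram peaks."""
    hist, edges = np.histogram(gray_segment, bins=bins, range=(0, 256))
    raw = hist.astype(float)
    # locate the two modes of the (assumed) bimodal histogram:
    # ink peak in the dark half, paper peak in the bright half
    mid = bins // 2
    ink_peak = int(np.argmax(raw[:mid]))
    paper_peak = mid + int(np.argmax(raw[mid:]))
    # fill empty bins (quantization gaps) by interpolation, so a missing
    # gray shade is not mistaken for the valley between the peaks
    filled = raw.copy()
    zero = filled == 0
    if zero.any() and not zero.all():
        idx = np.arange(bins)
        filled[zero] = np.interp(idx[zero], idx[~zero], filled[~zero])
    # threshold = lowest point of the filled histogram between the peaks
    valley = ink_peak + int(np.argmin(filled[ink_peak:paper_peak + 1]))
    return edges[valley]
```

Running this per segment, rather than once for the whole image, keeps the small ink peak visible against the locally dominant background peak.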

I will try this out - haven't found a way to add this to my obj-C project but will do research to see how I can add a method to the picture object.

@Xav In case you are using a third-party image lib, it is not a good idea to mess with it directly. You can instead write a custom function outside the image class that changes an image taken as an operand. Yes, my image is a 2D array/matrix of pixels, where a pixel is a 32-bit unsigned int (DWORD) with supported encodings _pf_rgba (4x8bit uint), _pf_u (1x32bit uint), _pf_s (1x32bit int), and more; dec_color/enc_color just unpack/pack this to a DWORD array on a per-channel basis to make the code universal for any pixel format. You can ignore all that, as you have only grayscale.

@Xav So all the 3- and 4-iteration for loops can disappear if your image is encoded as 1-channel gray-scale. Also, you can improve this a lot if you compute both horizontal and vertical lines and use the average of both, or use cubic interpolation, or interpolate the gaps in the table first. But as you see, even simple piecewise linear interpolation is good. PS: that code was used for bi-cubic interpolation, so there are many unused variables left, like q etc. (I forgot to erase them).

## tesseract - OpenCV for OCR: How to compute thresholding levels for gra...

image-processing tesseract opencv3.0