
How do you filter out the values

The top-voted answer is wrong, or at least not completely true, as the OP is talking about blank strings only. Here's a thorough explanation:

First of all, we must agree on what empty means. Do you mean to filter out:

  • the blank strings only? ("")
  • the strictly false values? ($element === false)
  • the falsey values? (i.e. 0, 0.0, "", "0", NULL, array()...)
  • the equivalent of PHP's empty() function?

To only filter out strictly false values, you must use a callback function:

$filtered = array_filter($originalArray, 'myCallback');
function myCallback($var) {
    // keep everything that is not strictly false
    return $var !== false;
}

The callback is also useful for any combination in which you want to filter out most of the "falsey" values while keeping some. For example, you can filter out every null and false while leaving 0 in place.
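
For instance (a sketch; keepZeros is a name I made up), a callback that drops null and false but keeps 0 could look like:

```php
// Keep 0 (and everything else), but drop null and strict false.
function keepZeros($var) {
    return $var !== null && $var !== false;
}

$originalArray = [0, false, null, '', 'a'];
$filtered = array_filter($originalArray, 'keepZeros');
// array_filter preserves keys: [0 => 0, 3 => '', 4 => 'a']
```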

The third and fourth cases are (for our purposes at least) equivalent, and for those all you have to use is the default:

$filtered = array_filter($originalArray);

If you want to take out null and false but leave 0, you can also use PHP's built-in strlen function as your callback (note that it will drop empty strings as well).
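
A quick sketch of the strlen trick (0 and "0" survive because their string length is 1):

```php
$originalArray = [0, '0', '', null, false, 'foo'];
// strlen coerces each value to string: null and false become "" (length 0) and are dropped
$filtered = array_filter($originalArray, 'strlen');
// $filtered: [0 => 0, 1 => '0', 5 => 'foo']
```

Note that on PHP 8.1+ passing null to strlen() raises a deprecation notice, so a small closure such as `fn($v) => strlen((string)$v) > 0` is arguably safer there.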

php - Remove empty array elements - Stack Overflow

php arrays string

As EJP noted, the masks are used to filter out the values for one of the color components/channels (red, green, blue), and that's usually what you do with bit shift/masking operations. Everything else you do with the values you get is arithmetic or more advanced math.

A color channel's range is 0-255, or 0x00-0xFF in hexadecimal. To reconstruct the color, you need to bit-shift the component values back into their places, which can be put together with simple arithmetic addition:

// Example values
int r = 255; // 0xFF
int g = 1;   // 0x01
int b = 15;  // 0x0F

// go back to original form:
                      //    A  R  G  B
int color = r << 16;  // 0x00.FF.00.00
color += g << 8;      // 0x00.FF.01.00
color += b;           // 0x00.FF.01.0F

// just add back the alpha if it is going to be fully opaque
color += 255 << 24;   // 0xFF.FF.01.0F

If you want to do some interpolation between colors you need to do it for each color component separately, not for all of them together in one integer. In some instances it may also be a good idea to change the representation from integers [0-255] to decimal form [0.0f-1.0f]:

// Darken red value by 50%
int color = ...; // some color input
int mask = 0xFF;

int a = (color >> 24) & mask;
int r = (color >> 16) & mask;
int g = (color >> 8) & mask;
int b = color & mask;

// convert to decimal form:
float rDecimal = r / 255f; 
  // Let r: 0x66 = 102 => rDecimal: 0.4

// darken by 50%, basically divide it by two
rDecimal = rDecimal / 2; 
  // rDecimal: 0.2

// Go back to original representation and put it back to r
r = (int)(rDecimal * 255); 
  // r: 51 = 0x33

// Put it all back in place
color = (a << 24) + (r << 16) + (g << 8) + b;
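
Interpolating between two colors follows the same unpack/operate/repack pattern, applied to each channel separately. A minimal sketch (lerpArgb and lerpChannel are my own names, not from the question):

```java
public class ColorLerp {
    // Linearly interpolate each ARGB channel separately; t in [0.0f, 1.0f].
    static int lerpArgb(int c1, int c2, float t) {
        int a = lerpChannel((c1 >> 24) & 0xFF, (c2 >> 24) & 0xFF, t);
        int r = lerpChannel((c1 >> 16) & 0xFF, (c2 >> 16) & 0xFF, t);
        int g = lerpChannel((c1 >> 8) & 0xFF, (c2 >> 8) & 0xFF, t);
        int b = lerpChannel(c1 & 0xFF, c2 & 0xFF, t);
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    static int lerpChannel(int from, int to, float t) {
        return (int) (from + (to - from) * t);
    }

    public static void main(String[] args) {
        // Halfway between opaque black and opaque white is opaque mid-grey.
        int mid = lerpArgb(0xFF000000, 0xFFFFFFFF, 0.5f);
        System.out.println(Integer.toHexString(mid)); // ff7f7f7f
    }
}
```

To interpolate a whole image you would run this per pixel, reading with getRGB and writing with setRGB as mentioned in the comments below.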

If I wanted to do interpolation of a color and map it on a BufferedImage how would I go about it?

@MasqueradeToday: What kind of interpolation do you have in mind? Have you read the BufferedImage Java API page? There are getRGB() and setRGB() methods on it if you want to start simple. You have to iterate through all pixels for all x and y coordinates of the picture.

Well written and clear code. Is there any performance gain by converting the representation to decimal?

@akhyar: I do it for the sake of readability rather than performance... personally I just find it easier to debug color values in decimal form (mostly because I've studied signal processing at a college that uses this form). As with most optimizations, I'd start profiling when things get "slow" (it's always surprising when code you think is slow ends up not being the actual culprit/bottleneck).


java - Bitwise Operations-- How to change existing color? - Stack Over...

java colors operators bit-manipulation interpolation

You can use LINQ's Where method to filter out values that should not be a part of the list. The result is an IEnumerable<T> with the elements removed.

var res = list.Where(item => !(one.Value1 == item.Value1 && one.Value2 < item.Value2));

This will not update the original List<T> instance but will instead create a new IEnumerable<T> with the values removed.

What's the perf. hit doing this vs. a for loop removing the items?

c# - Remove instances from a list by using LINQ or Lambda? - Stack Ove...

c# linq lambda

I don't believe that there's a magic number. If all the predicates make sense together, then I will put them together. This might involve splitting the if statement over two lines, but I usually never introduce extra if statements that are superfluous. But if it is particularly long, you should ask yourself if all the statements are really necessary. Perhaps you could filter out some of the values earlier or something like that. The biggest concern is readability. If it is hard for someone else to understand, you need to refactor your code. But splitting up the code into two different if statements rarely makes the code more readable, it just takes up more lines.

language agnostic - Where is the right balance of predicates in a sing...

language-agnostic coding-style

One general procedure is laid out in the Wikipedia article on unsharp masking: you use a Gaussian smoothing filter and subtract the smoothed version from the original image (in a weighted way, so that the values of a constant area remain constant).

To get a sharpened version of frame into image: (both cv::Mat)

cv::GaussianBlur(frame, image, cv::Size(0, 0), 3);
cv::addWeighted(frame, 1.5, image, -0.5, 0, image);

The parameters there are something you need to adjust for yourself.

There's also Laplacian sharpening; you should find something on that when you Google it.

Is there a way to replicate Photoshop's result of Unsharp Mask?

@tilaprimera, I'm asking because Photoshop's USM is different from the "Classic" USM.

How to sharpen an image in OpenCV? - Stack Overflow

image-processing opencv

You could filter out the non-numeric values with a function like the one this answer provides, or with a regular expression, which might need some tweaking.

A regex will exclude most non-numbers (maybe all, but I'm not that confident - regex isn't a strong area of mine), though Justin's function is probably safer.
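
A sketch of the regex variant, assuming Oracle's REGEXP_LIKE (the pattern only admits an optional sign, digits, and an optional decimal part, and may well need tweaking; it is also subject to the same evaluation-order caveat discussed next):

```sql
SELECT COUNT(1), result_num
FROM vitals
WHERE test_cd = 'TEMP'
  AND REGEXP_LIKE(result_num, '^-?\d+(\.\d+)?$')
  AND TO_NUMBER(result_num) > 104
GROUP BY result_num;
```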

However, there's still no guarantee that the filter function will be applied before the cast. If this still trips up then you could use a subquery to filter out non-numeric values and then check the actual value of those that remain; but you'd probably need to add a hint to stop Oracle unnesting the subquery and changing the evaluation order on you.

Another approach is a variation of Justin's function that returns the actual number:

CREATE OR REPLACE FUNCTION safe_number( p_str IN VARCHAR2 )
  RETURN NUMBER DETERMINISTIC PARALLEL_ENABLE
IS
  l_num NUMBER;
BEGIN
  l_num := to_number( p_str );
  RETURN l_num;
EXCEPTION
  WHEN value_error THEN
    RETURN null;
END safe_number;
/
select count(1), result_num
from vitals
where test_cd = 'TEMP'
and safe_number(result_num) > 104
group by result_num;

sql - Invalid numbers - Stack Overflow

sql oracle ora-01722

You can definitely use a projection. If you notice when you use the search function on the page you referenced, the inputs are formed into query string values. You can use tokens to grab the values from query strings to use in your projection filter. For example, if you're using fields as you stated, then you just add a filter for that field and in the value field use {Request.QueryString:State}. Or, replace "State" with whatever key you're using for the query string value.

<form action="/search-results" method="Get">
  <select name="State">
     <option value="OH">Ohio</option>
     ...all the states...
  </select>
  <input type="submit" value="Search" />
</form>

"/search-results" could be a projection page or any content that has the projection widget present. You could build the form as a widget that you can place somewhere, or for testing purposes, you could just paste this html into an html widget to try it out.

...I wish I found this answer before I dug through all the source code and worked out how to do HQL queries myself...

orchardcms - Orchard Create Projection or Search Based on Filtered Dro...

orchardcms

I would just create a custom filter. They are not that hard.

angular.module('myFilters', []).
  filter('bygenre', function() {
    return function(movies, genres) {
      var out = [];
      angular.forEach(movies, function(movie) {
        // keep the movie when its genre's checkbox is ticked
        if (genres[movie.genre]) {
          out.push(movie);
        }
      });
      return out;
    };
  });
<h1>Movies</h1>

<div ng-init="movies = [
          {title:'Man on the Moon', genre:'action'},
          {title:'Meet the Robinsons', genre:'family'},
          {title:'Sphere', genre:'action'}
       ];" />
<input type="checkbox" ng-model="genrefilters.action" />Action
<br />
<input type="checkbox" ng-model="genrefilters.family" />Family
<br />{{genrefilters.action}}::{{genrefilters.family}}
<ul>
    <li ng-repeat="movie in movies | bygenre:genrefilters">{{movie.title}}: {{movie.genre}}</li>
</ul>

UPDATE: Here is a fiddle that has an exact demo of my suggestion.
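
Stripped of Angular, the filter body is just a predicate over the genre map; a plain-JS sketch of the same idea (byGenre is my own name):

```javascript
// Keep only movies whose genre is ticked (truthy) in the genres map.
function byGenre(movies, genres) {
  return movies.filter(function (movie) {
    return !!genres[movie.genre];
  });
}

var movies = [
  { title: 'Man on the Moon', genre: 'action' },
  { title: 'Meet the Robinsons', genre: 'family' },
  { title: 'Sphere', genre: 'action' }
];
var actionOnly = byGenre(movies, { action: true, family: false });
// actionOnly: Man on the Moon and Sphere
```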

so then in the out I return the movie objects that I want to display?

Sorry it took me awhile to get back. I updated my answer with a fiddle of a concrete example.

ok, thanks, in the meantime I figured it out myself, but it will sure help someone later :)

Say I have two values, such as active and inactive; how can I search for each separately using this filter? We do not want to make a custom filter.

javascript - How to filter multiple values (OR operation) in angularJS...

javascript angularjs angularjs-ng-repeat

I have tried doing 'honeypots' where you put a field and then hide it with CSS (marking it as 'leave blank' for anyone with stylesheets disabled) but I have found that a lot of bots are able to get past it very quickly. There are also techniques like setting fields to a certain value and changing them with JS, calculating times between load time and submit time, checking the referer URL, and a million other things. They all have their pitfalls and pretty much all you can hope for is to filter as much as you can with them while not alienating who you're here for: the users.

At the end of the day, though, if you really, really, don't want bots to be sending things through your form you're going to want to put a CAPTCHA on it - best one I've seen that takes care of mostly everything is reCAPTCHA - but thanks to India's CAPTCHA solving market and the ingenuity of spammers everywhere that's not even successful all of the time. I would beware using something that is 'ingenious' but kind of 'out there' as it would be more of a 'wtf' for users that are at least somewhat used to your usual CAPTCHAs.

I like the CSS technique. It works very well across the board. I'd also vote for this answer, but I have no votes left! :D

Actually, this is trivially bypassable by the simplest of techniques; if a spammer wants to misuse your site, he's gonna. The only thing protecting you is if you're not big enough for anyone to bother looking at your code.

-1 It's funny that solutions routinely circumvented automatically by spam bots (without even needing the human-solver shops whose APIs are integrated into professional bots) are marked as the answer. Is it out of ignorance, or intentional?

security - When the bots attack! - Stack Overflow

security captcha spam-prevention bots

With korchev's inspiration... I came up with the following that is executed before the rebind occurs. It clears out the filter values and then applies the new (non-existent) values.

//Clear UI Filter Text
$('#Groups .t-clear-button').click();
$('#Groups .t-filter-button').click();

// rebind the related grid
groupsGrid.rebind({
    userName: user
});

jquery - How to clear filter on Telerik ASP.NET MVC Grid - Stack Overf...

jquery asp.net-mvc telerik telerik-mvc

The CSV object uses note properties in each row to store its fields, so we'll need to filter each row object and keep just the field(s) we want using the Select-Object cmdlet (alias: select), which processes the entire CSV object at once:

Import-Csv 1.csv | select field2 | Export-Csv 2.csv -NoTypeInformation

Note, there's no need to escape the end of a line if it ends with |, {, ( or ,. It's possible to specify several fields: select field2, field3. To also strip the quotes that Export-Csv puts around every field, post-process the output:

Import-Csv 1.csv |
    select field2 |
    %{
        $_.field2 = $_.field2 -replace '"', [char]1
        $_
    } |
    ConvertTo-Csv -NoTypeInformation |
    %{ $_ -replace '"(\S*?)"', '$1' -replace '\x01', '""' } |
    Out-File 2.csv -Encoding ascii

A tricky case of embedded quotes inside a field is solved by temporarily replacing them with control character 01 (there are just a few control characters that can appear in a typical non-broken text file: 09/tab, 0A/line feed, 0D/carriage return).

I do believe you have it... for one (1) field. It would require some cycles to convert this into a full solution for generic CSV usage. It really should be that Powershell provides switches to control quoting. Some may want or need every field quoted. I would like to see quoting omitted unless the field contains quote characters or the delimiter.

powershell - Export-Csv emits Length and not values - Stack Overflow

powershell

Fixing your book array is easy enough - you just have to filter out the nulls. The most straightforward way would probably be building a new array and reassigning it:

var temp = [];
var i;
for (i = 0; i < obj.store.book.length; ++i) {
    if (obj.store.book[i] != null) {
        temp.push(obj.store.book[i]);
    }
}
obj.store.book = temp;

I'm sure there are plenty of other ways, like using jQuery, or the filter function (which I believe is not available in older browsers). You could also loop through the array and splice out the nulls. I just find this way the easiest to read.
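
With the built-in filter function (where available), the same cleanup is a one-liner:

```javascript
var obj = { store: { book: [{ title: 'A' }, null, { title: 'B' }, null] } };

// Keep only the non-null entries (!= null also drops undefined).
obj.store.book = obj.store.book.filter(function (b) {
  return b != null;
});
// obj.store.book is now [{ title: 'A' }, { title: 'B' }]
```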

json - Recursively remove null values from JavaScript object - Stack O...

javascript json

It is possible to alter the page name value with a filter within the GA interface. If you go to the admin for an account or profile, there's an "all filters" link listed on the left under the account section (if you're on account level), or a "filters" link under the "view" section on the right. Click on one of those (which one depends on how your account/profile/views are setup and what scope you want to apply the filter for), then click on the red "new filter" button.

Then, select "custom filter" for the filter type, and then you can either use the "search and replace" or "advanced" radio button option and work with "request uri". This will change the value before it goes into your reports.

However, this approach is kinda limited in that you need to know on that level what all the root paths will be, and even if you can get a list, the regex will be long and messy and you'll have to keep the list updated.

An alternate (and IMO better) approach would be to put the responsibility on the site owner of the pages. You can override GA's default page name value by specifying it as the 2nd array element in the _trackPageview push:

_gaq.push(['_trackPageview','custom page name here']);
// the root you want to exclude. Make site owners fill this out
var rootPath = '/myexample/'; 

/** now here's logic to specify page name with the root path removed **/
// this replicates GA's default page name
var pageName = location.pathname+location.search;
// replace rootPath with '/'
if (typeof rootPath != 'undefined')
  pageName=pageName.replace( (new RegExp("^"+String(rootPath),"")) , '/' )

// now specify the custom page name 
_gaq.push(['_trackPageview',pageName]);
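
The replace logic itself is easy to check in isolation if you pull it into a pure function (stripRoot is my own name, not part of GA):

```javascript
// Replicate the page-name munging above without the _gaq/location globals.
function stripRoot(pathAndQuery, rootPath) {
  if (typeof rootPath === 'undefined') return pathAndQuery;
  // replace a leading rootPath with '/'
  return pathAndQuery.replace(new RegExp('^' + String(rootPath)), '/');
}

var pageName = stripRoot('/myexample/intro.aspx?q=1', '/myexample/');
// pageName: '/intro.aspx?q=1'
```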

I don't know what your exact situation with these "domains" are, but by default GA doesn't report page names with the domain, only the path + query string. So if you have multiple domains all pointing to the same account, pages with similar paths will be aggregated.

www.site1.com/intro.aspx
www.site2.com/intro.aspx

By default, both of these will show up as the same page in your reports. Removing the "root path" from those URLs you showed is only going to exacerbate the situation. But maybe this is really what you want to happen anyway, so maybe you're fine with that.

I'm just bringing this up because without additional context, I'm unclear why you would really want to strip the "root path" out of different domains. Usually people have a rollup profile/view that has everything aggregated, and usually people make a filter or otherwise alter the page name to include the domain prefix or similar to page names, so that they don't get rolled up to a single page. But this is all just speculation since you didn't really get into full details about your situation, so you may or may not have addressed that or it may or may not even be applicable to you. .. in any case, what I have shown should work for your immediate issue.

Thanks for your work on this answer. While I was waiting for an answer I did some more reading and found "set_CookiePath" as an option. I wondering if you thought that was a good option in order to solve my problem (which is aggregating on the same page on different root folders)??

@Mark Unless I misunderstood the question, I don't think that option is really applicable to your situation. My understanding is that you simply want to remove part of the dir path from the page name. If that is the case, then setting the cookie path is not applicable. Now.. setting the cookie path may be something you want to do anyway, for different reasons.

@Mark for example, if you have two different GA implementations pointing to two different account ids, and both implementations are on the same domain but in different paths, e.g. www.mysite.com/site1/ vs. www.mysite.com/site2/ then it will be prudent to set the cookie path accordingly.

@Mark but that is a separate issue, not related to making a custom page name stripping dirs from the path

javascript - How to handle Google Analytics tracking code for multiple...

javascript asp.net google-analytics analytics multiple-domains

Your question is not very clearly presented, but it seems what you wanted to do here was count the occurrences of the data in the fields, optionally filtering those fields by the values that match the criteria.

Here the $cond operator allows you to transform a logical condition into a value:

db.collection.aggregate([
    { "$group": {
        "_id": null,
        "name": { "$sum": 1 },
        "salary": { 
            "$sum": { 
                "$cond": [
                    { "$gte": [ "$salary", 1000 ] },
                    1,
                    0
                ]
            }
        },
        "type": { 
            "$sum": { 
                "$cond": [
                    { "$eq": [ "$type", "type2" ] },
                    1,
                    0
                ]
            }
        }
    }}
])

All values are in the same document, and it does not really make any sense to split them up here as this is additional work in the pipeline.

{ "_id" : null, "name" : 3, "salary" : 3, "type" : 2 }
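
The conditional count is the same trick you would write in plain JavaScript; here it is against three sample documents of my own choosing, picked so the totals match the output above:

```javascript
// Plain-JS equivalent of the $cond counting: sum 1 or 0 per document.
var docs = [
  { name: 'a', salary: 1000, type: 'type1' },
  { name: 'b', salary: 1500, type: 'type2' },
  { name: 'c', salary: 2000, type: 'type2' }
];

var counts = docs.reduce(function (acc, d) {
  acc.name += 1;                          // every doc counts
  acc.salary += d.salary >= 1000 ? 1 : 0; // like { "$gte": [ "$salary", 1000 ] }
  acc.type += d.type === 'type2' ? 1 : 0; // like { "$eq": [ "$type", "type2" ] }
  return acc;
}, { name: 0, salary: 0, type: 0 });
// counts: { name: 3, salary: 3, type: 2 }
```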

Otherwise, the long form (which is not very performant, since it needs to make a copy of each document for every key) looks like this:

db.collection.aggregate([
  { "$project": {
    "name": 1,
    "salary": 1,
    "type": 1,
    "category": { "$literal": ["name","salary","type"] }
  }},
  { "$unwind": "$category" },
  { "$group": {
    "_id": "$category",
    "count": {
      "$sum": {
        "$cond": [
          { "$and": [ 
            { "$eq": [ "$category", "name"] },
            { "$ifNull": [ "$name", false ] }
          ]},
          1,
          { "$cond": [
            { "$and": [
              { "$eq": [ "$category", "salary" ] },
              { "$gte": [ "$salary", 1000 ] }
            ]},
            1,
            { "$cond": [
              { "$and": [
                { "$eq": [ "$category", "type" ] },
                { "$eq": [ "$type", "type2" ] }
              ]},
              1,
              0
            ]}
          ]}
        ]
      }
    }
  }}
])
{ "_id" : "type",   "count" : 2 }
{ "_id" : "salary", "count" : 3 }
{ "_id" : "name",   "count" : 3 }

If your documents do not have uniform key names or otherwise cannot specify each key in your pipeline condition, then apply with mapReduce instead:

db.collection.mapReduce(
    function() {
        var doc = this;
        delete doc._id;

        Object.keys(this).forEach(function(key) {
          var value = (( key == "salary") && ( doc[key] < 1000 ))
              ? 0
              : (( key == "type" ) && ( doc[key] != "type2" ))
              ? 0
              : 1;

          emit(key,value);
        });
    },
    function(key,values) {
        return Array.sum(values);
    },
    {
        "out": { "inline": 1 }
    }
);
"results" : [
            {
                    "_id" : "name",
                    "value" : 3
            },
            {
                    "_id" : "salary",
                    "value" : 3
            },
            {
                    "_id" : "type",
                    "value" : 2
            }
    ]

This is basically the same thing with a conditional count, except that you only specify the "reverse" of the conditions you want, and only for the fields you want to filter conditions on. And of course this output format is simple to emit as separate documents.

The same approach applies throughout: test whether the condition is met on the fields you care about, return 1 where it is and 0 where it is not, and sum the results to get the count.

Thanks for the clear explanation it really helped me lot and one more clarification, How can I use "like" condition instead of "equals" in both mapReduce and aggregate functions.

@venkat.s The aggregation framework has no such operator for logical comparisons. mapReduce is JavaScript so you can use a regular expression as a logical comparison.


mapreduce - MongoDB aggregate count based on multiple query fields - (...

mongodb mapreduce mongodb-query aggregation-framework

For your problem, I think cogroup() is better suited. The intersection() method will consider both keys and values in your data, and will result in an empty RDD.

The function cogroup() groups the values of both RDDs by key and gives us (key, vals1, vals2), where vals1 and vals2 contain the values of data1 and data2 respectively, for each key. Note that if a certain key is not shared by both datasets, one of vals1 or vals2 will be returned as an empty Seq, hence we'll first have to filter out these tuples to arrive at the intersection of the two RDDs.

Next, we'll grab vals1 - which contains the values from data1 for the common keys - and convert it to format (key, Array). Lastly we use flatMapValues() to unpack the result into the format of (key, value).

val result = (data1.cogroup(data2)
  .filter{case (k, (vals1, vals2)) => vals1.nonEmpty && vals2.nonEmpty }
  .map{case (k, (vals1, vals2)) => (k, vals1.toArray)}
  .flatMapValues(identity[Array[Int]]))

result.collect()
// Array[(String, Int)] = Array((a,1), (a,2), (b,2), (b,3))

scala - how to use spark intersection() by key or filter() with two RD...

scala apache-spark filter rdd intersection

The best way to do this is with .aggregate(), if your MongoDB server version is 3.2 or newer. The first stage in the pipeline uses the $match aggregation pipeline operator to filter out all documents where the "myArray" length is less than 2; this also reduces the number of documents to be processed down the pipeline. Next comes a $project stage using the $arrayElemAt operator, which returns the element at a specified index - here the last two elements in the array, at index -1 and index -2. The $let operator is used to assign those values to variables, which are then used in the $subtract expression to return the difference. The last stage is the $sort aggregation pipeline stage, where you sort your documents.

db.collection.aggregate([ 
    { "$match": { "myArray.1": { "$exists": true } } }, 
    { "$project": { 
        "myArray": "$myArray", 
        "diff": { 
            "$let": { 
                "vars": { 
                    "first": { "$arrayElemAt": [ "$myArray", -2 ] }, 
                    "last": { "$arrayElemAt": [ "$myArray", -1 ] } 
                }, 
                "in": { "$subtract": [ "$$last.a", "$$first.a" ] } 
            } 
        } 
    }}, 
    { "$sort": { "diff": -1 } }
])
{ "myArray" : [ { "a" : 14 }, { "a" : 33 }, { "a" : 66 } ], "diff" : 33 }
{ "myArray" : [ { "a" : 1 }, { "a" : 45 }, { "a" : 46 } ], "diff" : 1 }
{ "myArray" : [ { "a" : 12 }, { "a" : 3 } ], "diff" : -9 }

I don't think there is a server-side way to do this on older versions, but you can always do it client-side using Array.sort:

db.collection.find({ "myArray.1": { "$exists": true } }).toArray().sort(
    function(a, b) { 
        // compute the same "last minus second-to-last" difference per document
        var diff = function(doc) {
            var arr = doc.myArray;
            return arr[arr.length - 1].a - arr[arr.length - 2].a;
        };
        return diff(b) - diff(a); // descending, like { "$sort": { "diff": -1 } }
    }
)

This returns the same result as the query using the aggregation framework, except that the "diff" field is absent. It is also worth mentioning that the aggregation solution is much faster than this.

Suppose im only running v3.0 (mlab) which doesnt support $arrayElemAt is there a workaround?

@mikeysee In which case you will need to do this client-side. as shown in the last code block.

javascript - How to sort by the difference in array contents in mongod...

javascript mongodb sorting mongodb-query aggregation-framework

Here dbo.OH_Case.CreatedDate is of data type datetime, so you have to convert your values to datetime for the filter; the format will then be the same for all of them. In your case, convert the values to datetime instead of varchar and it will work fine.

Siddharth, I have checked your date values and figured out that the issue is with the date "24/04/2017". It was being parsed as "MM/dd/yyyy", so 24 was passed as the month, which is not valid and therefore throws the exception. I have updated the query to handle this by passing style 103 (dd/mm/yyyy) to CONVERT.

select * from 
dbo.OH_Case   
where dbo.OH_Case.CreatedDate between  convert(datetime, '24/04/2017', 103)  and
convert(datetime, '01/05/2017', 103)

Unfortunately , it throws the same error mentioned in question

what is the data type of column dbo.OH_Case.CreatedDate?

How to convert date time format in SQL Server like '2017-03-04 10:07:0...

sql sql-server database

Here is an extraordinarily simplified version, which can be extended to full RGB, and which does not use the Image Processing Toolbox. Basically you do a 2-D convolution with a filter image (an example of the dot you are looking for); the points where the convolution returns the highest values are the best matches for the dots. You can then of course threshold that. Here is a simple binary image example of just that.

%creating a dummy image with a bunch of small white crosses
im = zeros(100,100);
numPoints = 10;

% randomly chose the location to put those crosses
points = randperm(numel(im));
% keep only certain number of points
points = points(1:numPoints);
% get the row and columns (x,y)
[xVals,yVals] = ind2sub(size(im),points);

for ii = 1:numel(points)
   x = xVals(ii);
   y = yVals(ii);
   try
       % create the crosses, try statement is here to prevent index out of bounds
       % not necessarily the best practice but whatever, it is only for demonstration
       im(x,y) = 1;
       im(x+1,y) = 1;
       im(x-1,y) = 1;
       im(x,y+1) = 1;
       im(x,y-1) = 1;
   catch err
   end
end
% display the randomly generated image
imshow(im)

% create a simple cross filter
filter = [0,1,0;1,1,1;0,1,0];
figure; imshow(filter)

% perform convolution of the random image with the cross template
result = conv2(im,filter,'same');

% get the number of white pixels in filter
filSum = sum(filter(:));

% look for all points in the convolution results that matched identically to the filter
matches = find(result == filSum);

%validate all points found
sort(matches(:)) == sort(points(:))
% get x and y coordinate matches
[xMatch,yMatch] = ind2sub(size(im),matches);

I would highly suggest looking at the conv2 documentation on MATLAB's website.

algorithm - Detect black dots from color background - Stack Overflow

algorithm matlab opencv image-processing computer-vision