
Deleting the app is not always an option. Suppose your app has already been published: you can't just add a new entity to the database and move on - you need to perform a migration.

  • Add a new version of your model (a new .xcdatamodel is added to the group of data models).
  • Select the main model file and open the File Inspector (right-hand panel).
  • Under the versioned Core Data model setting ("Model Version" in Xcode 5.1), select your new version as the current data model.
  • In your AppDelegate, in the persistentStoreCoordinator accessor, change the call

if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:nil error:&error])

so that instead of passing nil for options you pass

@{NSMigratePersistentStoresAutomaticallyOption: @YES, NSInferMappingModelAutomaticallyOption: @YES}
  • Here you go, have fun!

P.S. This only applies to lightweight migration. For your migration to qualify as a lightweight migration, your changes must be confined to this narrow band:

  • Add or remove a property (attribute or relationship).
  • Make a nonoptional property optional.
  • Make an optional attribute nonoptional, as long as you provide a default value.
  • Add or remove an entity.
  • Rename a property.

The main file is the .xcdatamodel that you created. Open the first tab of the Utilities pane (the right-hand one), find "Model Version" (Xcode 5.1), and select Current: "your newly created .xcdatamodel".

@Computer_whiz123, in Xcode 5.1.1 it is called 'Model Version'.

I get this error: "CoreData: error: -addPersistentStoreWithType:SQLite configuration:(null) URL:file:///...file.sqlite options:{ NSInferMappingModelAutomaticallyOption = 1; NSMigratePersistentStoresAutomaticallyOption = 1; } ... returned error Error Domain=NSCocoaErrorDomain Code=134130 "The operation couldn't be completed. (Cocoa error 134130.)"

let options = [ NSMigratePersistentStoresAutomaticallyOption: true, NSInferMappingModelAutomaticallyOption:true ]

iphone - The model used to open the store is incompatible with the one...

iphone core-data xcode4

Power of a Go Channel

This is an easy question, since Scala is a general-purpose language, no better or worse than many others you could choose to "port goroutines" to.

There are of course many opinions on why Scala is better or worse as a language (e.g. here is mine), but these are just opinions, and don't let them stop you. Since Scala is general purpose, it "pretty much" comes down to: everything you can do in language X, you can do in Scala. If it sounds too broad.. how about continuations in Java :)

goroutines

The only similarity (aside from the nitpicking) is that they both have to do with concurrency and message passing. But that is where the similarity ends.

Since Jamie's answer gave a good overview of Scala actors, I'll focus more on Goroutines/core.async, but with some actor model intro.

Where a "worry free" piece is usually associated with terms such as: fault tolerance, resiliency, availability, etc..

Without going into great detail about how actors work, in two simple terms actors have to do with:

  • Locality: each actor has an address/reference that other actors can use to send messages to
  • Behavior: a function that gets applied/called when the message arrives to an actor

Think "talking processes" where each process has a reference and a function that gets called when a message arrives.

There is much more to it of course (e.g. check out Erlang OTP, or the akka docs), but the above two are a good start.

The point of the above is not to say that one is better than the other, but it's to show that purity of the actor model as a concept depends on its implementation.

As other answers already mentioned, goroutines take roots in Communicating Sequential Processes, which is a "formal language for describing patterns of interaction in concurrent systems", which by definition can mean pretty much anything :)

I am going to give examples based on core.async, since I know internals of it better than Goroutines. But core.async was built after the Goroutines/CSP model, so there should not be too many differences conceptually.

The main concurrency primitive in core.async/Goroutine is a channel. Think about a channel as a "queue on rocks". This channel is used to "pass" messages. Any process that would like to "participate in a game" creates or gets a reference to a channel and puts/takes (e.g. sends/receives) messages to/from it.
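In Go itself, that put/take flow looks roughly like this (a minimal, illustrative sketch, not code from the answer):

package main

import "fmt"

func main() {
    ch := make(chan string) // the channel both sides hold a reference to

    // one goroutine puts messages on the channel...
    go func() {
        ch <- "hello"
        ch <- "world"
        close(ch)
    }()

    // ...while another takes them off, until the channel is closed
    for msg := range ch {
        fmt.Println(msg)
    }
}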

It is a lot easier to convey with a visual. Here is what a blocking IO execution looks like:

You can see that threads mostly spend time waiting for work. Here is the same work but done via "Goroutine"/"go block" approach:

Here 2 threads did all the work that 4 threads did in the blocking approach, while taking the same amount of time.

The kicker in the above description is: "threads are parked" when they have no work, which means their state gets "offloaded" to a state machine, and the actual live JVM thread is free to do other work (source for a great visual).

Note: in core.async, a channel can also be used outside of "go blocks"; such use is backed by a JVM thread without parking ability: e.g. if it blocks, it blocks the real thread.

Another huge thing in "Goroutines"/"go blocks" is operations that can be performed on a channel. For example, a timeout channel can be created, which will close in X milliseconds. Or select/alt! function that, when used in conjunction with many channels, works like a "are you ready" polling mechanism across different channels. Think about it as a socket selector in non blocking IO. Here is an example of using timeout channel and alt! together:

(defn race [q]
  (searching [:.yahoo :.google :.bing])
  (let [t (timeout timeout-ms)
        start (now)]
    (go
      (alt! 
        (GET (str "/yahoo?q=" q))  ([v] (winner :.yahoo v (took start)))
        (GET (str "/bing?q=" q))   ([v] (winner :.bing v (took start)))
        (GET (str "/google?q=" q)) ([v] (winner :.google v (took start)))
        t                          ([v] (show-timeout timeout-ms))))))

This code snippet is taken from wracer, where it sends the same request to all three: Yahoo, Bing and Google, and returns a result from the fastest one, or times out (returns a timeout message) if none returned within a given time. Clojure may not be your first language, but you can't disagree on how sequential this implementation of concurrency looks and feels.
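For comparison, a rough Go sketch of the same race-against-a-timeout pattern, using select and time.After; the search function here is a hypothetical stand-in for the three HTTP calls, not code from wracer:

package main

import (
    "fmt"
    "time"
)

// search simulates a search engine call that answers after a given delay.
func search(name string, delay time.Duration) <-chan string {
    ch := make(chan string, 1)
    go func() {
        time.Sleep(delay)
        ch <- name + " result"
    }()
    return ch
}

func main() {
    yahoo := search("yahoo", 150*time.Millisecond)
    bing := search("bing", 100*time.Millisecond)
    google := search("google", 50*time.Millisecond)
    timeout := time.After(200 * time.Millisecond)

    select { // the first channel that is ready wins
    case v := <-yahoo:
        fmt.Println("winner:", v)
    case v := <-bing:
        fmt.Println("winner:", v)
    case v := <-google:
        fmt.Println("winner:", v)
    case <-timeout:
        fmt.Println("timed out")
    }
}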

You can also merge/fan-in/fan-out data from/to many channels, map/reduce/filter/... channels data and more. Channels are also first class citizens: you can pass a channel to a channel..
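As an illustration of fan-in, here is a small, hand-rolled merge in Go (a sketch of the idea, not a library function): it copies every value from any number of input channels onto one output channel and closes the output once all inputs are drained.

package main

import (
    "fmt"
    "sync"
)

func merge(ins ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup
    for _, in := range ins {
        wg.Add(1)
        go func(c <-chan int) { // one copier goroutine per input channel
            defer wg.Done()
            for v := range c {
                out <- v
            }
        }(in)
    }
    go func() {
        wg.Wait() // close out only after every input has been drained
        close(out)
    }()
    return out
}

func main() {
    a, b := make(chan int), make(chan int)
    go func() { a <- 1; close(a) }()
    go func() { b <- 2; close(b) }()
    for v := range merge(a, b) {
        fmt.Println(v)
    }
}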

Since core.async "go blocks" has this ability to "park" execution state, and have a very sequential "look and feel" when dealing with concurrency, how about JavaScript? There is no concurrency in JavaScript, since there is only one thread, right? And the way concurrency is mimicked is via 1024 callbacks.

But it does not have to be this way. The above example from wracer is in fact written in ClojureScript that compiles down to JavaScript. Yes, it will work on the server with many threads and/or in a browser: the code can stay the same.

Again, a couple of implementation differences [there are more] to underline the fact that the theoretical concept does not map one-to-one in practice:

  • In Go, a channel is typed, in core.async it is not: e.g. in core.async you can put messages of any type on the same channel.
  • In Go, you can put mutable things on a channel. It is not recommended, but you can. In core.async, by Clojure design, all data structures are immutable, hence data inside channels feels a lot safer for its wellbeing.

I hope the above shed some light on differences between the actor model and CSP.

Not to cause a flame war, but to give you yet another perspective of let's say Rich Hickey:

However, in practice, Whatsapp is based on Erlang OTP, and it seemed to sell pretty well.

Another interesting quote is from Rob Pike:

Oh, Anatoly. The least you could do is spell my name right. :)

Regarding the quote from Rich Hickey, I don't think the primitivity of queues is a good reason for being unenthusiastic about actors. Pointers are more primitive than references, yet the JVM was designed without pointers.

Is Scala's actors similar to Go's coroutines? - Stack Overflow

scala go

I would go with:

<a ng-click="do()">Click</a>
  • because according to the docs you should be able to leave off the href and then Angular will handle the prevent default for you!

I have created a JSFiddle to illustrate when and where Angular is preventing the default.

The JSFiddle is using Angular's a directive - so it should be EXACTLY the same. You can see the source code here: a tag source code

I would have liked to post a link to the ngHref docs but I can't because of my reputation.

angularjs - How to preventDefault on anchor tags? - Stack Overflow

angularjs preventdefault

1. With a function value

Foreword: I will use a much simpler generator, because the problem does not concern the generator complexity but rather the signals between the generator and consumer, and the call of the consumer itself. This simple generator just generates the integer numbers from 0 to 9.

A generator-consumer pattern is much cleaner with a simple consumer function passed in, which also has the advantage that it can return a value signalling whether abort or any other action is required.

And since in the example only one event is to be signaled ("abort"), the consumer function will have a bool return type, signalling whether abort is required.

So see this simple example with a consumer function value passed to the generator:

package main

import "fmt"

func generate(process func(x int) bool) {
    for i := 0; i < 10; i++ {
        if process(i) {
            break
        }
    }
}

func main() {
    process := func(x int) bool {
        fmt.Println("Processing", x)
        return x == 3 // Terminate if x == 3
    }
    generate(process)
}

Output:
Processing 0
Processing 1
Processing 2
Processing 3

Note that the consumer (process) does not need to be a "local" function, it can be declared outside of main(), e.g. it can be a global function or a function from another package.

The potential downside of this solution is that it uses only 1 goroutine both for generating and consuming values.

2. With channels

If you still want to do it with channels, you can. Note that since the channel is created by the generator, and since the consumer loops over the values received from the channel (ideally with a for ... range construct), it is the generator's responsibility to close the channel. Settling with this also allows you to return a receive-only channel.

And yes, closing the returned channel in the generator is best done as a deferred statement, so even if the generator panics, the consumer will not get blocked. But note that this deferred close is not in the generate() function but in the anonymous function started from generate() and executed as a new goroutine; else the channel would be closed before it is returned from generate() - not useful at all...

And if you want to signal the generator from the consumer (e.g. to abort and not generate further values), you can use e.g. another channel, which is passed to the generator. Since the generator will only "listen" to this channel, it can also be declared as a receive-only channel to the generator. If you only need to signal one event (abort in our case), no need to send any values on this channel, a simple close will do it. If you need to signal multiple events, it can be done by actually sending a value on this channel, the event / action to be carried out (where abort may be one from multiple events).

And you can use the select statement as the idiomatic way to handle sending values on the returned channel and watching the channel passed to the generator.

Here is a solution with an abort channel:

package main

import (
    "fmt"
    "time" // used by main below to sleep before exiting
)

func generate(abort <-chan struct{}) <-chan int {
    ch := make(chan int)
    go func() {
        defer close(ch)
        for i := 0; i < 10; i++ {
            select {
            case ch <- i:
                fmt.Println("Sent", i)
            case <-abort: // receive on closed channel can proceed immediately
                fmt.Println("Aborting")
                return
            }
        }
    }()
    return ch
}

func main() {
    abort := make(chan struct{})
    ch := generate(abort)
    for v := range ch {
        fmt.Println("Processing", v)
        if v == 3 { // Terminate if v == 3
            close(abort)
            break
        }
    }
    // Sleep to prevent termination so we see if other goroutine panics
    time.Sleep(time.Second)
}

Output (try it on the Go Playground):

Sent 0
Processing 0
Processing 1
Sent 1
Sent 2
Processing 2
Processing 3
Sent 3
Aborting

The obvious advantage of this solution is that it already uses 2 goroutines (1 that generates values, 1 that consumes/processes them), and it is very easy to extend it to process the generated values with any number of goroutines as the channel returned by the generator can be used from multiple goroutines concurrently - channels are safe to be receiving from concurrently, data races cannot occur, by design; for more read: If I am using channels properly should I need to use mutexes?
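For example, a minimal sketch of such a fan-out, reusing generate() from the code above and replacing the body of main() (the abort handling is left out for brevity, and "sync" would need to be imported):

abort := make(chan struct{})
ch := generate(abort)

var wg sync.WaitGroup
for w := 0; w < 3; w++ {
    wg.Add(1)
    go func(id int) {
        defer wg.Done()
        for v := range ch { // all workers receive from the same channel
            fmt.Println("worker", id, "processing", v)
        }
    }(w)
}
wg.Wait() // returns once the generator closes ch and the workers drain it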

An "uncaught" panic on a goroutine will end the execution of the goroutine but will not cause a problem in regards to resource leak. But if the function executed as a separate goroutine would free resources (in non-deferred statements) allocated by it in case of non-panic, that code will obviously not run and will cause resource leak for example.

You haven't observed this because the program terminates when the main goroutine terminates (and it does not wait for other non-main goroutines to finish - so your other goroutines did not get a chance to panic). See Spec: Program execution.

But know that panic() and recover() are for exceptional cases, they are not intended for such general use cases like the Exceptions and try-catch blocks in Java. Panics should be avoided, by returning errors (and handling them!) for example, and panics should definitely not leave the "borders" of packages (e.g. panic() and recover() may be justified to be used in a package implementation, but panicking state should be "caught" inside the package and not let out of it).

In the channel-based solution, I think items 4, 5, 6... can be processed even after abort is closed, although this requires very unlucky timing.

@PaulHankin You're right, thanks. Fixed it by adding a break in the consumer after the close() (it wasn't noticeable with current goroutine scheduling because unbuffered channel was used in generate()). Using break ensures no further elements will be processed, and the close(abort) will ensure the other goroutine can terminate as well.

My question is different from stackoverflow.com/q/11385556/142239. The key-point is the recursive function, which is not handled in your answer. I will make another post later to address the problems.

recursion - The idiomatic way to implement generators (yield) in Golan...

recursion go generator yield

Aniket did a good job, but I'll have a go too.

First, understand that at the lowest level, computer programs and all data are just numbers (sometimes called words), in memory of some kind. Most commonly these words are multiples of 8 bits (1's and 0's) (such as 32 and 64) but not necessarily, and in some processors each word is considerably larger. Regardless though, it's just numbers that are represented as a series of 1's and 0's, or on's and off's if you like. What numbers mean is up to what/who-ever is reading them, and in the processor's case, it reads memory one word at a time, and based on the number (instruction) it sees, takes some action. Such actions might for example be reading a value from memory, writing a value to memory, modifying a value it had read, jumping to somewhere else in memory to read instructions from.

In the very early days programmers would literally flick switches on and off to make changes to memory, with lights on or off to read out the 1's and 0's, as there were no keyboards, screens and so on. As time progressed, memory got larger, processors became more complex, display devices and keyboards for input were conceived, and with that came easier ways to program.

The OPCODE is part of an instruction word that is interpreted by the processor as representing the operation to perform, such as read, write, jump, add. Many instructions will also have OPERANDS that affect how the instruction performs, such as saying from where in memory to read or write, or where to jump to. So if instructions are 32 bits in size for example, a processor may use 8 bits for the opcode, and 12 bits for each of two operands.
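As a toy illustration of that split, here is how such a made-up 32-bit word could be decoded in Go (the 8/12/12 layout is purely hypothetical, not any real instruction set):

package main

import "fmt"

func main() {
    var word uint32 = 0x0100A00B // imaginary instruction word

    opcode := word >> 24             // top 8 bits
    operandA := (word >> 12) & 0xFFF // next 12 bits
    operandB := word & 0xFFF         // low 12 bits

    fmt.Printf("opcode=%#x operandA=%#x operandB=%#x\n", opcode, operandA, operandB)
}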

A step up from toggling switches, code might be entered into a machine using a program called a "monitor". The programmer would use simple commands to say what memory they want to modify, and enter MACHINE CODE numerically, e.g. in base 16 (hex) using 0 to 9 and A to F for digits.

Though better than toggling switches, entering machine code is still slow and error prone. A step up from that is ASSEMBLY CODE, which uses more easily remembered MNEMONICS in place of the actual number that represents an instruction. The job of the ASSEMBLER is primarily to transform the mnemonic form of the program to the corresponding machine code. This makes programming easier, particularly for jump instructions, where part of the instruction is a memory address to jump to or a number of words to skip. Programming in machine code requires painstaking calculations to formulate the correct instruction, and if some code is added or removed, jump instructions may need to be recalculated. The assembler handles this for the programmer.

This leaves BYTECODE, which is fundamentally the same as machine code, in that it describes low level operations such as reading and writing memory, and basic calculations. Bytecode is typically conceived to be produced when COMPILING a higher level language, for example PHP or Java, and unlike machine code for many hardware based processors, may have operations to support specific features of the higher level language. A key difference is that the processor of bytecode is usually a program, though processors have been created for interpreting some bytecode specifications, e.g. a processor called SOAR (Smalltalk On A RISC) for Smalltalk bytecode. While you wouldn't typically call native machine code bytecode, for some types of processors such as CISC and EISC (e.g. Linn Rekursiv, from the people who made record players), the processor itself contains a program that is interpreting the machine instructions, so there are parallels.

Very Elegant..... I was looking for something like this that ties the parts together forming a clear picture !

I am studying shellcoding now and found a script that converts shellcode - which, as I understand it now, is a hex representation of machine code that performs a specific operation - to a binary file, "as claimed". This binary file is actually a text file! How does the word "binary" fit in this context?

Strictly speaking the phrase "binary file" is meaningless and inaccurate, particularly here it seems. It's commonly used when referring to a file having contents that cannot be meaningfully interpreted by a human, e.g. not using a character set such as ASCII. So for example, a pdf or word document would be said to be in a binary format, as when viewed we could not interpret the contents, whereas a .txt file would be said to be a text file, as each byte in the file directly represents the contents. An "executable binary" would be a file where the contents represent a program.

@AhmedTahler "Very Elegant..... I was looking for something like this that ties the parts together forming a clear picture" Thanks; don't forget to upvote ;)

@naxa "word" tends to mean the natural unit of memory that a processor accesses. It's common for words to be multiples of 8 bits, e.g. 32 or 64 bits, but so called VLIW (Very Large Instruction Word) processors with much wider words, e.g. 1024 bits, have been created for parallel fetch and execution of independent instructions. Not all processors use multiples of 8 bits though, such as microcontrollers from microchip.com with 12 and 14 bit instruction words.

Difference between: Opcode, byte code, mnemonics, machine code and ass...

assembly

This is a classic case of rebase --onto:

# let's go to current master (X, where quickfix2 should begin)
git checkout master

# replay every commit *after* quickfix1 up to quickfix2 HEAD.
git rebase --onto master quickfix1 quickfix2

So you should go from:

o-o-X (master HEAD)
     \ 
      q1a--q1b (quickfix1 HEAD)
              \
               q2a--q2b (quickfix2 HEAD)

to:

q2a'--q2b' (new quickfix2 HEAD)
     /
o-o-X (master HEAD)
     \ 
      q1a--q1b (quickfix1 HEAD)

To avoid losing any work in progress in the working tree during the rebase, you can also set:

git config --global rebase.autostash true

Beware that these steps will modify quickfix2's history, so if you already shared the branch, use cherry-picking instead (see following answers).
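For reference, the cherry-pick route would look roughly like this, with q2a and q2b standing in for the real commit SHAs from the diagram above (the branch name is arbitrary):

# start a fresh branch from master and copy the two commits over
git checkout -b quickfix2-fixed master
git cherry-pick q2a q2b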

Just for the record: with SmartGit's log, just drag q2a onto X and select "Rebase 2 commits" from the options of the dialog that appears.

@ThomasS. Interesting. That is a nice GUI implementation of a git rebase --onto.

I have to admit, I do silly things like committing on to the wrong branch more often that I really should, the SmartGit log view GUI has saved me so many times with the same situation.

@Cosine Agreed. I have edited my answer to add the reference to rebase.autostash configuration: that will avoid any loss of work in progress in the working tree when doing a rebase.

How to move certain commits to another branch in git? - Stack Overflow

git commit patch

I know this is old but I got a kind of working solution

SELECT Tbla.* FROM Tbla
INNER JOIN Tblb ON
Tblb.col1 Like '%'+Tbla.Col2+'%'

You can expand it further with your WHERE clause etc. I only answered because this is what I was looking for and I had to figure out a way of doing it.

Excellent solution, this should be marked as the answer.

@lloydz1 this isn't the original question though which has a column of values created on the fly. You are doing this on two tables.

Does not accurately address the question. This works with a standard column from another table, not a manually curated list of words.

Combining "LIKE" and "IN" for SQL Server - Stack Overflow

sql sql-like

Should we go with Winforms or WPF?

This is a very broad question, but my answer to it would be this: the WinForms designer is a painful experience and it is not as flexible as WPF by any means, so if that is important to you then you should.

We want the design and structure to be exactly the same as what we have now.

Anything you do in Winforms you can replicate in WPF so no worries there.

What kind of pitfalls can I expect when migrating?

There are too many to name. One of the most common ones for our migration was converting existing logic in the WinForms (yes, it was a crappy, outsourced, legacy WinForms app full of business logic) and connecting it to WPF elements. The process in and of itself is not that complicated, but when you have too many intertwined pieces it can get pretty ugly.

Is there any place where I can lookup to see the process of migration?

The process of migration pretty much has to be iterative. So the way many people do it based on my research (articles, StackOverflow answers and such) is through ElementHosts.

1) Target a certain part of your WinForms that you would like to switch up to WPF and then take it out.

2) Replace it with ElementHost.

3) Then in that ElementHost you will have your newly rewritten WPF counterpart.

4) Test it out make sure that it works okay with the rest of the elements.

5) Go to step 1 until the rest of the stuff on that window is replaced. (You can do header, middle, footer or top, bottom - any way you wish to go about replacing elements on the window; it depends on your particular situation.)

6) Once most of the stuff is replaced you can then combine all of those WPF User Controls into a WPF Window.

And please, for the love of all things .NET, use MVVM for the WPF pieces.

@AvetisG: Thank you for the reply. This kind of helps me get started.

@BradleyDotNET: If it's in WPF.. I will definitely go with MVVM. Thank you for the suggestion

c# - Migrating from Winforms to WPF - Stack Overflow

c# .net wpf winforms migration

This probably isn't the answer you're looking for, but here you go anyways:

One way of looking at abstract concepts like these is to link them with basic concepts, such as ordinary list processing operations. Then, you could say that,

  • A category generalizes the (.) operation.
  • A monoid generalizes the (++) operation.
  • A functor generalises the map operation.
  • An applicative functor generalizes the zip (or zipWith) operation.
  • A monad generalizes the concat operation.

A category consists of a set (or a class) of objects and bunch of arrows that each connect two of the objects. In addition, for each object, there should be an identity arrow connecting this object to itself. Further, if there is one arrow (f) that ends on an object, and another (g) that starts from the same object, there should then also be a composite arrow called g . f.

In Haskell this is modelled as a typeclass that represents the category of Haskell types as objects.

class Category cat where
  id :: cat a a
  (.) :: cat b c -> cat a b -> cat a c

Basic examples of a category are functions. Each function connects two types, for all types, there is the function id :: a -> a that connects the type (and the value) to itself. The composition of functions is the ordinary function composition.

In short, categories in Haskell base are things that behave like functions, i.e. you can put one after another with a generalized version of (.).

A monoid is a set with an unit element and an associative operation. This is modelled in Haskell as:

class Monoid a where
  mempty  :: a
  mappend :: a -> a -> a

Typical examples of monoids are:

  • set of integers, the element 0, and the operation (+).
  • set of positive integers, the element 1, and the operation (*).
  • set of all lists, the empty list [], and the operation (++).

These are modelled in Haskell as

newtype Sum a = Sum {getSum :: a}
instance (Num a) => Monoid (Sum a) where
  mempty  = Sum 0
  mappend (Sum a) (Sum b) = Sum (a + b)  

instance Monoid [a] where
  mempty = []
  mappend = (++)

Monoids are used to 'combine' and accumulate things. For example, the function mconcat :: Monoid a => [a] -> a, can be used to reduce a list of sums to single sum, or a nested list into a flat list. Consider this as a kind of generalization of (++) or (+) operations that in a way 'merge' two things.

A functor in Haskell is a thing that quite directly generalizes the operation map :: (a->b) -> [a] -> [b]. Instead of mapping over a list, it maps over some structure, such as a list, binary tree, or even an IO operation. Functors are modelled like this:

class Functor f where
  fmap :: (a->b) -> f a -> f b

Contrast this to the definition of the normal map function.

Applicative functors can be seen as things with a generalized zipWith operation. Functors map over general structures one at the time, but with an Applicative functor you can zip together two or more structures. For the simplest example, you can use applicatives to zip together two integers inside the Maybe type:

pure (+) <*> Just 1 <*> Just 2  -- gives Just 3

Notice that the structure can affect the result, for example:

pure (+) <*> Nothing <*> Just 2  -- gives Nothing
Compare this to the plain zipWith on lists:

zipWith (+) [1] [2]  -- gives [3]

Instead of just lists, the applicative works for all kinds of structures. Additionally, the clever trickery with pure and (<*>) generalizes the zipping to work with any number of arguments. To see how this works, inspect the following types while keeping the concept of partially applied functions at hand:

class (Functor f) => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b

Notice also the similarity between fmap and (<*>).

Monads are often used to model different computational contexts, such as non-deterministic, or side-effectful computations. Since there are already far too many monad tutorials, I will just recommend The best one, instead of writing yet another.

Relating to the ordinary list processing functions, monads generalize the function concat :: [[a]] -> [a] to work with many other sorts of structures besides lists. As a simple example, the monadic operation join can be used to flatten nested Maybe values:

join (Just (Just 42)) -- gives Just 42
join (Just (Nothing)) -- gives Nothing

How is this related to the use of Monads as a means of structuring computations? Consider a toy example where you do two consecutive queries from some database. The first query returns you some key value, with which you wish to do another lookup. The problem here is that the first value is wrapped inside Maybe, so you can't query with that directly. Instead, as maybe is a Functor, you could instead fmap the return value with the new query. This would give you two nested Maybe values like above. Another query would result in three layers of Maybes. This would be quite difficult to program with, but a monadic join gives you a way to flatten this structure, and work with just a single level of Maybes.

(I think I'll be editing this post a lot before it makes any sense..)

concat :: [[a]] -> [a]
mempty = 0
mempty = Sum 0

I like this answer, but you might want to point out that the standard Applicative instance for [] is not the zippy one, but the cartesian product. The not-very-useful zippy list monad is only valid on lists of consistent length and its join is taking the diagonal.

@C.A.McCann Applicative Functors are "zippy" in another sense, aren't they, in a sense of having a method fzip :: (f a, f b) -> f (a,b), combining two fs into one. (cf. stackoverflow.com/a/15211856/849891).

@WillNess: Typically, the phrase "zippy Applicative" means an instance that behaves as a structural intersection, matching elements from two structures that have the same position. Such an fzip implemented with Applicative may not behave like zip with other instances, the obvious example being the default list instance, where fzip would give the full cartesian product.

haskell - Simple examples to illustrate Category, Monoid and Monad? - ...

haskell monads category-theory monoids

I would go like this (regex explained in comments):

import re

# If you need to use the regex more than once it is suggested to compile it.
pattern = re.compile(r"</{0,}\[\d+>")

# <\/{0,}\[\d+>
# 
# Match the character < literally <
# Match the character / literally \/{0,}
#    Between zero and unlimited times, as many times as possible, giving back as needed (greedy) {0,}
# Match the character [ literally \[
# Match a single digit 0..9 \d+
#    Between one and unlimited times, as many times as possible, giving back as needed (greedy) +
# Match the character > literally >

subject = """this is a paragraph with<[1> in between</[1> and then there are cases ... where the<[99> number ranges from 1-100</[99>. 
and there are many other lines in the txt files
with<[3> such tags </[3>"""

result = pattern.sub("", subject)

print(result)

You could simply use * instead of {0,}

I think that {0,} is more readable. Just a matter of style

From the python docs: {0,} is the same as *, {1,} is equivalent to +, and {0,1} is the same as ?. It's better to use *, +, or ? when you can, simply because they're shorter and easier to read.

python - How to input a regex in string.replace? - Stack Overflow

python regex string replace

There are two ways you could go about solving your problem:

  • Use a static builder, like freeze, or pyinstaller, or py2exe
  • Compile using cython

I will explain how you can go about doing it using the second, since the first method is not cross-platform and cross-version, and has been explained in other answers. Also, using programs like pyinstaller typically results in huge file sizes, whereas using cython will result in a file that's KBs in size.

First, install cython. Then, rename your python file (say test.py) into a pyx file

$ sudo pip install cython
$ mv test.py test.pyx

Then, you can use cython along with GCC to compile it (Cython generates a C file out of a Python .pyx file, and then GCC compiles the C file) (in reference to https://stackoverflow.com/a/22040484/5714445)

$ cython test.pyx --embed
$ gcc -Os -I /usr/include/python3.5m -o test test.c -lpython3.5m -lpthread -lm -lutil -ldl

NOTE: Depending on your version of python, you might have to change the last command. To know which version of python you are using, simply use

$ python -V

You will now have a binary file 'test', which is what you are looking for

NOTE2: If you are using additional libraries (like opencv, for example), you might have to provide the directory to them using -L and then specify the name of the library using -l in the GCC Flags. For more information on this, please refer to GCC flags

Hi, what if my application spans across multiple files. This approach seems to only compile the current script and not all the Python dependencies.

build - Is there a way to compile python application into static binar...

python build

Update, as of Go 1.8: If you're installing Go 1.8 (released: Feb 2017) or later, GOPATH is automatically determined by the Go toolchain for you.

It defaults to $HOME/go on macOS (nee OS X) - e.g. /Users/matt/go/. This makes getting started with Go even easier, and you can go get <package> right after installing Go.

~/.bash_profile should contain export GOPATH=$HOME/go and also export PATH=$GOPATH/bin:$PATH. The use of the $ is important: make sure to note where I've used it (and where I have not).
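Concretely, those two lines would look like this (assuming the default $HOME/go location):

export GOPATH=$HOME/go
export PATH=$GOPATH/bin:$PATH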

If you are using Sublime Text with GoSublime, the settings would contain something like:

{
        "shell": ["/bin/bash"],
        "env": {"GOPATH": "/Users/#USERNAME#/go/"},
}

Make sure your GOPATH is not set to the full path of the package; just the root of your go folder where src, pkg, and bin reside. If you're not using GoSublime, I'd suggest installing that first.

/usr/bin/bash
/bin/bash

go - Cannot set $GOPATH on Mac OSX - Stack Overflow

osx go environment-variables

Another option is using a map as a set. You use just the keys, with the value being something like a boolean that's always true. Then you can easily check if the map contains a key or not. This is useful if you need the behavior of a set, where if you add a value multiple times it's only in the set once.

Here's a simple example where I add random numbers as keys to a map. If the same number is generated more than once it doesn't matter, it will only appear in the final map once. Then I use a simple if check to see if a key is in the map or not.

package main

import (
    "fmt"
    "math/rand"
)

func main() {
    var MAX int = 10

    m := make(map[int]bool)

    for i := 0; i <= MAX; i++ {
        m[rand.Intn(MAX)] = true
    }

    for i := 0; i <= MAX; i++ {
        if _, ok := m[i]; ok {
            fmt.Printf("%v is in map\n", i)
        } else {
            fmt.Printf("%v is not in map\n", i)
        }
    }
}
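A common variant of the same idea (not shown in the example above) is to use an empty struct instead of a bool as the value type; it takes no storage, and membership is checked the same way:

m := make(map[int]struct{})
m[42] = struct{}{}
if _, ok := m[42]; ok {
    fmt.Println("42 is in the set")
}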

if statement - Does Golang have "if x in" construct similar to Python?...

if-statement go

The easiest way is to use the strconv.Atoi() function.

Note that there are many other ways. For example fmt.Sscan() and strconv.ParseInt() which give greater flexibility as you can specify the base and bitsize for example. Also as noted in the documentation of strconv.Atoi():

Here's an example using the mentioned functions (try it on the Go Playground):

flag.Parse()
s := flag.Arg(0)

if i, err := strconv.Atoi(s); err == nil {
    fmt.Printf("i=%d, type: %T\n", i, i)
}

if i, err := strconv.ParseInt(s, 10, 64); err == nil {
    fmt.Printf("i=%d, type: %T\n", i, i)
}

var i int
if _, err := fmt.Sscan(s, &i); err == nil {
    fmt.Printf("i=%d, type: %T\n", i, i)
}
"123"
i=123, type: int
i=123, type: int64
i=123, type: int

There is also a handy fmt.Sscanf() which gives even greater flexibility as with the format string you can specify the number format (like width, base etc.) along with additional extra characters in the input string.

This is great for parsing custom strings holding a number. For example if your input is provided in a form of "id:00123" where you have a prefix "id:" and the number is fixed 5 digits, padded with zeros if shorter, this is very easily parsable like this:

s := "id:00123"

var i int
if _, err := fmt.Sscanf(s, "id:%5d", &i); err == nil {
    fmt.Println(i) // Outputs 123
}
For reference, strconv.ParseInt() has the signature ParseInt(s string, base int, bitSize int).

Convert string to integer type in Go? - Stack Overflow

go type-conversion

Unfortunately using the encoding/json package you can't, because type information is not transmitted, and JSON numbers by default are unmarshaled into values of float64 type if type information is not present. You would need to define struct types where you explicitly state the field is of type uint32.
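For the struct-type route, a minimal sketch (the type and field names are illustrative only):

package main

import (
    "encoding/json"
    "fmt"
)

// Msg is a hypothetical payload; declaring the field as uint32 tells
// encoding/json which type to unmarshal the number into.
type Msg struct {
    ID uint32 `json:"id"`
}

func main() {
    var m Msg
    if err := json.Unmarshal([]byte(`{"id": 1}`), &m); err != nil {
        panic(err)
    }
    fmt.Printf("%T %v\n", m.ID, m.ID) // uint32 1
}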

Alternatively you may opt to use encoding/gob which does transmit and preserve type information. See this example:

m := map[string]interface{}{"1": uint32(1)}

b := &bytes.Buffer{}
gob.NewEncoder(b).Encode(m)

var m2 map[string]interface{}
gob.NewDecoder(b).Decode(&m2)
fmt.Printf("%T\n%#v\n", m2["1"], m2)

Output (try it on the Go Playground):

uint32
map[string]interface {}{"1":0x1}

The downside of gob is that it's Go-specific unlike the language and platform independent JSON.

json - How to unmarshal from interface{} to interface{} in Go - Stack ...

json go unmarshalling

Scene Text Detection Module in OpenCV 3

There are multiple ways to go about detecting text in an image.

I recommend looking at this question here, for it may answer your case as well. Although it is not in python, the code can be easily translated from c++ to python (Just look at the API and convert the methods from c++ to python, not hard. I did it myself when I tried their code for my own separate problem). The solutions here may not work for your case, but I recommend trying them out.

If I were to go about this I would do the following process:

Prep your image: if all of the images you want to edit are roughly like the one you provided, where the actual design consists of a range of gray colors and the text is always black, I would first white out all content that is not black (or already white). Doing so will leave only the black text.

# must import if working with opencv in python
import numpy as np
import cv2

# removes pixels in image that are between the range of
# [lower_val,upper_val]
def remove_gray(img,lower_val,upper_val):
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower_bound = np.array([0,0,lower_val])
    upper_bound = np.array([255,255,upper_val])
    mask = cv2.inRange(hsv, lower_bound, upper_bound)
    return cv2.bitwise_and(img, img, mask = mask)

Now that all you have is the black text the goal is to get those boxes. As stated before, there are different ways of going about this.

The typical way to find text areas: you can find text regions by using stroke width transform as depicted in "Detecting Text in Natural Scenes with Stroke Width Transform " by Boris Epshtein, Eyal Ofek, and Yonatan Wexler. To be honest, if this is as fast and reliable as I believe it is, then this method is a more efficient method than my below code. You can still use the code above to remove the blueprint design though, and that may help the overall performance of the swt algorithm.

Here is a c library that implements their algorithm, but it is stated to be very raw and the documentation is stated to be incomplete. Obviously, a wrapper will be needed in order to use this library with python, and at the moment I do not see an official one offered.

The library I linked is CCV. It is a library that is meant to be used in your applications, not recreate algorithms. So this is a tool to be used, which goes against OP's want for making it from "First Principles", as stated in comments. Still, useful to know it exists if you don't want to code the algorithm yourself.

If you have meta data for each image, say in an xml file, that states how many rooms are labeled in each image, then you can access that xml file, get the data about how many labels are in the image, and then store that number in some variable say, num_of_labels. Now take your image and put it through a while loop that erodes at a set rate that you specify, finding external contours in the image in each loop and stopping the loop once you have the same number of external contours as your num_of_labels. Then simply find each contours' bounding box and you are done.

# erodes image based on given kernel size (erosion = expands black areas)
def erode( img, kern_size = 3 ):
    retval, img = cv2.threshold(img, 254.0, 255.0, cv2.THRESH_BINARY) # threshold to deal with only black and white.
    kern = np.ones((kern_size,kern_size),np.uint8) # make a kernel for erosion based on given kernel size.
    eroded = cv2.erode(img, kern, 1) # erode your image to blobbify black areas
    y,x = eroded.shape # get shape of image to make a white border around image of 1px, to avoid problems with find contours.
    return cv2.rectangle(eroded, (0,0), (x,y), (255,255,255), 1)

# finds contours of eroded image
def prep( img, kern_size = 3 ):    
    img = erode( img, kern_size )
    retval, img = cv2.threshold(img, 200.0, 255.0, cv2.THRESH_BINARY_INV) #   invert colors for findContours
    return cv2.findContours(img,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE) # Find Contours of Image

# given img & number of desired blobs, returns contours of blobs.
def blobbify(img, num_of_labels, kern_size = 3, dilation_rate = 10):
    processed_img, contours, hierarchy = prep( img.copy(), kern_size ) # erode img and check current contour count.
    previous = (processed_img, contours, hierarchy)
    while len(contours) > num_of_labels:
        kern_size += dilation_rate # add dilation_rate to kern_size to increase the blob. Remember kern_size must always be odd.
        previous = (processed_img, contours, hierarchy)
        processed_img, contours, hierarchy = prep( img.copy(), kern_size ) # erode img and check current contour count, again.
    if len(contours) < num_of_labels:
        return (processed_img, contours, hierarchy)
    else:
        return previous

# finds bounding boxes of all contours
def bounding_box(contours):
    bBox = []
    for curve in contours:
        box = cv2.boundingRect(curve)
        bBox.append(box)
    return bBox

The resulting boxes from the above method will have space around the labels, and this may include part of the original design, if the boxes are applied to the original image. To avoid this make regions of interest via your new found boxes and trim the white space. Then save that roi's shape as your new box.

Perhaps you have no way of knowing how many labels will be in the image. If this is the case, then I recommend playing around with erosion values until you find the best one to suit your case and get the desired blobs.

Or you could try find contours on the remaining content, after removing the design, and combine bounding boxes into one rectangle based on their distance from each other.

After you found your boxes, simply use those boxes with respect to the original image and you will be done.

As mentioned in the comments to your question, there already exists a means of scene text detection (not document text detection) in opencv 3. I understand you do not have the ability to switch versions, but for those with the same question and not limited to an older opencv version, I decided to include this at the end. Documentation for the scene text detection can be found with a simple google search.

The opencv module for text detection also comes with text recognition that implements tesseract, which is a free open-source text recognition module. The downfall of tesseract, and therefore of opencv's scene text recognition module, is that it is not as refined as commercial applications and is time consuming to use. This decreases its appeal, but it's free to use, so it's the best we've got without paying money, if you want text recognition as well.

Honestly, I lack the experience and expertise in both opencv and image processing in order to provide a detailed way in implementing their text detection module. The same with the SWT algorithm. I just got into this stuff this past few months, but as I learn more I will edit this answer.

Detect text area in an image using python and opencv - Stack Overflow

python opencv image-processing ocr

No, no and no. Go and try the controller code below, where we have "LoadCustomer" overloaded.

If you try to invoke the "LoadCustomer" action you will get an error, as shown in the figure below.

Polymorphism is a part of C# programming while HTTP is a protocol. HTTP does not understand polymorphism. HTTP works on the concept of URLs, and URLs can only have unique names. So HTTP does not implement polymorphism.

public class CustomerController : Controller
    {
        //
        // GET: /Customer/

        public ActionResult LoadCustomer()
        {
            return Content("LoadCustomer");
        }

        [ActionName("LoadCustomerbyName")]
        public ActionResult LoadCustomer(string str)
        {
            return Content("LoadCustomer with a string");
        }
    }

So now if you make a call to URL "Customer/LoadCustomer" the "LoadCustomer" action will be invoked and with URL structure "Customer/LoadCustomerByName" the "LoadCustomer(string str)" will be invoked.

The above answer I have taken from this CodeProject article --> MVC Action overloading

Thanks for this. I guess you may as well just use a different action name from the beginning rather than use the attribute.

@Dan but then we do not have polymorphism on the C# side.

You're correct, there's no controller method overloading but it's nothing to do with HTTP.

Thanks for clarification. +1. Should be thinking more HTTP and not C#. There's no reason to approach actions with an OO strategy.

c# - Can you overload controller methods in ASP.NET MVC? - Stack Overf...

c# asp.net-mvc overloading