
JavaBeans is a standard, and its basic syntax requirements have been clearly explained by the other answers.

However, IMO, it is more than a simple syntax standard. The real meaning, or intended usage, of JavaBeans, together with the various tool support around the standard, is to facilitate code reuse and component-based software engineering, i.e. to enable developers to build applications by assembling existing components (classes) without having to write any code (or with only a little glue code). Unfortunately this technology is badly under-estimated and under-utilized by the industry, as can be seen from the answers in this thread.

If you read Oracle's tutorial on JavaBeans, you will get a better understanding of this.
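The conventions the other answers describe can be summarized in a minimal sketch (the class name and properties here are made up for illustration):

```java
import java.io.Serializable;

// Minimal JavaBean: public no-arg constructor, private properties
// exposed via public getter/setter pairs, and Serializable.
public class PersonBean implements Serializable {
    private static final long serialVersionUID = 1L;

    private String name;
    private int age;

    public PersonBean() { }  // no-arg constructor required by the convention

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}
```

It is exactly this uniform shape that lets builder tools and frameworks discover and wire properties reflectively.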

Useful post and link. When I think of beans I do indeed think of "Visual Builder" type stuff, as illustrated in the Oracle article. I wonder whether there are many other frameworks which use them in a big way...

java - What is a JavaBean exactly? - Stack Overflow

java javabeans serializable

The basic syntax for an intent-based URI is as follows:
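In outline it looks roughly like this (a sketch from the Android documentation; treat the exact field list as illustrative and check the Android source for the authoritative grammar):

```
intent:
   HOST/URI-path // Optional host
   #Intent;
      package=[string];
      action=[string];
      category=[string];
      component=[string];
      scheme=[string];
   end;
```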

Parsing details available in the Android source.

To launch the ZXing barcode scanner app you can encode your href as follows:

<p>
  <a href="intent://scan/#Intent;scheme=zxing;package=com.google.zxing.client.android;end">Take a qr code</a><br>

  <a href="intent://scan/?ret=http%3A%2F%2Fexample.com#Intent;scheme=zxing;package=com.google.zxing.client.android;end">Take a qr code = 10px x 10px</a><br>

  <a href="intent:play/?mediaset=android-phone-rtmp-high&amp;playlisturl=http://www.bbc.co.uk/iplayer/playlist/bbc_one_london#Intent;scheme=bbcmediaplayer;package=air.uk.co.bbc.android.mediaplayer;end">Launch BBC</a>
</p>

If the activity you are invoking via an intent contains extra data, these too can be included.

Only activities that have the category filter android.intent.category.BROWSABLE can be invoked using this method, as it indicates that the application is safe to open from the browser.

How can the browser get the barcode scanned by ZXing?

If you want to get the code back when the scanner calls the return URL, you have to add a placeholder with the text "{CODE}" to the return (or callback) URL. For instance example.com/scan.html?code={CODE}. The scanner app will call back to that URL with {CODE} replaced by the actual code. The rest is up to you. Hope it helps someone.

How do I open any app from my web browser (Chrome) in Android? What do...

android google-chrome web

Let me start off by saying that I started programming in Visual/Real Basic, then moved on to Java, so I'm fairly used to dot syntax. However, when I finally moved to Objective-C and got used to brackets, then saw the introduction of Objective-C 2.0 and its dot syntax, I realized that I really don't like it. (for other languages it's fine, because that's how they roll).

I have three main beefs with dot syntax in Objective-C:

Beef #1: It makes it unclear why you might be getting errors. For example, if I have the line:

something.frame.origin.x = 42;

Then I'll get a compiler error, because something is an object, and you can't use structs of an object as the lvalue of an expression. However, if I have:

something.frame.origin.x = 42;

Then this compiles just fine, because something is a struct itself that has an NSRect member, and I can use it as an lvalue.

If I were adopting this code, I would need to spend some time trying to figure out what something is. Is it a struct? Is it an object? However, when we use the bracket syntax, it's much clearer:

[something setFrame:newFrame];

In this case, there is absolutely no ambiguity if something is an object or not. The introduction of ambiguity is my beef #1.

Beef #2: In C, dot syntax is used to access members of structs, not call methods. Programmers can override the setFoo: and foo methods of an object, yet still access them via something.foo. In my mind, when I see expressions using dot syntax, I'm expecting them to be a simple assignment into an ivar. This is not always the case. Consider a controller object that mediates an array and a tableview. If I call myController.contentArray = newArray;, I would expect it to be replacing the old array with the new array. However, the original programmer might have overridden setContentArray: to not only set the array, but also reload the tableview. From the line, there's no indication of that behavior. If I were to see [myController setContentArray:newArray];, then I would think "Aha, a method. I need to go see the definition of this method just to make sure I know what it's doing."

So I think my summary of Beef #2 is that you can override the meaning of dot syntax with custom code.

Beef #3: I think it looks bad. As an Objective-C programmer, I'm totally used to bracket syntax, so to be reading along and see lines and lines of beautiful brackets and then to be suddenly broken with foo.name = newName; foo.size = newSize; etc is a bit distracting to me. I realize that some things require dot syntax (C structs), but that's the only time I use them.

Of course, if you're writing code for yourself, then use whatever you're comfortable with. But if you're writing code that you're planning on open sourcing, or you're writing something you don't expect to maintain forever, then I would strongly encourage using bracket syntax. This is, of course, just my opinion.

Beef #1 seems a little specious to me. I don't see how that's a concern any more than you might try sending a message to a struct and then have to figure out that the variable is actually a struct. Do you actually see a lot of people who understand how it works getting confused by this in practice? This seems like the sort of thing that you'd mess up once, learn about the lvalue thing, and then do it right in the future just like, say, not trying to treat NSNumbers like ints.

@Chuck it is somewhat of an edge case, but I do quite a bit of contract development, which involves inheriting projects and having to work with them. Usually something is an object, but I've come across a couple places where the original authors have created structs (for speed, lower memory usage, etc), and I have to spend a good couple minutes trying to figure out why the code doesn't compile.

objective c - Dot notation vs. message notation for declared propertie...

objective-c syntax coding-style

The fundamental trick is to recognize that parsing, however accomplished, happens in incremental steps, including the reading of the tokens one by one.

At each incremental step, there is an opportunity to build part of the AST by combining AST fragments built by other incremental steps. This is a recursive idea, and it bottoms out in building AST leaf nodes for tokens as they are scanned. This basic idea occurs in pretty much all AST-building parsers.

If one builds a recursive descent parser, one in effect builds a cooperating system of recursive procedures, each one of which recognizes a nonterminal in whatever grammar is being implemented. For pure parsing, each procedure simply returns a boolean for "nonterminal (not) recognized".

To build an AST with a recursive descent parser, one designs these procedures to return two values: the boolean "recognized", and, if recognized, an AST constructed (somehow) for the nonterminal. (A common hack is return a pointer, which is void for "not recognized", or points to the constructed AST if "recognized"). The way the resulting AST for a single procedure is built, is by combining the ASTs from the sub-procedures that it invokes. This is pretty trivial to do for leaf procedures, which read an input token and can immediately build a tree.

The downside to all this is one must manually code the recursive descent, and augment it with the tree building steps. In the grand scheme of things, this is actually pretty easy to code for small grammars.

GOAL = ASSIGNMENT 
ASSIGNMENT = LHS '=' RHS ';' 
LHS = IDENTIFIER 
RHS = IDENTIFIER | NUMBER
boolean parse_Goal()
{  if parse_Assignment()
   then return true
   else return false
}

boolean parse_Assignment()
{  if not Parse_LHS()
   then return false
   if not Parse_equalsign()
   then throw SyntaxError // because there are no viable alternatives from here
   if not Parse_RHS()
   then throw SyntaxError
   if not Parse_semicolon()
   then throw SyntaxError
   return true
}

boolean parse_LHS()
{  if parse_IDENTIFIER()
   then return true
   else return false
}

boolean parse_RHS()
{  if parse_IDENTIFIER()
   then return true
   if parse_NUMBER()
   then return true
   else return false
}

boolean parse_equalsign()
{  if TestInputAndAdvance("=")  // this can check for token instead
   then return true
   else return false
}

boolean parse_semicolon()
{  if TestInputAndAdvance(";")
   then return true
   else return false
}

boolean parse_IDENTIFIER()
{  if TestInputForIdentifier()
   then return true
   else return false
}

boolean parse_NUMBER()
{  if TestInputForNumber()
   then return true
   else return false
}

Now, let's revise it to build an abstract syntax tree:

AST* parse_Goal() // note: we choose to return a null pointer for "false"
{  node = parse_Assignment()
   if node != NULL
   then return node
   else return NULL
}

AST* parse_Assignment()
{  LHSnode = Parse_LHS()
   if LHSnode == NULL
   then return NULL
   EqualNode = Parse_equalsign()
   if EqualNode == NULL
   then throw SyntaxError // because there are no viable alternatives from here
   RHSnode = Parse_RHS()
   if RHSnode == NULL
   then throw SyntaxError
   SemicolonNode = Parse_semicolon()
   if SemicolonNode == NULL
   then throw SyntaxError
   return makeASTNode(ASSIGNMENT, LHSnode, RHSnode)
}

AST* parse_LHS()
{  IdentifierNode = parse_IDENTIFIER()
   if IdentifierNode != NULL
   then return IdentifierNode
   else return NULL
}

AST* parse_RHS()
{  RHSnode = parse_IDENTIFIER()
   if RHSnode != null
   then return RHSnode
   RHSnode = parse_NUMBER()
   if RHSnode != null
   then return RHSnode
   else return NULL
}

AST* parse_equalsign()
{  if TestInputAndAdvance("=")  // this can check for token instead
   then return makeASTNode("=")
   else return NULL
}

AST* parse_semicolon()
{  if TestInputAndAdvance(";")
   then return makeASTNode(";")
   else return NULL
}

AST* parse_IDENTIFIER()
{  text = TestInputForIdentifier()
   if text != NULL
   then return makeASTNode("IDENTIFIER",text)
   else return NULL
}

AST* parse_NUMBER()
{  text = TestInputForNumber()
   if text != NULL
   then return makeASTNode("NUMBER",text)
   else return NULL
}

I've obviously glossed over some details, but I assume the reader will have no trouble filling them in.
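As a concrete illustration, the pseudocode above can be transcribed into runnable Java. Everything here (the class names, the AST representation, the regex-based token tests) is an illustrative sketch of the same recursive-descent-with-tree-building idea, not code from any particular parser:

```java
import java.util.regex.*;

// Recursive descent for: GOAL = ASSIGNMENT; ASSIGNMENT = LHS '=' RHS ';'
// LHS = IDENTIFIER; RHS = IDENTIFIER | NUMBER.
// Each parse method returns an AST node, or null for "not recognized".
final class TinyParser {
    // AST node: a label plus children; leaves carry their token text.
    static final class AST {
        final String label; final AST[] kids;
        AST(String label, AST... kids) { this.label = label; this.kids = kids; }
        public String toString() {
            if (kids.length == 0) return label;
            StringBuilder sb = new StringBuilder("(").append(label);
            for (AST k : kids) sb.append(' ').append(k);
            return sb.append(')').toString();
        }
    }

    private final String src; private int pos = 0;
    TinyParser(String src) { this.src = src; }

    private void skipSpaces() { while (pos < src.length() && src.charAt(pos) == ' ') pos++; }

    // TestInputAndAdvance from the pseudocode.
    private boolean eat(char c) {
        skipSpaces();
        if (pos < src.length() && src.charAt(pos) == c) { pos++; return true; }
        return false;
    }

    // TestInputForIdentifier / TestInputForNumber: match a regex at pos.
    private String lexeme(String regex) {
        skipSpaces();
        Matcher m = Pattern.compile(regex).matcher(src).region(pos, src.length());
        if (m.lookingAt()) { pos = m.end(); return m.group(); }
        return null;
    }

    AST parseGoal() { return parseAssignment(); }

    AST parseAssignment() {
        AST lhs = parseIdentifier();                 // LHS = IDENTIFIER
        if (lhs == null) return null;
        if (!eat('=')) throw new RuntimeException("expected '='");
        AST rhs = parseRhs();
        if (rhs == null) throw new RuntimeException("expected RHS");
        if (!eat(';')) throw new RuntimeException("expected ';'");
        return new AST("ASSIGNMENT", lhs, rhs);      // combine the sub-ASTs
    }

    AST parseRhs() {                                 // RHS = IDENTIFIER | NUMBER
        AST n = parseIdentifier();
        if (n != null) return n;
        return parseNumber();
    }

    AST parseIdentifier() {
        String t = lexeme("[A-Za-z][A-Za-z0-9]*");
        return t == null ? null : new AST("ID:" + t);   // leaf node
    }

    AST parseNumber() {
        String t = lexeme("[0-9]+");
        return t == null ? null : new AST("NUM:" + t);  // leaf node
    }

    public static void main(String[] args) {
        AST tree = new TinyParser("x = 42;").parseGoal();
        System.out.println(tree);   // (ASSIGNMENT ID:x NUM:42)
    }
}
```

Note how the tree for ASSIGNMENT is assembled purely from the trees returned by its sub-procedures, exactly as in the pseudocode.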

Parser generator tools like JavaCC and ANTLR basically generate recursive descent parsers, and have facilities for constructing trees that work very much like this.

Parser generator tools that build bottom-up parsers (YACC, Bison, GLR, ...) also build AST nodes in the same style. However, there is no set of recursive functions; instead, a stack of tokens seen and reduced-to nonterminals is managed by these tools. The AST nodes are constructed on a parallel stack; when a reduction occurs, the AST nodes on the part of the stack covered by the reduction are combined to produce a nonterminal AST node to replace them. This happens even with "zero-size" stack segments for grammar rules that are empty, causing AST nodes (typically for an 'empty list' or a 'missing option') to seemingly appear from nowhere.

With bitty languages, writing recursive-descent parsers that build trees is pretty practical.

A problem with real languages (whether old and hoary like COBOL or hot and shiny like Scala) is that the number of grammar rules is pretty large, complicated by the sophistication of the language and the insistence of whatever language committee is in charge of it on perpetually adding new goodies offered by other languages ("language envy", see the evolutionary race between Java, C# and C++). Now writing a recursive descent parser gets way out of hand, and one tends to use parser generators. But even with a parser generator, writing all the custom code to build AST nodes is also a big battle (and we haven't discussed what it takes to design a good "abstract" syntax vs. the first thing that comes to mind). Maintaining grammar rules and AST-building goo gets progressively harder with scale and ongoing evolution. (If your language is successful, within a year you'll want to change it.) So even writing the AST building rules gets awkward.

Last point: having a parser (even with an AST) is hardly a solution to the actual problem you set out to solve, whatever it was. It's just a foundation piece, and, much to the shock of most parser-newbies, it is the smallest part of a tool that manipulates code. Google my essay on Life After Parsing (or check my bio) for more detail.

I am implementing my own parser in rust and am figuring out a lot by myself. However I couldn't find a resource that confirmed that my way of parsing is a valid approach. Your example is pretty much exactly what I am doing. Thank you a lot!

java - Constructing an Abstract Syntax Tree with a list of Tokens - St...

java interpreter abstract-syntax-tree

This is where the guard statement comes in handy, the basic syntax is:

guard let foo = expression else {
    // handle the error, and return (or throw)
}

In this case, you're wanting to protect a bunch of optional parsing from the vagaries of Facebook changing their API at some point in the future. Using guard and continue in this case allows you to safely skip the rest of the block:

for object in array {
    guard
        let firstName = object["first_name"] as? String,
        let lastName = object["last_name"] as? String,
        let tagId = object["id"] as? String,
        let picture = object["picture"] as? NSDictionary,
        let pictureData = picture["data"] as? NSDictionary,
        let pictureUrl = pictureData["url"] as? String else {
            print("JSON format has changed")
            continue
    }

    let friend = FacebookFriend(firstName: firstName, lastName: lastName, tagId: tagId, picture: pictureUrl)

    print(pictureUrl)

    self.friends.append(friend)
}

Generally speaking, Swift try/catch blocks aren't really analogous to try/catch in other languages, particularly so for Java. In Java all exceptions are (to some extent) handleable, even those such as null references, array bounds issues, etc. In Swift, try/catch errors are definitively not meant to handle exceptions, but only meant to handle errors. So there's no way to use try/catch to protect yourself from as! String failing. You have to specifically expect and protect against the possibility, which is precisely where guard comes into play, potentially in combination with throw.

So basically guard will prevent my app from crashing if some keys (like first_name) won't be present in json, but still I'd have to add if-let for each of them? But if I'd just add if-lets won't it work the same?

guard let is a replacement (of sorts) for if let. It is very similar except that with if let foo = bar, foo is only accessible in the success portion of the if statement. With guard let foo, foo is accessible in the rest of the block containing the guard let so, as you see in the block above, you don't wind up with a bunch of nested if let blocks. It also has a much more natural flow, since the exceptional case is handled immediately, and the major flow of the routine continues unabated.

Btw, guard can be used with any conditional, not just with guard let What guard is doing is establishing preconditions for a block of code that must exist for the rest of the block to execute. So you can also use something like guard index < array.count else { throw index out of bounds }

ios - Try catch whole block in Swift 2 - Stack Overflow

ios swift swift2

There are two basic approaches:

For MySQL and many other SQL dialects, it can be done with LIMIT and OFFSET.
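A minimal sketch of that approach (the table name employee and the page size are illustrative assumptions, not from the original):

```sql
-- Page 3 with a page size of 20: skip 40 rows, return the next 20.
-- The ORDER BY is essential; without it the page contents are not stable.
SELECT *
  FROM employee
 ORDER BY id
 LIMIT 20 OFFSET 40;
```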

In Oracle, it uses the same form as is used to handle a "Top-N query" (e.g. who are the 5 highest-paid employees), which is optimized:

select *
  from ( select a.*, rownum rnum
           from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
          where rownum <= MAX_ROWS )
 where rnum >= MIN_ROWS

The question that comes to mind is: when I execute the SQL, how is the result loaded? Immediately or on request? Same as this SO thread:

execute: Returns true if the first object that the query returns is a ResultSet object. Use this method if the query could return one or more ResultSet objects. Retrieve the ResultSet objects returned from the query by repeatedly calling Statement.getResultSet.

We access data in a ResultSet via a cursor. Note this cursor is different from that of the DB; it is a pointer initially positioned before the first row of data.

The data is fetched on request; when you call execute() you are fetching for the first time.

Then, how much data is loaded? It is configurable. One can use the setFetchSize() method on ResultSet to control how many rows are fetched from the DB at a time by the driver, i.e. how big the blocks it retrieves at once are.

For example, assume the total result is 1000 rows. If the fetch size is 100, fetching the 1st row will load 100 rows from the DB, and the 2nd to 100th rows will be read from local memory. To query the 101st row, another 100 rows will be loaded into memory.

Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed for ResultSet objects generated by this Statement. If the value specified is zero, then the hint is ignored. The default value is zero.

Note the word "hint" - it can be overridden by the driver-specific implementation.

This is also what the "Limit Rows to 100" feature in clients like SQL Developer is based on.

Completing the whole solution: to scroll through the results, one needs to consider the ResultSet types and ScrollableCursor in the API.

One can find an example implementation in this post on Oracle's site,

which is from the book Oracle TopLink Developer's Guide, Example 112 JDBC Driver Fetch Size:

ReadAllQuery query = new ReadAllQuery();
query.setReferenceClass(Employee.class);
query.setSelectionCriteria(new ExpressionBuilder().get("id").greaterThan(100));

// Set the JDBC fetch size
query.setFetchSize(50);

// Configure the query to return results as a ScrollableCursor
query.useScrollableCursor();

// Execute the query
ScrollableCursor cursor = (ScrollableCursor) session.executeQuery(query);

// Iterate over the results
while (cursor.hasNext()) {
    System.out.println(cursor.next().toString());
}
cursor.close();

Note the SQL should include an ORDER BY for the SQL approach to make sense.

Below are some points from PostgreSQL's documentation on the JDBC driver, and from other SO answers:

First off, the original query would need to have an ORDER BY clause in order to make the paging solution work reasonably. Otherwise, it would be perfectly valid for Oracle to return the same 500 rows for the first page, the second page, and the Nth page

The major difference is that the JDBC way requires holding the connection during the fetching. This may not be suitable in a stateless web application, for example.

For the SQL way, the syntax is vendor-specific and may not be easy to maintain. For the JDBC way:

  • The connection to the server must be using the V3 protocol. This is the default for (and is only supported by) server versions 7.4 and later.
  • The Connection must not be in autocommit mode. The backend closes cursors at the end of transactions, so in autocommit mode the backend will have closed the cursor before anything can be fetched from it.
  • The Statement must be created with a ResultSet type of ResultSet.TYPE_FORWARD_ONLY. This is the default, so no code will need to be rewritten to take advantage of this, but it also means that you cannot scroll backwards or otherwise jump around in the ResultSet.
  • The query given must be a single statement, not multiple statements strung together with semicolons.

java - JDBC Pagination - Stack Overflow

java sql jdbc pagination

To my university project I was looking for a parser / evaluator supporting both basic formulas and more complicated equations (especially iterated operators). I found very nice open source library for JAVA and .NET called mXparser. I will give a few examples to make some feeling on the syntax, for further instructions please visit project website (especially tutorial section).

And a few examples:

Argument x = new Argument("x = 10");
Constant a = new Constant("a = pi^2");
Expression e = new Expression("cos(a*x)", x, a);
double v = e.calculate();

Function f = new Function("f(x, y, z) = sin(x) + cos(y*z)");
Expression e = new Expression("f(3,2,5)", f);
double v = e.calculate();

Expression e = new Expression("sum( i, 1, 100, sin(i) )");
double v = e.calculate();

java - Evaluating a math expression given in string form - Stack Overf...

java string math
Rectangle 27 145

I've written this eval method for arithmetic expressions to answer this question. It does addition, subtraction, multiplication, division, exponentiation (using the ^ symbol), and a few basic functions like sqrt. It supports grouping using (...), and it gets the operator precedence and associativity rules correct.

public static double eval(final String str) {
    return new Object() {
        int pos = -1, ch;

        void nextChar() {
            ch = (++pos < str.length()) ? str.charAt(pos) : -1;
        }

        boolean eat(int charToEat) {
            while (ch == ' ') nextChar();
            if (ch == charToEat) {
                nextChar();
                return true;
            }
            return false;
        }

        double parse() {
            nextChar();
            double x = parseExpression();
            if (pos < str.length()) throw new RuntimeException("Unexpected: " + (char)ch);
            return x;
        }

        // Grammar:
        // expression = term | expression `+` term | expression `-` term
        // term = factor | term `*` factor | term `/` factor
        // factor = `+` factor | `-` factor | `(` expression `)`
        //        | number | functionName factor | factor `^` factor

        double parseExpression() {
            double x = parseTerm();
            for (;;) {
                if      (eat('+')) x += parseTerm(); // addition
                else if (eat('-')) x -= parseTerm(); // subtraction
                else return x;
            }
        }

        double parseTerm() {
            double x = parseFactor();
            for (;;) {
                if      (eat('*')) x *= parseFactor(); // multiplication
                else if (eat('/')) x /= parseFactor(); // division
                else return x;
            }
        }

        double parseFactor() {
            if (eat('+')) return parseFactor(); // unary plus
            if (eat('-')) return -parseFactor(); // unary minus

            double x;
            int startPos = this.pos;
            if (eat('(')) { // parentheses
                x = parseExpression();
                eat(')');
            } else if ((ch >= '0' && ch <= '9') || ch == '.') { // numbers
                while ((ch >= '0' && ch <= '9') || ch == '.') nextChar();
                x = Double.parseDouble(str.substring(startPos, this.pos));
            } else if (ch >= 'a' && ch <= 'z') { // functions
                while (ch >= 'a' && ch <= 'z') nextChar();
                String func = str.substring(startPos, this.pos);
                x = parseFactor();
                if (func.equals("sqrt")) x = Math.sqrt(x);
                else if (func.equals("sin")) x = Math.sin(Math.toRadians(x));
                else if (func.equals("cos")) x = Math.cos(Math.toRadians(x));
                else if (func.equals("tan")) x = Math.tan(Math.toRadians(x));
                else throw new RuntimeException("Unknown function: " + func);
            } else {
                throw new RuntimeException("Unexpected: " + (char)ch);
            }

            if (eat('^')) x = Math.pow(x, parseFactor()); // exponentiation

            return x;
        }
    }.parse();
}
System.out.println(eval("((4 - 2^3 + 1) * -sqrt(3*3+4*4)) / 2"));

The parser is a recursive descent parser, so internally uses separate parse methods for each level of operator precedence in its grammar. I kept it short so it's easy to modify, but here are some ideas you might want to expand it with:

The bit of the parser that reads the names for functions can easily be changed to handle custom variables too, by looking up names in a variable table passed to the eval method, such as a Map<String,Double> variables.
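As a sketch of that idea (not part of the original answer; the grammar is cut down to numbers, names, and the four basic operators so the example stays short), here is the same recursive descent structure with the name-reading branch doing a lookup in a Map<String,Double> instead of a function dispatch:

```java
import java.util.Map;

// Minimal sketch of the variable-table idea: names are resolved against a
// Map<String,Double> passed to eval. Functions and '^' are omitted for brevity.
class VarEval {
    public static double eval(final String str, final Map<String, Double> variables) {
        return new Object() {
            int pos = -1, ch;

            void nextChar() { ch = (++pos < str.length()) ? str.charAt(pos) : -1; }

            boolean eat(int charToEat) {
                while (ch == ' ') nextChar();
                if (ch == charToEat) { nextChar(); return true; }
                return false;
            }

            double parse() {
                nextChar();
                double x = parseExpression();
                if (pos < str.length()) throw new RuntimeException("Unexpected: " + (char) ch);
                return x;
            }

            double parseExpression() {
                double x = parseTerm();
                for (;;) {
                    if      (eat('+')) x += parseTerm();
                    else if (eat('-')) x -= parseTerm();
                    else return x;
                }
            }

            double parseTerm() {
                double x = parseFactor();
                for (;;) {
                    if      (eat('*')) x *= parseFactor();
                    else if (eat('/')) x /= parseFactor();
                    else return x;
                }
            }

            double parseFactor() {
                if (eat('+')) return parseFactor(); // unary plus
                if (eat('-')) return -parseFactor(); // unary minus

                double x;
                int startPos = this.pos;
                if (eat('(')) { // parentheses
                    x = parseExpression();
                    eat(')');
                } else if ((ch >= '0' && ch <= '9') || ch == '.') { // numbers
                    while ((ch >= '0' && ch <= '9') || ch == '.') nextChar();
                    x = Double.parseDouble(str.substring(startPos, this.pos));
                } else if (ch >= 'a' && ch <= 'z') { // names: variable lookup
                    while (ch >= 'a' && ch <= 'z') nextChar();
                    String name = str.substring(startPos, this.pos);
                    Double value = variables.get(name);
                    if (value == null) throw new RuntimeException("Unknown variable: " + name);
                    x = value;
                } else {
                    throw new RuntimeException("Unexpected: " + (char) ch);
                }
                return x;
            }
        }.parse();
    }
}
```

With x=3 and y=1 in the map, VarEval.eval("x*x - y + 2", vars) returns 10.0. In the full parser above, the same branch would first check the variable table and fall back to the function dispatch when the name is not found.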

What if, having added support for variables, you wanted to evaluate the same expression millions of times with changed variables, without parsing it every time? It's possible. First define an interface to use to evaluate the precompiled expression:

@FunctionalInterface
interface Expression {
    double eval();
}

Now change all the methods that return doubles, so instead they return an instance of that interface. Java 8's lambda syntax works great for this. Example of one of the changed methods:

Expression parseExpression() {
    Expression x = parseTerm();
    for (;;) {
        if (eat('+')) { // addition
            Expression a = x, b = parseTerm();
            x = (() -> a.eval() + b.eval());
        } else if (eat('-')) { // subtraction
            Expression a = x, b = parseTerm();
            x = (() -> a.eval() - b.eval());
        } else {
            return x;
        }
    }
}

That builds a recursive tree of Expression objects representing the compiled expression (an abstract syntax tree). Then you can compile it once and evaluate it repeatedly with different values:

public static void main(String[] args) {
    Map<String,Double> variables = new HashMap<>();
    Expression exp = parse("x^2 - x + 2", variables);
    for (double x = -20; x <= +20; x++) {
        variables.put("x", x);
        System.out.println(x + " => " + exp.eval());
    }
}

Instead of double, you could change the evaluator to use something more powerful like BigDecimal, or a class that implements complex numbers, or rational numbers (fractions). You could even use Object, allowing some mix of datatypes in expressions, just like a real programming language. :)
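As a tiny illustration of the BigDecimal variant (a sketch, not part of the original answer; the BigExpr interface name is made up here), the same lambda-tree pattern works with an eval() that returns BigDecimal, giving exact decimal arithmetic:

```java
import java.math.BigDecimal;

// Sketch: the Expression-tree idea above, with BigDecimal instead of double.
class BigDecimalExpr {
    @FunctionalInterface
    interface BigExpr {
        BigDecimal eval();
    }

    public static void main(String[] args) {
        BigExpr a = () -> new BigDecimal("0.1");
        BigExpr b = () -> new BigDecimal("0.2");
        // An addition node built the same way parseExpression builds its nodes
        BigExpr sum = () -> a.eval().add(b.eval());
        System.out.println(sum.eval());        // exactly 0.3 in decimal
        System.out.println(0.1 + 0.2 == 0.3);  // false with double
    }
}
```

The payoff is that decimal fractions like 0.1 and 0.2 sum exactly to 0.3, which binary doubles cannot represent; the parser itself needs no structural change, only different node types.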

Saved me a steaming pile of hot mess. Thank you very much. <3

Nice algorithm; starting from it I managed to implement logical operators too. We created separate classes for functions, so like your idea of variables, I create a map of functions and look them up by name. Every function implements an interface with a method eval(T rightOperand, T leftOperand), so we can add features anytime without changing the algorithm code. And it is a good idea to make it work with generic types. Thank you!

Can you explain the logic behind this algorithm?

I'll try to give a description of what I understand from the code written by Boann, with examples as described on the wiki. The logic of this algorithm starts from the rules of operator precedence: 1. operator sign | variable evaluation | function call | parenthesis (sub-expressions); 2. exponentiation; 3. multiplication, division; 4. addition, subtraction.

The algorithm's methods are divided by precedence level as follows: parseFactor = 1. operator sign | variable evaluation | function call | parenthesis (sub-expressions) and 2. exponentiation; parseTerm = 3. multiplication, division; parseExpression = 4. addition, subtraction. The algorithm calls the methods in reverse order (parseExpression -> parseTerm -> parseFactor -> parseExpression for sub-expressions), but every method first calls the method for the next precedence level, so the actual execution order of the methods follows the normal order of operations.

java - Evaluating a math expression given in string form - Stack Overf...

java string math
Rectangle 27 32

Regular expressions in @Path annotation

You can't have multiple @Path annotations on a single method. It causes a "duplicate annotation" syntax error.

However, there's a number of ways you can effectively map two paths to a method.

The @Path annotation in JAX-RS accepts parameters, whose values can be restricted using regular expressions.

@Path("a/{parameter: path1|path2}")

would enable the method to be reached by requests for both /a/path1 and /a/path2. If you need to work with subpaths, escape slashes: {a:path1\\/subPath1|path2\\/subPath2}

Alternatively, you could set up a redirection. Here's a way to do it in Jersey (the reference implementation of JAX-RS), by defining another subresource. This is just an example, if you prefer a different way of handling redirections, feel free to use it.

@Path("basepath")
public class YourBaseResource {

  //this gets injected after the class is instantiated by Jersey    
  @Context
  UriInfo uriInfo; 

  @Path("a/b")
  @GET
  public Response method1(){
    return Response.ok("blah blah").build();
  }

  @Path("a/b/c")
  @GET
  public Response method2(){
    UriBuilder addressBuilder = uriInfo.getBaseUriBuilder();
    addressBuilder.path("a/b");
    return Response.seeOther(addressBuilder.build()).build();
  }

}

If you're going to need such functionality often, I suggest intercepting the incoming requests using a servlet filter and rewriting the paths on the fly. This should help you keep all redirections in one place. Ideally, you could use a ready-made library. UrlRewriteFilter can do the trick, as long as you're fine with a BSD license (check out their Google Code site for details).

Another option is to handle this with a proxy set up in front of your Java app. You can set up an Apache server to offer basic caching and rewrite rules without complicating your Java code.

Thanks Tom. Regular expressions in @Path would not work, as one of the paths contains a slash. Redirection would send status 303, but I want 200 (same for both). Will try the servlet filter approach.

You can use slashes in @Path regular expressions

@Jonas good to know. Do the slashes matched by regular expressions take precedence over the ones separating path segments? Personally, I wouldn't implement it this way. It would make the code harder to understand.

@Tom Have you tried the example you provided in second solution? First, Response.ok or Response.seeOther return Response.ResponseBuilder object, you should add .build() to return Response. Even after I added, this doesn't work for me.

@Cacheing not this specific snippet. You're right, the build() call is missing. What happens when you add it? What do you mean by "doesn't work for me"?

java - Can we have more than one @Path annotation for same REST method...

java rest jersey jax-rs
Rectangle 27 7

  • parse: Reads a set of *.java source files and maps the resulting token sequence into AST (Abstract Syntax Tree)-Nodes.
  • enter: Enters symbols for the definitions into the symbol table.
  • process annotations: If requested, processes annotations found in the specified compilation units.
  • attribute: Attributes the Syntax trees. This step includes name resolution, type checking and constant folding.
  • flow: Performs dataflow analysis on the trees from the previous step. This includes checks for assignments and reachability.
  • desugar: Rewrites the AST and translates away some syntactic sugar.
  • generate: Generates Source Files or Class Files.
  • Lex - Break the source file into individual words, or tokens.
  • Parse - Analyze the phrase structure of the program.
  • Semantic Actions - Build a piece of abstract syntax tree corresponding to each phrase.
  • Semantic Analysis - Determine what each phrase means, relate uses of variables to their definitions, check types of expressions, request translation of each phrase.
  • Frame Layout - Place variables, function-parameters, etc. into activation records (stack frames) in a machine-dependent way.
  • Translate - Produce intermediate representation trees (IR trees), a notation that is not tied to any particular source language or target-machine architecture.
  • Canonicalize - Hoist side effects out of expressions, and clean up conditional branches, for the convenience of the next phases.
  • Instruction Selection - Group the IR-tree nodes into clumps that correspond to the actions of target-machine instructions.

  • Control Flow Analysis - Analyze the sequence of instructions into a control flow graph that shows all the possible flows of control the program might follow when it executes.
  • Dataflow Analysis - Gather information about the flow of information through variables of the program; for example, liveness analysis calculates the places where each program variable holds a still-needed value (is live).
  • Register Allocation - Choose a register to hold each of the variables and temporary values used by the program; variables not live at the same time can share the same register.
  • Code Emission - Replace the temporary names in each machine instruction with machine registers.

Modern Compiler Implementation in Java

Kindly include the PDF file web site address which you had included and deleted.

javacompiler - Internal Architecture of Java Compiler - Stack Overflow

java javacompiler
Rectangle 27 5

The main difficulty with pointers, at least to me, is that I didn't start with C. I started with Java. The whole notion of pointers was really foreign until a couple of classes in college where I was expected to know C. So then I taught myself the very basics of C and how to use pointers in their very basic sense. Even then, every time I find myself reading C code, I have to look up pointer syntax.

So in my very limited experience (1 year real world + 4 in college), pointers confuse me because I've never had to really use them in anything other than a classroom setting. And I can sympathize with the students now starting out in CS with Java instead of C or C++. As you said, you learned pointers in the 'Neolithic' age and have probably been using them ever since. To us newer people, the notion of allocating memory and doing pointer arithmetic is really foreign because all these languages have abstracted that away.

P.S. After reading the Spolsky essay, his description of 'JavaSchools' was nothing like what I went through in college at Cornell ('05-'09). I took data structures and functional programming (SML), operating systems (C), algorithms (pen and paper), and a whole slew of other classes that weren't taught in Java. However, all the intro classes and electives were done in Java, because there's value in not reinventing the wheel when you are trying to do something higher-level than implementing a hashtable with pointers.

Honestly, given that you have difficulties with pointers still, I'm not sure that your experience at Cornell substantively contradicts Joel's article. Obviously enough of your brain is wired in a Java-mindset to make his point.

Wat? References in Java (or C#, or Python, or propably dozens of other languages) are just pointers without the arithmetic. Understanding pointers means understanding why void foo(Clazz obj) { obj = new Clazz(); } is a no-op while void bar(Clazz obj) { obj.quux = new Quux(); } mutates the argument...

I know what references are in Java, but I'm just saying that if you asked me to do reflection in Java or write anything meaningful in C I can't just chug it out. It requires a lot of research, like learning it for the first time, every time.

How is it that you got through an operating systems class in C without becoming fluent in C? No offense intended, it's just I remember having to pretty much develop a simple operating system from scratch. I must have used pointers a thousand times...

What do people find difficult about C pointers? - Stack Overflow

c pointers
Rectangle 27 103

A virtual machine is a virtual computing environment with a specific set of atomic well defined instructions that are supported independent of any specific language and it is generally thought of as a sandbox unto itself. The VM is analogous to an instruction set of a specific CPU and tends to work at a more fundamental level with very basic building blocks of such instructions (or byte codes) that are independent of the next. An instruction executes deterministically based only on the current state of the virtual machine and does not depend on information elsewhere in the instruction stream at that point in time.

An interpreter, on the other hand, is more sophisticated in that it is tailored to parse a stream of some syntax that is of a specific language and of a specific grammar that must be decoded in the context of the surrounding tokens. You can't look at each byte or even each line in isolation and know exactly what to do next. The tokens in the language can't be taken in isolation like they can relative to the instructions (byte codes) of a VM.

A Java compiler converts Java language into a byte-code stream no different than a C compiler converts C Language programs into assembly code. An interpreter on the other hand doesn't really convert the program into any well defined intermediate form, it just takes the program actions as a matter of the process of interpreting the source.

Another test of the difference between a VM and an interpreter is whether you think of it as being language independent. What we know as the Java VM is not really Java specific. You could make a compiler from other languages that result in byte codes that can be run on the JVM. On the other hand, I don't think we would really think of "compiling" some other language other than Python into Python for interpretation by the Python interpreter.

Because of the sophistication of the interpretation process, this can be a relatively slow process....specifically parsing and identifying the language tokens, etc. and understanding the context of the source to be able to undertake the execution process within the interpreter. To help accelerate such interpreted languages, this is where we can define intermediate forms of pre-parsed, pre-tokenized source code that is more readily directly interpreted. This sort of binary form is still interpreted at execution time, it is just starting from a much less human readable form to improve performance. However, the logic executing that form is not a virtual machine, because those codes still can't be taken in isolation - the context of the surrounding tokens still matter, they are just now in a different more computer efficient form.

I was under the impression that python did generate byte code, pyc, or is that what you are referring by "help accelerate such interpreted languages, this is where we can define intermediate forms of pre-parsed, pre-tokenized source code that is more readily directly interpreted."

@InSciTek Jeff: From your answer it's not clear whether you do know that Python uses a virtual machine too.

@TZ - The popular Python implementation is a Python compiler with a back side VM. In interactive mode, it is a bit of hybrid with both an interpreter front end, and a compiler back end. However those are implementation choices. I tried to describe difference between concept of VM and Interpreter

On the other hand, I don't think we would really think of "compiling" some other language other than Python into Python for interpretation by the Python interpreter. It is possible to write a language that can be compiled into Python bytecode, just like Scala is compiled into Java bytecode. In interactive mode, Python's interactive shell compiles your typed command into bytecode and executes that bytecode. You can write your own shell using eval and exec, and you can use compile() built-in function to turn a string into bytecode.

@Lie Ryan yes but it's not officially supported like it is with the JVM. In Python, bytecode is an undocumented implementation detail.

Java "Virtual Machine" vs. Python "Interpreter" parlance? - Stack Over...

java python jvm
Rectangle 27 49

Besides the basic usage of just private inheritance shown in the C++ FAQ (linked in other's comments) you can use a combination of private and virtual inheritance to seal a class (in .NET terminology) or to make a class final (in Java terminology). This is not a common use, but anyway I found it interesting:

class ClassSealer {
private:
   friend class Sealed;
   ClassSealer() {}
};
class Sealed : private virtual ClassSealer
{ 
   // ...
};
class FailsToDerive : public Sealed
{
   // Cannot be instantiated
};

Sealed can be instantiated. It derives from ClassSealer and can call the private constructor directly as it is a friend.

FailsToDerive won't compile as it must call the ClassSealer constructor directly (virtual inheritance requirement), but it cannot as it is private in the Sealed class and in this case FailsToDerive is not a friend of ClassSealer.

It was mentioned in the comments that this could not be made generic at the time using CRTP. The C++11 standard removes that limitation by providing a different syntax to befriend template arguments:

template <typename T>
class Seal {
   friend T;          // not: friend class T!!!
   Seal() {}
};
class Sealed : private virtual Seal<Sealed> // ...

Of course this is all moot, since C++11 provides a final contextual keyword for exactly this purpose:

class Sealed final // ...

+1. @Sasha: Correct, virtual inheritance is needed since the most-derived class always calls the constructors of all virtually inherited class directly, which is not the case with plain inheritance.

This can be made generic, without making a custom ClassSealer for every class you want to seal! Check it out: class ClassSealer { protected: ClassSealer() {} }; that's all.

+1 Iraimbilanja, very cool! BTW I saw your earlier comment (now deleted) about using the CRTP: I think that should in fact work, it's just tricky to get the syntax for template friends right. But in any case your non-template solution is much more awesome :)

oop - When should I use C++ private inheritance? - Stack Overflow

c++ oop
Rectangle 27 12

I learned how to program when I was 10 in exactly the way you taught your son. My dad used the GW-Basic interpreter that came with our AT&T PC6300, and we wrote a game where the computer asked you a question, and you had to answer A/B/C. The big advantage to syntax in GW-Basic was that you didn't have multi-line statements. You might want to try something similar. Java, with its curly braces, might be a little tough.

10 PRINT "What color is Big Bird?"
20 PRINT "A. Blue"
30 PRINT "B. Green"
40 PRINT "C. Yellow"
50 INPUT ANSWER$
60 IF ANSWER$ = "C" THEN PRINT "Good Job!" ELSE PRINT "Oops, wrong answer!"

I spent hours upon hours using various permutations of that syntax and writing my own "games". And it made me want to learn more... might help.

children - Suitable environment for a 7 year old - Stack Overflow

children