
[]= is the way to go, but watch out!

A pandas dataframe is implemented as an ordered dict of columns.

This means that __getitem__ ([]) can not only be used to get a certain column, but __setitem__ ([] =) can also be used to assign a new column.

For example, this dataframe can have a column added to it by simply using the [] accessor:

    size      name color
0    big      rose   red
1  small    violet  blue
2  small     tulip   red
3  small  harebell  blue

df['protected'] = ['no', 'no', 'no', 'yes']

    size      name color protected
0    big      rose   red        no
1  small    violet  blue        no
2  small     tulip   red        no
3  small  harebell  blue       yes

Note that this works even if the index of the dataframe is off.

df.index = [3,2,1,0]
df['protected'] = ['no', 'no', 'no', 'yes']
    size      name color protected
3    big      rose   red        no
2  small    violet  blue        no
1  small     tulip   red        no
0  small  harebell  blue       yes

However, if you have a pd.Series and try to assign it to a dataframe where the indexes are off, you will run into trouble. See the example below:
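
A minimal sketch of the problem, rebuilding the flower dataframe from above with the reordered index [3,2,1,0] and assigning a default-indexed series to it:

import pandas as pd

df = pd.DataFrame({'size': ['big', 'small', 'small', 'small'],
                   'name': ['rose', 'violet', 'tulip', 'harebell'],
                   'color': ['red', 'blue', 'red', 'blue']},
                  index=[3, 2, 1, 0])

# The series gets the default index 0, 1, 2, 3
df['protected'] = pd.Series(['no', 'no', 'no', 'yes'])

#     size      name color protected
# 3    big      rose   red       yes   <- matched by index label, not by position
# 2  small    violet  blue        no
# 1  small     tulip   red        no
# 0  small  harebell  blue        no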

This is because a pd.Series by default has an index enumerated from 0 to n-1, and the pandas [] = method tries to be "smart".

When you use the [] = method, pandas quietly performs an outer join or outer merge using the index of the left-hand dataframe and the index of the right-hand series: df['column'] = series
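
For instance, a series whose index only partly overlaps the dataframe's index is silently aligned rather than assigned top to bottom (the column name 'letter' and the index label 99 are made up purely for illustration):

s = pd.Series(['a', 'b'], index=[0, 99])  # label 99 does not exist in df
df['letter'] = s

# Row 0 gets 'a'; rows 3, 2 and 1 get NaN; the value at label 99 is silently dropped.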

This quickly causes cognitive dissonance, since the []= method is trying to do a lot of different things depending on the input, and the outcome cannot be predicted unless you just know how pandas works. I would therefore advise against []= in code bases, but when exploring data in a notebook, it is fine.

If you have a pd.Series and want it assigned from top to bottom, or if you are writing production code and you are not sure of the index order, it is worth safeguarding against this kind of issue.

You could downcast the pd.Series to a np.ndarray or a list; this will do the trick.

df['protected'] = list(pd.Series(['no', 'no', 'no', 'yes']))
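
The np.ndarray route mentioned above looks much the same; to_numpy() (available in pandas 0.24+; the older .values attribute works too) strips the index, so the values are assigned by position:

df['protected'] = pd.Series(['no', 'no', 'no', 'yes']).to_numpy()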

But this is not very explicit.

Some coder may come along and say "Hey, this looks redundant, I'll just optimize this away".

Setting the index of the pd.Series to be the index of the df is explicit.

df['protected'] = pd.Series(['no', 'no', 'no', 'yes'], index=df.index)

Or more realistically, you probably have a pd.Series already available.

protected_series = pd.Series(['no', 'no', 'no', 'yes'])
protected_series.index = df.index

3     no
2     no
1     no
0    yes
df['protected'] = protected_series

    size      name color protected
3    big      rose   red        no
2  small    violet  blue        no
1  small     tulip   red        no
0  small  harebell  blue       yes

Since the index dissonance is the problem, if you feel that the index of the dataframe should not dictate things, you can simply drop the index. This should be faster, but it is not very clean, since your function now probably does two things.

df = df.reset_index(drop=True)  # reset_index returns a copy, so reassign (or use inplace=True)
protected_series = protected_series.reset_index(drop=True)
df['protected'] = protected_series

    size      name color protected
0    big      rose   red        no
1  small    violet  blue        no
2  small     tulip   red        no
3  small  harebell  blue       yes

While df.assign makes it more explicit what you are doing, it actually has all the same problems as the above []=

df.assign(protected=pd.Series(['no', 'no', 'no', 'yes']))
    size      name color protected
3    big      rose   red       yes
2  small    violet  blue        no
1  small     tulip   red        no
0  small  harebell  blue        no
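
The same safeguard applies here: give the series the dataframe's index and the values land top to bottom again (a sketch reusing the protected column from above):

df.assign(protected=pd.Series(['no', 'no', 'no', 'yes'], index=df.index))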

Just watch out with df.assign that your column is not called self. It will cause errors. This makes df.assign smelly, since there are these kinds of artifacts in the function.

df.assign(self=pd.Series(['no', 'no', 'no', 'yes']))
TypeError: assign() got multiple values for keyword argument 'self'

You may say, "Well, I'll just not use self then". But who knows how this function changes in the future to support new arguments. Maybe your column name will be an argument in a new update of pandas, causing problems with upgrading.

"When you use the [] = method pandas is quietly performing an outer join or outer merge". This is the most important piece of information in the whole topic. But could you provide link to the official documentation on how []= operator works?

Adding new column to existing DataFrame in Python pandas - Stack Overf...

python pandas dataframe chained-assignment

I disagree with your conclusion that the connect-auth plugin is the wa...

I guess the reason that you haven't found many good libraries is that using a library for authentication is mostly over-engineered.

What you are looking for is just a session-binder :) A session with:

if login and user == xxx and pwd == xxx 
   then store an authenticated=true into the session 
if logout destroy session

I'm also using connect, but I do not use connect-auth, for two reasons:

(It's complete. Just execute it for testing, but if you want to use it in production, make sure to use https.) (And to be REST-principle-compliant you should use a POST request instead of a GET request, because you change state. :)

var connect = require('connect');
var urlparser = require('url');

var authCheck = function (req, res, next) {
    var url = req.urlp = urlparser.parse(req.url, true);  // avoid creating an implicit global

    // ####
    // Logout
    if ( url.pathname == "/logout" ) {
      req.session.destroy();
    }

    // ####
    // Is User already validated?
    if (req.session && req.session.auth == true) {
      next(); // stop here and pass to the next onion ring of connect
      return;
    }

    // ########
    // Auth - Replace this example with your Database, Auth-File or other things
    // If Database, you need a Async callback...
    if ( url.pathname == "/login" && 
         url.query.name == "max" && 
         url.query.pwd == "herewego"  ) {
      req.session.auth = true;
      next();
      return;
    }

    // ####
    // This user is not authorized. Stop talking to him.
    res.writeHead(403);
    res.end('Sorry you are not authorized.\n\nFor a login use: /login?name=max&pwd=herewego');
    return;
}

var helloWorldContent = function (req, res, next) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('authorized. Walk around :) or use /logout to leave\n\nYou are currently at '+req.urlp.pathname);
}

var server = connect.createServer(
      connect.logger({ format: ':method :url' }),
      connect.cookieParser(),
      connect.session({ secret: 'foobar' }),
      connect.bodyParser(),
      authCheck,
      helloWorldContent
);

server.listen(3000);

I wrote this statement over a year ago and currently have no active node projects. So there may be API changes in Express. Please add a comment if I should change anything.

Why does connect-auth break the onion/layers pattern? Is it because it doesn't use next()? Could it?

Yes. It must use next() because that's the idea behind connect. Connect has a layer architecture / form of code structure, and every layer has the power to stop the request execution by not calling next(). If we are talking about authentication: an authentication layer will check whether the user has the correct permissions. If everything is fine, the layer calls next(). If not, this auth layer generates an error and will not call next().

man, this is exactly what I was looking for. connect-auth was giving me a bit of indigestion. I just logged into my app for the first time. thanks so much.

This still doesn't help to answer how to connect to a database backend (preferably with encrypted passwords). I appreciate your comment that this one library is over-engineered, but surely there is one that isn't. Also, if I wanted to write my own auth system I would have used Struts in Java. just like the OP, I want to know which plugins will do that for me in 1 line of code.

great answer Nivoc. Doesn't work with latest versions of connect tho. I had to change... cookieDecoder() --> cookieParser() and bodyDecoder() --> bodyParser() and remove the next() call from helloWorldContent function as i was getting an error 'Can't set headers after they are sent'

user authentication libraries for node.js? - Stack Overflow

authentication node.js serverside-javascript

Creating and using the key is the way to go. The usage is free until your application reaches 25,000 calls per day on 90 consecutive days.

BTW: The Google developer documentation says you should add the API key as the option {key: yourKey} when calling the API to create new instances. This, however, doesn't silence the console warning. You have to add the key as a parameter when including the API.

<script src="https://maps.googleapis.com/maps/api/js?key=yourKEYhere"></script>

This should be marked as the correct answer for this question. Seems silly that someone would ask why the API key says it's missing if they never placed it there to begin with.

Great! But which is the direct link to generate the API Key? Thanks!

i have the same issue, but i don't know how to add this key, what is the nature of the key and how to choose it.

the key is a string you get on the site linked in my answer. You add it to the api load url as a GET var. see above

Google Maps API warning: NoApiKeys - Stack Overflow

google-maps google-maps-api-3

No, using static variables for this is not the way to go:

  • Static variables don't scale horizontally - if you load-balance your application, a user who hits one server and then a different one won't see the data stored in the static variables on the first server
  • Most importantly, static variables will be shared by all requests to that server... it won't be on a per-user basis at all... whereas from your description, you wouldn't want user X to see user Y's information.

Fundamentally, you have two choices for propagating information around your application:

  • Keep it client-side, so each request gives the information from the previous steps. (This can become unwieldy with large amounts of information, but can be useful for simple cases.)
  • Keep it server-side, ideally in some persistent way (such as a database) with the client providing a session identifier.

If you can use load-balancing to keep all users going to the same server, and if you don't mind sessions being lost when the AppDomain is recycled1 or a server going down, you can keep it in memory, keyed by session ID... but be careful.

1 There may be mechanisms in ASP.NET to survive this, propagating session information from one AppDomain to another - I'm not sure

c# - Static fields vs Session variables - Stack Overflow

c# asp.net .net session-variables static-members

No. It is not the way to go.

removeCachedAuthToken is a function that removes a token acquired using getAuthToken from the internal token cache. However, it does not revoke the token. That means that the application will no longer be able to access the user's resources in the current session, until it calls getAuthToken again. When that happens, it will be able to obtain a token again without the user needing to grant access.

As such, this function is not meant to be a logout-related routine. It is more of a recovery mechanism, for when you realize that the access token your application is using is stale, or invalid in some other way. That happens when you make a request using the access token and the HTTP response status is 401 Unauthorized. In that case you can scrap the token and then request a new one using getAuthToken. To simulate that behavior, you can revoke the relevant grant using the Google Accounts page or from the diagnostic UI: chrome://identity-internals (currently it lists all of the cached tokens).

Please refer to the Chrome app samples for GDocs and Identity. (Pull requests 114 for GDocs and 115 for Identity, in case you are doing that in the next few days.)

Thanks for the explanation. It is good to know. But I am still in the dark. How would I go about revoking the token then?

thanks for pointing out the fact that removeCachedAuthToken does not revoke it actually.

Google packaged app - identity API - removeCachedAuthToken - Stack Ove...

google-chrome-app

There are a few ways to go about doing this. All of them require that you execute the job in your code.

Method 1: A test which queues the job and then tells the Delayed::Worker to complete it.

describe Batch do  
  it 'runs Singleplex for a valid panel' do
    batch = FactoryGirl.create(:batch)
    user = User.find(1)
    Singleplex.new.perform(batch.id,user)
    expect(Delayed::Worker.new.work_off).to eq [1, 0] # Returns [successes, failures]
    # Add expectations which check multiple tables to make sure the work is done
  end
end

Method 2: A test which runs the job in question with queueing disabled, and checks for the desired results. You can disable queueing by calling Delayed::Worker.delay_jobs = false somewhere in your testing configuration or in a before block.

before(:each) do
  Delayed::Worker.delay_jobs = false
end
describe Batch do  
  it 'runs Singleplex for a valid panel' do
    batch = FactoryGirl.create(:batch)
    user = User.find(1)
    Singleplex.new.perform(batch.id,user)
    # expectations which check that the work is done
  end
end

Method 3: Write an observer that watches for any new jobs that are created and runs them. This way you won't have to manually call work_off in your tests. Artsy has a gist for this.

It's also a good idea to have tests elsewhere that make sure jobs get queued as expected:

it "queues welcome when a user is created" do
  expect(Delayed::Job.count).to eq 0
  # Create user step
  expect(Delayed::Job.count).to eq 1 # You should really be looking for the count of a specific job.
end

Thanks @faraz, for method 3, where did you add the references to delayed_job_observer.rb, is it in spec_helper.rb?

Use rails_helper.rb if you're using the newer organization format, otherwise spec_helper.rb works.

delayed job - Rspec testing delayed_job - Stack Overflow

rspec delayed-job

Let's make a Go 1-compatible list of all the ways to read and write files in Go.

Because the file API has changed recently and most other answers don't work with Go 1. They also miss bufio, which is important IMHO.

In the following examples I copy a file by reading from it and writing to the destination file.

Start with the basics

The examples below use os.Open and os.Create, which are convenient wrappers around os.OpenFile. We usually don't need to call OpenFile directly.

Notice the treatment of EOF. Read tries to fill buf on each call, and returns io.EOF as the error if it reaches the end of the file in doing so. In this case buf will still hold data. Subsequent calls to Read return zero as the number of bytes read and the same io.EOF as the error. Any other error will lead to a panic.

bufio
package main

import (
    "bufio"
    "io"
    "os"
)

func main() {
    // open input file
    fi, err := os.Open("input.txt")
    if err != nil {
        panic(err)
    }
    // close fi on exit and check for its returned error
    defer func() {
        if err := fi.Close(); err != nil {
            panic(err)
        }
    }()
    // make a read buffer
    r := bufio.NewReader(fi)

    // open output file
    fo, err := os.Create("output.txt")
    if err != nil {
        panic(err)
    }
    // close fo on exit and check for its returned error
    defer func() {
        if err := fo.Close(); err != nil {
            panic(err)
        }
    }()
    // make a write buffer
    w := bufio.NewWriter(fo)

    // make a buffer to keep chunks that are read
    buf := make([]byte, 1024)
    for {
        // read a chunk
        n, err := r.Read(buf)
        if err != nil && err != io.EOF {
            panic(err)
        }
        if n == 0 {
            break
        }

        // write a chunk
        if _, err := w.Write(buf[:n]); err != nil {
            panic(err)
        }
    }

    if err = w.Flush(); err != nil {
        panic(err)
    }
}

bufio is just acting as a buffer here, because we don't have much to do with the data. In most other situations (especially with text files) bufio is very useful, giving us a nice API for reading and writing easily and flexibly, while it handles buffering behind the scenes.

ioutil
package main

import (
    "io/ioutil"
)

func main() {
    // read the whole file at once
    b, err := ioutil.ReadFile("input.txt")
    if err != nil {
        panic(err)
    }

    // write the whole body at once
    err = ioutil.WriteFile("output.txt", b, 0644)
    if err != nil {
        panic(err)
    }
}

Easy as pie! But use it only if you're sure you're not dealing with big files.

For anyone who stumbles upon this question, it was originally asked in 2009 before these libraries were introduced, so please, use this answer as your guide!

According to golang.org/pkg/os/#File.Write, when Write hasn't written all bytes, it returns an error. So the extra check in the first example (panic("error in writing")) isn't necessary.

Note that these examples aren't checking the error return from fo.Close(). From the Linux man pages close(2): Not checking the return value of close() is a common but nevertheless serious programming error. It is quite possible that errors on a previous write(2) operation are first reported at the final close(). Not checking the return value when closing the file may lead to silent loss of data. This can especially be observed with NFS and with disk quota.


So, what's a "big" file? 1KB? 1MB? 1GB? Or does "big" depend on the machine's hardware?

How to read/write from/to file using Go? - Stack Overflow

file go

You could make the opposite check with respondsToSelector; it might help, and this is actually the way to go if you are supporting older versions :)

if ([self respondsToSelector:@selector(presentViewController:animated:completion:)]){
    [self presentViewController:anotherViewController animated:YES completion:nil];
} else {
    [self presentModalViewController:anotherViewController animated:YES];
}


iphone - iOS Version Checking gives warning - Stack Overflow

iphone ios objective-c respondstoselector

Int to bool is easy: just x != 0 will do the trick. To go the other way, since Go doesn't support a ternary operator, you'd have to do:

var x int
if b {
    x = 1
} else {
    x = 0
}

You could of course put this in a function:

func Btoi(b bool) int {
    if b {
        return 1
    }
    return 0
 }

There are so many possible boolean interpretations of integers, none of them necessarily natural, that it sort of makes sense to have to say what you mean.

In my experience (YMMV), you don't have to do this often if you're writing good code. It's appealing sometimes to be able to write a mathematical expression based on a boolean, but your maintainers will thank you for avoiding it.

types - Is there a way to convert integers to bools in go or vice vers...

types go

You say you've seen discussions of nesting UIScrollViews but don't want to go there - but that is the way to go! It works easily and well.

It's essentially what Apple does in its PhotoScroller example (and the 2010 WWDC talk linked to in Jonah's answer). Only in those examples, they've added a whole bunch of complex tiling and other memory management. If you don't need the tiling etc. and if you don't want to wade through those examples and try to remove the bits related to it, the underlying principle of nesting UIScrollViews is actually quite simple: an outer paging UIScrollView holds one inner zooming UIScrollView per page, each of which contains a UIImageView.

That's it. Works just like the photo app.

If you have a lot of photos, to save memory you can just have two inner UIScrollViews and two UIImageViews. You then dynamically flip between them, moving their position within the outer UIScrollView and changing their images as the user scrolls the outer UIScrollView. It's a bit more complex but the same principle.

Yeah nested scrollviews now work correctly. When I originally posted this question, support for iOS 3.2 was important, but nested scroll views had all sorts of issues.

Figured you'd probably worked that out by now! But these older questions hang on at the top of Google search results, so thought it was worth adding.

This answer deserves way more up votes as of now: I just solved the whole problem with only a few lines of code using nested scroll views (on iOS 6 in my case).

ios - UIScrollView image/photo viewer with paging enabled and zooming ...

ios iphone uiscrollview

If you want to go through each row(<tr>), knowing/identifying the row(<tr>), and iterate through each column(<td>) of each row(<tr>), then this is the way to go:

var table = document.getElementById("mytab1");
for (var i = 0, row; row = table.rows[i]; i++) {
   //iterate through rows
   //rows would be accessed using the "row" variable assigned in the for loop
   for (var j = 0, col; col = row.cells[j]; j++) {
     //iterate through columns
     //columns would be accessed using the "col" variable assigned in the for loop
   }
}

If you just want to go through the cells(<td>), ignoring which row you're on, then this is the way to go:

var table = document.getElementById("mytab1");
for (var i = 0, cell; cell = table.cells[i]; i++) {
     //iterate through cells
     //cells would be accessed using the "cell" variable assigned in the for loop
}

Does not work with IE9.

@WilliamRemacle I posted this answer almost 4 years ago.. IE 9 was not even a thought at the time!

How do I iterate through table rows and cells in JavaScript? - Stack O...

javascript

Note: This was written and accepted back in the Rails 2 days; nowadays grosser's answer (below) is the way to go.

Option 1: Include the helper module in your controller and call its methods directly:

class MyController < ApplicationController
  include MyHelper

  def xxxx
    @comments = []
    Comment.find_each do |comment|
      @comments << {:id => comment.id, :html => html_format(comment.content)}
    end
  end
end

Option 2: Or you can declare the helper method as a class function, and use it like so:

MyHelper.html_format(comment.content)

If you want to be able to use it as both an instance function and a class function, you can declare both versions in your helper:

module MyHelper
  def self.html_format(str)
    process(str)
  end

  def html_format(str)
    MyHelper.html_format(str)
  end
end

Thanks but I'm a little confused. Right now my helper is in /app/helpers/application_helper.rb ... ANd you're suggesting I should move the helper to ApplicationController?

I added 'include ApplicationHelper' to my application_controller but that errors with 'NoMethodError (undefined method `html_format' for ApplicationHelper:Module):'

@AnApprentice Looks like you've figured it out, but I tweaked my answer a little, hopefully making things clearer. In the first version, you can just use html_format - you only need MyHelper.html_format in the second.

This does not work when the helper method you want to use make use of view methods such as link_to. Controllers don't have access to these methods and most of my helpers use these methods. Also, including the helper into the controller exposes all the helper's methods as publicly accessible actions which is not good. view_context is the way to go in Rails 3.

@GregT - Hadn't seen grosser's answer before, as it came in a bit after the fact, but I like it better too. He just got my upvote.

Rails - How to use a Helper Inside a Controller - Stack Overflow

ruby-on-rails ruby-on-rails-3

You are right that CSS positioning is the way to go. Here's a quick rundown:

position: relative will lay out an element relative to itself. In other words, the element is laid out in normal flow, then it is offset from that position by whatever values you have specified (top, right, bottom, left). It's important to note that because the offset is applied after normal flow, other elements around it will not shift with it (use negative margins instead if you want this behaviour).

However, you're most likely interested in position: absolute which will position an element relative to a container. By default, the container is the browser window, but if a parent element either has position: relative or position: absolute set on it, then it will act as the parent for positioning coordinates for its children.

<div id="container">
   <div id="box"> </div>
</div>
#container {
  position: relative;
}

#box {
  position: absolute;
  top: 100px;
  left: 50px;
}

In that example, the top left corner of #box would be 100px down and 50px to the right of the top left corner of #container. If #container did not have position: relative set, the coordinates of #box would be relative to the top left corner of the browser view port.

Position an HTML element relative to its container using CSS - Stack O...

html css positioning

I think SignalR is the way to go, and it is going to be part of .NET itself anyway (and will likely extend/merge/replace web-sockets support). It uses web sockets when they are supported, and a consistent client-polling hack when they're not, so it's the way to go.

Since this answer is still getting upvoted, it's worth mentioning that SignalR is now officially part of ASP.NET.

As much as I respect your answer here, do you know if WebSockets right now doesn't do the same? And if I was to use SignalR, if there's a potential merge of the technologies - would it be wise to use the SignalR library for now with this in mind?

The notes I have about this are from recent notes from Justin King from Microsoft when he presented SignalR in Sydney ALT.NET a few days ago. I understand that the merge of SignalR in .NET core is going to be by the SignalR guy himself, and hence is likely to be very few changes from current offering.

Unbelievable they did not add SignalR support for websockets to Win7 and Server 2008. Outrageous really...

It's not a SignalR thing, it's an IIS thing, which signalR leverages. Websocket support was added in IIS 8 iis.net/learn/get-started/whats-new-in-iis-8/

Well their bad to the IIS and socket folks then. I actually got a basic SignalR mechanism already implemented on my website in a couple hours, so kudos to the SignalR team. It uses SSE on Chrome and probably long polling on IE10 but at least my initial attempt worked right away. SignalR looks really nice so far!

asp.net - .NET 4.5 WebSockets vs SignalR - Stack Overflow

asp.net websocket signalr

If you're concatenating more than two arrays, concat() is the way to go for convenience and likely performance.

var a = [1, 2], b = ["x", "y"], c = [true, false];
var d = a.concat(b, c);
console.log(d); // [1, 2, "x", "y", true, false];

For concatenating just two arrays, the fact that push accepts multiple arguments (the elements to add to the array) can be used instead, to add the elements of one array to the end of another without producing a new array. Combined with slice() it can also be used instead of concat(), but there appears to be no performance advantage in doing this.

var a = [1, 2], b = ["x", "y"];
a.push.apply(a, b);
console.log(a); // [1, 2, "x", "y"];

In ECMAScript 2015 and later, this can be reduced even further to

a.push(...b)

However, it seems that for large arrays (of the order of 100,000 members or more), the technique of passing an array of elements to push (either using apply() or the ECMAScript 2015 spread operator) can fail. For such arrays, using a loop is a better approach. See https://stackoverflow.com/a/17368101/96100 for details.

I believe your test may have an error: the a.concat(b) test case seems to be needlessly making a copy of the array a then throwing it away.

@ninjagecko: You're right. I updated it: jsperf.com/concatperftest/6. For the particular case of creating a new array that concatenates two existing arrays, it appears concat() is generally faster. For the case of concatenating an array onto an existing array in place, push() is the way to go. I updated my answer.

I can attest that extending a single array with several new arrays using the push.apply method confers a huge performance advantage (~100x) over the simple concat call. I'm dealing with very long lists of short lists of integers, in v8/node.js.

javascript - What is the most efficient way to concatenate N arrays? -...

javascript arrays