
After going through all the different options, your best option seems to be to use the exact same function on both the server and the client. This can be achieved in two ways:

Approach 1: Use the SQL function on the client

In this case you would trigger an AJAX request from the client to the server, which in turn queries the database for the specific calculation you want and returns the result to the client.

Approach 2: Use the JavaScript function on the server

This might sound quite impossible, but using xp_cmdshell it's possible to execute command-line commands from SQL, and you can run JavaScript from the terminal using something like Node.js, so all you're left with is implementing the Vincenty function so it can be called from the command line.
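To make that concrete, here is a sketch of the kind of shared function meant above: the standard Vincenty inverse formula on the WGS84 ellipsoid, in plain JavaScript so the same code can run under Node.js and in the browser. The function name and convergence tolerance are just for illustration.

```javascript
// Vincenty inverse formula on the WGS84 ellipsoid, returning meters.
// May fail to converge for nearly antipodal points; the iteration cap
// then returns the best estimate so far.
function vincentyDistance(lat1, lon1, lat2, lon2) {
  const a = 6378137, f = 1 / 298.257223563, b = a * (1 - f); // WGS84
  const toRad = d => d * Math.PI / 180;
  const L = toRad(lon2 - lon1);
  const U1 = Math.atan((1 - f) * Math.tan(toRad(lat1)));
  const U2 = Math.atan((1 - f) * Math.tan(toRad(lat2)));
  const sinU1 = Math.sin(U1), cosU1 = Math.cos(U1);
  const sinU2 = Math.sin(U2), cosU2 = Math.cos(U2);

  let lambda = L, lambdaPrev, iterations = 100;
  let sinSigma, cosSigma, sigma, cosSqAlpha, cos2SigmaM;
  do {
    const sinLambda = Math.sin(lambda), cosLambda = Math.cos(lambda);
    sinSigma = Math.sqrt(
      Math.pow(cosU2 * sinLambda, 2) +
      Math.pow(cosU1 * sinU2 - sinU1 * cosU2 * cosLambda, 2));
    if (sinSigma === 0) return 0; // coincident points
    cosSigma = sinU1 * sinU2 + cosU1 * cosU2 * cosLambda;
    sigma = Math.atan2(sinSigma, cosSigma);
    const sinAlpha = cosU1 * cosU2 * sinLambda / sinSigma;
    cosSqAlpha = 1 - sinAlpha * sinAlpha;
    cos2SigmaM = cosSqAlpha !== 0 ? cosSigma - 2 * sinU1 * sinU2 / cosSqAlpha : 0;
    const C = f / 16 * cosSqAlpha * (4 + f * (4 - 3 * cosSqAlpha));
    lambdaPrev = lambda;
    lambda = L + (1 - C) * f * sinAlpha * (sigma + C * sinSigma *
      (cos2SigmaM + C * cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM)));
  } while (Math.abs(lambda - lambdaPrev) > 1e-12 && --iterations > 0);

  const uSq = cosSqAlpha * (a * a - b * b) / (b * b);
  const A = 1 + uSq / 16384 * (4096 + uSq * (-768 + uSq * (320 - 175 * uSq)));
  const B = uSq / 1024 * (256 + uSq * (-128 + uSq * (74 - 47 * uSq)));
  const deltaSigma = B * sinSigma * (cos2SigmaM + B / 4 *
    (cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM) -
     B / 6 * cos2SigmaM * (-3 + 4 * sinSigma * sinSigma) *
     (-3 + 4 * cos2SigmaM * cos2SigmaM)));
  return b * A * (sigma - deltaSigma);
}
```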

The big question here is what the performance will be like. Starting and stopping a Node instance every few seconds seems like a relatively bad idea, so it would be far more optimal to write a service in Node to do this work; however, I'm not sure what the best way would be for SQL to interact with such a service. The simplest approach would probably be to have it make an HTTP request to something like localhost:8888/?lat1=&lng1=&etc., but that's starting to be nearly as complex as approach 1.

Approach 1 still seems to be the most reasonable one, although approach 2 gives you a lot more flexibility to do exactly what you want. For a private or perfectionist project I would go with approach 2; for a 'we need to finish this and we don't have time for surprises or optimization' kind of project I would advise approach 1.

If accuracy isn't an issue, then Google Maps' built-in function and MS SQL SRID 104001 will match closely enough to work as a solution. In my specific application accuracy does matter, but it seems approach 2 may be what is needed.

javascript - Google Maps and SQL Server LINESTRING length inconsistent...

javascript sql-server google-maps gis sqlgeography

There are several methods you can use for sharing content between two ...

Apart from the options below (and the other options, such as Web storage - discussed here, or JSON options), there is no way to send data from one Activity to another. You should either reconsider how you are doing what you are trying to do, or consider using a different Driver.

If the code is open source or open licensed, you may consider hacking in Serializable or Parcelable by extracting the source and modifying it to fit your needs. More on decompiling Android source is available here.

1. SharedPreferences, SQLite, Serialization, or Content Providers. These will all require you to break down your Driver Object into simple types. More on storage can be found in the docs.

2. Parcelables can be shared via Intent between Activities.


2. You can set it to a static variable. For example, have a Store class where you save static variables:

public class Store {
    /** provides static reference to the driver */
    public static Object driver;
}

Then, wherever the driver is created, assign it:

Store.driver = myDriver;

and to get it from anywhere, just do:

Object driver = Store.driver;

3. Create a custom Application class and set this in your Android Manifest. This application can store the driver, and doesn't necessarily have to be static. More on this can be found at Extending Application to share variables globally.

4. Create a singleton accessor to your Activity. In the activity that has the driver referenced, add the following class variable and method:

private static MyActivity self; // replace MyActivity with the name of your class
public static MyActivity sharedMyActivity() {
    return self;
}

Finally, add this line in onCreate (after the call to super.onCreate(...)):

self = this;

Then, from anywhere in the app, you can reach the activity and its driver:

Object driver = MyActivity.sharedMyActivity().getDriver();

As for part two of your question - if you are attempting to read from and write to a hardware device in an Activity that does not provide USB permissions, this will not work.

Thanks for the suggestions, but don't those all require the activities to all be within the same application? In my case, that's not an option.

@ryan0270, yes, my mistake. I have updated my post to cover both cases and discuss your options, as well as part 2 of your question at the bottom.

Android: pass object without serialization - Stack Overflow

android android-intent

I was able to create a new Plone 4.1.4 site with a new Dexterity content-type using this buildout. This should not be an official answer but pasting the configuration to a volatile service like pastebin is not an option for permanent documentation.

# buildout.cfg file for Plone 4 development work
# - for production installations please use http://plone.org/download
# Each part has more information about its recipe on PyPi
# http://pypi.python.org/pypi 
# ... just reach by the recipe name
[buildout]
parts =  
    instance
    zopepy
    i18ndude
    zopeskel
    test
#    omelette

extends = 
    http://dist.plone.org/release/4.1-latest/versions.cfg
    http://good-py.appspot.com/release/dexterity/1.2.1?plone=4.1.4

# Add additional egg download sources here. dist.plone.org contains archives
# of Plone packages.
find-links =
    http://dist.plone.org/release/4.1-latest
    http://dist.plone.org/thirdparty

extensions = 
    mr.developer
    buildout.dumppickedversions

sources = sources

versions = versions

auto-checkout = 
    nva.borrow

# Create bin/instance command to manage Zope start up and shutdown
[instance]
recipe = plone.recipe.zope2instance
user = admin:admin
http-address = 16080
debug-mode = off
verbose-security = on
blob-storage = var/blobstorage
zope-conf-additional = %import sauna.reload

eggs =
    Pillow
    Plone
    nva.borrow
    sauna.reload
    plone.app.dexterity

# Some pre-Plone 3.3 packages may need you to register the package name here in 
# order their configure.zcml to be run (http://plone.org/products/plone/roadmap/247)
# - this is never required for packages in the Products namespace (Products.*)
zcml =
#    nva.borrow
    sauna.reload


# zopepy commands allows you to execute Python scripts using a PYTHONPATH 
# including all the configured eggs
[zopepy]
recipe = zc.recipe.egg
eggs = ${instance:eggs}
interpreter = zopepy
scripts = zopepy

# create bin/i18ndude command
[i18ndude]
unzip = true
recipe = zc.recipe.egg
eggs = i18ndude

# create bin/test command
[test]
recipe = zc.recipe.testrunner
defaults = ['--auto-color', '--auto-progress']
eggs =
    ${instance:eggs}

# create ZopeSkel and paster commands with dexterity support
[zopeskel]
recipe = zc.recipe.egg
eggs =
    ZopeSkel<=2.99
    PasteScript
    zopeskel.dexterity<=2.99
    ${instance:eggs}

# symlinks all Python source code to parts/omelette folder when buildout is run
# windows users will need to install additional software for this part to build 
# correctly.  See http://pypi.python.org/pypi/collective.recipe.omelette for
# relevant details.
# [omelette]
# recipe = collective.recipe.omelette
# eggs = ${instance:eggs}

# Put your mr.developer managed source code repositories here, see
# http://pypi.python.org/pypi/mr.developer for details on the format of
# this part
[sources]
nva.borrow = svn https://novareto.googlecode.com/svn/nva.borrow/trunk

# Version pindowns for new style products go here - this section extends one 
# provided in http://dist.plone.org/release/
[versions]

that's possibly a culprit from a former ZopeSkel call

I created a new virtualenv, copied your buildout.cfg, ran bootstrap.py and bin/buildout. I was able to start bin/instance fg (with some warnings). Then I tried to run "../bin/zopeskel plone domma.contenta" in my src folder, which results in "ImportError: No module named util.template".

I would like to contact you "outside" of SO. If it's ok for you, please let me know.

python - Implementing a simple dexterity content type for plone 4 - St...

python plone dexterity zopeskel

You have to use the "Views Node Field" module (http://drupal.org/project/viewsnodefield). After installing this module, select "Node content" in the display (like blocks or pages), then click "add display". If you want to display the content like this page http://www.richtown.ae/?q=content/arabian-ranches then you have to download the views_galleriffic module, install it, and choose the style option "Galleriffic Gallery". You can choose the content type by using the filter in the views.

I implemented this on my website richtown.ae. If you are still unclear, please send me an email at social@richtown.ae and I will reply; I am ready to help and we can share information. I am using this module in Drupal 6.

Drupal 7: Add view to content type - Stack Overflow

drupal drupal-7 drupal-views drupal-theming drupal-panels

The third option of using the Accept/Content-Type headers allows the media type to describe the representation of the data, separately from the data itself.

This uses the HTTP headers to allow clients to choose the format of the data, as well as the version. So in your case, the request could look something like:

curl http://api.host.com/transactions -H "Accept: application/summary+json"

And the response would contain a body of your simplified data format and the Content-Type header set to application/summary+json
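For illustration, the full exchange for that request might look like this (the body shape and field names are hypothetical):

```http
GET /transactions HTTP/1.1
Host: api.host.com
Accept: application/summary+json

HTTP/1.1 200 OK
Content-Type: application/summary+json

{ "transactions": [ { "id": 1, "amount": "9.99" } ] }
```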

If you want to be more pedantic about it, you could also use a vendor media type such as application/vnd.yourcompany.summary+json. Here, vnd indicates a vendor-specific media type, i.e. one typically associated with a particular application.

REST API: Using content type vs custom param or endpoint - Stack Overf...

api rest resources

Git would NOT produce a common SHA1 for two text files with identical text but different end-of-line (EOL) characters in their binary representation. The content is stored as a blob, which is reused if another identical copy is deposited into the repository (space saving!).

The default choice taken by the Git designers is to use the *nix style EOL character (LF only) whenever possible, so that the same text content yields the same SHA1. (Probably an important consideration ;-)

Because the content/blob no longer remembers the user's original EOL choice (remember, it's possibly now in some distant repository), Git has to make some guesses (option based) about how to recreate the original user's file (was it CRLF or simply LF) in a manner that you (and your tools) can use.

The normal recommendation is that each user locally (a) converts to *nix LF endings when committing into a blob (so all will see common SHA1 blob names), a/k/a the Right Thing, and (b) sets the re-creation option to their local system setting, e.g. *nix (LF) or Windows (CRLF).

Set some local standards for your users, have the one big 'EOL/LF/CRLF & whitespace correction' commit, and you'll be fine (plus training/re-training of new users).

You can also make sure you (each user) use a common whitespace adjustment setting, so that tabs vs. spaces and trailing whitespace don't cause more diff inconveniences!
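A common way to implement that recommendation is a .gitattributes file committed at the repository root; a sketch (adjust the patterns to your project):

```gitattributes
# Normalize all text files to LF in the repository;
# Git chooses working-tree endings per platform.
* text=auto

# Force endings for files that require a specific style
*.sh  text eol=lf
*.bat text eol=crlf

# Never touch binaries
*.png binary
```

Each user can additionally set `git config --global core.autocrlf true` on Windows (or `input` on *nix) to control how files are recreated in their working tree.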

A good reason to follow LF storage internally (which may result in setting autocrlf to true). Moreover: it is my experience, that applying patches may fail when files are stored with CRLF in blobs, even when the patch files also have CRLF in the diff content parts.

line endings - Definitive recommendation for git autocrlf settings - S...

git line-endings

There is one alternative to <iframe>, and that's the <object> tag. It can display content from different sources as well. The pro is that it conforms to the XHTML standards and its use is encouraged, but support in older browsers is not as broad or usable (you have to mess with it to get it right in IE). It's used as follows:

<object data="page.html" width="400" height="300" type="text/html">
    Alternative Content
</object>

I myself never saw an <iframe> being the cause of a slowdown, but that still might be possible. If that is an option, you should definitely try what ocanal said before in the comments: let your script work on a wrapper container div instead of the body element, and so embed it directly on the main page.

For the browser it shouldn't be much more than some overhead from handling a second document, but that little extra may be what makes your PC run slow. So it might be a good idea to optimize the code in general:

Look if you can find the bottleneck causing the slowdown. Possible causes could be

  • altering the DOM a lot - which is always slow
  • acting a lot on things not even visible on screen
  • getting attributes from objects. Every additional period you use means more work for your CPU:

    // instead of using this over and over again
    house.roof.window.handle.open();
    house.roof.window.handle.close();

    // save it to a var and use that instead
    var handle = house.roof.window.handle;
    handle.open();
    handle.close();
  • Updating the game in short equal intervals by window.setTimeout() may also be too fast and waste cpu power unnecessarily (or too slow and won't look fine then, but never really right) - so you can use the new window.requestAnimationFrame. The vendor-prefixed variants are implemented in the current versions of all important browsers and it's easy to provide a fallback to the old method.
  • As a last thought: Maybe it even helps in some meta-magical way to include the script file itself on the mainpage and not in the embeded document
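The requestAnimationFrame fallback mentioned above can be as small as this sketch (vendor prefixes omitted for brevity; the wrapper and loop names are illustrative):

```javascript
// Use requestAnimationFrame when available, otherwise fall back to a
// ~60 fps setTimeout. Wrapping the native call keeps `this` bound to window.
var requestFrame;
if (typeof window !== 'undefined' && window.requestAnimationFrame) {
  requestFrame = function (cb) { return window.requestAnimationFrame(cb); };
} else {
  requestFrame = function (cb) {
    return setTimeout(function () { cb(Date.now()); }, 1000 / 60);
  };
}

// Typical game loop driven by it:
function tick() {
  // update(); draw();
  requestFrame(tick);
}
// requestFrame(tick); // start the loop
```

Unlike a fixed setTimeout loop, the native path lets the browser skip frames for hidden tabs and align updates with repaints.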

can i use this in place of iframe in sending email via AWS?

How would one load external content using just a div?

javascript - Alternative to iframe - Stack Overflow

javascript html5 iframe

1. There is a band called "Last Page Footer" in the report which you can use to print content only on the last page. To add the "Last Page Footer" band, go to the Report Inspector, right-click on the band, and add it.

2. You can also achieve this with the Report Groups option: go to the Report Inspector, right-click on the report, click "Add Report Group", and add both a Header and a Footer. The header can be used as a summary band for graphs, and the footer as a last page footer.
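In the JRXML itself, the first option corresponds to a <lastPageFooter> band; a minimal sketch (the band height and text are placeholders):

```xml
<lastPageFooter>
    <band height="50">
        <staticText>
            <reportElement x="0" y="0" width="555" height="50"/>
            <text><![CDATA[Shown on the last page only]]></text>
        </staticText>
    </band>
</lastPageFooter>
```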

Jasper reports content on last page, other pages show empty space - St...

jasper-reports

The best option is 0 and 1 (as numbers - another answer suggests 0 and 1 as CHAR for space-efficiency but that's a bit too twisted for me), using NOT NULL and a check constraint to limit contents to those values. (If you need the column to be nullable, then it's not a boolean you're dealing with but an enumeration with three values...)

  • Language independent. 'Y' and 'N' would be fine if everyone used it. But they don't. In France they use 'O' and 'N' (I have seen this with my own eyes). I haven't programmed in Finland to see whether they use 'E' and 'K' there - no doubt they're smarter than that, but you can't be sure.
  • Congruent with practice in widely-used programming languages (C, C++, Perl, Javascript)
  • Plays better with the application layer e.g. Hibernate
For example, with numbers you can simply write

select sum(is_ripe) from bananas

instead of

select count(*) from bananas where is_ripe = 'Y'

or

select sum(case is_ripe when 'Y' then 1 else 0 end) from bananas

Another poster suggested 'Y'/null for performance gains. If you've proven that you need the performance, then fair enough, but otherwise avoid since it makes querying less natural (some_column is null instead of some_column = 0) and in a left join you'll conflate falseness with nonexistent records.
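The recommendation from the first paragraph can be written as Oracle DDL like this (a sketch; the table and column names follow the examples above):

```sql
create table bananas (
  name    varchar2(50) not null,
  is_ripe number(1)    default 0 not null,
  constraint ck_bananas_is_ripe check (is_ripe in (0, 1))
);
```

The NOT NULL plus check constraint together guarantee the column only ever holds 0 or 1.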

You find these days that a lot of Booleans are tri-state, i.e. true, false and unknown, which fits perfectly with the database NULL idea, simply because a lot of the time knowing that no answer has been given is vitally important.

Yes, true-false-unknown can be required, though if I were picky (which I am), I'd say it shouldn't really be described as a Boolean, because it isn't.

If you're going to be that picky, then you can make the same argument for every data type: under a strict definition, integer, double (I guess I should say double-length two's complement floating point), binary, string, etc. all assume a value is provided, but database implementations always add a null value option. Boolean isn't any different.

True. On a plus note for your method, if you configure your number correctly it can also be stored in the same single byte as a CHAR field, which nullifies the size argument against using 0/1. I can't find the link currently, but storage for a NUMBER ranges from 1 to 22 bytes depending on configuration.

I suspect the downvotes are due to a legacy viewpoint on choosing the most memory efficient implementation. This day and age memory efficiency is far less of a priority and should be taken into account after usability and compatibility. To anyone who may respond to this comment, I recommend reading up on premature optimization. That is exactly what is occurring by choosing 'Y/N' purely based on memory efficiency. You're losing native compatibility with a set of commonly used frameworks because of that decision.

Boolean Field in Oracle - Stack Overflow

oracle boolean sqldatatypes

Your question is pretty broad. For instance, what do you mean by the "the best way to access the database content using EF"? The best way in terms of performance?

I will try to answer by giving an option I prefer (of which I mostly use some variant), which uses the repository pattern. If you'd use your EF sets directly as a repository you might argue you wouldn't need the repository pattern, but I like to wrap those in one of my own.

Since I can't know what you mean by the best way, I'll give my personal preference which would suite a typical web project.

I won't be posting all the code to make it completely functional, but you should get a clear idea of what's going on.

UI ----------> Domain.Logic (w. Domain.Models) -----------------> Data (Holding the EF Context).

public partial class EFContextContainer : DbContext
{
    public EFContextContainer()
        : base("name=EFContextContainer")
    {
    }

    public DbSet<IdentityUser> IdentityUsers { get; set; }
}

With a wrapper returning the context:

public static class Database
{
    public static EFContextContainer GetContext()
    {
        return new EFContextContainer();
    }

}

You could have a repository setup like this:

public interface IRepository<T> where T : class
{
    IQueryable<T> GetAll();
    T GetById(Guid id);
    void Add(T entity);
    void Update(T entity);
    void Delete(T entity);
    void Delete(Guid id);
}

Implementation (only implemented the Add(T entity) for the sake of brevity):

public class EFRepository<T> : IRepository<T>, IDisposable where T : class
{
    public EFRepository(DbContext dbContext)
    {
        if (dbContext == null)
            throw new ArgumentNullException("dbContext");
        DbContext = dbContext;
        DbSet = DbContext.Set<T>();

    }

    protected DbContext DbContext { get; set; }

    protected DbSet<T> DbSet { get; set; }

    public virtual void Add(T entity)
    {
        DbEntityEntry dbEntityEntry = DbContext.Entry(entity);
        if (dbEntityEntry.State != EntityState.Detached)
        {
            dbEntityEntry.State = EntityState.Added;
        }
        else
        {
            DbSet.Add(entity);
        }
    }

public void Dispose()
    {
        DbContext.Dispose();
    }

}

Domain.Logic (IdentityUserManager would be a class in Domain.Models):

public class IdentityUserManager
{
    public void Add(IdentityUser idUser)
    {
        using (var idUserRepository = new EFRepository<IdentityUser>(Database.GetContext()))
        {
            idUserRepository.Add(idUser);
        }
    }
}
And in the UI layer, a controller action could use the manager like this:

[HttpPost]
public ActionResult Post(UserViewModel model)
{
    IdentityUser user = MapUser(model);
    var userManager = new IdentityUserManager();
    userManager.Add(user);

    return View(new UserViewModel());
}

Admittedly, there could be a lot more abstraction in this code, but it would be ridiculous to pen down an entire solution here. For instance, you could use the Unit of Work pattern as well, which works great with the repository pattern. So read this as an example, not a full guide on how to implement this setup; things could be set up a lot cleaner than in this example.

For an in-depth view of the implementation of some of these patterns, I urge you to take a look into the course Single Page Apps by John Papa on Plural Sight. He does an excellent job explaining the benefits of these patterns and how to implement them.

Thanks for this @bump! What I meant was: is it worth using a repository pattern or not? If so, how would you go about implementing this in a separate project? I am a bit confused about how each 'layer' glues together, and I haven't found any tutorial (targeted at ASP.NET MVC) that explains this with a code example. I got the following: WebApp.Model (this holds all POCO classes), WebApp.ViewModel (this holds viewmodels, with data attributes), WebApp.Controllers, WebApp.Views. Where does the logic go? i.e. GetAllUsers(). I'd love a tutorial on ASP.NET MVC layering/separation of concerns.

@teh0wner My example uses different projects (the repository is in the Data layer). In my opinion it's definitely worth using a repository. The mentioned course by John Papa is an excellent tutorial for separation of concerns with MVC (you can get a free trial with PluralSight). So check that out.

Had a look at the tutorial and must admit it's been brilliant. One question though.. Where would IdentityUser go? As it needs to reference EntityFramework it wouldn't really be a POCO class.

@teh0wner Why not? You can use your POCOs as Code First Entity models. You can adorn them with attributes and everything.

c# - ASP.NET MVC, EntityFramework, DBContext, Repository in a differen...

c# asp.net asp.net-mvc entity-framework data-access-layer

c:\temp>tidy -help
tidy [option...] [file...] [option...] [file...]
Utility to clean up and pretty print HTML/XHTML/XML
see http://tidy.sourceforge.net/

Options for HTML Tidy for Windows released on 14 February 2006:

File manipulation
-----------------
 -output <file>, -o  write output to the specified <file>
 <file>
 -config <file>      set configuration options from the specified <file>
 -file <file>, -f    write errors to the specified <file>
 <file>
 -modify, -m         modify the original input files

Processing directives
---------------------
 -indent, -i         indent element content
 -wrap <column>, -w  wrap text at the specified <column>. 0 is assumed if
 <column>            <column> is missing. When this option is omitted, the
                     default of the configuration option "wrap" applies.
 -upper, -u          force tags to upper case
 -clean, -c          replace FONT, NOBR and CENTER tags by CSS
 -bare, -b           strip out smart quotes and em dashes, etc.
 -numeric, -n        output numeric rather than named entities
 -errors, -e         only show errors
 -quiet, -q          suppress nonessential output
 -omit               omit optional end tags
 -xml                specify the input is well formed XML
 -asxml, -asxhtml    convert HTML to well formed XHTML
 -ashtml             force XHTML to well formed HTML
 -access <level>     do additional accessibility checks (<level> = 0, 1, 2, 3).
                     0 is assumed if <level> is missing.

Character encodings
-------------------
 -raw                output values above 127 without conversion to entities
 -ascii              use ISO-8859-1 for input, US-ASCII for output
 -latin0             use ISO-8859-15 for input, US-ASCII for output
 -latin1             use ISO-8859-1 for both input and output
 -iso2022            use ISO-2022 for both input and output
 -utf8               use UTF-8 for both input and output
 -mac                use MacRoman for input, US-ASCII for output
 -win1252            use Windows-1252 for input, US-ASCII for output
 -ibm858             use IBM-858 (CP850+Euro) for input, US-ASCII for output
 -utf16le            use UTF-16LE for both input and output
 -utf16be            use UTF-16BE for both input and output
 -utf16              use UTF-16 for both input and output
 -big5               use Big5 for both input and output
 -shiftjis           use Shift_JIS for both input and output
 -language <lang>    set the two-letter language code <lang> (for future use)

Miscellaneous
-------------
 -version, -v        show the version of Tidy
 -help, -h, -?       list the command line options
 -xml-help           list the command line options in XML format
 -help-config        list all configuration options
 -xml-config         list all configuration options in XML format
 -show-config        list the current configuration settings

Use --blah blarg for any configuration option "blah" with argument "blarg"

Input/Output default to stdin/stdout respectively
Single letter options apart from -f may be combined
as in:  tidy -f errs.txt -imu foo.html
For further info on HTML see http://www.w3.org/MarkUp

c# - How to convert HTML to XHTML? - Stack Overflow

c# .net html xhtml converter

There are different options to strip (see the manpage), and I think you'll want to use the -r option. You can set the type of stripping to perform from within the Xcode project settings. See if you can relate the options in Xcode to the options in the manpage.

Note: you can explicitly set command line parameters for strip via the "Additional Strip Flags" option in Build Settings. Unfortunately, strip with "-r" still fails.

ios - Xcode Archive debug strip errors - Stack Overflow

ios xcode strip

HTML select elements have a selectedIndex property that can be written to in order to select a particular option:

$('select').prop('selectedIndex', 3); // select 4th option

Using plain JavaScript this can be achieved by:

// use first select element
var el = document.getElementsByTagName('select')[0]; 
// assuming el is not null, select 4th option
el.selectedIndex = 3;

This should be the accepted answer... It looks like the sanest and cleanest solution...

The problem is that it doesn't fire an 'onchange' event.

@GuyKorland This happens with none of the other answers either, demonstrated here; if you want an event to fire, you have to do it manually.

@Jack Is there a short way? The only way I found: var fireEvent = function(selectElement){ if(selectElement){ if ("fireEvent" in selectElement){ selectElement.fireEvent("onchange"); } else { var evt = document.createEvent("HTMLEvents"); evt.initEvent("change", false, true); selectElement.dispatchEvent(evt); } } }

@GuyKorland jQuery exposes .trigger() for that, but since you're already doing this with code it would be easy to also call the change handlers yourself.
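Setting selectedIndex picks an option by position, but often you know the option's value rather than its index. A minimal sketch of computing the index from a value first (the option values below are made up for illustration; the DOM usage is shown in comments because it only runs in a browser):

```javascript
// Find the index of the option whose value matches `target`,
// or -1 if no option matches.
function indexOfOption(values, target) {
  for (var i = 0; i < values.length; i++) {
    if (values[i] === target) {
      return i;
    }
  }
  return -1;
}

// In the browser you would collect the values from the real element, e.g.:
//   var el = document.getElementsByTagName('select')[0];
//   var values = Array.prototype.map.call(el.options, function (o) { return o.value; });
//   el.selectedIndex = indexOfOption(values, 'banana');
//   // As noted in the comments above, assigning selectedIndex does not
//   // fire 'change' -- dispatch it yourself if handlers must run:
//   el.dispatchEvent(new Event('change'));

var values = ['apple', 'banana', 'cherry']; // hypothetical option values
console.log(indexOfOption(values, 'banana')); // 1
```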

javascript - use jquery to select a dropdown option - Stack Overflow

javascript jquery html select drop-down-menu

My guess is that you are using a separate war directory on your external server, have copied all of your static content over (including the *.gwt.rpc files) to that war directory, and then changed something about the serializable models that you are passing across your RPC calls. Whenever these models change, the generated .gwt.rpc files will change too. Your server would then be using one version of the serialization policies while your client Java debugging session uses a different one.

I can think of two options:

Option #2 is the one I go with when dealing with large complex systems that require an external server.

This was my thought as well, but I am using the same war directory. However, the classes that the system is complaining about are in a separate Maven module. I am wondering if there is something that IntelliJ does when running in complete dev mode that it doesn't do when the noserver flag is set?

xsee, is there anything special I have to do in order to make this happen?

Hmmm, I don't use IntelliJ so it's hard for me to speculate, but I doubt it is doing anything too different from what Eclipse does in this scenario. I wonder if your external Jetty server is caching the serialization policies? I haven't used an external Jetty in a while, so I don't have any configuration files handy to reference.

Ok, thanks. I think it may be related to the compiler setup. I am investigating ;)

gwt rpc - Intermitant serialization exception with GWT Dev mode and ex...

gwt gwt-rpc gwt-platform gwtp

If you don't want to use Views, you'll need to write a custom module to fetch the related content. There are many ways to do that; I'll give one option step by step:

  • Add a taxonomy reference field in C1.
  • Add a taxonomy reference field to C2 using the same taxonomy vocabulary.
  • Now on your node page you will get the tags associated with that C1 node.
  • Query the C2 taxonomy field table with the tids associated with C1.
  • Get the entity_id values from that table to obtain the node ids related to your current node.
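The last two steps amount to a join: collect the term ids (tids) attached to the current C1 node, then find every C2 node that shares at least one of them. A minimal sketch of that lookup in JavaScript, with made-up in-memory rows standing in for the C2 taxonomy field table (in a real Drupal module this would be a database query against the field's data table):

```javascript
// Hypothetical rows from the C2 taxonomy field table:
// each row maps a node (entity_id) to one of its term ids (tid).
var c2TermRows = [
  { entity_id: 10, tid: 1 },
  { entity_id: 11, tid: 2 },
  { entity_id: 12, tid: 1 },
  { entity_id: 12, tid: 3 }
];

// Given the tids attached to the current C1 node, return the ids of
// every C2 node sharing at least one of those terms (no duplicates).
function relatedNodeIds(c1Tids, rows) {
  var seen = {};
  var result = [];
  rows.forEach(function (row) {
    if (c1Tids.indexOf(row.tid) !== -1 && !seen[row.entity_id]) {
      seen[row.entity_id] = true;
      result.push(row.entity_id);
    }
  });
  return result;
}

console.log(relatedNodeIds([1, 3], c2TermRows)); // [ 10, 12 ]
```

The table and field names here are illustrative; the actual table name depends on the machine name of your taxonomy reference field.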

php - Displaying related content on a page - Stack Overflow

php drupal-7 content-type related-content