
This is a paste from one of my methods (Restlet 2.0). Here I have a form that includes one file upload plus other fields, so it covers a fairly complete case:

@Post
public Representation createTransaction(Representation entity) {
    Representation rep = null;
    if (entity != null) {
        if (MediaType.MULTIPART_FORM_DATA.equals(entity.getMediaType(), true)) {
            // 1/ Create a factory for disk-based file items
            DiskFileItemFactory factory = new DiskFileItemFactory();
            factory.setSizeThreshold(1000240);

            // 2/ Create a new file upload handler
            RestletFileUpload upload = new RestletFileUpload(factory);
            List<FileItem> items;
            try {
                // 3/ Request is parsed by the handler which generates a list of FileItems
                items = upload.parseRequest(getRequest());

                Map<String, String> props = new HashMap<String, String>();
                File file = null;
                String filename = null;

                for (FileItem fi : items) {
                    String name = fi.getName();
                    if (name == null) {
                        // regular form field: keep its value
                        props.put(fi.getFieldName(), new String(fi.get(), "UTF-8"));
                    } else {
                        // uploaded file: write it out to the temp directory
                        String tempDir = System.getProperty("java.io.tmpdir");
                        file = new File(tempDir + File.separator + "file.txt");
                        filename = name;
                        fi.write(file);
                    }
                }

                // [...] my processing code

                String redirectUrl = ...; // address of newly created resource
                getResponse().redirectSeeOther(redirectUrl);
            } catch (Exception e) {
                // The message of any thrown exception is sent back to
                // the client as plain text
                getResponse().setStatus(Status.CLIENT_ERROR_BAD_REQUEST);
                e.printStackTrace();
                rep = new StringRepresentation(e.getMessage(), MediaType.TEXT_PLAIN);
            }
        } else {
            // other format != multipart form data
            getResponse().setStatus(Status.CLIENT_ERROR_BAD_REQUEST);
            rep = new StringRepresentation("Multipart/form-data required", MediaType.TEXT_PLAIN);
        }
    } else {
        // POST request with no entity.
        getResponse().setStatus(Status.CLIENT_ERROR_BAD_REQUEST);
        rep = new StringRepresentation("Error", MediaType.TEXT_PLAIN);
    }

    return rep;
}

I'll end up refactoring it into something more generic, but this is what I have for now.

But don't you need an annotation for the POST request, like @Post("form") or something? I get a 405 back for doing something very similar.

I do have an @Post annotation (but with no parameters). The parameter limits the representations you accept (restlet.org/documentation/2.0/jse/api/org/restlet/resource/). HTTP 405 is 'Method Not Allowed' (w3.org/Protocols/rfc2616/rfc2616-sec10.html) - Restlet may not recognise your resource as 'postable' if no @Post annotation is present.

I had it as just @Post, and it continued to return a 405. Then I added getMetadataService().addExtension("multipart", MediaType.MULTIPART_FORM_DATA, true); to the resource's constructor, changed my annotation from @Post to @Post("multipart"), and it then worked.
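
Putting the two fixes together, a resource along these lines should accept the upload (a sketch against the Restlet 2.0 API; UploadResource is an illustrative name and the actual parsing is elided):

```java
// Sketch only: assumes Restlet 2.0 with org.restlet.ext.fileupload available.
import org.restlet.data.MediaType;
import org.restlet.representation.Representation;
import org.restlet.resource.Post;
import org.restlet.resource.ServerResource;

public class UploadResource extends ServerResource {

    public UploadResource() {
        // Teach the metadata service the "multipart" extension so that
        // @Post("multipart") below resolves to multipart/form-data.
        getMetadataService().addExtension(
                "multipart", MediaType.MULTIPART_FORM_DATA, true);
    }

    // Without an @Post annotation (or with a variant that never matches),
    // Restlet answers 405 Method Not Allowed.
    @Post("multipart")
    public Representation createTransaction(Representation entity) {
        // ... parse with RestletFileUpload as in the answer above ...
        return null;
    }
}
```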

java - RESTlet: How to process multipart/form-data requests? - Stack O...

java restlet multipartform-data

Then, in your loop over the items, you can test whether or not each one is a file item with the method isFormField().

Testing whether a FileItem is a form field... makes sense? ;) But it works.
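
With that check, the loop from the answer above no longer needs the getName() null test (a sketch using the same Commons FileUpload types and the same items/props variables):

```java
// Sketch: assumes the same FileItem list and props map as in the answer above.
for (FileItem fi : items) {
    if (fi.isFormField()) {
        // plain form field: a name/value pair
        props.put(fi.getFieldName(), fi.getString("UTF-8"));
    } else {
        // an actual uploaded file
        File target = new File(System.getProperty("java.io.tmpdir"), fi.getName());
        fi.write(target);
    }
}
```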

java - RESTlet: How to process multipart/form-data requests? - Stack O...

java restlet multipartform-data

What the stack trace tells you is that Hibernate is in the process of initialising itself and, in particular, is executing Configuration.generateSchemaCreationScript, which goes through all your mapped tables and generates DDL for them. As part of this, it queries the existing columns and converts them into an internal Hibernate representation. It does this by calling ResultSetMetaData::getColumnType and then calling TypeNames::get with the resulting type code. The problem is that getColumnType is returning a type code of 1111 (which means 'other'), and Hibernate doesn't know what to do with that.

Basically, somewhere in one of your tables is a column of a type Hibernate can't handle. If you can work out which column that is, you can start thinking about what to do about it.
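
If you want to see where the mysterious 1111 comes from: it is simply java.sql.Types.OTHER, which you can verify with the plain JDK (TypeCodeDemo is just an illustrative name):

```java
import java.sql.Types;

public class TypeCodeDemo {

    // Hibernate gets this code from ResultSetMetaData.getColumnType();
    // 1111 is the catch-all OTHER, which no dialect maps by default.
    public static String describe(int jdbcTypeCode) {
        switch (jdbcTypeCode) {
            case Types.VARCHAR: return "VARCHAR";
            case Types.INTEGER: return "INTEGER";
            case Types.OTHER:   return "OTHER (no default Hibernate mapping)";
            default:            return "code " + jdbcTypeCode;
        }
    }

    public static void main(String[] args) {
        System.out.println(Types.OTHER);            // prints 1111
        System.out.println(describe(Types.OTHER));
    }
}
```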

java - org.hibernate.MappingException: No Dialect mapping for JDBC typ...

java hibernate postgresql

Reflection requires some metadata about types to be stored somewhere it can be queried. Since C++ compiles to native machine code and undergoes heavy changes due to optimisation, the high-level view of the application is largely lost in the process of compilation, so type information can't be queried at run time. Java and .NET use a very high-level representation in the binary code for their virtual machines, making this level of reflection possible. In some C++ implementations, however, there is something called Run-Time Type Information (RTTI), which can be considered a stripped-down version of reflection.
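
For contrast, this is the kind of run-time query that Java's retained metadata makes trivial, and that compiled C++ simply has no information left to answer (a minimal, runnable illustration):

```java
import java.lang.reflect.Method;

public class ReflectionDemo {

    public int answer() { return 42; }

    public static void main(String[] args) throws Exception {
        // The class file keeps full type metadata, so we can look up a
        // method by name at run time and invoke it on an instance.
        Method m = ReflectionDemo.class.getMethod("answer");
        Object result = m.invoke(new ReflectionDemo());
        System.out.println(m.getName() + " -> " + result); // answer -> 42
    }
}
```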

RTTI is in the C++ standard.

But not all C++ implementations are standard. I've seen implementations that don't support RTTI.

And most implementations that do support RTTI also support turning it off via compiler options.

If a C++ implementation is not standard, then arguably it is not a C++ implementation at all.

Why does C++ not have reflection? - Stack Overflow

c++ reflection

When the compiler compiles to byte code, a process called erasure happens. This removes the type information from the collections; I believe the compiler inserts the casts etc. itself as part of generating the byte code. If you remove the generic parts of your class (i.e. the <..>), you will see that you have two saveAll methods. The error is that you have two saveAll methods with the same signature: in the byte code the collections simply have type Object.

Try removing the <..>, which might make it clearer. When you put the <...> back in, consider the names of the methods: if they are different, it should compile.

Also, I don't think this is a Hibernate problem, so that tag should be removed. It is a Java generics problem you have.

What you could do here is type the class

public class Bar extends Foo<MyClass>

and then override the method with the concrete type

public void saveAll(Collection<MyClass> stuff) {
    super.saveAll(stuff);
}

where the declaration of Foo would be something like

public abstract class Foo<T> {
    public void saveAll(Collection<T> stuff) {
        // ...
    }
}
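
A complete, compilable version of that fix looks like this (names are illustrative; the savedCount field is only there to make the demo observable):

```java
import java.util.Collection;
import java.util.List;

// After erasure, both saveAll methods take a raw Collection. The subclass
// must therefore reuse the superclass's type parameter (a legal override)
// instead of introducing a second, unrelated overload (a name clash).
abstract class Foo<T> {
    int savedCount = 0;

    public void saveAll(Collection<T> stuff) {
        for (T t : stuff) {
            savedCount++;
            System.out.println("saving " + t);
        }
    }
}

class Bar extends Foo<String> {
    @Override
    public void saveAll(Collection<String> stuff) {
        // erases to saveAll(Collection), same as the superclass method
        super.saveAll(stuff);
    }
}

public class ErasureDemo {
    public static void main(String[] args) {
        new Bar().saveAll(List.of("a", "b"));
    }
}
```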

Java generics name clash , has the same erasure - Stack Overflow

java generics

Proxool has a number of issues:

- Under heavy load it can exceed the maximum number of connections and never drop back below the maximum.
- It can fail to return to the minimum number of connections, even after connections expire.
- It can lock up the entire pool (and all server/client threads) if it has trouble connecting to the database during the HouseKeeper thread (it does not use .setQueryTimeout).
- The HouseKeeper thread, while holding the connection pool lock for its processing, asks the Prototyper thread to recreate connections (sweep), which can result in a race condition/lockup. In these method calls the last parameter should always be sweep:false during the loop, and only sweep:true after it.
- The HouseKeeper only needs the single PrototypeController sweep at the end (as mentioned above).
- The HouseKeeper thread tests connections before checking which connections may have expired [with some risk of testing an expired connection that may have been broken/terminated through other timeouts to the DB, in a firewall, etc.].
- The project has unfinished code (properties that are defined but never acted upon).
- The default maximum connection lifetime, if not defined, is 4 hours (excessive).
- The HouseKeeper thread runs every five seconds per pool (excessive).

You can modify the code and make these improvements. But as it was created in 2003 and last updated in 2008, it's lacking nearly 10 years of Java improvements that solutions like HikariCP utilize.

java - How to establish a connection pool in JDBC? - Stack Overflow

java jdbc connection-pooling

I agree, the answer is no. I'm thinking about the Singleton pattern in PHP right now and have decided that although the Singleton pattern can be coded in PHP, it is not really implemented, since the instance is not shared across requests (or stored in process memory, which is the case for web server environments like ASP.NET, Java, and Ruby on Rails). You can serialize the Singleton instance and store it in the session, but even then it's not shared across sessions. I speculate that it would have to be stored in a cache in order to fully implement the Singleton pattern in PHP. But I haven't done that yet, so I'm not certain.

class - Singleton in PHP - Stack Overflow

php class singleton

It depends on how you want them to communicate. To have Java and Haskell code running natively in the same process and exchanging data in memory via their respective FFIs is a huge problem, not least because you have two GCs fighting over the data, and two compilers both of which have their own ideas about representing various data types. Getting Haskell compiled under the JVM is likewise difficult because the JVM does not (at present) have any concept of closures.

Of course these things can be done, but getting from demonstrator to industrial tool takes huge effort. My understanding is that the tools you mention never made it past the demonstrator stage.

A simpler, if less elegant, solution is to write your Haskell program as a server process that is sent data over sockets from the Java side. If performance and volume are not too high, then encoding the exchange as JSON would probably be simple, as both sides have libraries to support it.
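
The Java side of that socket approach can be sketched in a few lines. Here the Haskell server is faked by a loopback thread, and the "JSON" is a hand-rolled string rather than a real JSON library, just to show the shape of the exchange:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketSketch {

    // Starts a tiny line-based echo server (standing in for the Haskell
    // process), sends it one JSON-ish request, and returns the reply.
    public static String demo() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread haskellStandIn = new Thread(() -> {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    // wrap whatever arrives in a small JSON envelope
                    out.println("{\"echo\": " + in.readLine() + "}");
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            haskellStandIn.start();

            try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.println("{\"n\": 1}");     // request from the Java side
                String reply = in.readLine();  // response from the "server"
                haskellStandIn.join();
                return reply;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // {"echo": {"n": 1}}
    }
}
```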

It is not so much the lack of JVM closures. Many JVM languages have closures; technically so does Java with inner classes, and Java 8 has an elegant implementation of lambdas proposed. The problems are: 1. no tail calls; 2. an allocator/GC that is non-optimal for functional languages (Haskell code allocates many more small, short-lived objects than Java). Still, Frege etc. show that it is possible.

You can get tail call elimination from a code manipulator like Kilim, but it isn't very convenient in general: github.com/kilim/kilim/blob/master/docs/IFAQ.txt

Communication between Java and Haskell - Stack Overflow

java haskell jni

The error message indicates that there is no HttpMessageConverter registered for a multipart/MIME part with content type application/octet-stream. Still, your jarFile parameter is most likely correctly identified as application/octet-stream, so I'm assuming there's a mismatch in the parameter mapping.

So, first try setting the same name for the parameter and the form's input element.

Another problem is that the JSON is uploaded as a (regular) value of a text input in the form, not as a separate part of the multipart/MIME request. So there's no content-type header associated with it from which Spring could work out that it should use the JSON deserialiser. You can use @RequestParam instead and register a specific converter, as in this answer: JSON parameter in spring MVC controller

java - How to process a multipart request consisting of a file and a J...

java json spring rest multipartform-data

This type of behavior is perfect for transactions. If your code were to start a transaction, the database would know how to keep the data consistent when a transaction aborted due to a process dying. You might be reinventing the wheel here. The Daily WTF has a good example of how we fool ourselves into reinventing the wheel.
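
The transactional behaviour the answer is advocating is short to express with plain JDBC (a sketch; conn is assumed to be an open Connection to a database that actually supports transactions):

```java
import java.sql.Connection;
import java.sql.SQLException;

public class TransactionSketch {

    // Minimal functional interface so the work can throw SQLException.
    public interface SqlWork {
        void run(Connection conn) throws SQLException;
    }

    // Runs 'work' atomically: either every statement commits together, or
    // (on failure, including the process dying before commit) the database
    // itself rolls the whole thing back.
    public static void inTransaction(Connection conn, SqlWork work) throws SQLException {
        boolean previousAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try {
            work.run(conn);
            conn.commit();
        } catch (SQLException | RuntimeException e) {
            conn.rollback();
            throw e;
        } finally {
            conn.setAutoCommit(previousAutoCommit);
        }
    }
}
```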

You're preaching to the choir. I was told we couldn't use the transactional version of the DB...so I think I'll need to ask "why not?" before going much further.

java - JUnit Test a Database Failure? - Stack Overflow

java unit-testing junit process

#######################################
### Apache configuration directives ###
###   for mod_gzip 1.3.26.1a        ###
#######################################

##########################
### loading the module ###
##########################

# ---------------------------------------------------------------------
# load DLL / Win32:
# LoadModule gzip_module modules/ApacheModuleGzip.dll
#
# load DSO / UNIX:
# LoadModule gzip_module modules/mod_gzip.so
#
# (neither of the two if the module has been compiled in statically;

#  the exact file name may depend upon the exact compilation method used
#  for this module)

# ---------------------------------------------------------------------

  <IfModule mod_gzip.c>

########################
### responsibilities ###
########################

# ---------------------------------------------------------------------
# use mod_gzip at all?
  mod_gzip_on                   Yes
# (you can especially enable mod_gzip inside the central server
#  configuration but disable it inside some directories or virtual
#  hosts by using this directive.)
# ---------------------------------------------------------------------

######################################
### statically precompressed files ###
######################################

# ---------------------------------------------------------------------
# let mod_gzip perform 'partial content negotiation'?
  mod_gzip_can_negotiate        Yes
# (if this option is active and a static file is to be served in
#  compressed form, then mod_gzip will look for a static precompressed
#  version of this file with a defined additional extension - see next
#  directive - which would be delivered with priority. This would allow
#  for avoiding to repeatedly compress the same static file and thus
#  saving CPU time.
#  No dynamic caching of this file is provided; currently the user
#  himself is responsible for creating and updating the precompressed
#  file's content.

#  From version 1.3.19.2a mod_gzip automatically recognizes whether
#  a statically precompressed file is older than its uncompressed
#  original and in this case will serve the content of the original
#  file in uncompressed form - as to rather serve correct data than
#  outdated ones ...)

# ---------------------------------------------------------------------

# extension (suffix) for statically precompressed files
  mod_gzip_static_suffix        .gz
  AddEncoding              gzip .gz
# (effect: see previous directive; this string will be appended to the
#  name of the original file.
#  be sure to configure the encoding 'gzip' for this extension as well,
#  because mod_gzip doesn't serve the content itself but simply generates
#  an Apache internal redirection to this URL. Therefore the remaining
#  Apache configuration is responsible for setting the 'Content-Encoding'
#  header properly ...
#  prior to version 1.3.19.2a this value was not configurable.)

# ---------------------------------------------------------------------

# automatic updates for statically precompressed files
  mod_gzip_update_static        No
# (if set to 'Yes', this directive (being new in version 1.3.26.1a) would
# cause mod_gzip to automatically update an outdated version of any
# statically precompressed file during the request, i. e. compress the
# originally requested file and overwrite the precompressed variant
# file with it!
# for each automatic update of this type, mod_gzip will write a message
# of the severity 'notice' into the Apache error_log.
# while doing so, mod_gzip will directly read the original file's content.
# therefore this content cannot be interpreted by any other Apache module
# during the request. this might possibly not be what you want - hopefully
# it will be what most users want, because it works fast this way.
# use this configuration with a lot of care, and be sure that you don't
# inadvertently cause valuable files within the URL tree to be overwritten.
# this isn't a feature to be used for mass hosting servers, especially
# because mod_gzip might experience access control problems there - the
# userid the Apache processes are running under needs to have write access
# to the precompressed files of all users, which may not automatically be
# the case.)
# [mod_gzip error handling in this situation??? what will be served?]

# ---------------------------------------------------------------------

###################
### bureaucracy ###
###################

# ---------------------------------------------------------------------
# display status for mod_gzip
  mod_gzip_command_version      '/mod_gzip_status'
# (defines a URL to display the status of mod_gzip; can be specified
# individually for each installation and protected against access via
# <location> section for privacy reasons)
# ---------------------------------------------------------------------
# The status display will look like this:
#       mod_gzip is available...
#       mod_gzip_version = 1.3.26.1a
#       mod_gzip_on = Yes/No
# and thus will provide information about
# - mod_gzip being installed at the server and working correctly,
# - which version has been installed and
# - whether mod_gzip has been set 'active' for this Location
#   (-> mod_gzip_on)
# ---------------------------------------------------------------------

#######################
### data management ###
#######################

# ---------------------------------------------------------------------
# Working directory for temporary files and the compression cache
# if not specified, the following default values are used:
# [Win32=c:\temp], [UNIX=/tmp]
# mod_gzip_temp_dir             /tmp
# (This directory must already exist and the userid being used for
#  running the Apache server must have read and write access to this
#  directory.
#  Unlike other Apache directives an absolute path name must be specified
#  here; a relative value will not be interpreted relatively to ServerRoot.
#  This pathname must NOT be terminated with '/'.
#  For maximum performance this directory should be located on a RAM disk,
#  if the file system isn't already being cached efficiently.)
# ---------------------------------------------------------------------
# Save temporary work files [Yes, No]
  mod_gzip_keep_workfiles       No
# (one file per HTTP request - set to 'yes' for debugging purpose only!)
# ---------------------------------------------------------------------

##################
### file sizes ###
##################

# ---------------------------------------------------------------------
# minimum size (in bytes) for files to be compressed
  mod_gzip_minimum_file_size    500
# (for very small files compression will produce only small absolute gains
#  [you will still save about 50% of the content, but some additional
#  500 bytes of HTTP and TCP headers will always remain uncompressed],
#  but still produce CPU load for both client and server.
#  mod_gzip will automatically raise any value smaller than 300 bytes
#  for this directive to exactly 300.)
# ---------------------------------------------------------------------
# maximum size (in bytes) for files to be compressed
  mod_gzip_maximum_file_size    500000
# (for very large files compression may eventually take rather long and
#  thus delay the start of the transmission.
#  Furthermore a limitation at this point prevents the server from
#  producing output of unlimited size in case of some endless loop
#  inside a CGI script - or even trying to compress streaming data -
#  which might otherwise cause the creation of a temporary file of
#  any size and even fill up the whole hard disk.
#  On the other hand, compression will have a much more perceivable
#  subjective effect for large files ... so be sure to fine-tune this
#  according to your requirements.)
# ---------------------------------------------------------------------
# maximum size (in bytes) for files to be compressed in memory
  mod_gzip_maximum_inmem_size   60000
# (larger files will be compressed into the temp file directory; adapt
#  this value to your server's available main memory.
#  In mod_gzip 1.3.19.x larger values will automatically be limited to
#  60000 because some operating systems are said to have problems
#  allocating more than 64 kb of memory at a time.)
# ---------------------------------------------------------------------

####################
### requirements ###
####################

# (see chapter about caching for problems when using these directives.)
# ---------------------------------------------------------------------
# Required HTTP version of the client
# Possible values: 1000 = HTTP/1.0, 1001 = HTTP/1.1, ...
# This directive uses the same numeric protocol values as Apache does
# internally
  mod_gzip_min_http             1000
# (By using this directive you may exclude old browsers, search engines
#  etc. from the compression procedure: if the user agent doesn't
#  declare itself capable of understanding at least the HTTP level
#  specified here, only uncompressed data will be delivered - no matter
#  what else it claims to be able to. The value of '1001' will especially
#  exclude Netscape 4.x. and a lot of proxy servers.)
# ---------------------------------------------------------------------

# HTTP methods to be handled
# Possible values: 'GET', 'POST' or a list of both values.
  mod_gzip_handle_methods        GET POST
# (By using this directive you may particularly exclude POST requests
#  from the compression procedure. There are known cases where the
#  handling of these requests by previous mod_gzip versions could cause
#  problems.
#  Before version 1.3.19.2a this value was not configurable.)

# ---------------------------------------------------------------------

###############
### filters ###
###############

# ---------------------------------------------------------------------
# which files are to be compressed?
#
# The order of processing within each phase is not important,
# but to trigger compression of a request's content the request
# a) must match at least one include rule in each of the two phases and
# b) must not match any exclude rule in either phase.
# These rules are not minimal; they are meant to serve as an example only.
#

# Note that all parameter values of the directives in this section are
# evaluated as regular expressions, in a case-insensitive way.

# ---------------------------------------------------------------------
# phase 1: (reqheader, uri, file, handler)
# ========================================
# NO:   certain broken browsers which request gzipped content
#       but then aren't able to handle it correctly
  mod_gzip_item_exclude         reqheader  "User-agent: Mozilla/4.0[678]"

# From version 1.3.19.2a on I advise against using filters
# on the User-Agent, as this will cause a 'Vary: User-Agent' HTTP header
# to be generated, thus making life more difficult for proxy servers.

#
# YES:  HTML documents
  mod_gzip_item_include         file       .html$
#
# NO:   include files / JavaScript & CSS (due to Netscape4 bugs)
  mod_gzip_item_exclude         file       .js$
  mod_gzip_item_exclude         file       .css$
#
# YES:  CGI scripts
  mod_gzip_item_include         file       .pl$
  mod_gzip_item_include         handler    ^cgi-script$
#
# phase 2: (mime, rspheader)
# ===========================
# YES:  normal HTML files, normal text files, Apache directory listings
  mod_gzip_item_include         mime       ^text/html$
  mod_gzip_item_include         mime       ^text/plain$
  mod_gzip_item_include         mime       ^httpd/unix-directory$
#
# NO:   images (GIF etc., will rarely ever save anything)
  mod_gzip_item_exclude         mime       ^image/
# ---------------------------------------------------------------------
# In fact mod_gzip checks only the first 4 characters of the first
# operand (for 'uri' even just the first 2 characters, so as to
# allow for values like 'url').
# ---------------------------------------------------------------------
# The table for mod_gzip_item rules (include and exclude) cannot contain
# more than 256 entries; when this number is exceeded mod_gzip will
# output the message "mod_gzip: ERROR: Item index is full"
# and report a configuration error to the Apache server.
# ---------------------------------------------------------------------
# The directive values described here are meant to describe the requests
# selected for compression as exactly as possible.
# Especially for the mime rules it has to be made clear that the HTTP
# header 'Content-Type' (that will be checked by mod_gzip for this rule)
# in some cases may contain not only a MIME type but additionally a
# character set description (charset) as well.
# If this is the case for the requests to be handled then you need to
# remove the '$' char at the end of the corresponding value so that now
# only the prefix of this value will be tested for matching.
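# For example (an illustrative rule, not part of the original file):
# if your application sends 'Content-Type: text/html; charset=UTF-8',
# the anchored rule '^text/html$' above would no longer match; dropping
# the trailing '$' makes the rule match on the prefix instead:
#   mod_gzip_item_include       mime       ^text/html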
# ---------------------------------------------------------------------

##########################
### transfer encodings ###
##########################

# ---------------------------------------------------------------------
# Allow mod_gzip to eliminate the HTTP header
#    'Transfer-encoding: chunked'
# and join the chunks to one (compressable) packet
  mod_gzip_dechunk              Yes
# (this is required for handling several types of dynamically generated
# contents, especially for CGI and SSI pages, but also for pages produced
# by some Java Servlet interpreters.)
# ---------------------------------------------------------------------

###############
### logging ###
###############

# ---------------------------------------------------------------------
# Extended log format (for testing the compression effect)
  LogFormat                     "%h %l %u %t \"%V %r\" %<s %b mod_gzip: %{mod_gzip_result}n In:%{mod_gzip_input_size}n -< Out:%{mod_gzip_output_size}n = %{mod_gzip_compression_ratio}n pct." common_with_mod_gzip_info2
# ---------------------------------------------------------------------
# Create additional log file
  CustomLog                     logs/mod_gzip.log common_with_mod_gzip_info2
# (surely you can redefine your normal log file format, but you may as
#  well keep its format standards-compatible for evaluation by standard
#  web analysis tools. So we just create another log file.)
# ---------------------------------------------------------------------
# Volume computation of the delivered files inside the Apache access_log:
# count HTTP header size (in bytes) as part of total output size
  mod_gzip_add_header_count     Yes
# (This will be more than the pure document content, but it will more
#  realistically describe the total output traffic of the HTTP request)
# ---------------------------------------------------------------------

###############
### proxies ###
###############

# ---------------------------------------------------------------------
# sending a 'Vary' HTTP header
  mod_gzip_send_vary            On
# (see chapter about caching for this directive.)
#  don't change this unless you absolutely know what you are doing!
# ---------------------------------------------------------------------

  </ifModule>

.htaccess - gzip compression code in htaccess not working - Stack Over...


The error message indicates that there is no HttpMessageConverter registered for a multi-part/MIME part of content type: application/octet-stream. Still, your jarFile parameter is most likely correctly identified as application/octet-stream, so I'm assuming there's a mismatch in the parameter mapping.

So, first try setting the same name for the parameter and the form's input element.

Another problem is that the JSON is uploaded as a (regular) value of a text input in the form, not as a separate part of the multipart/MIME message. So there is no content-type header associated with it from which Spring could tell that it should use the JSON deserializer. You can use @RequestParam instead and register a specific converter as in this answer: JSON parameter in spring MVC controller
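If you control the client, one workaround is to send the JSON as its own multipart part with an explicit Content-Type header, so Spring has a content type from which to choose the JSON deserializer. Below is a minimal sketch of what such a body looks like on the wire; the class, method, boundary, and field names are invented for illustration:

```java
// Sketch: build a multipart/form-data body in which the JSON travels as
// its own part with an explicit per-part Content-Type header, instead of
// as the plain-text value of a form input.
public class MultipartBodySketch {

    static String buildBody(String boundary, String jsonFieldName, String json) {
        StringBuilder sb = new StringBuilder();
        sb.append("--").append(boundary).append("\r\n");
        sb.append("Content-Disposition: form-data; name=\"")
          .append(jsonFieldName).append("\"\r\n");
        // the crucial line: give the JSON part its own content type
        sb.append("Content-Type: application/json\r\n\r\n");
        sb.append(json).append("\r\n");
        sb.append("--").append(boundary).append("--\r\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(buildBody("XyZBoundary", "meta", "{\"version\":1}"));
    }
}
```

A real client would normally delegate this to an HTTP library that supports per-part headers rather than concatenating the body by hand; the sketch only shows which header needs to be present on the JSON part.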

java - How to process a multipart request consisting of a file and a J...


To be honest (and I don't have a whole lot of experience developing on Java application servers), I think you would do better just using a pure Java book (college/uni level), learning object-oriented principles (inheritance, polymorphism, encapsulation) and Java's OO constructs if that is not your strong suit. During the process you'll be able to compare and contrast the features with the PHP you've already seen. The reason I say this is that the Java APIs are constructed using these things very heavily (abstract classes, packages, inheritance, etc.), and you'll have to work with them, not against them, to get anything done in terms of plumbing. In my experience, comparative books don't work, because they come from the mindset that the two languages merely have differing terminology but are fundamentally the same in some way. This is misleading, because Java EE has a lot more underlying building blocks that you have to be aware of before you jump in and start writing code for HTTP requests.

I would look at the official documentation for your preferred server, and for the Java EE background reading I would stick to Oracle's documentation, as it avoids misinterpretation of definitions.

Just out of curiosity, is there a particular reason you have not moved toward a more object-oriented approach in PHP before moving to the monstrosity of Java EE?

You seem to have misinterpreted my question. I am not looking for a PHP language vs Java language book. I am looking for a book that teaches someone who knows PHP web development how to do standard web development in the Java environment.

Ahh, I see - sorry about that. I think the only thing I could offer then are some links:

Beginning Java web development for experienced PHP web developers? - S...


The default content type is 'application/octet-stream'. Since you are uploading a jar file and JSON, the content type should be set in the @RequestMapping annotation as follows:

@RequestMapping(value="/create", method=RequestMethod.POST, headers="content-type=application/json,application/java-archive")

java - How to process a multipart request consisting of a file and a J...
